Weird behaviour on log files
Hi.
Just trying to add more log members.
In order to DROP logfile group, I need to make sure it is INACTIVE. But each log group is ACTIVE / CURRENT:
SQL> SELECT GROUP#, STATUS FROM V$LOG;
GROUP# STATUS
1 CURRENT
2 ACTIVE
3 ACTIVE
So I added another logfile group to see, and hey presto:
SQL> ALTER DATABASE ADD LOGFILE GROUP 4
2 ('/u02/oradata/redo04a.log',
3 '/u02/oradata/redo04b.log',
4 '/u02/oradata/redo04c.log') SIZE 52M;
Database altered.
SQL> SELECT GROUP#, STATUS FROM V$LOG;
GROUP# STATUS
1 CURRENT
2 ACTIVE
3 ACTIVE
4 UNUSED
SQL> alter system switch logfile;
System altered.
SQL> SELECT GROUP#, STATUS FROM V$LOG;
GROUP# STATUS
1 ACTIVE
2 ACTIVE
3 ACTIVE
4 CURRENT
Now, again, all groups are ACTIVE or CURRENT!
How can I drop any of these to get on with my work?
(ie):
ALTER DATABASE DROP LOGFILE GROUP 1;
Re-create the group
ALTER DATABASE ADD LOGFILE GROUP 1
('/u02/oradata/redo05a.log',
'/u02/oradata/redo05b.log') SIZE 50M;
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production
LINUX
Thanks in advance,
DA
alter system checkpoint; was the answer!
Edited by: Dan A on Oct 22, 2009 9:59 AM
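For anyone hitting the same thing later, the full sequence that this points to looks roughly like the following (a sketch; the file names and sizes are just the ones from this thread):

```sql
-- Switch until the group you want to drop is no longer CURRENT
ALTER SYSTEM SWITCH LOGFILE;

-- Force a checkpoint so the old groups go from ACTIVE to INACTIVE
ALTER SYSTEM CHECKPOINT;

-- Verify: the group must show INACTIVE before it can be dropped
SELECT GROUP#, STATUS FROM V$LOG;

-- Now the drop / re-create works
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE ADD LOGFILE GROUP 1
  ('/u02/oradata/redo05a.log',
   '/u02/oradata/redo05b.log') SIZE 50M;
```

Note that dropping a group only removes it from the control file; the operating-system files themselves may still need to be deleted manually.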
Similar Messages
-
Weird behaviour at log creation and cleaning.
Hi All,
One of our ReplicatedEnvironment is creating very small log files with very small % of utilization.
The first three columns represent the DbSpace output, whereas the next columns represent part of the "ls -lh <envHome>/*.jdb" result. All of it has been extracted from the Master Environment.
000000e4 12 70 12K Apr 14 12:28 /bdb/AuditingBO/000000e4.jdb
000000e3 44 77 45K Apr 13 17:48 /bdb/AuditingBO/000000e3.jdb
000000e2 27 64 27K Apr 12 22:55 /bdb/AuditingBO/000000e2.jdb
000000e1 63 66 63K Apr 11 22:53 /bdb/AuditingBO/000000e1.jdb
000000e0 28 13 29K Apr 10 00:30 /bdb/AuditingBO/000000e0.jdb
000000df 0 0 69 Apr 9 00:30 /bdb/AuditingBO/000000df.jdb
000000de 23 74 24K Apr 8 15:22 /bdb/AuditingBO/000000de.jdb
000000dd 49 34 50K Apr 7 16:45 /bdb/AuditingBO/000000dd.jdb
000000dc 25 72 26K Apr 6 19:05 /bdb/AuditingBO/000000dc.jdb
000000db 10 74 10K Apr 5 20:03 /bdb/AuditingBO/000000db.jdb
000000da 186 39 187K Apr 5 12:55 /bdb/AuditingBO/000000da.jdb
000000d9 0 0 69 Apr 3 00:30 /bdb/AuditingBO/000000d9.jdb
000000d8 0 0 69 Apr 2 00:30 /bdb/AuditingBO/000000d8.jdb
000000d7 64 37 64K Apr 1 22:46 /bdb/AuditingBO/000000d7.jdb
000000d6 14 63 14K Mar 31 17:44 /bdb/AuditingBO/000000d6.jdb
000000d5 80 44 81K Mar 30 19:32 /bdb/AuditingBO/000000d5.jdb
000000d4 9 74 9.6K Mar 29 20:31 /bdb/AuditingBO/000000d4.jdb
000000d3 15 73 16K Mar 28 19:07 /bdb/AuditingBO/000000d3.jdb
000000d2 27 7 28K Mar 27 00:30 /bdb/AuditingBO/000000d2.jdb
000000d1 1 63 1.2K Mar 26 11:40 /bdb/AuditingBO/000000d1.jdb
000000d0 19 61 19K Mar 25 20:41 /bdb/AuditingBO/000000d0.jdb
000000cf 76 40 76K Mar 24 19:19 /bdb/AuditingBO/000000cf.jdb
000000ce 92 34 93K Mar 22 23:13 /bdb/AuditingBO/000000ce.jdb
000000cd 30 25 30K Mar 21 20:56 /bdb/AuditingBO/000000cd.jdb
000000cc 0 0 69 Mar 20 00:30 /bdb/AuditingBO/000000cc.jdb
000000cb 0 0 69 Mar 19 00:30 /bdb/AuditingBO/000000cb.jdb
000000ca 34 23 35K Mar 18 22:07 /bdb/AuditingBO/000000ca.jdb
000000c9 8 73 8.3K Mar 17 22:46 /bdb/AuditingBO/000000c9.jdb
000000c8 12 62 13K Mar 16 20:01 /bdb/AuditingBO/000000c8.jdb
000000c7 54 13 55K Mar 15 21:11 /bdb/AuditingBO/000000c7.jdb
000000c6 492 36 492K Mar 15 00:30 /bdb/AuditingBO/000000c6.jdb
000000c5 136 29 136K Feb 18 22:37 /bdb/AuditingBO/000000c5.jdb
000000c4 19 60 19K Feb 17 18:52 /bdb/AuditingBO/000000c4.jdb
000000c3 34 19 35K Feb 16 19:26 /bdb/AuditingBO/000000c3.jdb
000000c2 15 75 16K Feb 15 20:14 /bdb/AuditingBO/000000c2.jdb
000000c1 19 62 19K Feb 14 18:41 /bdb/AuditingBO/000000c1.jdb
000000c0 21 7 22K Feb 13 00:30 /bdb/AuditingBO/000000c0.jdb
000000bf 0 0 69 Feb 12 00:30 /bdb/AuditingBO/000000bf.jdb
000000be 12 62 12K Feb 11 18:47 /bdb/AuditingBO/000000be.jdb
000000bd 52 35 52K Feb 10 22:50 /bdb/AuditingBO/000000bd.jdb
000000bc 18 62 18K Feb 9 20:37 /bdb/AuditingBO/000000bc.jdb
000000bb 55 48 55K Feb 8 19:03 /bdb/AuditingBO/000000bb.jdb
000000ba 103 18 103K Feb 7 22:08 /bdb/AuditingBO/000000ba.jdb
000000b9 29 5 29K Feb 6 00:30 /bdb/AuditingBO/000000b9.jdb
000000b8 1 70 1.5K Feb 5 19:00 /bdb/AuditingBO/000000b8.jdb
000000b7 20 71 21K Feb 4 21:25 /bdb/AuditingBO/000000b7.jdb
000000b6 58 36 59K Feb 3 19:22 /bdb/AuditingBO/000000b6.jdb
000000b5 21 71 21K Feb 2 18:35 /bdb/AuditingBO/000000b5.jdb
000000b4 25 69 25K Feb 1 20:31 /bdb/AuditingBO/000000b4.jdb
000000b3 53 19 54K Jan 31 18:41 /bdb/AuditingBO/000000b3.jdb
000000b2 0 0 69 Jan 30 00:30 /bdb/AuditingBO/000000b2.jdb
000000b1 26 5 27K Jan 29 00:30 /bdb/AuditingBO/000000b1.jdb
000000b0 11 75 11K Jan 28 23:36 /bdb/AuditingBO/000000b0.jdb
000000af 14 74 15K Jan 27 21:10 /bdb/AuditingBO/000000af.jdb
000000ae 48 31 48K Jan 26 23:29 /bdb/AuditingBO/000000ae.jdb
000000ad 18 73 19K Jan 25 22:55 /bdb/AuditingBO/000000ad.jdb
000000ac 20 68 20K Jan 24 22:17 /bdb/AuditingBO/000000ac.jdb
000000ab 21 4 22K Jan 23 00:30 /bdb/AuditingBO/000000ab.jdb
000000aa 0 0 69 Jan 22 00:30 /bdb/AuditingBO/000000aa.jdb
000000a9 18 66 18K Jan 21 18:43 /bdb/AuditingBO/000000a9.jdb
000000a8 95 15 96K Jan 20 19:53 /bdb/AuditingBO/000000a8.jdb
000000a7 48 33 49K Jan 19 19:51 /bdb/AuditingBO/000000a7.jdb
000000a6 16 65 16K Jan 18 21:50 /bdb/AuditingBO/000000a6.jdb
000000a5 37 45 38K Jan 17 19:18 /bdb/AuditingBO/000000a5.jdb
000000a4 2 68 2.2K Jan 16 13:58 /bdb/AuditingBO/000000a4.jdb
000000a3 1 67 1.8K Jan 15 17:06 /bdb/AuditingBO/000000a3.jdb
000000a2 117 15 117K Jan 15 00:00 /bdb/AuditingBO/000000a2.jdb
000000a1 39 31 40K Jan 13 20:53 /bdb/AuditingBO/000000a1.jdb
000000a0 16 65 16K Jan 12 19:25 /bdb/AuditingBO/000000a0.jdb
0000009f 31 61 32K Jan 11 23:49 /bdb/AuditingBO/0000009f.jdb
0000009e 56 41 56K Jan 10 17:52 /bdb/AuditingBO/0000009e.jdb
0000009d 0 0 69 Jan 9 00:30 /bdb/AuditingBO/0000009d.jdb
0000009c 0 0 69 Jan 8 00:30 /bdb/AuditingBO/0000009c.jdb
0000009b 32 28 33K Jan 7 18:00 /bdb/AuditingBO/0000009b.jdb
0000009a 0 58 883 Jan 6 12:31 /bdb/AuditingBO/0000009a.jdb
00000099 29 70 30K Jan 5 19:59 /bdb/AuditingBO/00000099.jdb
00000098 201 19 202K Jan 4 23:23 /bdb/AuditingBO/00000098.jdb
00000097 0 0 69 Jan 2 00:30 /bdb/AuditingBO/00000097.jdb
00000096 17 8 18K Jan 1 00:30 /bdb/AuditingBO/00000096.jdb
00000095 3 71 3.4K Dec 31 20:23 /bdb/AuditingBO/00000095.jdb
00000094 5 71 5.4K Dec 30 21:23 /bdb/AuditingBO/00000094.jdb
00000093 65 26 66K Dec 29 22:45 /bdb/AuditingBO/00000093.jdb
00000092 34 29 35K Dec 28 23:05 /bdb/AuditingBO/00000092.jdb
00000091 18 66 19K Dec 27 22:34 /bdb/AuditingBO/00000091.jdb
00000090 14 4 15K Dec 26 00:30 /bdb/AuditingBO/00000090.jdb
0000008f 0 0 69 Dec 25 00:30 /bdb/AuditingBO/0000008f.jdb
0000008e 6 74 6.7K Dec 24 15:15 /bdb/AuditingBO/0000008e.jdb
0000008d 254 20 255K Dec 23 22:21 /bdb/AuditingBO/0000008d.jdb
0000008c 27 43 28K Dec 22 20:03 /bdb/AuditingBO/0000008c.jdb
0000008b 497 19 497K Dec 22 00:00 /bdb/AuditingBO/0000008b.jdb
0000008a 0 0 69 Dec 9 00:30 /bdb/AuditingBO/0000008a.jdb
00000089 114 6 115K Dec 8 15:58 /bdb/AuditingBO/00000089.jdb
00000088 4 38 5.0K Dec 7 23:45 /bdb/AuditingBO/00000088.jdb
00000087 0 0 69 Dec 6 00:32 /bdb/AuditingBO/00000087.jdb
00000086 37 5 38K Dec 5 00:30 /bdb/AuditingBO/00000086.jdb
00000085 0 0 69 Dec 4 00:30 /bdb/AuditingBO/00000085.jdb
00000084 14 41 15K Dec 3 23:50 /bdb/AuditingBO/00000084.jdb
00000083 79 20 79K Dec 2 23:52 /bdb/AuditingBO/00000083.jdb
00000082 14 41 14K Dec 1 23:48 /bdb/AuditingBO/00000082.jdb
00000081 31 35 32K Dec 1 00:30 /bdb/AuditingBO/00000081.jdb
00000080 46 17 46K Nov 29 23:58 /bdb/AuditingBO/00000080.jdb
0000007f 0 0 69 Nov 28 00:30 /bdb/AuditingBO/0000007f.jdb
0000007e 0 0 69 Nov 27 00:30 /bdb/AuditingBO/0000007e.jdb
0000007d 71 18 71K Nov 26 23:51 /bdb/AuditingBO/0000007d.jdb
0000007c 28 37 28K Nov 25 23:49 /bdb/AuditingBO/0000007c.jdb
0000007b 92 35 92K Nov 24 23:50 /bdb/AuditingBO/0000007b.jdb
0000007a 95 25 95K Nov 23 23:52 /bdb/AuditingBO/0000007a.jdb
00000079 34 38 34K Nov 22 23:51 /bdb/AuditingBO/00000079.jdb
00000078 29 7 30K Nov 21 00:30 /bdb/AuditingBO/00000078.jdb
00000077 0 0 69 Nov 20 00:30 /bdb/AuditingBO/00000077.jdb
00000076 16 36 16K Nov 19 23:50 /bdb/AuditingBO/00000076.jdb
00000075 189 20 189K Nov 18 23:49 /bdb/AuditingBO/00000075.jdb
00000074 14 35 15K Nov 16 23:52 /bdb/AuditingBO/00000074.jdb
00000073 24 38 25K Nov 15 23:49 /bdb/AuditingBO/00000073.jdb
00000072 71 16 72K Nov 15 00:30 /bdb/AuditingBO/00000072.jdb
00000071 11 28 12K Nov 12 23:52 /bdb/AuditingBO/00000071.jdb
00000070 571 32 571K Nov 12 00:30 /bdb/AuditingBO/00000070.jdb
0000006f 43 16 44K Nov 10 23:52 /bdb/AuditingBO/0000006f.jdb
0000006e 2 41 3.0K Nov 9 19:54 /bdb/AuditingBO/0000006e.jdb
0000006d 15 41 16K Nov 8 23:57 /bdb/AuditingBO/0000006d.jdb
0000006c 47 17 47K Nov 7 19:45 /bdb/AuditingBO/0000006c.jdb
0000006b 2 40 2.4K Nov 6 15:55 /bdb/AuditingBO/0000006b.jdb
0000006a 20 33 21K Nov 6 00:05 /bdb/AuditingBO/0000006a.jdb
00000069 98 24 99K Nov 4 23:41 /bdb/AuditingBO/00000069.jdb
00000068 21 37 21K Nov 3 23:48 /bdb/AuditingBO/00000068.jdb
00000067 61 33 62K Nov 2 23:53 /bdb/AuditingBO/00000067.jdb
00000066 0 0 69 Nov 1 00:30 /bdb/AuditingBO/00000066.jdb
00000065 0 0 69 Oct 31 00:30 /bdb/AuditingBO/00000065.jdb
00000064 34 9 35K Oct 30 00:30 /bdb/AuditingBO/00000064.jdb
00000063 13 41 14K Oct 29 23:43 /bdb/AuditingBO/00000063.jdb
00000062 98 35 99K Oct 28 23:51 /bdb/AuditingBO/00000062.jdb
00000061 49 23 49K Oct 28 00:00 /bdb/AuditingBO/00000061.jdb
00000060 13 33 13K Oct 26 23:54 /bdb/AuditingBO/00000060.jdb
0000005f 4 41 4.1K Oct 26 11:06 /bdb/AuditingBO/0000005f.jdb
0000005e 183 1 183K Oct 25 23:47 /bdb/AuditingBO/0000005e.jdb
0000005d 65 19 65K Oct 20 23:48 /bdb/AuditingBO/0000005d.jdb
0000005c 72 36 72K Oct 19 23:48 /bdb/AuditingBO/0000005c.jdb
0000005b 67 23 67K Oct 18 23:48 /bdb/AuditingBO/0000005b.jdb
0000005a 0 0 69 Oct 17 00:30 /bdb/AuditingBO/0000005a.jdb
00000059 0 39 664 Oct 16 2010 /bdb/AuditingBO/00000059.jdb
00000058 83 17 83K Oct 15 2010 /bdb/AuditingBO/00000058.jdb
00000057 89 27 89K Oct 14 2010 /bdb/AuditingBO/00000057.jdb
00000056 27 42 27K Oct 13 2010 /bdb/AuditingBO/00000056.jdb
00000055 20 4 21K Oct 12 2010 /bdb/AuditingBO/00000055.jdb
00000054 13 42 14K Oct 11 2010 /bdb/AuditingBO/00000054.jdb
00000053 0 0 51K Oct 10 2010 /bdb/AuditingBO/00000052.jdb
00000052 50 4 69 Oct 10 2010 /bdb/AuditingBO/00000053.jdb
00000051 41 15 42K Oct 8 2010 /bdb/AuditingBO/00000051.jdb
00000050 42 31 43K Oct 7 2010 /bdb/AuditingBO/00000050.jdb
0000004f 74 37 75K Oct 6 2010 /bdb/AuditingBO/0000004f.jdb
0000004e 82 26 83K Oct 5 2010 /bdb/AuditingBO/0000004e.jdb
0000004d 73 39 74K Oct 4 2010 /bdb/AuditingBO/0000004d.jdb
0000004c 29 9 30K Oct 4 2010 /bdb/AuditingBO/0000004c.jdb
0000004b 0 39 665 Oct 2 2010 /bdb/AuditingBO/0000004b.jdb
0000004a 40 40 41K Oct 1 2010 /bdb/AuditingBO/0000004a.jdb
00000049 466 27 467K Sep 30 2010 /bdb/AuditingBO/00000049.jdb
00000048 49 16 50K Sep 22 2010 /bdb/AuditingBO/00000048.jdb
00000047 38 39 39K Sep 21 2010 /bdb/AuditingBO/00000047.jdb
00000046 40 27 40K Sep 20 2010 /bdb/AuditingBO/00000046.jdb
00000045 0 39 664 Sep 19 2010 /bdb/AuditingBO/00000045.jdb
00000044 0 0 69 Sep 18 2010 /bdb/AuditingBO/00000044.jdb
00000043 72 14 72K Sep 17 2010 /bdb/AuditingBO/00000043.jdb
00000042 26 34 26K Sep 16 2010 /bdb/AuditingBO/00000042.jdb
00000041 14 40 15K Sep 15 2010 /bdb/AuditingBO/00000041.jdb
00000040 116 5 117K Sep 14 2010 /bdb/AuditingBO/00000040.jdb
0000003f 55 5 56K Sep 13 2010 /bdb/AuditingBO/0000003f.jdb
0000003e 67 0 68K Sep 11 2010 /bdb/AuditingBO/0000003e.jdb
0000003d 105 5 106K Sep 9 2010 /bdb/AuditingBO/0000003d.jdb
0000003c 69 5 70K Sep 8 2010 /bdb/AuditingBO/0000003c.jdb
0000003b 206 8 206K Sep 7 2010 /bdb/AuditingBO/0000003b.jdb
0000003a 0 0 69 Sep 7 2010 /bdb/AuditingBO/0000003a.jdb
00000039 63 25 64K Sep 7 2010 /bdb/AuditingBO/00000039.jdb
00000038 0 0 69 Sep 4 2010 /bdb/AuditingBO/00000038.jdb
00000037 84 3 85K Sep 4 2010 /bdb/AuditingBO/00000037.jdb
00000036 70 14 70K Sep 2 2010 /bdb/AuditingBO/00000036.jdb
00000035 19 36 20K Sep 1 2010 /bdb/AuditingBO/00000035.jdb
00000034 52 21 52K Aug 31 2010 /bdb/AuditingBO/00000034.jdb
00000033 28 33 28K Aug 30 2010 /bdb/AuditingBO/00000033.jdb
00000032 33 3 33K Aug 30 2010 /bdb/AuditingBO/00000032.jdb
00000031 80 21 81K Aug 27 2010 /bdb/AuditingBO/00000031.jdb
00000030 23 39 24K Aug 26 2010 /bdb/AuditingBO/00000030.jdb
0000002f 118 14 118K Aug 25 2010 /bdb/AuditingBO/0000002f.jdb
0000002e 0 39 659 Aug 22 2010 /bdb/AuditingBO/0000002e.jdb
0000002d 108 11 109K Aug 21 2010 /bdb/AuditingBO/0000002d.jdb
0000002c 308 18 308K Aug 18 2010 /bdb/AuditingBO/0000002c.jdb
0000002b 0 0 69 Aug 7 2010 /bdb/AuditingBO/0000002b.jdb
0000002a 15 42 16K Aug 6 2010 /bdb/AuditingBO/0000002a.jdb
00000029 78 20 78K Aug 6 2010 /bdb/AuditingBO/00000029.jdb
00000028 81 22 82K Aug 4 2010 /bdb/AuditingBO/00000028.jdb
00000027 49 38 50K Aug 2 2010 /bdb/AuditingBO/00000027.jdb
00000026 121 16 122K Aug 1 2010 /bdb/AuditingBO/00000026.jdb
00000025 27 42 28K Jul 29 2010 /bdb/AuditingBO/00000025.jdb
00000024 23 36 23K Jul 28 2010 /bdb/AuditingBO/00000024.jdb
00000023 77 10 78K Jul 28 2010 /bdb/AuditingBO/00000023.jdb
00000022 48 23 48K Jul 27 2010 /bdb/AuditingBO/00000022.jdb
00000021 19 44 19K Jul 26 2010 /bdb/AuditingBO/00000021.jdb
00000020 0 0 69 Jul 25 2010 /bdb/AuditingBO/00000020.jdb
0000001f 25 14 26K Jul 24 2010 /bdb/AuditingBO/0000001f.jdb
0000001e 21 40 22K Jul 23 2010 /bdb/AuditingBO/0000001e.jdb
0000001d 77 35 77K Jul 23 2010 /bdb/AuditingBO/0000001d.jdb
0000001c 283 26 283K Jul 22 2010 /bdb/AuditingBO/0000001c.jdb
0000001b 30 40 30K Jul 15 2010 /bdb/AuditingBO/0000001b.jdb
0000001a 53 27 54K Jul 13 2010 /bdb/AuditingBO/0000001a.jdb
00000019 33 43 34K Jul 12 2010 /bdb/AuditingBO/00000019.jdb
00000018 26 18 26K Jul 11 2010 /bdb/AuditingBO/00000018.jdb
00000017 1 41 1.8K Jul 10 2010 /bdb/AuditingBO/00000017.jdb
00000016 34 36 35K Jul 9 2010 /bdb/AuditingBO/00000016.jdb
00000015 164 25 165K Jul 8 2010 /bdb/AuditingBO/00000015.jdb
00000014 98 36 99K Jul 7 2010 /bdb/AuditingBO/00000014.jdb
00000013 112 15 113K Jul 7 2010 /bdb/AuditingBO/00000013.jdb
00000012 39 40 40K Jul 5 2010 /bdb/AuditingBO/00000012.jdb
00000011 16 15 16K Jul 4 2010 /bdb/AuditingBO/00000011.jdb
00000010 1 41 1.2K Jul 3 2010 /bdb/AuditingBO/00000010.jdb
0000000f 59 39 60K Jul 2 2010 /bdb/AuditingBO/0000000f.jdb
0000000e 138 26 138K Jul 1 2010 /bdb/AuditingBO/0000000e.jdb
0000000d 51 37 52K Jun 30 2010 /bdb/AuditingBO/0000000d.jdb
0000000c 79 31 79K Jun 29 2010 /bdb/AuditingBO/0000000c.jdb
0000000b 80 38 80K Jun 28 2010 /bdb/AuditingBO/0000000b.jdb
0000000a 42 23 42K Jun 28 2010 /bdb/AuditingBO/0000000a.jdb
00000009 18 38 18K Jun 26 2010 /bdb/AuditingBO/00000009.jdb
00000008 96 40 96K Jun 25 2010 /bdb/AuditingBO/00000008.jdb
00000007 125 28 125K Jun 24 2010 /bdb/AuditingBO/00000007.jdb
00000006 290 22 290K Jun 24 2010 /bdb/AuditingBO/00000006.jdb
00000005 181 17 181K Jun 22 2010 /bdb/AuditingBO/00000005.jdb
00000004 16 39 16K Jun 21 2010 /bdb/AuditingBO/00000004.jdb
00000003 31 0 31K Jun 20 2010 /bdb/AuditingBO/00000003.jdb
00000002 23 15 24K Jun 20 2010 /bdb/AuditingBO/00000002.jdb
00000001 0 0 69 Jun 20 2010 /bdb/AuditingBO/00000001.jdb
00000000 94 18 95K Jun 20 2010 /bdb/AuditingBO/00000000.jdb
Why are there so many files with 0 size and 0% utilization?
Why are there new files when the previous file has not reached the 10MB size?
Why is there only 26% total utilization?
A backup is performed at 00:30 with DbBackup.
Custom JE Properties:
je.maxMemory=134217728
je.maxMemoryPercent=75
je.sharedCache=true
Environment: Solaris 10 + JRE6u17 64 bits + JE 4.1.7
Thanks in advance,
/Cesar.
"Why are there so many files with 0 size and 0% utilization?"
The cleaner and checkpointer apparently haven't caught up to cleaning these files, or deleting them at checkpoint time. Take a look at your checkpoints to see when they're happening.
"Why are there new files when the previous file has not reached the 10MB size?"
Each backup starts a new file; see DbBackup.startBackup.
"Why is there only 26% total utilization?"
Apparently you don't have enough cleaner threads, or you are not doing checkpoints frequently enough. See
http://download.oracle.com/docs/cd/E17277_02/html/GettingStartedGuide/logfilesrevealed.html
Looks like you have a very low write rate. Perhaps checkpoints are not occurring because enough data hasn't yet been written.
--mark -
Garbage Collector blocks - weird behaviour
Hi!
After launching an application we ran into weird problems with the garbage collector. It's a combo of Jetty/JGroups/Helma Application Server and various other libraries on a RedHat 7.3 box. The problem occurs on a machine with relatively high load (~10 requests/sec) but doesn't automatically come along with high load.
After some hours of uptime, the garbage collector blocks for increasingly longer intervals until the application is reachable only for seconds every few minutes. It concerns exclusively minor GC in the new space. Each stop lasts just 1/10000 of a second, but repeats thousands of times back-to-back.
The logfile created by the -Xloggc option looks like this:
20085.784: [GC 20085.785: [ParNew
Desired survivor size 32768 bytes, new threshold 0 (max 0)
: 71552K->0K(71616K), 0.0634410 secs] 380477K->312523K(511936K), 0.0639970 secs]
Total time for which application threads were stopped: 0.0652310 seconds
Application time: 8.7885840 seconds
Total time for which application threads were stopped: 0.0005810 seconds
Application time: 0.0005620 seconds
Total time for which application threads were stopped: 0.0006080 seconds
Application time: 0.0002630 seconds
Total time for which application threads were stopped: 0.0004410 seconds
Application time: 0.0001790 seconds
Total time for which application threads were stopped: 0.0005480 seconds
Application time: 0.0001670 seconds
Total time for which application threads were stopped: 0.0003440 seconds
Application time: 0.0001210 seconds
Total time for which application threads were stopped: 0.0004450 seconds
Application time: 0.0001590 seconds
Total time for which application threads were stopped: 0.0004220 seconds
Application time: 0.0002180 seconds
20095.265: [GC 20095.265: [ParNew
Desired survivor size 32768 bytes, new threshold 0 (max 0)
: 71552K->0K(71616K), 0.0753260 secs] 384282K->317322K(511936K), 0.0759450 secs]
Total time for which application threads were stopped: 0.0767720 seconds
Application time: 0.3346050 seconds
While the "Total time.." lines were printed, the app was unreachable. "Application time" usually marks the time the app was running between two garbage collections. It seems as if the garbage collector tries to stop the app, can't do it for whatever reason, and tries again a moment later.
We've tested every suitable garbage collector; we've tried out j2sdk1.4.2_02, j2sdk1.4.2_07, jdk1.5.0_01, and jdk1.5.0_02, and we've tried different machines to exclude hardware failure. It is hard to reproduce in a test environment, but we've seen a few lines like the above on a Windows box too (no long blocking times, though). There aren't any OutOfMemoryErrors, and the heap management looks fine. After GC, about 3/4 of the heap is freed even while the above problem occurs, so we're ruling out a memory leak.
Maybe someone here has stumbled across this problem or has any ideas about what could trigger such behaviour? After two weeks of debugging, I've run out of ideas on where to look for a bug.
Yours remotely,
Stefan
Hi there,
Can you add -XX:+PrintHeapAtGC and -XX:+PrintGC and show that portion of the log files?
Are you using -XX:+UseParNewGC? Any -XX:+UseConcMarkSweepGC?
Also, add -XX:+DisableExplicitGC.
Hope this helps. -
WebLogic 9.2 : Log files are not rotating properly
Hello,
In WebLogic 9.2, I have configured the log archive directory to rotate log files on the basis of size (2 MB) and also checked the flag to rotate the file on server startup, so there are only two triggers for rotation:
1. The file size reaches 2 MB
2. Server startup
Let's take an example, step by step:
1. I started server, a file e.g. running.out00142 is created.
2. Now, when running.out again reaches 2 MB, a new file named running.out00143 should be created.
3. Yes, the file is created in the archive folder, but every time the next file is created, the first file (running.out00142) keeps growing and persists until the server restarts.
4. The total file count is 15, but the first file keeps growing and persists until server restart.
Can anyone help me
Thanks in advance
[email protected]
Hi,
That's a weird behaviour...
I had a problem with a non-rotating log once, and I found out that the domain log and the server log were pointing to the same file, so they were locking each other.
Non-rotating logs are usually caused by WebLogic being unable to rename the old file, either because of locking or file/directory rights.
Hope that helps.
Cheers,
Vlad
Give points - it is good etiquette to reward an answerer points (5 - helpful; 10 - correct) for their post if they answer your question. If you think this is helpful, please consider giving points -
DATE fields and LOG files in context with external tables
I am facing two problems when dealing with the external tables feature in Oracle 9i.
I created an external table with some fields of the DATE data type. There were no issues during creation, but when I query the table, the DATE fields are not properly selected even though the data is there in the files. Are there any ideas on how to deal with this?
My next question is regarding the log files. The contents of the log file seem to keep growing when querying the external tables. Is there a way to control this behaviour?
Suggestions / Advices on the above two issues are welcome.
Thanks
Lakshminarayanan
Hi
If you have date datatypes, then:
select
greatest(TABCASER1.CASERRECIEVEDDATE, EVCASERS.FINALEVDATES, EVCASERS.PUBLICATIONDATE, EVCASERS.PUBLICATIONDATE, TABCASER.COMPAREACCEPDATE)
from TABCASER, TABCASER1, EVCASERS
where ...-- join and other conditions
1. greatest is good enough
2. to_date creates a DATE datatype from a string with the format given by the format string ('mm/dd/yyyy')
3. decode(a, b, c, d) is a function: if a = b then return c, else d. NULL means that there is no data in the cell of the table.
6. To format the date for display, use the to_char function with a format model, as in the to_date function.
Ott Karesz
http://www.trendo-kft.hu -
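A small example pulling those functions together (the table and column names here are hypothetical, not from the original question):

```sql
-- greatest() picks the latest of several DATE values;
-- to_date() parses a string into a DATE;
-- decode() substitutes a fallback when a column is NULL
--   (decode treats NULL as matching NULL);
-- to_char() formats the resulting DATE for display.
SELECT to_char(
         greatest(
           to_date('01/15/2009', 'mm/dd/yyyy'),
           decode(t.received_date,
                  NULL, to_date('01/01/1900', 'mm/dd/yyyy'),
                  t.received_date)
         ),
         'yyyy-mm-dd'
       ) AS latest_date
FROM   some_table t;
```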
Cannot publish get error message - log file not being created
When trying to publish a FlashHelp project, I get an error
message window that says "Publishing has been cancelled. Failed to
create file: (project name).log "
When I click okay in the message window, the publishing
process stops. However, if I look in the SSL folder, I see the log
file. It is a text file.
I had this problem in January 09 but it seemed to be an issue
with the password and path in the FTP command window. I fixed it
and it worked fine. However, I haven't published since the end of
January. Now, when I try to publish, it is giving me the same error
message. I checked and reviewed the FTP window fields and they are
fine. But I'm still getting the error message and can't publish.
Why?
I need to get this problem fixed ASAP and ensure that it
doesn't occur again. What's strange is that I've got 3 other
projects and this is the only 1 that gets this error message.
Yes, the generation worked. I checked the log file that
worked from the time it worked before and it seems to be the same
as the log file that is generated when I get the error message.
I created a new FlashHelp layout and got the same error
message. What's really weird is there is a log file in the SSL
folder but when you click OK in the error message, it stops the
publish function.
Last time I had to blow away the cpd file as if this was a
corrupt project. But that gets to be painful. As I use templates to
put change dates in the footers of topics and templates get lost
when you blow away the cpd.
Any other thoughts? -
Can anyone make sense of these log files?
Hey Guys,
I'm getting ~100MB log files every day in private/var/log/DiagnosticMessages. Mostly it seems to be something resembling the following over and over and over again:
{And here is where the log file gets eaten every time I try to post it - mostly messages from com.apple.message.signature, com.apple.message.domain0, my-Computer-4mDNSResponder, com.apple.message.uuid, and com.apple.mDNSResponder.autotunnel.domainstatus}
Can anyone help based on the above? Is there any way to post log code with lots of weird binary characters without it getting eaten? Or maybe a screenshot of the log code?
These logs do not appear when I am out of town, which makes me think that it's one of my Airport Extremes, but I've done factory restores on both of them and the logs are still being generated. I've also had to reinstall my OS due to an unrelated issue and the logs are still being generated.
Any ideas of what might be generating these giant logs would be really helpful. Thanks!
Message was edited by: loudguitars81
I see seven logs in private/var/log, which IS normal. There isn't anything repeating in those beyond the aforementioned Epson thing, which was not what was causing the logs to be generated in private/var/log/DiagnosticMessages. The Epson errors in system.log and its daily predecessors stopped when I got rid of a bunch of Epson cruft, but the giant log files in the DiagnosticMessages logs continue.
The private/var/log/DiagnosticMessages logs don't seem to be clearing out - I had them going all the way back to when I first installed Snow Leopard before I reinstalled the OS recently (the reinstall wiped out the old logs, obviously). This was actually how I discovered this weird logging problem in the first place - I couldn't figure out what was taking up so much space on my drive and I ran DiskUtilityX to find I had 10 gigs of log files in the DiagnosticMessages folder dating back from my move on 7/3.
Everything before my move was well under 1MB daily. Everything after my move (and when my computer wasn't staying somewhere besides my own house) was between 75-100MB daily.
Even after I reinstalled my OS 2/16, the private/var/log/DiagnosticMessages logs are not clearing out - until I ran @LincDavis's terminal commands, I had logs going back to that date, which was obviously more than 7 days ago.
Does any of that detail help you guys in trying to pinpoint what's happening here? -
Error and weird behaviour in executable launch
Hello folks,
This post is regarding a weird behaviour i am experiencing with an executable i built.
LabVIEW version (includes Run-Time Engine): LV2012 SP1 f3
DAQmx: 9.6.1
The behaviour is listed below in detail.
In a nutshell, the executable runs on the development computer but does not run on the target computer. Also, irrespective of which PC I run the executable on, I cannot access the block diagram, even after enabling debugging everywhere.
On the target PC, the app fires up but does not run further; no error codes appear on the screen. It's like the app freezes after firing up. And to add to the misery, I cannot access the block diagram to debug and find out what's going on.
Also, I have tried including the dynamic vis to my build script. No bueno.
What I see on running the app is addressed below:
TARGET COMPUTER:
DAQmx 9.7 and LV2012 SP1 f4 RTE have been installed manually.
App does not run: No broken run button, the app launches but does nothing when the vi is run. No error messages.
The block diagram is still inaccessible, even after selecting the “Enable debugging” option in the build specifications.
DEVELOPMENT COMPUTER:
The app launches and runs perfectly.
The block diagram is still inaccessible, even after selecting the “Enable debugging” option in the build specifications.
DAQmx 9.7 and LV2012 SP1 f4 RTE were not installed as the app recognized the already installed Labview environment.
Additional steps that I have tried,
Created and ran only an executable on the target PC; the attempt was unsuccessful. The VI showed similar characteristics as mentioned above in the target PC section.
Created and ran an installer with additional install options (LV2012 SP1 f4 RTE and DAQmx 9.7) on the target PC; the attempt was again unsuccessful. The VI showed similar characteristics as mentioned above. No error messages.
Tried both of the steps mentioned above on the development computer, and the attempts were successful.
To the best of my knowledge, I believe, the issue here is with the environment I am creating for the executable and the installer to run with/off of. After having carefully followed the installation procedure for the Run-Time Engine and the DAQmx drivers, I still do not know what I am missing.
Please advise.
Thanks guys,
RP.
Hey guys,
So, got the application to work. Almost.
The problem was that the executable was missing the hardware config from the Device.
Now, the new issue is as following:
The goal of the VI is to generate a report of the test conducted. The way the VI works is that, the second the VI is run, an empty Word file is created containing only the company logo and the field headings, which are populated after the test is conducted.
The logo is a .jpg file, which has a relative path into the executable.
The field headings are string constants wired into a 'Concatenate Strings' function, which is in turn wired into the report-generation VIs.
What's happening is that when I run the app on the target PC, only the logo appears on the Word template. Even when I conduct the whole test and stop the VI, the results aren't populated in the Word file, which is a little weird.
Does any one know whats doing that?
Please refer to the attached word files.
Right - It is the file format desired.
Wrong - It is the file format achieved.
Please advise.
Thanks,
RP.
Attachments:
Right.docx 17 KB
Wrong.docx 16 KB -
Question on redo log files at the standby
Oracle version: 10.2.0.5
Platform : AIX
We have 2 node RAC primary with 2 node RAC standby
Primary Instance1 named as cmapcp1
Primary Instance2 named as cmapcp2
Standby Instance1 named as cmapcp3
Standby Instance2 named as cmapcp4
At the standby side:
SQL> show parameter log_file_name_convert
NAME TYPE VALUE
log_file_name_convert string cmapcp1, cmapcp3, cmapcp2, cmapcp4
Despite the value set for log_file_name_convert, I don't see any change in names of Online and Standby redo logs at the Standby site.
-- From primary
SQL> select member,type from v$logfile;
MEMBER TYPE
+CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf STANDBY
16 rows selected.
-- From standby
SQL> select member,type from v$logfile;
MEMBER TYPE
+CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf ONLINE
+CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf STANDBY
+CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf STANDBY
16 rows selected.

Another thing I noticed: v$log doesn't list Standby Redo Logs. This is expected behaviour, I guess.
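(Aside: that guess is correct. Standby redo logs have their own dynamic performance view, so on the standby they can be listed with something like:)

```sql
-- Standby redo logs live in V$STANDBY_LOG, not V$LOG
SELECT group#, thread#, sequence#, status
FROM   v$standby_log;
```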
Below is the output from Primary and Standby (it is the same)
set linesize 200
set pagesize 50
col member for a50
break on INST SKIP PAGE on GROUP# SKIP 1
select l.thread# inst, l.group#,lf.member, lf.type
from v$log l , v$logfile lf
where l.group# = lf.group#
order by 1,2 ;
INST GROUP# MEMBER TYPE
1 1 +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf ONLINE
2 +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf ONLINE
3 +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf ONLINE
INST GROUP# MEMBER TYPE
2 4 +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf ONLINE
5 +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf ONLINE
6 +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf ONLINE

John_75 wrote:
Thank you ckpt, mseberg.
I think log_file_name_convert is set wrongly, as you've mentioned. But if I don't want any change to the names of the online or standby redo log files on the standby, I don't have to set log_file_name_convert at all. Right?

From the same link:
If you specify an odd number of strings (the last string has no corresponding replacement string), an error is signalled during startup. If the filename being converted matches more than one pattern in the pattern/replace string list, the first matched pattern takes effect. There is no limit on the number of pairs that you can specify in this parameter (other than the hard limit of the maximum length of multivalue parameters). -
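So the value is interpreted as pattern/replacement string pairs, applied left to right. If a conversion were actually needed, a setting might look like the sketch below (the paths are purely illustrative, not taken from this system; log_file_name_convert is a static parameter, so it only takes effect after a restart):

```sql
-- Each pattern string is immediately followed by its replacement string
ALTER SYSTEM SET log_file_name_convert =
  '+CMAPCP_DATA01/cmapcp1/', '+CMAPCP_DATA01/cmapcp3/',
  '+CMAPCP_DATA01/cmapcp2/', '+CMAPCP_DATA01/cmapcp4/'
  SCOPE = SPFILE SID = '*';
```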
"recover database until cancel" asks for an archive log file that does not exist
Hello,
Oracle Release : Oracle 10.2.0.2.0
Last week we performed a restore and then an Oracle recovery using the recover database until cancel command (we didn't use a backup control file). It worked fine and we were able to restart the SAP instances. However, I still have questions about Oracle's behaviour with this command.
First we restored, an online backup.
We tried to restart the database, but got ORA-01113 and ORA-01110 errors:
sr3usr.data1 needed media recovery.
Then we performed the recovery:
According to the Oracle documentation, "recover database until cancel" proceeds by prompting you with the suggested filenames of archived redo log files.
The problem is that it prompts for archive log files that do not exist.
As you can see below, it asked for SMAarch1_10420_610186861.dbf, which was never created. Therefore I cancelled the recovery manually and restarted the database. We never got the message "media recovery complete".
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10417_61018686
Fri Sep 7 14:09:45 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf'
Fri Sep 7 14:09:45 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10418_610186861.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10418_61018686
Fri Sep 7 14:10:03 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf'
Fri Sep 7 14:10:03 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10419_610186861.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10419_61018686
Fri Sep 7 14:10:13 2007
ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf'
Fri Sep 7 14:10:13 2007
Media Recovery Log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
Errors with log /oracle/SMA/oraarch/SMAarch1_10420_610186861.dbf
ORA-308 signalled during: ALTER DATABASE RECOVER LOGFILE '/oracle/SMA/oraarch/SMAarch1_10420_61018686
Fri Sep 7 14:15:19 2007
ALTER DATABASE RECOVER CANCEL
Fri Sep 7 14:15:20 2007
ORA-1013 signalled during: ALTER DATABASE RECOVER CANCEL ...
Fri Sep 7 14:15:40 2007
Shutting down instance: further logons disabled
When restarting the database, we could see that a recovery of the online redo log was performed automatically. Is this the normal behaviour of a recovery using the "recover database until cancel" command?
Started redo application at
Thread 1: logseq 10416, block 482
Fri Sep 7 14:24:55 2007
Recovery of Online Redo Log: Thread 1 Group 4 Seq 10416 Reading mem 0
Mem# 0 errs 0: /oracle/SMA/origlogB/log_g14m1.dbf
Mem# 1 errs 0: /oracle/SMA/mirrlogB/log_g14m2.dbf
Fri Sep 7 14:24:55 2007
Completed redo application
Fri Sep 7 14:24:55 2007
Completed crash recovery at
Thread 1: logseq 10416, block 525, scn 105140074
0 data blocks read, 0 data blocks written, 43 redo blocks read
Thank you very much for your help.
Frod.

Hi,
Let me answer your query.
=======================
Your question: while performing the recovery, is it possible to locate which online redo log is needed, and then apply the changes in those logs?
1. When you have current controlfile and need complete data (no data loss),
then do not go for until cancel recovery.
2. Oracle will apply all the redologs (including current redolog) while recovery
process is on.
3. During the recovery you need to have all the redologs which are listed in the view V$RECOVERY_LOG and all the unarchived and current redolog. By querying V$RECOVERY_LOG you can find out about the redologs required.
4. If the required sequence is not there in the archive destination, and if recovery process asks for that sequence you can query V$LOG to see whether requested sequence is part of the online redologs. If yes you can mention the path of the online redolog to complete the recovery.
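Points 3 and 4 above can be sketched as two queries against the standard dynamic performance views:

```sql
-- Archived logs the recovery session still needs:
SELECT thread#, sequence#, archive_name
FROM   v$recovery_log;

-- If a requested sequence is no longer in the archive destination, check
-- whether it is still in an online redo group, then give that member's
-- path to the RECOVER prompt:
SELECT l.thread#, l.sequence#, l.status, lf.member
FROM   v$log l JOIN v$logfile lf ON lf.group# = l.group#;
```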
Hope this information helps.
Regards,
Madhukar -
SMS_NOTIFICATION_SERVER process Active Transaction preventing SQL log file backup
Hello,
I have been working on adding a few thousand machines into our SCCM 2012 R2 environment. Recently after attaching several of these systems there was a spike in activity in the transaction log due to the communication and inventory of these new
machines. The log file would fill up quickly but the log file backup would not function and allow the reuse of the log file. Upon investigation by my DB Admin we noticed that the SMS_NOTIFICATION_SERVER process was holding open an Active Transaction
that would last 1 hour and then restart at the end of the hour. This process was essentially preventing the backup of the log file. In a test, I briefly turned off the SMS_NOTIFICATION_SERVER process and we noticed the transaction log file functioning
correctly. I have included a screen shot of the process in the SQL Activity Monitor. Has anyone experienced this issue and resolved it? Is there any way to reduce the one-hour time frame, or change the behaviour so that the process releases the log file for backup if the log is getting full?
Regards,
Dave

We had it in Simple only briefly yesterday when working on the issue. It is in Full recovery mode.
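To confirm from the SQL side what is pinning the log, a couple of generic SQL Server checks can help (these are standard commands, not SCCM-specific; 'CM_XXX' below is a placeholder for your site database name):

```sql
-- Why the transaction log cannot currently be reused:
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'CM_XXX';

-- Oldest active transaction in the current database:
DBCC OPENTRAN;
```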
-
App Builder not creating log file (CDK.EnableLog=True)
Hi All,
I am busy trying to get a small to medium size project to build in LV 8.6.1.
During the build I get all kinds of weird errors (1503, 1357), even with a trivial app that uses a typedef that has all the classes bundled together.
As mentioned, the app uses LVOOP, and also lots of similarly named VIs (in different library namespaces though), and I am 99% sure LV is struggling to resolve the same VI name issue.
In order to debug the build process I have tried inserting the CDK.EnableLog=True key into LabVIEW.ini, but no log file is produced during any of my builds (good or bad!). I have tried using TRUE, true and True for the key, but none of these seem to work. When I look in %TEMP% there are files related to the build (name_log.txt), but they contain only very basic information like the name, OS etc.
Any ideas how to get the build log file to appear???
nrp
CLA

Hi,
The build log process is shown here: http://digital.ni.com/public.nsf/allkb/2E19F4E72C29CF5C862570D2004FC604?OpenDocument
However, this only shows detailed information when creating an installer. Do the errors appear while building an installer for distribution, or during the creation of just the executable? (The latter could be why you get a log with very little information.)
Kind Regards,
Applications Engineer -
Data Log File Refnum Type Def Bug??
Hello,
I just found some quirky behaviour (LV 7.1.1):
1. In the attached LLB, open "RefnumVI.vi"
2. Select the Data Log File Refnum control and open it for editing (Edit - Customize Control ... from the menu)
3. Close "RefnumVI.vi" but leave "Refnum.ctl" open
4. Select the enum inside the refnum container, and open it
5. Select File - Save As ... and save the enum as "RefnumEnum2.ctl"
6. Close the enum
7. Save "Refnum.ctl", and close it
8. Reopen "RefnumVI.vi" and display its hierarchy (Browse - Show VI Hierarchy from the menu)
Notice that "RefnumVI.vi" still has a link to "RefnumEnum.ctl", even though we saved this as "RefnumEnum2.ctl" earlier.
If you go back to the VI, right click on the refnum, and replace it with itself (i.e. select "Refnum.ctl"), the link disappears.
This behaviour does not happen if I use a Cluster instead of a Data Log File Refnum. I imagine the difference is that the calling VI needs to know about the structure of the data log file in ways it doesn't need to know about the structure of a cluster, but this still is very counter-intuitive behaviour. Is this really expected? Or is it a bug? Is there any other way to remove the link?
Cheers,
Jaegen
Attachments:
RefnumEnumBug.llb 22 KB

Nathan,
Thanks for your response - I have 8.2 and am in the process of evaluating how/when to upgrade.
Does this mean that the compiler/linker is behaving differently depending on where you open a type def from? The reason I'm asking is that I've seen similar behavior when editing a hierarchy of type defs; depending on how I open the low-level type def I'm actually editing, changes will or won't get propagated to other instances properly.
Regarding this actual problem, the issue I had is that the data log file refnum type def exists on many VIs, and thus the incorrect link now exists on many VIs, and I don't see any way of correcting the problem without manually replacing the type def with itself in every location (given there's no "Replace All" feature in LV 7.1.1). However, the hierarchy I'm dealing with was only created for testing, so I don't actually need to fix it. I'll just know to avoid causing this problem in the first place in the future.
Jaegen -
Incosistencies between Analyzer Server Console and stout.log file
Hi,

In the stout.log file of the application server there is a record: "Setting Current User Count To: 2 Users. Maximum Concurrent Licensed User Count Is: 10 Users." So 2 licences are used, but checking the Analyzer Server Console there is only one user connected.

After restarting the computer, the Analyzer Server Console and the stout.log user counts are synchronized. But after some time these two values are not synchronized anymore.

My problem: I have to report the number of user licences used, and I am reading the info from stout.log. But something is not correct; it looks like stout.log doesn't show correct values?

Do I need to specify some setting, or is there a bug in the code?

My system:
Hyperion Analytic Server version 7.1.0
Hyperion Analyzer Server version 7.0.1.8.01830
IBM DB2 Workgroup Edition version 8 fixpack 9
Tomcat version 4
Windows 2003 Server
Hi grofaty.

We use 7.0.0.0.01472 and I have experienced the same behaviour: Analyzer Server Console shows one more session than stdout.log.

If this difference of 1 is a static value then you can assume it is a systematic bug... and do your license counting on it...

But again, the Analyzer Server Console is not as good as it should be for productive use, because all the information is only logged online until the next application restart. E.g. it is not helpful for user tracking purposes. Do you use the stdout.log in such a way, or have an idea how to grep measures for session logging analysis:
- Session ID
- User ID
- Client ID
- Total Number of Requests
- Average Response (sec)
- Login Time
- Number of concurrent sessions
?
-
I have a custom log file. It is for a client/server app, (so the application keeps running indefinitely). I output lots of data to this file. I was wondering what the best way is to do this.
- 1) keep the log file open indefinitely and just keep writing to it. Lots of data is sent, so I should not keep opening and closing it.
- 2) Keep opening and closing it as needed, so the file does not become locked.
- 3) Keep some kind of buffer and when it reaches a certain size, open and close the file. But this may be tricky since the app runs indefinitely, If an error occurs somewhere, or the app hangs, I may have something in the buffer that didn't get written to a file.
- 4) ??
public void writeToFile(String text) {
    try {
        PrintWriter out = new PrintWriter(
            new BufferedWriter(new FileWriter("theFile.txt", true)));
        out.println(text); // append text to the end of the file
        out.close();
    } catch (IOException e) {
        System.err.println(e.toString());
    }
}

or

public void alreadyOpen(String text) {
    out.println(text);
    out.flush();
}

Hmmm ok that's weird. I didn't try FTP though on a folder where a log file could be, but by opening it through the standard Windows file explorer it works fine.
So maybe FTP is trying to get a lock on the file whenever you try to open it, even in read-only mode, which looks bad... Maybe there's some configuration to be done on your FTP client?
In that case you're kind of stuck: you have to release the lock each time.
OR you could write the logs each time to a temporary file, and then copy the contents to the "accessible through FTP" file once or twice a day or the like. But still you wouldn't be able to see what's going on in the log file in real time...
Well, that FTP constraint puzzles me.
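For what it's worth, option 1 from the original question can be sketched like this: keep a single writer open for the life of the app and flush after every line, so nothing sits in a buffer if the app hangs. The class and file names here are made up for illustration, not from the original post.

```java
import java.io.*;

// Option 1: open the log once, flush each line. Names are illustrative.
public class AppLog implements Closeable {
    private final PrintWriter out;

    public AppLog(String path) throws IOException {
        // append mode; second PrintWriter argument enables autoflush on println
        this.out = new PrintWriter(
            new BufferedWriter(new FileWriter(path, true)), true);
    }

    public void write(String text) {
        out.println(text); // flushed immediately thanks to autoflush
    }

    @Override
    public void close() {
        out.close();
    }

    public static void main(String[] args) throws IOException {
        try (AppLog log = new AppLog("app.log")) {
            log.write("server started");
            log.write("client connected");
        }
    }
}
```

Whether another process (such as an FTP server) can read the file while it is held open depends on the OS and on how the reader opens it; Java's FileWriter does not normally take an exclusive lock, so read-only access usually still works.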