Number of events in 1231/1131AG log file
Can anyone tell me how to increase the number of events in the log file of the 1231/1131AGs? It seems to max out at 30, and I am unable to go back in the history to view past logs.
Probably your best bet would be to send the events to a logging machine.
Get a copy of Kiwi Syslog (it's free) and install it on a PC.
In the WebGUI, bring up the Services page, enable syslog, and give it the address of the PC running Kiwi.
Memory, storage, and processor capacity are limited on the APs, so it's a good idea to offload as many processes from the APs as possible, especially if you have a heavier traffic load or are situated in an area with a lot of interference.
The text logs created by Kiwi are pretty dense ... you may also want to shop around for a log parser / interpreter.
Kiwi also gives you the ability to save the log to a database (MySQL, MSSQL 2000 and others) and even gives you some templates to create the tables.
I use MSSQL 2000 and report from that with Crystal Reports ... it might be easier to find a log parser if you're not up on databases.
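As a rough illustration of what a log parser has to do with those dense Kiwi text logs, here is a minimal Python sketch. The sample line and function name are made up for illustration; the only real convention used is the RFC 3164 syslog priority prefix.

```python
# Minimal sketch of a syslog line parser (sample data is hypothetical).
# The <PRI> prefix encodes facility * 8 + severity per RFC 3164.
import re

SEVERITIES = ["emerg", "alert", "crit", "err",
              "warning", "notice", "info", "debug"]

def parse_syslog_line(line):
    """Split a raw '<PRI>message' syslog line into facility, severity, text."""
    match = re.match(r"<(\d{1,3})>(.*)", line)
    if not match:
        return None  # not a syslog-framed line
    pri = int(match.group(1))
    return {
        "facility": pri // 8,
        "severity": SEVERITIES[pri % 8],
        "message": match.group(2).strip(),
    }

# local0.info (facility 16, severity 6) encodes as PRI 134
event = parse_syslog_line("<134>AP1131: Interface Dot11Radio0 changed state to up")
print(event["severity"], event["facility"])  # → info 16
```

From a dict like this it is a short step to filtering by severity or loading rows into a database table.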
Good Luck
Scott
Similar Messages
-
Cannot Find Errors in Log File
I can't seem to find where the errors are stored in the log file. I am trying to write a DataPlugin in VBS, and I'm getting runtime errors at the end of my script. It says to refer to the log file, but there isn't anything in the log file about my error. Where is the error log that this is referring to? Can someone help?
Hi Garett,
You can view the log file directly in DIAdem-SCRIPT, it's the bottom-most window with the gray background. I always have to scroll up a few lines to see the error, and usually there are two or three error events stored in the log file each time the DataPlugin fails to load. The last event, located at the bottom, almost always contains no useful information for DataPlugin debugging. You have to find the first event by scrolling up to find a reference to a line number in the DataPlugin which caused the error.
Also note that the DataPlugin will throw an error if it runs through just fine but does not load any data. In other words, if you are building up your DataPlugin step by step (as you should), and are just at the early stages where you parse a few items in the file and look at their values, you can easily forget to create at least 1 group or channel. If the DataPlugin does not create at least 1 group or channel when it runs through, it will throw an error that the data loading failed, when really the only problem is that it loaded nothing.
I like to output metadata values I'm parsing from the binary or header file as group properties when I'm building up the DataPlugin step by step, like this:
' Create a channel group and attach a parsed metadata value as a group property
Set ThisGroup = Root.ChannelGroups.Add(ThisGroupName)
ThisGroup.Properties.Add MetaDataName, MetaDataValue
Then I can look at a long list of metadata values in the Data Portal, and I never forget to load at least 1 group or channel in order to avoid that pesky error that nothing was loaded by the DataPlugin. You can also do the same thing with channel properties, of course:
' Same idea with a channel: create it, then attach the metadata as a channel property
Set ThisChannel = ThisGroup.Channels.AddImplicitChannel(Name, 0, 1, 2, eI32)
ThisChannel.Properties.Add MetaDataName, MetaDataValue
Let us know if this doesn't help with your current issue,
Brad Turpin
DIAdem Product Support Engineer
National Instruments -
OC4J:
If I have multiple applications app1 and app2 on a single OC4J server instance bound to different URLs, how do I create separate log files (STDOUT and STDERR) for each application to log messages?
Satish Juware
To my knowledge you cannot specify different STDOUT and STDERR files for each application; there is one per server. However, you can specify a log file for each deployed application, and it registers all events and exceptions in that log file. This log file is by default created in the $J2EE_HOME/j2ee/home/application-deployments/<application-name> directory.
If you want different STDERR and STDOUT for each application, maybe you can create a different OC4J instance for each application.
regards
Debu -
Performance Issue: Wait event "log file sync" and "Execute to Parse %"
In one of our test environments users are complaining about slow response.
In the Statspack report the following are the top 5 wait events:
Event Waits Time (cs) Wt Time
log file parallel write 1,046 988 37.71
log file sync 775 774 29.54
db file scattered read 4,946 248 9.47
db file parallel write 66 248 9.47
control file parallel write 188 152 5.80
And after running the same application 4 times, we are getting Execute to Parse % = 0.10. Cursor sharing is forced and query rewrite is enabled.
When I view v$sql, the following command is parsed frequently:
EXECUTIONS PARSE_CALLS SQL_TEXT
93380 93380 select SEQ_ORDO_PRC.nextval from DUAL
Please suggest what should be the method to troubleshoot this and if I need to check some more information
Regards,
Sudhanshu Bhandari
Well, of course, you probably can't eliminate this sort of thing entirely: a setup such as yours is inevitably a compromise. What you can do is make sure your log buffer is a good size (say 10MB or so); that your redo logs are large (at least 100MB each, and preferably large enough to hold one hour or so of redo produced at the busiest time for your database without filling up); and finally set ARCHIVE_LAG_TARGET to something like 1800 seconds or more to ensure a regular, routine, predictable log switch.
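The "large enough to hold one hour of redo" rule is simple arithmetic. A quick sketch, using an illustrative redo rate (the function name and the sample rate are invented for this example, not taken from this system):

```python
# Rough redo log sizing: make each log big enough to hold ~1 hour of redo
# produced at the busiest time. The sample rate below is illustrative only.
def redo_log_size_mb(redo_bytes_per_sec, hold_seconds=3600):
    """Return a redo log size (MB) that holds `hold_seconds` of redo."""
    return redo_bytes_per_sec * hold_seconds / (1024 * 1024)

peak_redo_rate = 607_512  # bytes/sec, e.g. the "Redo size" line of a load profile
print(f"{redo_log_size_mb(peak_redo_rate):.0f} MB per redo log")
```

At that rate a 100MB log would switch every few minutes, which is exactly the churn the advice above is trying to avoid.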
It won't cure every ill, but that sort of setup often means the redo subsystem ceases to be a regular driver of foreground waits. -
Delete Log File: Correspondence in Training and Event Management ( t77vp)
Is there a standard way of deleting the Log File: Correspondence in Training and Event Management ( T77VP ) from the system.
Thanks for your help.
Andi
Hi Niladri,
Please open a new discussion for this, as it's a different question. Not only is this stated in the guidelines and makes it easier for other members to search for the right things, it also increases your chances of getting the right answers, because users will know you are looking at LSO rather than TEM, and because many users, sadly, are driven primarily by points for giving answers and know you could not mark their answer as correct, since it's not your post.
Please also give context info: which correspondence solution are you using (Smartforms, Adobe forms, SAPscript) and which version of LSO. -
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top with an average wait of 36 ms, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" is having such slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
Background Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
84.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
0.0 0.41
{code}
To sum it up:
1. Why is the IO response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the number of CPUs on the host is only 4.
{code}
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results.
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commits --> you are committing too often, maybe even every individual record.
Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
3. Your IO subsystem hosting the online redo log files can be a limiting factor.
We don't know anything about your online redo log configuration.
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select * from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM -
How to see events in /var/cluster/logs/eventlog file?
Hello,
I want to see events in /var/cluster/logs/eventlog file.
I read about showev4 but I have two problems:
* this executable is only for SPARC systems and we have x86 systems
* I transferred the file to a SPARC system to try to analyse it, but I get a core dump.
Best regards,
Emilio J.
Transferring a SPARC binary to an x64 system will not work. They are completely different chip architectures. I think the eventlog file is for support staff only. It is not documented in the SC manuals as far as I can tell.
Tim
--- -
Wait Events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
At my current project I am performing some performance tests for Oracle Data Guard. The question is "How does a LGWR SYNC transfer influence the system performance?"
To get some performance values that I can compare, I just built up a normal Oracle database in the first step.
Now I am performing different tests, like creating "large" indexes, massive parallel inserts/commits, etc., to get the benchmark.
My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
After the index is built up (around 9 GB) I run awrrpt.sql to get the AWR report.
And now take a look at these values from the AWR
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
log file parallel write 10,019 .0 132 13 33.5
log file sync 293 .7 4 15 1.0
How can this be possible?
Regarding to the documentation
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
Wait Time: The wait time includes the writing of the log buffer and the post.
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
I could accept it if the values were close to each other (maybe around 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
Is the behavior of the log file sync/write different when performing a DDL like CREATE INDEX (maybe async .. like you can influence it with the initialization parameter COMMIT_WRITE??)?
Do you have any idea how these values come about?
Any thoughts/ideas are welcome.
Thanks and Regards
Surachart Opun (HunterX) wrote:
Thank you for the nice idea.
In this case, how can we reduce the "log file parallel write" and "log file sync" wait times? CREATE INDEX with NOLOGGING - a NOLOGGING can help, can't it?
Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
Two points on nologging, though:
<ul>
it's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
</ul>
Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to lgwr in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
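That piggy-backing effect can be illustrated with a toy timeline. All the numbers and the function name below are invented; the point is only that a small commit arriving during a large in-flight write waits for the big write first, then for its own batched write:

```python
# Toy model of "log file sync" during a large LGWR write.
# A session's small commit cannot complete until the in-flight big write
# finishes; its own redo then goes out with the next (batched) write.
def log_file_sync_ms(arrival_ms, inflight_end_ms, own_write_ms):
    """Time a small commit waits when it arrives during a large LGWR write."""
    start = max(arrival_ms, inflight_end_ms)   # wait out the big write first
    return (start - arrival_ms) + own_write_ms

# A commit whose own redo takes 1 ms, arriving 2 ms into a big write that
# ends at t=15 ms, still waits (15 - 2) + 1 = 14 ms.
print(log_file_sync_ms(arrival_ms=2, inflight_end_ms=15, own_write_ms=1))
```

So the other sessions' sync times are inflated by the index build's writes even though their own redo volume is tiny.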
There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index you have LGWR writting (N copies) of the redo for the index and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Hi All
In Message Monitoring (RWB) in the adapter engine I am getting the following error:
SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
Can anyone suggest what the problem might be?
Thanks
Jayaraman
Edited by: Jayaraman P on May 20, 2010 4:27 PM
Edited by: Jayaraman P on May 20, 2010 4:28 PM
Jayaraman P wrote:
> SOAP: response message contains an error XIAdapter/PARSING/ADAPTER.SOAP_EXCEPTION - soap fault: Server was unable to process request. ---> The event log file is full
this is because of a problem at the WS server (it is most likely a Windows server).
You can request the WS team to have a look into this issue. It is not a PI problem.
Check Event Alert failed with error - No errors in the log file.
Hi All,
I am developing a simple event based alert on PO_HEADERS table. I want to send alerts when a PO is created.
I did all the steps according to the Metalink note "How To Send An Email In A Simple Periodic Or Event Alert?" [ID 1162153.1].
When I create the PO, the alert is triggering and the Check Event Alert concurrent program is running, but the program completes with an error.
Checking the output file (empty) and log file (no errors).
What can I do here to find out what the problem is? There is nothing in the Alert Manager - History form either. I have kept 7 days as days to keep.
Thanks!
M
Can you find any details about the error from the "View Detail" button (the same window where you check the log and output files)?
I found the Workflow logs. I am not sure what I am looking for, but I am not seeing any errors reported.
The event alert is supposed to send an email, so do you see anything in the logs that could be related?
Thanks,
Hussein -
Maximum number of events per audit log file must be greater than 0.
BOE-XI (R2)
Windows Server 2003
Running AUDIT features on all services.
Report Application Server (RAS) keeps giving the following error in the Windows Application Event Log.
Maximum number of events per audit log file must be greater than 0. Defaulting to 500.
I am assuming that this is because the RAS is not being used by anyone at this time - and there is nothing in the local audit log to be copied to the AUDIT database.
Is there any way to suppress this error...?
Thanks in advance for the advice!
A couple more reboots after applying Service Pack 3 seemed to fix the issue.
Also had to go to IIS and set the BusinessObjects and CrystalEnterprise11 web sites to use ASP .NET 1.1 instead of 2. -
Windows Server 2012r2 Failover Cluster Event Trace Log files
Hi
The only documentation I can find regarding event trace log files (Diagnostic.etl.*) for Failover Clustering relates to Server 2008/2008r2, which states that the etl files should be in C:\Windows\System32\winevt\Logs.
I have been exploring a clustering lab for Server 2012r2 and cannot find these files in that folder.
Strangely, the PS cmdlet Get-ClusterLog still works!
Where are the etl files?
TIA
Hi,
Please check if the log is available in C:\ProgramData\Microsoft\Windows\WER\ReportQueue\.
If not, you can use Get-ClusterLog with the Destination parameter to get the log file.
Destination
Specifies the location to copy the cluster log(s) to. To copy to the current folder use "-Destination ." for this parameter.
http://technet.microsoft.com/en-us/library/ee461045.aspx
Thanks.
Jeremy Wu
TechNet Community Support -
Microsoft sql server extended event log file
Dears
Sorry for the questions below if they are very beginner level.
In my implementation I have clustered SQL 2012 on Windows 2012; I am using mount points since I have many clustered disks.
My mount point size is only 3 GB; my extended event logs are growing fast and are stored directly on the mount point drive (path: F:\MSSQL11.MSSQLSERVER\MSSQL\Log).
What is the best practice for working with this? (Is it to keep all extended events? Or recirculate? Or shrink? Or store them in a DB?)
Is there any relation between SQL truncate and limiting the size of the extended event logs?
How can I recirculate these extended events?
How can I change the default path?
How can I stop it?
And in case I stop it, does this mean SQL events stop being stored in the Windows Event Viewer?
Thank you
After a lot of checking, I have found the below:
My Case:
I have SQL Failover Cluster Instances ("FCI") and I am using mount points to store my instances.
I have 2 passive copies for each FCI.
In my configuration I chose to store the root instance, which includes the logs, on a mount point.
My mount point is 2 GB only, which became full after a few days of deployment.
Light technical information:
The extended event log files are generated because I have an FCI; in a single SQL installation you will not find these files.
The file maximum size will be 100 MB.
The files start circulating after there are 10 full files.
If you have the FCI installed as 1 active, 2 passive, and you are doing failover between the nodes, then you can expect to see around 14 - 30 copies of this file.
Based on the above information you will need around 100 MB * 10 files per instance copy * 3 (since in my case I have 1 active and 2 passive instances), which = 3000 MB.
So in my case my mount point was 2 GB, which became full because of these SQLDIAG logs.
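That sizing estimate is simple arithmetic; here it is as a short sketch (the values are the poster's figures, the helper name is invented):

```python
# Worst-case disk usage of circulating SQLDIAG extended event logs on one
# mount point: max file size * files kept per copy * instance copies.
def sqldiag_space_mb(max_file_mb=100, files_per_copy=10, instance_copies=3):
    """Upper bound, in MB, for circulating SQLDIAG log files."""
    return max_file_mb * files_per_copy * instance_copies

needed = sqldiag_space_mb()   # 100 MB * 10 files * 3 copies = 3000 MB
mount_point_mb = 2 * 1024     # the 2 GB mount point in question
print(needed, "MB needed vs", mount_point_mb, "MB available")
```

With 3000 MB of possible log files against a 2048 MB mount point, the volume was guaranteed to fill up.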
Solution:
I extended my mount point by 3 GB because I am storing these logs on it.
In case you need to change the SQLDIAG extended log size to 50 MB, for example, and the location to F:\Logs, you will need the below commands:
ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG OFF;
ALTER SERVER CONFIGURATION
SET DIAGNOSTICS LOG MAX_SIZE = 50 MB;
ALTER SERVER CONFIGURATION
SET DIAGNOSTICS LOG PATH = 'F:\logs';
ALTER SERVER CONFIGURATION SET DIAGNOSTICS LOG ON;
After that you will need to restart the FCI from SQL Server Configuration Manager or Failover Cluster Manager.
I hope you will find this information helpful if this is your case.
Regards -
Hi all,
We are using Oracle 9.2.0.4 on SUSE Linux 10. In our Statspack report one of the top timed events we are getting is log file sync. We are not using any storage. Is this a bug in 9.2.0.4, or what is the solution for it?
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
ai 1495142514 ai 1 9.2.0.4.0 NO ai-oracle
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 241 03-Sep-09 12:17:17 255 63.2
End Snap: 242 03-Sep-09 12:48:50 257 63.4
Elapsed: 31.55 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1,280M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 1,024K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 7,881.17 8,673.87
Logical reads: 14,016.10 15,425.86
Block changes: 44.55 49.04
Physical reads: 3,421.71 3,765.87
Physical writes: 8.97 9.88
User calls: 254.50 280.10
Parses: 27.08 29.81
Hard parses: 0.46 0.50
Sorts: 8.54 9.40
Logons: 0.12 0.13
Executes: 139.47 153.50
Transactions: 0.91
% Blocks changed per Read: 0.32 Recursive Call %: 42.75
Rollback per transaction %: 13.66 Rows per Sort: 120.84
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 75.59 In-memory Sort %: 99.99
Library Hit %: 99.55 Soft Parse %: 98.31
Execute to Parse %: 80.58 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 67.17 % Non-Parse CPU: 99.10
Shared Pool Statistics Begin End
Memory Usage %: 95.32 96.78
% SQL with executions>1: 74.91 74.37
% Memory for SQL w/exec>1: 68.59 69.14
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file sync 11,558 10,488 67.52
db file sequential read 611,828 3,214 20.69
control file parallel write 436 541 3.48
buffer busy waits 626 522 3.36
CPU time 395 2.54
Wait Events for DB: ai Instance: ai Snaps: 241 -242
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file sync 11,558 9,981 10,488 907 6.7
db file sequential read 611,828 0 3,214 5 355.7
control file parallel write 436 0 541 1241 0.3
buffer busy waits 626 518 522 834 0.4
control file sequential read 661 0 159 241 0.4
BFILE read 734 0 110 151 0.4
db file scattered read 595,462 0 81 0 346.2
enqueue 15 5 19 1266 0.0
latch free 109 22 1 8 0.1
db file parallel read 102 0 1 6 0.1
log file parallel write 1,498 1,497 1 0 0.9
BFILE get length 166 0 0 3 0.1
SQL*Net break/reset to clien 199 0 0 1 0.1
SQL*Net more data to client 5,139 0 0 0 3.0
BFILE open 76 0 0 0 0.0
row cache lock 5 0 0 0 0.0
BFILE internal seek 734 0 0 0 0.4
BFILE closure 76 0 0 0 0.0
db file parallel write 173 0 0 0 0.1
direct path read 18 0 0 0 0.0
direct path write 4 0 0 0 0.0
SQL*Net message from client 480,888 0 284,247 591 279.6
virtual circuit status 64 64 1,861 29072 0.0
wakeup time manager 59 59 1,757 29781 0.0
Your elapsed time is roughly 2000 seconds (31:55 rounded up) - and your log file sync time is roughly 10,000 - which is 5 seconds per second for the duration. Alternatively your session count is roughly 250 at start and end of snapshot - so if we assume that the number of sessions was steady for the duration, every session has suffered 40 seconds of log file sync in the interval. You've recorded roughly 1,500 transactions in the interval (0.91 per second, of which about 13% were rollbacks) - so your log file sync time has averaged more than 6.5 seconds per commit.
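Those back-of-the-envelope figures can be reproduced in a few lines (all inputs are taken from the report above; the rollback adjustment is one reasonable way to approximate the "roughly 1,500 transactions"):

```python
# Sanity-check the "log file sync" arithmetic from the Statspack report.
elapsed_s = 31.55 * 60      # 31:55 elapsed, in seconds (~1893)
sync_wait_s = 10_488        # total "log file sync" wait time (s)
sessions = 250              # roughly steady session count
txn_per_s = 0.91            # Transactions line of the load profile
rollback_pct = 0.1366       # rollback per transaction %

print(sync_wait_s / elapsed_s)    # sync wait accrued per elapsed second
print(sync_wait_s / sessions)     # sync wait per session over the interval

commits = elapsed_s * txn_per_s * (1 - rollback_pct)  # exclude rollbacks
print(sync_wait_s / commits)      # average sync wait per commit (s)
```

All three views of the same numbers point the same way: the sync times are either misreported or the redo device effectively stalled.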
Whichever way you look at it, this suggests that either the log file sync figures are wrong, or you have had a temporary hardware failure. Given that you've had a few buffer busy waits and control file write waits of about 900 m/s each, the hardware failure seems likely.
Check log file parallel write times to see if this helps to confirm the hypothesis. (Unfortunately some platforms don't report log file parallel write times correctly for earlier versions of 9.2 - so this may not help.)
You also have 15 enqueue waits averaging 1.2 seconds - check the enqueue stats section of the report to see which enqueue this was: if it was (e.g. CF - control file) then this also helps to confirm the hardware hypothesis.
It's possible that you had a couple of hardware resets or something of that sort in the interval that stopped your system quite dramatically for a minute or two.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Log file sequential read and RFS ping/write - among Top 5 event
I have a situation here to discuss. In a 3-node RAC setup which is a logical standby DB, one node is showing high CPU utilization, around 40~50%. The CPU utilization was less than 20% 10 days back, but from the 9th-oldest day it jumped and has consistently shown double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization, which shows the below top events:
EVENT WAITS TIME(S) AVG WAIT(MS) %TOTAL CALL TIME WAIT CLASS
CPU time 5,802 34.9
RFS ping 15 5,118 33,671 30.8 Other
Log file sequential read 234,831 5,036 21 30.3 System I/O
SQL*Net more data from client 24,171 1,087 45 6.5 Network
Db file sequential read 130,939 453 3 2.7 User I/O
Findings:-
On AWR report(file attached) for node= sipd207; we can see that "RFS PING" wait event takes 30% of the waits and "log file sequential read" wait event takes 30% of the waits that occurs in database.
Environment :- (Oracle- 10.2.0.4.0, O/S - AIX .3)
1) The other node's AWR shows "log file sync" - is it due to an oversized log buffer?
2) Network wait events can be reduced by tweaking SDU & TDU values based on the MTU.
3) Why are the ARCH processes taking so long to archive filled redo logs; is it an issue with slow disk I/O?
Regards
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
XXXPDB 4123595889 XXX2p2 2 10.2.0.4.0 YES sipd207
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1053 04-Apr-11 18:00:02 59 7.4
End Snap: 1055 04-Apr-11 20:00:35 56 7.5
Elapsed: 120.55 (mins)
DB Time: 233.08 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 3,728M 3,728M Std Block Size: 8K
Shared Pool Size: 4,080M 4,080M Log Buffer: 14,332K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 245,392.33 10,042.66
Logical reads: 9,080.80 371.63
Block changes: 1,518.12 62.13
Physical reads: 7.50 0.31
Physical writes: 44.00 1.80
User calls: 36.44 1.49
Parses: 25.84 1.06
Hard parses: 0.59 0.02
Sorts: 12.06 0.49
Logons: 0.05 0.00
Executes: 295.91 12.11
Transactions: 24.43
% Blocks changed per Read: 16.72 Recursive Call %: 94.18
Rollback per transaction %: 4.15 Rows per Sort: 53.31
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.92 In-memory Sort %: 100.00
Library Hit %: 99.83 Soft Parse %: 97.71
Execute to Parse %: 91.27 Latch Hit %: 99.79
Parse CPU to Parse Elapsd %: 15.69 % Non-Parse CPU: 99.95
Shared Pool Statistics Begin End
Memory Usage %: 83.60 84.67
% SQL with executions>1: 97.49 97.19
% Memory for SQL w/exec>1: 97.10 96.67
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 4,503 32.2
RFS ping 168 4,275 25449 30.6 Other
log file sequential read 183,537 4,173 23 29.8 System I/O
SQL*Net more data from client 21,371 1,009 47 7.2 Network
RFS write 25,438 343 13 2.5 System I/O
RAC Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
Begin End
Number of Instances: 3 3
Global Cache Load Profile
~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
Global Cache blocks received: 0.78 0.03
Global Cache blocks served: 1.18 0.05
GCS/GES messages received: 131.69 5.39
GCS/GES messages sent: 139.26 5.70
DBWR Fusion writes: 0.06 0.00
Estd Interconnect traffic (KB) 68.60
Global Cache Efficiency Percentages (Target local+remote 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 99.91
Buffer access - remote cache %: 0.01
Buffer access - disk %: 0.08
Global Cache and Enqueue Services - Workload Characteristics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.5
Avg global cache cr block receive time (ms): 0.9
Avg global cache current block receive time (ms): 1.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.1
Global cache log flushes for cr blocks served %: 2.9
Avg global cache cr block flush time (ms): 4.6
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.1
Global cache log flushes for current blocks served %: 0.1
Avg global cache current block flush time (ms): 5.0
Global Cache and Enqueue Services - Messaging Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 0.6
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.1
% of direct sent messages: 31.57
% of indirect sent messages: 5.17
% of flow controlled messages: 63.26
Time Model Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> Total time in database user-calls (DB Time): 13984.6s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 7,270.6 52.0
DB CPU 4,503.1 32.2
parse time elapsed 506.7 3.6
hard parse elapsed time 497.8 3.6
sequence load elapsed time 152.4 1.1
failed parse elapsed time 19.5 .1
repeated bind elapsed time 3.4 .0
PL/SQL execution elapsed time 0.7 .0
hard parse (sharing criteria) elapsed time 0.3 .0
connection management call elapsed time 0.3 .0
hard parse (bind mismatch) elapsed time 0.0 .0
DB time 13,984.6 N/A
background elapsed time 869.1 N/A
background cpu time 276.6 N/A
Wait Class DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 529,934 .0 4,980 9 3.0
Other 582,349 37.4 4,611 8 3.3
Network 279,858 .0 1,009 4 1.6
User I/O 54,899 .0 317 6 0.3
Concurrency 136,907 .1 58 0 0.8
Cluster 60,300 .0 41 1 0.3
Commit 80 .0 10 130 0.0
Application 6,707 .0 3 0 0.0
Configuration 17,528 98.5 1 0 0.1
Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
RFS ping 168 .0 4,275 25449 0.0
log file sequential read 183,537 .0 4,173 23 1.0
SQL*Net more data from clien 21,371 .0 1,009 47 0.1
RFS write 25,438 .0 343 13 0.1
db file sequential read 54,680 .0 316 6 0.3
DFS lock handle 97,149 .0 214 2 0.5
log file parallel write 104,808 .0 157 2 0.6
db file parallel write 143,905 .0 149 1 0.8
RFS random i/o 25,438 .0 86 3 0.1
RFS dispatch 25,610 .0 56 2 0.1
control file sequential read 39,309 .0 55 1 0.2
row cache lock 130,665 .0 47 0 0.7
gc current grant 2-way 35,498 .0 23 1 0.2
wait for scn ack 50,872 .0 20 0 0.3
enq: WL - contention 6,156 .0 14 2 0.0
gc cr grant 2-way 16,917 .0 11 1 0.1
log file sync 80 .0 10 130 0.0
Log archive I/O 3,986 .0 9 2 0.0
control file parallel write 3,493 .0 8 2 0.0
latch free 2,356 .0 6 2 0.0
ksxr poll remote instances 278,473 49.4 6 0 1.6
enq: XR - database force log 2,890 .0 4 1 0.0
enq: TX - index contention 325 .0 3 11 0.0
buffer busy waits 4,371 .0 3 1 0.0
gc current block 2-way 3,002 .0 3 1 0.0
LGWR wait for redo copy 9,601 .2 2 0 0.1
SQL*Net break/reset to clien 6,438 .0 2 0 0.0
latch: ges resource hash lis 23,223 .0 2 0 0.1
enq: WF - contention 32 6.3 2 62 0.0
enq: FB - contention 660 .0 2 2 0.0
enq: PS - contention 1,088 .0 2 1 0.0
library cache lock 869 .0 1 2 0.0
enq: CF - contention 671 .1 1 2 0.0
gc current grant busy 1,488 .0 1 1 0.0
gc current multi block reque 1,072 .0 1 1 0.0
reliable message 618 .0 1 2 0.0
CGS wait for IPC msg 62,402 100.0 1 0 0.4
gc current block 3-way 998 .0 1 1 0.0
name-service call wait 18 .0 1 57 0.0
cursor: pin S wait on X 78 100.0 1 11 0.0
os thread startup 16 .0 1 53 0.0
enq: RO - fast object reuse 193 .0 1 3 0.0
IPC send completion sync 652 99.2 1 1 0.0
local write wait 194 .0 1 3 0.0
gc cr block 2-way 534 .0 0 1 0.0
log file switch completion 17 .0 0 20 0.0
SQL*Net message to client 258,483 .0 0 0 1.5
undo segment extension 17,282 99.9 0 0 0.1
gc cr block 3-way 286 .7 0 1 0.0
enq: TM - contention 76 .0 0 4 0.0
PX Deq: reap credit 15,246 95.6 0 0 0.1
kksfbc child completion 5 100.0 0 49 0.0
enq: TT - contention 141 .0 0 2 0.0
enq: HW - contention 203 .0 0 1 0.0
RFS create 2 .0 0 115 0.0
rdbms ipc reply 339 .0 0 1 0.0
PX Deq Credit: send blkd 452 20.1 0 0 0.0
gcs log flush sync 128 32.8 0 2 0.0
latch: cache buffers chains 128 .0 0 1 0.0
library cache pin 441 .0 0 0 0.0
Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
We only run SQL Apply on one node of the cluster, so I would expect the node running it to show much higher usage and waits. Is that what you are asking?
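One way to confirm which instance is running SQL Apply and whether it is keeping up (a sketch using the standard 10.2 logical standby views; the queries are mine, not from the thread):

```sql
-- Coordinator rows indicate SQL Apply is active on this instance
SELECT name, value
FROM   v$logstdby_stats
WHERE  name LIKE 'coordinator%';

-- Applied vs. newest SCN received from the primary;
-- a growing gap means apply is falling behind
SELECT applied_scn, newest_scn
FROM   dba_logstdby_progress;
```

If the apply node is also the one burning CPU, the pattern in the AWR report (RFS ping, log file sequential read, DFS lock handle) is largely what you would expect rather than a fault in itself.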
Larry