Log file parallel write
Hi,
on 11g R2. I have the following :
SQL> select total_waits, time_waited from v$system_event where event='log file parallel write';
TOTAL_WAITS TIME_WAITED
      74144       28100
Is it too much or not? To what should these values be compared?
Thank you.
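For reference: in V$SYSTEM_EVENT, TIME_WAITED is in centiseconds, so the figures above work out to 28100 / 74144 ≈ 0.38 cs, i.e. roughly 3.8 ms per write. A sketch of the same calculation done directly in SQL, using the microsecond column available in 11g:

```sql
-- Average 'log file parallel write' latency in milliseconds
select event,
       total_waits,
       time_waited_micro,
       round(time_waited_micro / total_waits / 1000, 2) as avg_wait_ms
from   v$system_event
where  event = 'log file parallel write';
```

As a commonly cited rule of thumb, single-digit millisecond averages for this event are considered healthy; sustained averages well above 10 ms usually point at slow redo I/O.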
Hi,
thanks to all.
Here is checkpoint frequency from my alertlog :
Wed May 9 23:01:14 2012
Thread 1 advanced to log sequence 14 (LGWR switch)
Current log# 2 seq# 14 mem# 0: /index01/bases/MYDB/redo02.log
Thu May 10 02:14:27 2012
Thread 1 cannot allocate new log, sequence 15
Private strand flush not complete
Current log# 2 seq# 14 mem# 0: /index01/bases/MYDB/redo02.log
Thu May 10 02:14:29 2012
Thread 1 advanced to log sequence 15 (LGWR switch)
Current log# 3 seq# 15 mem# 0: /index01/bases/MYDB/redo03.log
Thu May 10 02:14:29 2012
ALTER SYSTEM ARCHIVE LOG
Thu May 10 02:14:29 2012
Thread 1 advanced to log sequence 16 (LGWR switch)
Current log# 1 seq# 16 mem# 0: /index01/bases/MYDB/redo01.log
No, I do not know the log advisor. Where is it?
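On the advisor question: from 10g onward the Redo Logfile Size Advisor surfaces through V$INSTANCE_RECOVERY; a minimal sketch:

```sql
-- OPTIMAL_LOGFILE_SIZE is reported in MB and is only populated
-- when FAST_START_MTTR_TARGET is set to a non-zero value.
select optimal_logfile_size from v$instance_recovery;
```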
Karan,
thanks for your interesting remarks.
I have usually created databases without paying enough attention to redo log file size, so I decided to verify this on the most recent of my databases. I found that query and ran it. The numbers alone were not enough for me; I wanted to understand them.
Regards.
Similar Messages
-
Wait Events "log file parallel write" / "log file sync" during CREATE INDEX
Hello guys,
at my current project I am performing some performance tests for Oracle Data Guard. The question is: "How does a LGWR SYNC transfer influence the system performance?"
To get some performance values that I can compare, I first built up a normal Oracle database.
Now I am performing different tests, like creating "large" indexes, massive parallel inserts/commits, etc., to get the benchmark.
My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
After the index is built (roughly 9 GB), I run awrrpt.sql to get the AWR report.
And now take a look at these values from the AWR report:
                                              Avg
                                 %Time  Total Wait  wait  Waits
Event                    Waits   -outs    Time (s)  (ms)   /txn
log file parallel write  10,019     .0         132    13   33.5
log file sync               293     .7           4    15    1.0
How can this be possible?
According to the documentation:
-> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
Wait Time: The wait time includes the writing of the log buffer and the post.
-> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk.
This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
Is the behavior of the log file sync/write different when performing a DDL like CREATE INDEX (maybe async .. like you can influence it with the initialization parameter COMMIT_WRITE??)?
Do you have any idea how these values come about?
Any thoughts/ideas are welcome.
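On the COMMIT_WRITE point raised above: in 10.2 the commit behaviour can be relaxed per session, which removes the 'log file sync' wait at the cost of durability if the instance fails before LGWR flushes; a sketch:

```sql
-- Batch the redo and do not wait for LGWR confirmation on commit
alter session set commit_write = 'BATCH,NOWAIT';
```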
Thanks and Regards
Surachart Opun (HunterX) wrote:
Thank you for the nice idea.
In this case, how can we reduce the "log file parallel write" and "log file sync" waited time?
CREATE INDEX with NOLOGGING
A NOLOGGING can help, can't it?
Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
Two points on nologging, though:
<ul>
it's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations had completed.
If the database, or that tablespace, is in +"force logging"+ mode, the nologging will not work.
</ul>
Don't get too alarmed by the waits, though. My guess is that the +"log file sync"+ waits are mostly from other sessions, and since there aren't many of them the other sessions are probably not seeing a performance issue. The +"log file parallel write"+ waits are caused by your create index, but they are happening to lgwr in the background, which runs concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the logwriter includes their (little) writes with your next (large) write.
There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files generated by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
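A sketch of the checks implied by the two NOLOGGING caveats above (the index and table names are placeholders):

```sql
-- NOLOGGING is silently ignored when force logging is on
select force_logging from v$database;
select tablespace_name, force_logging from dba_tablespaces;

-- Hypothetical nologging index build
create index big_tab_ix on big_tab (col1) nologging;
```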
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Log file sync vs log file parallel write probably not bug 2669566
This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
Version : 9.2.0.8
Platform : Solaris
Application : Oracle Apps
The number of commits per second ranges between 10 and 30.
When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
Below just 2 samples where the ratio is even about 20.
snap_time            log file parallel write avg  log file sync avg  ratio
11/05/2008 10:38:26                        8,142            156,343  19.20
11/05/2008 10:08:23                        8,434            201,915  23.94
So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
And I think that it proves that I am NOT hitting this bug.
Below is a sample of the output for the log writer.
-- End of snap 3
HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
When adding the DELTA/SEC values (which are in microseconds) for the wait events, the total always comes to roughly a million microseconds.
In the example above: 781036 + 210432 = 991468 microseconds.
This is the case for all the snaps taken by snapper.
So I think that the wait time for the ‘log file parallel write time’ must be more or less correct.
So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
Any clues?
Yes, that is true!
But that is the way I calculate the average wait time: total wait time / total waits.
So the average wait time per wait for the 'log file sync' event should be near the wait time for the 'log file parallel write' event.
I use the query below:
select snap_id
, snap_time
, event
, time_waited_micro
, (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
, total_waits
, (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
, trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
from (
select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
row_number() over (partition by event order by sn.snap_id) r
from perfstat.stats$system_event se, perfstat.stats$snapshot sn
where se.SNAP_ID = sn.SNAP_ID
and se.EVENT = 'log file sync'
order by snap_id, event
       )
where time_waited_micro - p_time_waited_micro > 0
order by snap_id desc; -
Performance issues - Log file parallel write
Hi there,
Since a few months I have big performance issues with my Oracle 11.2.0.1.0.
If I look in Enterprise Manager (in blocking sessions) I see a lot of "log file parallel write" and a lot of "log file sync" waits.
We have configured an active data guard environment and are using ASM.
We are not stressing the database with heavy queries or commits or anything like that, but sometimes this happens during the day at no particular times...
We've investigated everything (performance to the SAN, heavy queries, Oracle problems, etc.) and we really don't know what to do anymore, so I thought: let's try a post on the forum...
Perhaps someone has seen similar things?
Thanks,
BR
Mark
mwevromans wrote:
See below a tail of the alert log.
Tue Apr 24 15:12:17 2012
Thread 1 cannot allocate new log, sequence 194085
Checkpoint not complete
Current log# 1 seq# 194084 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
Current log# 1 seq# 194084 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
LGWR: Standby redo logfile selected to archive thread 1 sequence 194085
LGWR: Standby redo logfile selected for thread 1 sequence 194085 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 194085 (LGWR switch)
Current log# 2 seq# 194085 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
Current log# 2 seq# 194085 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
Tue Apr 24 15:12:21 2012
Archived Log entry 388061 added for thread 1 sequence 194084 ID 0x90d7aa62 dest 1:
Tue Apr 24 15:14:09 2012
Thread 1 cannot allocate new log, sequence 194086
Checkpoint not complete
Current log# 2 seq# 194085 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
Current log# 2 seq# 194085 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
LGWR: Standby redo logfile selected to archive thread 1 sequence 194086
LGWR: Standby redo logfile selected for thread 1 sequence 194086 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 194086 (LGWR switch)
Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
Tue Apr 24 15:14:14 2012
Archived Log entry 388063 added for thread 1 sequence 194085 ID 0x90d7aa62 dest 1:
Tue Apr 24 15:16:46 2012
Thread 1 cannot allocate new log, sequence 194087
Checkpoint not complete
Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
Thread 1 cannot allocate new log, sequence 194087
Private strand flush not complete
Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
LGWR: Standby redo logfile selected to archive thread 1 sequence 194087
LGWR: Standby redo logfile selected for thread 1 sequence 194087 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 194087 (LGWR switch)
Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
Tue Apr 24 15:16:54 2012
Archived Log entry 388065 added for thread 1 sequence 194086 ID 0x90d7aa62 dest 1:
Tue Apr 24 15:18:59 2012
Thread 1 cannot allocate new log, sequence 194088
Checkpoint not complete
Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
Thread 1 cannot allocate new log, sequence 194088
Private strand flush not complete
Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
LGWR: Standby redo logfile selected to archive thread 1 sequence 194088
LGWR: Standby redo logfile selected for thread 1 sequence 194088 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 194088 (LGWR switch)
Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
Tue Apr 24 15:19:06 2012
Archived Log entry 388067 added for thread 1 sequence 194087 ID 0x90d7aa62 dest 1:
Tue Apr 24 15:22:00 2012
Thread 1 cannot allocate new log, sequence 194089
Checkpoint not complete
Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
Thread 1 cannot allocate new log, sequence 194089
Private strand flush not complete
Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
LGWR: Standby redo logfile selected to archive thread 1 sequence 194089
LGWR: Standby redo logfile selected for thread 1 sequence 194089 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 194089 (LGWR switch)
Current log# 3 seq# 194089 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
Current log# 3 seq# 194089 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
Tue Apr 24 15:19:06 2012
Archived Log entry 388069 added for thread 1 sequence 194088 ID 0x90d7aa62 dest 1:
Hi
1st switch time ==> Tue Apr 24 15:18:59 2012
2nd switch time ==> Tue Apr 24 15:19:06 2012
3rd switch time ==> Tue Apr 24 15:19:06 2012
Redo log file switching has a real impact on the performance of the database; frequent log switches may lead to slowness. Oracle documentation suggests resizing the redo log files so that log switches happen more like every 15-30 minutes (roughly, depending on the architecture and recovery requirements).
As I check the alert log file, I find that the logs are switching very frequently, which is one of the reasons you are getting the "Checkpoint not complete" message. I have faced this issue many times, and I generally increase the size of the logfiles and set the ARCHIVE_LAG_TARGET parameter, as I have suggested above. If you want to dig further into the root cause and more details, then the guys above will help you more, because I don't have much experience in database tuning. If you are looking for a workaround, then you should go through it.
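The resize-and-lag-target advice above could be sketched as follows (the group number, path, and sizes are placeholder assumptions for this environment):

```sql
-- Size the logs so switches happen every 15-30 minutes at peak redo rate
alter database add logfile thread 1 group 4
  ('/u01/oradata/MYDB/redo04.log') size 1g;

-- Force a switch at most every 30 minutes even when a log does not fill
alter system set archive_lag_target = 1800 scope = both;
```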
Good Luck
--neeraj -
'log file sync' versus 'log file parallel write'
I have been asked to run an artificial test that performs a large number of small insert-only transactions with a high degree (200) of parallelism. The COMMITs were not inside a PL/SQL loop, so a 'log file sync' (LFS) event occurred on each COMMIT. I have measured the average 'log file parallel write' (LFPW) time by running the following PL/SQL queries at the beginning and end of a 10 second period:
SELECT time_waited,
total_waits
INTO wait_start_lgwr,
wait_start_lgwr_c
FROM v$system_event e
WHERE event LIKE 'log%parallel%';
SELECT time_waited,
total_waits
INTO wait_end_lgwr,
wait_end_lgwr_c
FROM v$system_event e
WHERE event LIKE 'log%parallel%';
I took the difference in TIME_WAITED and divided it by the difference in TOTAL_WAITS.
I did the same thing for LFS.
What I expected was that the LFS time would be just over 50% more than the LFPW time: when the thread commits it has to wait for the previous LFPW to complete (on average half way through) and then for its own.
Now I know there is a lot of CPU related stuff that goes on in LGWR but I 'reniced' it to a higher priority and could observe that it was then spending 90% of its time in LFPW, 10% ON CPU and no time idle. Total system CPU time averaged only 25% on this 64 'processor' machine.
What I saw was that the LFS time was substantially more than the LFPW time. For example, on one test LFS was 18.07ms and LFPW was 6.56ms.
When I divided the number of bytes written each time by the average 'commit size' it seems that LGWR is writing out data for only about one third of the average number of transactions in LFS state (rather than the two thirds that I would have expected). When the COMMIT was changed to COMMIT WORK NOWAIT the size of each LFPW increased substantially.
These observations are at odds with my understanding of how LGWR works. My understanding is that when LGWR completes one LFPW it begins a new one with the entire contents of the log buffer at that time.
Can anybody tell me what I am missing?
P.S. Same results in database versions 10.2 Sun M5000 and 11.2 HP G7s. -
Hi,
From my statspack report one of the top wait events is control file parallel write.
Event                        Wait Time  Percentage  Avg. wait
control file parallel write     11,000       3.61%      11.68
How can I tune the control file parallel write event?
Right now for this instance I have control file multiplexed onto
3 different drives L. M. N
Thanks
If you are doing excessive log switches, you could be generating too many checkpoints. See how many log switches you have done in v$loghist. It is also possible to reduce the number of writes to the log files by adding the /*+ APPEND */ hint and "nologging" to insert statements, reducing the amount of log files filled.
I have also combined update and delete statements to generate fewer writes to the log files.
You can recreate the log files larger with :
alter database drop logfile group 3;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '/oracle/oradata/CURACAO9/redo03.log' size 500m reuse;
ALTER DATABASE ADD LOGFILE member '/oracle/oradata/CURACAO9/redo03b.log' reuse to GROUP 3;
alter system switch logfile;
alter system checkpoint;
alter database drop logfile group 2; -- and do group 2 etc. -
Log file sequential read and RFS ping/write - among Top 5 event
I have situation here to discuss. In a 3-node RAC setup which is Logical standby DB; one node is showing high CPU utilization around 40~50%. The CPU utilization was less than 20% 10 days back but from 9th oldest day it jumped and consistently shows the double figure. I ran AWR reports on all three nodes and found one node with high CPU utilization and shows below tops events-
Event                          Waits  Time(s)  Avg Wait(ms)  %Total Call Time  Wait Class
CPU time                                5,802                            34.9
RFS ping                          15    5,118        33,671              30.8  Other
log file sequential read     234,831    5,036            21              30.3  System I/O
SQL*Net more data from client 24,171    1,087            45               6.5  Network
db file sequential read      130,939      453             3               2.7  User I/O
Findings:-
On the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
Environment :- (Oracle- 10.2.0.4.0, O/S - AIX .3)
1) The other node's AWR shows "log file sync" - is it due to an oversized log buffer?
2) Network wait events can be reduced by tweaking the SDU & TDU values based on the MTU.
3) Why are the ARCH processes taking so long to archive the filled redo logs; is it an issue with slow disk I/O?
Regards
WORKLOAD REPOSITORY report for
DB Name DB Id Instance Inst Num Release RAC Host
XXXPDB 4123595889 XXX2p2 2 10.2.0.4.0 YES sipd207
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 1053 04-Apr-11 18:00:02 59 7.4
End Snap: 1055 04-Apr-11 20:00:35 56 7.5
Elapsed: 120.55 (mins)
DB Time: 233.08 (mins)
Cache Sizes
~~~~~~~~~~~ Begin End
Buffer Cache: 3,728M 3,728M Std Block Size: 8K
Shared Pool Size: 4,080M 4,080M Log Buffer: 14,332K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 245,392.33 10,042.66
Logical reads: 9,080.80 371.63
Block changes: 1,518.12 62.13
Physical reads: 7.50 0.31
Physical writes: 44.00 1.80
User calls: 36.44 1.49
Parses: 25.84 1.06
Hard parses: 0.59 0.02
Sorts: 12.06 0.49
Logons: 0.05 0.00
Executes: 295.91 12.11
Transactions: 24.43
% Blocks changed per Read: 16.72 Recursive Call %: 94.18
Rollback per transaction %: 4.15 Rows per Sort: 53.31
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.92 In-memory Sort %: 100.00
Library Hit %: 99.83 Soft Parse %: 97.71
Execute to Parse %: 91.27 Latch Hit %: 99.79
Parse CPU to Parse Elapsd %: 15.69 % Non-Parse CPU: 99.95
Shared Pool Statistics Begin End
Memory Usage %: 83.60 84.67
% SQL with executions>1: 97.49 97.19
% Memory for SQL w/exec>1: 97.10 96.67
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time Wait Class
CPU time 4,503 32.2
RFS ping 168 4,275 25449 30.6 Other
log file sequential read 183,537 4,173 23 29.8 System I/O
SQL*Net more data from client 21,371 1,009 47 7.2 Network
RFS write 25,438 343 13 2.5 System I/O
RAC Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
Begin End
Number of Instances: 3 3
Global Cache Load Profile
~~~~~~~~~~~~~~~~~~~~~~~~~ Per Second Per Transaction
Global Cache blocks received: 0.78 0.03
Global Cache blocks served: 1.18 0.05
GCS/GES messages received: 131.69 5.39
GCS/GES messages sent: 139.26 5.70
DBWR Fusion writes: 0.06 0.00
Estd Interconnect traffic (KB) 68.60
Global Cache Efficiency Percentages (Target local+remote 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer access - local cache %: 99.91
Buffer access - remote cache %: 0.01
Buffer access - disk %: 0.08
Global Cache and Enqueue Services - Workload Characteristics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg global enqueue get time (ms): 0.5
Avg global cache cr block receive time (ms): 0.9
Avg global cache current block receive time (ms): 1.0
Avg global cache cr block build time (ms): 0.0
Avg global cache cr block send time (ms): 0.1
Global cache log flushes for cr blocks served %: 2.9
Avg global cache cr block flush time (ms): 4.6
Avg global cache current block pin time (ms): 0.0
Avg global cache current block send time (ms): 0.1
Global cache log flushes for current blocks served %: 0.1
Avg global cache current block flush time (ms): 5.0
Global Cache and Enqueue Services - Messaging Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg message sent queue time (ms): 0.1
Avg message sent queue time on ksxp (ms): 0.6
Avg message received queue time (ms): 0.0
Avg GCS message process time (ms): 0.0
Avg GES message process time (ms): 0.1
% of direct sent messages: 31.57
% of indirect sent messages: 5.17
% of flow controlled messages: 63.26
Time Model Statistics DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> Total time in database user-calls (DB Time): 13984.6s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 7,270.6 52.0
DB CPU 4,503.1 32.2
parse time elapsed 506.7 3.6
hard parse elapsed time 497.8 3.6
sequence load elapsed time 152.4 1.1
failed parse elapsed time 19.5 .1
repeated bind elapsed time 3.4 .0
PL/SQL execution elapsed time 0.7 .0
hard parse (sharing criteria) elapsed time 0.3 .0
connection management call elapsed time 0.3 .0
hard parse (bind mismatch) elapsed time 0.0 .0
DB time 13,984.6 N/A
background elapsed time 869.1 N/A
background cpu time 276.6 N/A
Wait Class DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc
Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
System I/O 529,934 .0 4,980 9 3.0
Other 582,349 37.4 4,611 8 3.3
Network 279,858 .0 1,009 4 1.6
User I/O 54,899 .0 317 6 0.3
Concurrency 136,907 .1 58 0 0.8
Cluster 60,300 .0 41 1 0.3
Commit 80 .0 10 130 0.0
Application 6,707 .0 3 0 0.0
Configuration 17,528 98.5 1 0 0.1
Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
RFS ping 168 .0 4,275 25449 0.0
log file sequential read 183,537 .0 4,173 23 1.0
SQL*Net more data from clien 21,371 .0 1,009 47 0.1
RFS write 25,438 .0 343 13 0.1
db file sequential read 54,680 .0 316 6 0.3
DFS lock handle 97,149 .0 214 2 0.5
log file parallel write 104,808 .0 157 2 0.6
db file parallel write 143,905 .0 149 1 0.8
RFS random i/o 25,438 .0 86 3 0.1
RFS dispatch 25,610 .0 56 2 0.1
control file sequential read 39,309 .0 55 1 0.2
row cache lock 130,665 .0 47 0 0.7
gc current grant 2-way 35,498 .0 23 1 0.2
wait for scn ack 50,872 .0 20 0 0.3
enq: WL - contention 6,156 .0 14 2 0.0
gc cr grant 2-way 16,917 .0 11 1 0.1
log file sync 80 .0 10 130 0.0
Log archive I/O 3,986 .0 9 2 0.0
control file parallel write 3,493 .0 8 2 0.0
latch free 2,356 .0 6 2 0.0
ksxr poll remote instances 278,473 49.4 6 0 1.6
enq: XR - database force log 2,890 .0 4 1 0.0
enq: TX - index contention 325 .0 3 11 0.0
buffer busy waits 4,371 .0 3 1 0.0
gc current block 2-way 3,002 .0 3 1 0.0
LGWR wait for redo copy 9,601 .2 2 0 0.1
SQL*Net break/reset to clien 6,438 .0 2 0 0.0
latch: ges resource hash lis 23,223 .0 2 0 0.1
enq: WF - contention 32 6.3 2 62 0.0
enq: FB - contention 660 .0 2 2 0.0
enq: PS - contention 1,088 .0 2 1 0.0
library cache lock 869 .0 1 2 0.0
enq: CF - contention 671 .1 1 2 0.0
gc current grant busy 1,488 .0 1 1 0.0
gc current multi block reque 1,072 .0 1 1 0.0
reliable message 618 .0 1 2 0.0
CGS wait for IPC msg 62,402 100.0 1 0 0.4
gc current block 3-way 998 .0 1 1 0.0
name-service call wait 18 .0 1 57 0.0
cursor: pin S wait on X 78 100.0 1 11 0.0
os thread startup 16 .0 1 53 0.0
enq: RO - fast object reuse 193 .0 1 3 0.0
IPC send completion sync 652 99.2 1 1 0.0
local write wait 194 .0 1 3 0.0
gc cr block 2-way 534 .0 0 1 0.0
log file switch completion 17 .0 0 20 0.0
SQL*Net message to client 258,483 .0 0 0 1.5
undo segment extension 17,282 99.9 0 0 0.1
gc cr block 3-way 286 .7 0 1 0.0
enq: TM - contention 76 .0 0 4 0.0
PX Deq: reap credit 15,246 95.6 0 0 0.1
kksfbc child completion 5 100.0 0 49 0.0
enq: TT - contention 141 .0 0 2 0.0
enq: HW - contention 203 .0 0 1 0.0
RFS create 2 .0 0 115 0.0
rdbms ipc reply 339 .0 0 1 0.0
PX Deq Credit: send blkd 452 20.1 0 0 0.0
gcs log flush sync 128 32.8 0 2 0.0
latch: cache buffers chains 128 .0 0 1 0.0
library cache pin 441 .0 0 0 0.0
Wait Events DB/Inst: UDAS2PDB/udas2p2 Snaps: 1053-1055
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
We only apply on one node in a cluster, so I would expect that the node running SQL Apply would have much higher usage and waits. Is this what you are asking?
Larry -
Performance Issue: Wait event "log file sync" and "Execute to Parse %"
In one of our test environments users are complaining about slow response.
In statspack report folowing are the top-5 wait events
Event Waits Time (cs) Wt Time
log file parallel write 1,046 988 37.71
log file sync 775 774 29.54
db file scattered read 4,946 248 9.47
db file parallel write 66 248 9.47
control file parallel write 188 152 5.80
And after running the same application 4 times, we are getting Execute to Parse % = 0.10. Cursor sharing is forced and query rewrite is enabled.
When I view v$sql, the following command is parsed frequently:
EXECUTIONS PARSE_CALLS
SQL_TEXT
93380 93380
select SEQ_ORDO_PRC.nextval from DUAL
Please suggest what the method to troubleshoot this should be, and whether I need to check some more information.
Regards,
Sudhanshu Bhandari
Well, of course, you probably can't eliminate this sort of thing entirely: a setup such as yours is inevitably a compromise. What you can do is make sure your log buffer is a good size (say 10MB or so); that your redo logs are large (at least 100MB each, and preferably large enough to hold one hour or so of redo produced at the busiest time for your database without filling up); and finally set ARCHIVE_LAG_TARGET to something like 1800 seconds or more to ensure a regular, routine, predictable log switch.
It won't cure every ill, but that sort of setup often means the redo subsystem ceases to be a regular driver of foreground waits. -
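Separately from the redo sizing, the heavily parsed SEQ_ORDO_PRC fetch shown earlier is often mitigated by enlarging the sequence cache, so that NEXTVAL rarely needs a recursive call to the data dictionary; the cache value here is an arbitrary example:

```sql
alter sequence SEQ_ORDO_PRC cache 1000;
```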
Log file sync top event during performance test -av 36ms
Hi,
During the performance test for our product before deployment into production, I see "log file sync" on top, with an Avg wait (ms) of 36, which I feel is too high.
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 208,327 7,406 36 46.6 Commit
direct path write 646,833 3,604 6 22.7 User I/O
DB CPU 1,599 10.1
direct path read temp 1,321,596 619 0 3.9 User I/O
log buffer space 4,161 558 134 3.5 Configurat
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
I am not able to figure out why "log file sync" is having such slow response.
Below is the snapshot from the load profile.
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108127 16-May-13 20:15:22 105 6.5
End Snap: 108140 16-May-13 23:30:29 156 8.9
Elapsed: 195.11 (mins)
DB Time: 265.09 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,136M Std Block Size: 8K
Shared Pool Size: 1,120M 1,168M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 1.4 0.1 0.02 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 607,512.1 33,092.1
Logical reads: 3,900.4 212.5
Block changes: 1,381.4 75.3
Physical reads: 134.5 7.3
Physical writes: 134.0 7.3
User calls: 145.5 7.9
Parses: 24.6 1.3
Hard parses: 7.9 0.4
W/A MB processed: 915,418.7 49,864.2
Logons: 0.1 0.0
Executes: 85.2 4.6
Rollbacks: 0.0 0.0
Transactions: 18.4
Some of the top background wait events:
^LBackground Wait Events DB/Inst: Snaps: 108127-108140
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 208,563 0 2,528 12 1.0 66.4
db file parallel write 4,264 0 785 184 0.0 20.6
Backup: sbtbackup 1 0 516 516177 0.0 13.6
control file parallel writ 4,436 0 97 22 0.0 2.6
log file sequential read 6,922 0 95 14 0.0 2.5
Log archive I/O 6,820 0 48 7 0.0 1.3
os thread startup 432 0 26 60 0.0 .7
Backup: sbtclose2 1 0 10 10094 0.0 .3
db file sequential read 2,585 0 8 3 0.0 .2
db file single write 560 0 3 6 0.0 .1
log file sync 28 0 1 53 0.0 .0
control file sequential re 36,326 0 1 0 0.2 .0
log file switch completion 4 0 1 207 0.0 .0
buffer busy waits 5 0 1 116 0.0 .0
LGWR wait for redo copy 924 0 1 1 0.0 .0
log file single write 56 0 1 9 0.0 .0
Backup: sbtinfo2 1 0 1 500 0.0 .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
{code}
Workload Comparison
~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
DB time: 0.78 1.36 74.36 0.02 0.07 250.00
CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
Parses: 7.28 24.55 237.23 0.19 1.34 605.26
Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
Transactions: 37.99 18.36 -51.67
First Second Diff
1st 2nd
Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time(ms) %DB time
SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6 35.6 46.57
CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7 5.6 22.66
log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4 12.1 15.90
log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3 N/A 10.06
SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 184.0 4.93
-direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7 0.0 1.76
-db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6 0.0 0.41
{code}
*To sum it up:
1. Why is the IO response taking such a hit during the new perf test? Please suggest.*
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer as the number of CPUs on the host is only 4.
{code}
select *from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for HPUX: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
{code}
Please let me know if you would like to see any other stats.
Edited by: Kunwar on May 18, 2013 2:20 PM
1. A snapshot interval of 3 hours always generates meaningless results
Below are some details from the 1 hour interval AWR report.
Platform CPUs Cores Sockets Memory(GB)
HP-UX IA (64-bit) 4 4 3 31.95
Snap Id Snap Time Sessions Curs/Sess
Begin Snap: 108129 16-May-13 20:45:32 140 8.0
End Snap: 108133 16-May-13 21:45:53 150 8.8
Elapsed: 60.35 (mins)
DB Time: 140.49 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 1,168M 1,168M Std Block Size: 8K
Shared Pool Size: 1,120M 1,120M Log Buffer: 16,640K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 2.3 0.1 0.03 0.01
DB CPU(s): 0.1 0.0 0.00 0.00
Redo size: 719,553.5 34,374.6
Logical reads: 4,017.4 191.9
Block changes: 1,521.1 72.7
Physical reads: 136.9 6.5
Physical writes: 158.3 7.6
User calls: 167.0 8.0
Parses: 25.8 1.2
Hard parses: 8.9 0.4
W/A MB processed: 406,220.0 19,406.0
Logons: 0.1 0.0
Executes: 88.4 4.2
Rollbacks: 0.0 0.0
Transactions: 20.9
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg
wait % DB
Event Waits Time(s) (ms) time Wait Class
log file sync 73,761 6,740 91 80.0 Commit
log buffer space 3,581 541 151 6.4 Configurat
DB CPU 348 4.1
direct path write 238,962 241 1 2.9 User I/O
direct path read temp 487,874 174 0 2.1 User I/O
Background Wait Events DB/Inst: Snaps: 108129-108133
-> ordered by wait time desc, waits desc (idle events last)
-> Only events with Total Wait Time (s) >= .001 are shown
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
Avg
%Time Total Wait wait Waits % bg
Event Waits -outs Time (s) (ms) /txn time
log file parallel write 61,049 0 1,891 31 0.8 87.8
db file parallel write 1,590 0 251 158 0.0 11.6
control file parallel writ 1,372 0 56 41 0.0 2.6
log file sequential read 2,473 0 50 20 0.0 2.3
Log archive I/O 2,436 0 20 8 0.0 .9
os thread startup 135 0 8 60 0.0 .4
db file sequential read 668 0 4 6 0.0 .2
db file single write 200 0 2 9 0.0 .1
log file sync 8 0 1 152 0.0 .1
log file single write 20 0 0 21 0.0 .0
control file sequential re 11,218 0 0 0 0.1 .0
buffer busy waits 2 0 0 161 0.0 .0
direct path write 6 0 0 37 0.0 .0
LGWR wait for redo copy 380 0 0 0 0.0 .0
log buffer space 1 0 0 89 0.0 .0
latch: cache buffers lru c 3 0 0 1 0.0 .0
2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
Thanks for the explanation. +Actually my question is WHY it is so slow (avg wait of 91 ms).+
3. Your IO subsystem hosting the online redo log files can be a limiting factor.
We don't know anything about your online redo log configuration
Below is my redo log configuration.
GROUP# STATUS TYPE MEMBER IS_
1 ONLINE /oradata/fs01/PERFDB1/redo_1a.log NO
1 ONLINE /oradata/fs02/PERFDB1/redo_1b.log NO
2 ONLINE /oradata/fs01/PERFDB1/redo_2a.log NO
2 ONLINE /oradata/fs02/PERFDB1/redo_2b.log NO
3 ONLINE /oradata/fs01/PERFDB1/redo_3a.log NO
3 ONLINE /oradata/fs02/PERFDB1/redo_3b.log NO
6 rows selected.
04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
04:13:26 perf_monitor@PERFDB1> select *from v$log;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIME
1 1 40689 524288000 2 YES INACTIVE 13026185905545 18-MAY-13 01:00
2 1 40690 524288000 2 YES INACTIVE 13026185931010 18-MAY-13 03:32
3 1 40691 524288000 2 NO CURRENT 13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM -
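Given the 500MB logs shown in v$log, the switch rate over time is the other half of the sizing question; it can be checked from v$log_history. A minimal sketch:

```sql
-- Log switches per hour for the last day. A sustained rate of
-- more than a handful of switches per hour usually suggests
-- the online redo logs are undersized for the workload.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;
```

This also shows whether switches cluster around the perf-test window or are spread evenly, which helps separate sizing problems from burst load.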
45 min long session of log file sync waits between 5000 and 20000 ms
Encountering a rather unusual performance issue. Once every 4 hours I am seeing a 45 minute long log file sync wait event being reported using Spotlight on Oracle. For the first 30 minutes the event wait is approx 5000 ms, followed by an increase to around 20000 ms for the next 15 min, before rapidly dropping off; normal operation continues for the next 3 hours and 15 minutes before the cycle repeats itself. The issue appears to maintain its schedule independently of restarting the database. Statspack reports do not show an increase in commits or executions, or any new SQL running during the time the issue is occurring. We have two production environments both running identical applications with similar usage, and we do not see the issue on the other system. I am leaning towards this being a hardware issue, but the 4 hour interval regardless of load on the database has me baffled. If it were a disk or controller cache issue one would expect to see the interval change with database load.
I cycle my redo logs and archive them just fine, with log file switches every 15-20 minutes. Even during this unusually long and high session of log file sync waits I can see that the redo log files are still switching and are being archived.
The redo logs are on a RAID 10, we have 4 redo logs at 1 GB each.
I've run statspack reports on hourly intervals around this event:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 756,729 2,538,034 88.47
db file sequential read 208,851 153,276 5.34
log file parallel write 636,648 129,981 4.53
enqueue 810 21,423 .75
log file sequential read 65,540 14,480 .50
And here is a sample while not encountering the issue:
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 953,037 195,513 53.43
log file parallel write 875,783 83,119 22.72
db file sequential read 221,815 63,944 17.48
log file sequential read 98,310 18,848 5.15
db file scattered read 67,584 2,427 .66
Yes, I know I am already tight on I/O for my redo even during normal operations, yet my redo and archiving work just fine for 3 hours and 15 minutes (11 to 15 log file switches). These normal switches result in a log file sync wait of about 5000 ms for about 45 seconds while the 1GB redo log is being written and then archived.
I welcome any and all feedback.
Message was edited by:
acyoung1
Lee,
log_buffer = 1048576 we use a standard of 1 MB for our log buffer; we've not altered the setting. It is my understanding that Oracle typically recommends that you not exceed 1MB for the log_buffer, stating that a larger buffer normally does not increase performance.
I would agree that tuning the log_buffer parameter may be a place to consider; however, this issue last for ~45 minutes once every 4 hours regardless of database load. So for 3 hours and 15 minutes during both peak usage and low usage the buffer cache, redo log and archival processes run just fine.
A bit more information from statspack reports:
Here is a sample while the issue is occurring.
Snap Id Snap Time Sessions
Begin Snap: 661 24-Mar-06 12:45:08 87
End Snap: 671 24-Mar-06 13:41:29 87
Elapsed: 56.35 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 196608 log_buffer: 1048576
db_block_size: 8192 shared_pool_size: 67108864
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 615,141.44 2,780.83
Logical reads: 13,241.59 59.86
Block changes: 2,255.51 10.20
Physical reads: 144.56 0.65
Physical writes: 61.56 0.28
User calls: 1,318.50 5.96
Parses: 210.25 0.95
Hard parses: 8.31 0.04
Sorts: 16.97 0.08
Logons: 0.14 0.00
Executes: 574.32 2.60
Transactions: 221.21
% Blocks changed per Read: 17.03 Recursive Call %: 26.09
Rollback per transaction %: 0.03 Rows per Sort: 46.87
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 98.91 In-memory Sort %: 100.00
Library Hit %: 98.89 Soft Parse %: 96.05
Execute to Parse %: 63.39 Latch Hit %: 99.87
Parse CPU to Parse Elapsd %: 90.05 % Non-Parse CPU: 85.05
Shared Pool Statistics Begin End
Memory Usage %: 89.96 92.20
% SQL with executions>1: 76.39 67.76
% Memory for SQL w/exec>1: 72.53 63.71
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 756,729 2,538,034 88.47
db file sequential read 208,851 153,276 5.34
log file parallel write 636,648 129,981 4.53
enqueue 810 21,423 .75
log file sequential read 65,540 14,480 .50
And this is a sample during "normal" operation.
Snap Id Snap Time Sessions
Begin Snap: 671 24-Mar-06 13:41:29 88
End Snap: 681 24-Mar-06 14:42:57 88
Elapsed: 61.47 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 196608 log_buffer: 1048576
db_block_size: 8192 shared_pool_size: 67108864
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 716,776.44 2,787.81
Logical reads: 13,154.06 51.16
Block changes: 2,627.16 10.22
Physical reads: 129.47 0.50
Physical writes: 67.97 0.26
User calls: 1,493.74 5.81
Parses: 243.45 0.95
Hard parses: 9.23 0.04
Sorts: 18.27 0.07
Logons: 0.16 0.00
Executes: 664.05 2.58
Transactions: 257.11
% Blocks changed per Read: 19.97 Recursive Call %: 25.87
Rollback per transaction %: 0.02 Rows per Sort: 46.85
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.02 In-memory Sort %: 100.00
Library Hit %: 98.95 Soft Parse %: 96.21
Execute to Parse %: 63.34 Latch Hit %: 99.90
Parse CPU to Parse Elapsd %: 96.60 % Non-Parse CPU: 84.06
Shared Pool Statistics Begin End
Memory Usage %: 92.20 88.73
% SQL with executions>1: 67.76 75.40
% Memory for SQL w/exec>1: 63.71 68.28
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
log file sync 953,037 195,513 53.43
log file parallel write 875,783 83,119 22.72
db file sequential read 221,815 63,944 17.48
log file sequential read 98,310 18,848 5.15
db file scattered read 67,584 2,427 .66 -
10.2.0.2 aix 5.3 64bit archivelog mode.
I'm going to attempt to describe the system first and then outline the issue: The database is about 1Gb in size of which only about 400Mb is application data. There is only one table in the schema that is very active with all transactions inserting and or updating a row to log the user activity. The rest of the tables are used primarily for reads by the users and periodically updated by the application administrator with application code. There's about 1.2G of archive logs generated per day, from 3 50Mb redo logs all on the same filesystem.
The problem: We randomly have issues with users being kicked out of the application or hung up for a period of time. This application is used at a remote site, and many times we can attribute the users' issues to network delays or problems with a terminal server they are logging into. Today, however, they called and I noticed an abnormally high amount of 'log file sync' waits.
I asked the application admin if there could have been more activity during that time frame and more frequent commits than normal, but he says there was not. My next thought was that there might be an issue with the IO sub-system that the logs are on. So I went to our aix admin to find out the activity of that file system during that time frame. She had an nmon report generated that shows the RAID-1 disk group peak activity during that time was only 10%.
Now I took two awr reports and compared some of the metrics to see if indeed there was the same amount of activity, and it does look like the load was the same. With the same amount of activity & commits during both time periods wouldn't that lead to it being time spent waiting on writes to the disk that the redo logs are on? If so, why wouldn't the nmon report show a higher percentage of disk activity?
I can provide more values from the awr reports if needed.
per sec per trx
Redo size: 31,226.81 2,334.25
Logical reads: 646.11 48.30
Block changes: 190.80 14.26
Physical reads: 0.65 0.05
Physical writes: 3.19 0.24
User calls: 69.61 5.20
Parses: 34.34 2.57
Hard parses: 19.45 1.45
Sorts: 14.36 1.07
Logons: 0.01 0.00
Executes: 36.49 2.73
Transactions: 13.38
Redo size: 33,639.71 2,347.93
Logical reads: 697.58 48.69
Block changes: 215.83 15.06
Physical reads: 0.86 0.06
Physical writes: 3.26 0.23
User calls: 71.06 4.96
Parses: 36.78 2.57
Hard parses: 21.03 1.47
Sorts: 15.85 1.11
Logons: 0.01 0.00
Executes: 39.53 2.76
Transactions: 14.33
Total Per sec Per Trx
redo blocks written 252,046 70.52 5.27
redo buffer allocation retries 7 0.00 0.00
redo entries 167,349 46.82 3.50
redo log space requests 7 0.00 0.00
redo log space wait time 49 0.01 0.00
redo ordering marks 2,765 0.77 0.06
redo size 111,612,156 31,226.81 2,334.25
redo subscn max counts 5,443 1.52 0.11
redo synch time 47,910 13.40 1.00
redo synch writes 64,433 18.03 1.35
redo wastage 13,535,756 3,787.03 283.09
redo write time 27,642 7.73 0.58
redo writer latching time 2 0.00 0.00
redo writes 48,507 13.57 1.01
user commits 47,815 13.38 1.00
user rollbacks 0 0.00 0.00
redo blocks written 273,363 76.17 5.32
redo buffer allocation retries 6 0.00 0.00
redo entries 179,992 50.15 3.50
redo log space requests 6 0.00 0.00
redo log space wait time 18 0.01 0.00
redo ordering marks 2,997 0.84 0.06
redo size 120,725,932 33,639.71 2,347.93
redo subscn max counts 5,816 1.62 0.11
redo synch time 12,977 3.62 0.25
redo synch writes 66,985 18.67 1.30
redo wastage 14,665,132 4,086.37 285.21
redo write time 11,358 3.16 0.22
redo writer latching time 6 0.00 0.00
redo writes 52,521 14.63 1.02
user commits 51,418 14.33 1.00
user rollbacks 0 0.00 0.00
Edited by: PktAces on Oct 1, 2008 1:45 PM
Mr Lewis,
Here's the results from the histogram query, the two sets of values were gathered about 15 minutes apart, during a slower than normal activity time.
105 log file parallel write 1 714394
105 log file parallel write 2 289538
105 log file parallel write 4 279550
105 log file parallel write 8 58805
105 log file parallel write 16 28132
105 log file parallel write 32 10851
105 log file parallel write 64 3833
105 log file parallel write 128 1126
105 log file parallel write 256 316
105 log file parallel write 512 192
105 log file parallel write 1024 78
105 log file parallel write 2048 49
105 log file parallel write 4096 31
105 log file parallel write 8192 35
105 log file parallel write 16384 41
105 log file parallel write 32768 9
105 log file parallel write 65536 1
105 log file parallel write 1 722787
105 log file parallel write 2 295607
105 log file parallel write 4 284524
105 log file parallel write 8 59671
105 log file parallel write 16 28412
105 log file parallel write 32 10976
105 log file parallel write 64 3850
105 log file parallel write 128 1131
105 log file parallel write 256 316
105 log file parallel write 512 192
105 log file parallel write 1024 78
105 log file parallel write 2048 49
105 log file parallel write 4096 31
105 log file parallel write 8192 35
105 log file parallel write 16384 41
105 log file parallel write 32768 9
105 log file parallel write 65536 1 -
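Figures like the two sets above typically come from v$event_histogram (available in 10.2, the release in question); the first column looks like the event number. A minimal sketch of such a histogram query:

```sql
-- Distribution of 'log file parallel write' waits by latency
-- bucket. WAIT_TIME_MILLI is the upper bound of each bucket,
-- so rows at 8192/16384 ms are multi-second redo writes.
SELECT event#, event, wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'log file parallel write'
ORDER  BY wait_time_milli;
```

Taking two snapshots of this query some minutes apart, as done above, and differencing the WAIT_COUNT columns shows whether the slow buckets are still being populated during the problem window.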
Hi all,
We are using Oracle 9.2.0.4 on SUSE Linux 10. In our statspack report one of the top timed events we are getting is "log file sync". We are not using any storage. Is this a bug in 9.2.0.4, or what is the solution for it?
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
ai 1495142514 ai 1 9.2.0.4.0 NO ai-oracle
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 241 03-Sep-09 12:17:17 255 63.2
End Snap: 242 03-Sep-09 12:48:50 257 63.4
Elapsed: 31.55 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1,280M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 1,024K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 7,881.17 8,673.87
Logical reads: 14,016.10 15,425.86
Block changes: 44.55 49.04
Physical reads: 3,421.71 3,765.87
Physical writes: 8.97 9.88
User calls: 254.50 280.10
Parses: 27.08 29.81
Hard parses: 0.46 0.50
Sorts: 8.54 9.40
Logons: 0.12 0.13
Executes: 139.47 153.50
Transactions: 0.91
% Blocks changed per Read: 0.32 Recursive Call %: 42.75
Rollback per transaction %: 13.66 Rows per Sort: 120.84
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 75.59 In-memory Sort %: 99.99
Library Hit %: 99.55 Soft Parse %: 98.31
Execute to Parse %: 80.58 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 67.17 % Non-Parse CPU: 99.10
Shared Pool Statistics Begin End
Memory Usage %: 95.32 96.78
% SQL with executions>1: 74.91 74.37
% Memory for SQL w/exec>1: 68.59 69.14
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file sync 11,558 10,488 67.52
db file sequential read 611,828 3,214 20.69
control file parallel write 436 541 3.48
buffer busy waits 626 522 3.36
CPU time 395 2.54
^LWait Events for DB: ai Instance: ai Snaps: 241 -242
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file sync 11,558 9,981 10,488 907 6.7
db file sequential read 611,828 0 3,214 5 355.7
control file parallel write 436 0 541 1241 0.3
buffer busy waits 626 518 522 834 0.4
control file sequential read 661 0 159 241 0.4
BFILE read 734 0 110 151 0.4
db file scattered read 595,462 0 81 0 346.2
enqueue 15 5 19 1266 0.0
latch free 109 22 1 8 0.1
db file parallel read 102 0 1 6 0.1
log file parallel write 1,498 1,497 1 0 0.9
BFILE get length 166 0 0 3 0.1
SQL*Net break/reset to clien 199 0 0 1 0.1
SQL*Net more data to client 5,139 0 0 0 3.0
BFILE open 76 0 0 0 0.0
row cache lock 5 0 0 0 0.0
BFILE internal seek 734 0 0 0 0.4
BFILE closure 76 0 0 0 0.0
db file parallel write 173 0 0 0 0.1
direct path read 18 0 0 0 0.0
direct path write 4 0 0 0 0.0
SQL*Net message from client 480,888 0 284,247 591 279.6
virtual circuit status 64 64 1,861 29072 0.0
wakeup time manager 59 59 1,757 29781 0.0
Your elapsed time is roughly 2000 seconds (31:55 rounded up) - and your log file sync time is roughly 10,000 - which is 5 seconds per second for the duration. Alternatively your session count is roughly 250 at start and end of snapshot - so if we assume that the number of sessions was steady for the duration, every session has suffered 40 seconds of log file sync in the interval. You've recorded roughly 1,500 transactions in the interval (0.91 per second, of which about 13% were rollbacks) - so your log file sync time has averaged more than 6.5 seconds per commit.
Whichever way you look at it, this suggests that either the log file sync figures are wrong, or you have had a temporary hardware failure. Given that you've had a few buffer busy waits and control file write waits of about 900 m/s each, the hardware failure seems likely.
Check log file parallel write times to see if this helps to confirm the hypothesis. (Unfortunately some platforms don't report log file parallel write times correctly for earlier versions of 9.2 - so this may not help.)
You also have 15 enqueue waits averaging 1.2 seconds - check the enqueue stats section of the report to see which enqueue this was: if it was (e.g. CF - control file) then this also helps to confirm the hardware hypothesis.
It's possible that you had a couple of hardware resets or something of that sort in the interval that stopped your system quite dramatically for a minute or two.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
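The per-commit arithmetic in the reply above can be reproduced directly from the cumulative statistics in v$sysstat; a hedged sketch, assuming the usual units ('redo synch time' is in centiseconds):

```sql
-- Average 'log file sync' time per commit since instance start.
-- 'redo synch time' is in centiseconds, so *10 gives ms.
SELECT ROUND(10 * syn.value / NULLIF(com.value, 0), 1) AS sync_ms_per_commit
FROM   v$sysstat syn,
       v$sysstat com
WHERE  syn.name = 'redo synch time'
AND    com.name = 'user commits';
```

Comparing this with the average 'log file parallel write' time separates LGWR's actual I/O from the time lost posting the foreground session: if the sync average is much larger than the write average, the problem is usually not the disk.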
Log file switch (checkpoint not complete)
HI,
I am using Oracle 9.2 on rhel
In the statspack report I am getting one of the events, i.e. log file switch (checkpoint not complete). The statspack duration is about 1.5 hrs... any suggestions?
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.97 Redo NoWait %: 98.31
Buffer Hit %: 95.84 In-memory Sort %: 100.00
Library Hit %: 99.57 Soft Parse %: 98.51
Execute to Parse %: 72.70 Latch Hit %: 99.71
Parse CPU to Parse Elapsd %: 53.15 % Non-Parse CPU: 99.10
Shared Pool Statistics Begin End
Memory Usage %: 93.66 93.74
% SQL with executions>1: 60.41 60.94
% Memory for SQL w/exec>1: 60.89 61.66
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file switch (checkpoint incomplete) 35,936 35,100 42.81
enqueue 6,144 16,684 20.35
buffer busy waits 17,190 13,346 16.28
wait for a undo record 51,967 4,931 6.01
ARCH wait on SENDREQ 877 4,813 5.87
-------------------------------------------------------------
Please find the whole statspack report:
{code}
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
ICAI 1504443695 icai 1 9.2.0.8.0 NO icaidb.icai.
org
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 70 04-Aug-10 14:27:14 162 34.7
End Snap: 73 04-Aug-10 15:30:43 254 55.4
Elapsed: 63.48 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 4,928M Std Block Size: 8K
Shared Pool Size: 1,312M Log Buffer: 1,024K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 96,260.76 53,769.94
Logical reads: 13,998.20 7,819.20
Block changes: 1,227.83 685.85
Physical reads: 592.13 330.76
Physical writes: 19.93 11.13
User calls: 313.12 174.91
Parses: 31.41 17.55
Hard parses: 0.47 0.26
Sorts: 11.61 6.49
Logons: 0.11 0.06
Executes: 115.04 64.26
Transactions: 1.79
% Blocks changed per Read: 8.77 Recursive Call %: 26.28
Rollback per transaction %: 5.43 Rows per Sort: 472.17
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.97 Redo NoWait %: 98.31
Buffer Hit %: 95.84 In-memory Sort %: 100.00
Library Hit %: 99.57 Soft Parse %: 98.51
Execute to Parse %: 72.70 Latch Hit %: 99.71
Parse CPU to Parse Elapsd %: 53.15 % Non-Parse CPU: 99.10
Shared Pool Statistics Begin End
Memory Usage %: 93.66 93.74
% SQL with executions>1: 60.41 60.94
% Memory for SQL w/exec>1: 60.89 61.66
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
log file switch (checkpoint incomplete) 35,936 35,100 42.81
enqueue 6,144 16,684 20.35
buffer busy waits 17,190 13,346 16.28
wait for a undo record 51,967 4,931 6.01
ARCH wait on SENDREQ 877 4,813 5.87
Wait Events for DB: ICAI Instance: icai Snaps: 70 -73
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file switch (checkpoint 35,936 35,886 35,100 977 5.3
enqueue 6,144 5,660 16,684 2716 0.9
buffer busy waits 17,190 5,325 13,346 776 2.5
wait for a undo record 51,967 49,137 4,931 95 7.6
ARCH wait on SENDREQ 877 0 4,813 5489 0.1
log file switch (archiving n 3,705 3,653 3,600 972 0.5
db file sequential read 600,718 0 621 1 88.1
log file sync 6,826 140 561 82 1.0
log file parallel write 7,052 0 421 60 1.0
log buffer space 1,361 16 230 169 0.2
db file scattered read 289,115 0 212 1 42.4
switch logfile command 116 23 160 1377 0.0
wait for stopper event to be 1,924 1,111 123 64 0.3
control file parallel write 1,355 0 63 46 0.2
PX Deq: Txn Recovery Start 1,253 0 36 29 0.2
SQL*Net break/reset to clien 560 0 20 36 0.1
local write wait 18 15 17 918 0.0
log file switch completion 21 7 9 442 0.0
control file sequential read 237,021 0 6 0 34.8
log file sequential read 437 0 6 13 0.1
BFILE get length 297 0 2 7 0.0
latch free 485 67 2 4 0.1
BFILE read 1,023 0 1 1 0.2
log file single write 18 0 0 16 0.0
SQL*Net more data to client 13,785 0 0 0 2.0
process startup 10 0 0 9 0.0
control file single write 10 0 0 4 0.0
row cache lock 34 0 0 0 0.0
db file single write 1 0 0 14 0.0
LGWR wait for redo copy 89 0 0 0 0.0
PX Deq: Signal ACK 3 0 0 4 0.0
PX Deq: Join ACK 5 0 0 1 0.0
BFILE open 106 0 0 0 0.0
db file parallel read 25 0 0 0 0.0
async disk IO 1,383 0 0 0 0.2
db file parallel write 255 0 0 0 0.0
BFILE internal seek 1,023 0 0 0 0.2
direct path read 843 0 0 0 0.1
BFILE closure 106 0 0 0 0.0
undo segment extension 844 844 0 0 0.1
direct path write 96 0 0 0 0.0
SQL*Net message from client 1,188,764 0 445,926 375 174.3
virtual circuit status 125 125 3,660 29277 0.0
wakeup time manager 86 86 2,451 28506 0.0
PX Idle Wait 755 750 1,466 1941 0.1
jobq slave wait 60 60 176 2930 0.0
SQL*Net more data from clien 3,035 0 1 0 0.4
SQL*Net message to client 1,188,882 0 1 0 174.3
Wait Events for DB: ICAI Instance: icai Snaps: 70 -73
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
Background Wait Events for DB: ICAI Instance: icai Snaps: 70 -73
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
ARCH wait on SENDREQ 877 0 4,813 5489 0.1
buffer busy waits 2,164 1,176 1,184 547 0.3
log file parallel write 7,052 0 421 60 1.0
wait for stopper event to be 1,924 1,111 123 64 0.3
control file parallel write 1,301 0 57 44 0.2
enqueue 393 0 10 26 0.1
control file sequential read 234,452 0 6 0 34.4
log file sequential read 431 0 6 13 0.1
db file scattered read 124 0 1 10 0.0
log buffer space 40 0 1 13 0.0
log file single write 15 0 0 20 0.0
db file sequential read 20 0 0 9 0.0
process startup 7 0 0 8 0.0
latch free 19 3 0 2 0.0
LGWR wait for redo copy 89 0 0 0 0.0
PX Deq: Signal ACK 3 0 0 4 0.0
PX Deq: Join ACK 5 0 0 1 0.0
db file parallel write 255 0 0 0 0.0
async disk IO 793 0 0 0 0.1
rdbms ipc reply 1 0 0 0 0.0
direct path read 88 0 0 0 0.0
direct path write 88 0 0 0 0.0
rdbms ipc message 50,357 3,296 16,467 327 7.4
pmon timer 1,271 1,251 3,653 2874 0.2
smon timer 64 0 2,302 35963 0.0
SQL ordered by Gets for DB: ICAI Instance: icai Snaps: 70 -73
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
5,652,833 456 12,396.6 10.6 20.04 1062.67 285625578
SELECT /*+ INDEX(OT_DAK_ENTRY_DETL IDM_DED_NAME) */DEH_TXN_CODE
,DEH_NO FROM OT_DAK_ENTRY_HEAD,OT_DAK_ENTRY_DETL WHERE DEH_SY
S_ID = DED_DEH_SYS_ID AND TRUNC(DEH_APPLICATION_DT) = :b1 AND
DED_DAK_CODE = :b2 AND DED_NAME LIKE LTRIM(RTRIM(:b3)) || '%'
AND NVL(DED_INSTR_NO,'XXXXXX') = NVL(:b4,'XXXXXX') AND TRUNC(
5,096,348 189 26,964.8 9.6 23.64 23.25 1772835295
select decode(level,1,'',2,' ',3,' ',4,' ',5,'
', ' ') || decode(:1,'ENG',menu_option_desc,menu_opt
ion_desc_bl) "OPTION", menu_parent_id "PARENT", menu_action_type
"TYPE",menu_action "ACTION", decode(level,1,'',2,' ',3,' ',4,'
',5,' ', ' ') ||decode(menu_action_type, 'M', '+', 'o'
4,894,185 96 50,981.1 9.2 7.89 10.11 23088203
INSERT INTO OT_MEM_FEE_COL_DETL(MFCD_FEE_TYPE,MFCD_CONDON_FEE_YN
,MFCD_EXCESS_USED_YN,MFCD_CONDN_CODE,MFCD_PM_CODE,MFCD_CURR_CODE
,MFCD_INSTR_NO,MFCD_INSTR_DT,MFCD_AMT,MFCD_BANK_CODE,MFCD_INSTR_
TYPE,MFCD_BRANCH,MFCD_COLLECTION,MFCD_FM_DT,MFCD_TO_DT,MFCD_RES_
CODE,MFCD_CR_UID,MFCD_CR_DT,MFCD_UPD_UID,MFCD_UPD_DT,MFCD_CONDON
4,885,684 152 32,142.7 9.2 7.64 8.87 1007886847
SELECT MIN(MFCH_NO) FROM OT_MEM_FEE_COL_HEAD, OT_MEM_FEE_COL_DET
L WHERE MFCH_SYS_ID = MFCD_MFCH_SYS_ID AND MFCH_REF_NO = :B4 AND
MFCH_REF_TXN_CODE = :B3 AND MFCD_INSTR_NO = :B2 AND MFCD_BANK_C
ODE = :B1 AND MFCD_AMT > 0
2,680,356 446 6,009.8 5.0 95.57 125.49 197211170
SELECT /*+ INDEX(OT_STUDENT_FEE_COL_IPCC_HEAD OT_STUDENT_FEE_COL
IPCCHEAD_UK01) */SFCH_SYS_ID,SFCH_DT,DECODE(NVL(SSTN_SRN,SFCH
TEMPREF_NO),SSTN_SRN, NULL ,SFCH_TEMP_REF_NO) SFCH_TEMP_REF_NO
,NVL(SFCH_STUD_SRN,SSTN_SRN) SFCH_STUD_SRN,SFCH_COURSE_CODE,SFCH
SCHEMECODE,SFCH_EXMP_STUD_YN,SFCH_EXMP_STUD_REASON,DEH_APPLICA
2,288,204 1 2,288,204.0 4.3 54.31 59.36 3103356680
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN BEGIN /*Quest SOO PPCM job */ qu
est_ppcm_snapshot_pkg.take_snapshot; END; :mydate := next_date;
IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
2,253,877 1 2,253,877.0 4.2 52.70 56.21 579012758
DELETE FROM QUEST_PPCM_SQL_TEXT TXT WHERE INSTANCE_ID >= 0 AND N
OT EXISTS (SELECT 1 FROM QUEST_PPCM_SQL_SNAPSHOT SNAP WHERE SNAP
.SNAPSHOT_ID > 0 AND SNAP.INSTANCE_ID= TXT.INSTANCE_ID AND SNAP.
SQL_ID = TXT.SQL_ID)
1,656,006 24 69,000.3 3.1 10.00 24.26 4081782417
SELECT PISH_COURSE_CODE FROM OV_STU_PAYINSLIP_IPCC_DTL WHERE
PISH_BANK_CODE = :b1 AND PISH_NO BETWEEN :b2 AND :b3 AND PISD_
CURR_CODE = :b4 AND PISH_REGION_CODE = :b5 ORDER BY 1
SQL ordered by Gets for DB: ICAI Instance: icai Snaps: 70 -73
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
1,567,946 12 130,662.2 2.9 21.99 47.39 1585476974
SELECT NVL(TO_CHAR(A.DEH_APPLICATION_DT,'DD/MM/RRRR'), NULL ),NV
L(TO_CHAR(A.DEH_DT,'DD/MM/RRRR'), NULL ) FROM OT_DAK_ENTRY_HEA
D A,OT_DAK_ENTRY_DETL B,OT_FIRM_NAME_APPR_HEAD C WHERE A.DEH_SY
S_ID = B.DED_DEH_SYS_ID AND B.DED_DAK_SYS_ID = C.FNAH_DED_SYS_I
D AND C.FNAH_SYS_ID = (SELECT MAX(B.FNAH_SYS_ID) FROM OT_FIR
1,216,226 4 304,056.5 2.3 9.90 54.25 937031003
SELECT TRIM(STUD_SRN)
Q1_REGNO, TR
IM(STUD_TEMP_REF_NO)
Q1_TEMPNO, STUD_TITLE
1,138,801 178 6,397.8 2.1 18.13 1009.80 1617597
SELECT SRN,ACTIVITYDESCRIPTION,STATUS,DOCUMENTNO,DOCUMENTDATE FR
OM OV_ART_TRANS_STATUS WHERE (SRN=:1) order by DOCUMENTDATE
1,029,221 230 4,474.9 1.9 20.27 20.76 1838125769
SELECT MRH_DT,MRH_FIRST_NAME,MRH_MIDDLE_NAME,MRH_SUR_NAME,MRH_ST
ATUS FROM OM_MEM_REG_HEAD WHERE DECODE(:b1,1,MRH_MRN,MRH_MFCH
TEMPREF_NO) = :b2
778,949 52 14,979.8 1.5 20.32 21.85 4142254844
SELECT LTRIM(RTRIM(DECODE(TIT_NAME,'MR.','CA.','MS.','CA.','MRS.
','CA.') || ' ' || MRH_FIRST_NAME || ' ' || MRH_MIDDLE_NAME
|| ' ' || MRH_SUR_NAME || ' ' || DECODE(MRH_APPR_UID, NULL ,
NULL ,DECODE(MRH_MEM_STATUS,2, NULL ,DECODE(MRH_FELLOW_STATUS_YN
,'Y','FCA','ACA'))) || DECODE(MRH_RESI_STATUS,'A','
755,893 517 1,462.1 1.4 90.43 89.07 1033584013
SELECT DECODE(MFCD_FEE_TYPE,'M08',1,'M05',2,'M06',3,'M09',4,'M10
',5,'M11',6,'M12',7,'M07',8,'M13',9,'M14',10,'M15',11,'M04',12,'
M03',13,'M02',14,'M01',15,'M21',16,'M22',17,'M23',18,'EXCESS',19
,20) FEE_SEQ,MFCD_FEE_TYPE FEE_TYPE,SUM(MFCD_AMT) AMOUNT FROM
OT_MEM_FEE_COL_HEAD,OT_MEM_FEE_COL_DETL WHERE MFCH_SYS_ID = MFC
751,010 1,090 689.0 1.4 28.61 31.41 1734754400
SELECT ROWID,PIIPD_ICAI_EXAM_APPEARED,PIIPD_REG_NO,PIIPD_MTH,PII
PD_YR,PIIPD_ROLL_NO,PIIPD_EXT_EXAM_APPEARED,PIIPD_EXAM_CODE_1,PI
IPD_SUBJ_CODE_1,PIIPD_GROUP_1,PIIPD_ROLL_NO_1,PIIPD_LAST_PAPER_D
T_1,PIIPD_EXAM_CODE_2,PIIPD_SUBJ_CODE_2,PIIPD_GROUP_2,PIIPD_ROLL
SQL ordered by Reads for DB: ICAI Instance: icai Snaps: 70 -73
-> End Disk Reads Threshold: 1000
CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
227,513 2 113,756.5 10.1 22.04 79.36 2837394537
SELECT STUD_SRN,STUD_FIRST_NAME,STUD_MIDDLE_NAME,STUD_MAIDEN_NAM
E,STUD_SURNAME,STUD_FATHER_NAME,STUD_BIRTH_DT,STUD_COMM_CODE,STU
D_SEX,STUD_HANDICAPPED_YN,STUD_HANDICAPPED_REASON,STUD_LANG_CODE
,STUD_NATIONALITY_CODE,STUD_EMAIL,STUD_PERMNT_ADDR_LINE_1,STUD_P
ERMNT_ADDR_LINE_2,STUD_PERMNT_ADDR_LINE_3,STUD_PERMNT_ADDR_LINE_
220,939 12 18,411.6 9.8 21.99 47.39 1585476974
SELECT NVL(TO_CHAR(A.DEH_APPLICATION_DT,'DD/MM/RRRR'), NULL ),NV
L(TO_CHAR(A.DEH_DT,'DD/MM/RRRR'), NULL ) FROM OT_DAK_ENTRY_HEA
D A,OT_DAK_ENTRY_DETL B,OT_FIRM_NAME_APPR_HEAD C WHERE A.DEH_SY
S_ID = B.DED_DEH_SYS_ID AND B.DED_DAK_SYS_ID = C.FNAH_DED_SYS_I
D AND C.FNAH_SYS_ID = (SELECT MAX(B.FNAH_SYS_ID) FROM OT_FIR
198,343 2 99,171.5 8.8 5.82 46.25 1414719916
UPDATE OM_MEM_REG_HEAD SET MRH_MRN=:b1 WHERE MRH_SYS_ID = :b2
198,343 2 99,171.5 8.8 5.81 46.10 1414796677
UPDATE OT_DAK_ACTV_HISTORY SET DAH_REG_NO = :B1 WHERE DAH_REG_NO
= :B3 AND TRUNC(DAH_ACTV_ED_DT ) <= TRUNC(:B2 ) AND DAH_ACTV_ST
ATUS = 'C'
173,892 2 86,946.0 7.7 13.85 16.34 3262067067
SELECT STUD_SRN,STUD_FIRST_NAME,STUD_MIDDLE_NAME,STUD_MAIDEN_NAM
E,STUD_SURNAME,STUD_FATHER_NAME,STUD_BIRTH_DT,STUD_COMM_CODE,STU
D_SEX,STUD_HANDICAPPED_YN,STUD_HANDICAPPED_REASON,STUD_LANG_CODE
,STUD_NATIONALITY_CODE,STUD_EMAIL,STUD_PERMNT_ADDR_LINE_1,STUD_P
ERMNT_ADDR_LINE_2,STUD_PERMNT_ADDR_LINE_3,STUD_PERMNT_ADDR_LINE_
112,038 9 12,448.7 5.0 9.23 10.14 2058267852
SELECT ROWID,STUD_DT,STUD_TXN_CODE,STUD_NO,STUD_AMD_NO,STUD_REF_
FROM,STUD_REF_TXN_CODE,STUD_REF_NO,STUD_TEMP_REF_NO,STUD_SRN,STU
D_TITLE,STUD_STATUS,STUD_FIRST_NAME,STUD_MIDDLE_NAME,STUD_SURNAM
E,STUD_MAIDEN_NAME,STUD_NAME_STATUS,STUD_FATHER_NAME,STUD_NATION
ALITY_CODE,STUD_NATION_PROOF_ENCL_YN,STUD_SEX,STUD_HANDICAPPED_Y
102,583 1 102,583.0 4.5 4.51 285.99 802587273
SELECT ROWID,DEH_DT,DEH_TXN_CODE,DEH_NO,DEH_AMD_NO,DEH_REF_FROM,
DEH_REF_TXN_CODE,DEH_REF_NO,DEH_REF_SYS_ID,DEH_REGION_CODE,DEH_A
PPLICATION_DT,DEH_DOC_STATUS,DEH_STATUS,DEH_PRINT_STATUS,DEH_CLO
STATUS,DEHSYS_ID,DEH_COMP_CODE,DEH_ACNT_YR,DEH_AMD_DT,DEH_AMD_
UID,DEH_AMD_RES_CODE,DEH_REF_FROM_NUM,DEH_CR_UID,DEH_CR_DT,DEH_U
98,825 3 32,941.7 4.4 6.70 6.66 2078892348
/*SELECT STUD_SRN
Q1_RE
GNO, O_GET_OLD_REG_NO(STUD_SRN, :BP_COURSE) OLD_NO, TIT_NAME||'
'||STUD_FIRST_NAME||' '||STUD_MIDDLE_NAME||' '||STUD_SURNAME
Q1_NAME, LTRIM(RTRIM(A.STUD_CORRES_ADDR_LINE_1 ))||DECO
96,187 2 48,093.5 4.3 10.78 16.53 3301514821
SELECT MFCD_PAYIN_SLIP_NO MFCD_PAYIN_SLIP_NO
, DECODE(MFCD_BANK_CODE ,'ICI',1,2) ICI_FIRST
, DECODE(MFCD_INSTR_TYPE,'S',1,'L
SQL ordered by Reads for DB: ICAI Instance: icai Snaps: 70 -73
-> End Disk Reads Threshold: 1000
CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
',2,'O',3,4) TYPE_FIRST , MFCD_PAYIN_SLIP_DT , FM_BANK
.BANK_NAME , BAD_ADDR1 ADD1
80,683 4 20,170.8 3.6 9.90 54.25 937031003
SELECT TRIM(STUD_SRN)
Q1_REGNO, TR
IM(STUD_TEMP_REF_NO)
Q1_TEMPNO, STUD_TITLE
77,972 8 9,746.5 3.5 8.92 8.84 2241526944
SELECT ROWID,STUD_DT,STUD_TXN_CODE,STUD_NO,STUD_AMD_NO,STUD_REF_
FROM,STUD_REF_TXN_CODE,STUD_REF_NO,STUD_TEMP_REF_NO,STUD_SRN,STU
D_TITLE,STUD_STATUS,STUD_FIRST_NAME,STUD_MIDDLE_NAME,STUD_SURNAM
E,STUD_MAIDEN_NAME,STUD_NAME_STATUS,STUD_FATHER_NAME,STUD_NATION
ALITY_CODE,STUD_NATION_PROOF_ENCL_YN,STUD_SEX,STUD_HANDICAPPED_Y
75,667 3 25,222.3 3.4 3.34 25.09 3345305231
SELECT DISTINCT SFCH_STUD_SRN FROM OT_STUDENT_FEE_COL_HEAD A,O
T_STUDENT_FEE_COL_DETL B WHERE B.SFCD_SFCH_SYS_ID = A.SFCH_SYS_
ID AND B.SFCD_INSTR_BANK_CODE = :b1 AND B.SFCD_INSTR_NO = :b2
72,658 52 1,397.3 3.2 20.32 21.85 4142254844
SELECT LTRIM(RTRIM(DECODE(TIT_NAME,'MR.','CA.','MS.','CA.','MRS.
','CA.') || ' ' || MRH_FIRST_NAME || ' ' || MRH_MIDDLE_NAME
|| ' ' || MRH_SUR_NAME || ' ' || DECODE(MRH_APPR_UID, NULL ,
NULL ,DECODE(MRH_MEM_STATUS,2, NULL ,DECODE(MRH_FELLOW_STATUS_YN
,'Y','FCA','ACA'))) || DECODE(MRH_RESI_STATUS,'A','
48,619 3 16,206.3 2.2 4.19 4.11 496772197
SELECT ROWID,STUD_DT,STUD_TXN_CODE,STUD_NO,STUD_AMD_NO,STUD_REF_
FROM,STUD_REF_TXN_CODE,STUD_REF_NO,STUD_TEMP_REF_NO,STUD_SRN,STU
D_TITLE,STUD_STATUS,STUD_FIRST_NAME,STUD_MIDDLE_NAME,STUD_SURNAM
E,STUD_MAIDEN_NAME,STUD_NAME_STATUS,STUD_FATHER_NAME,STUD_NATION
ALITY_CODE,STUD_NATION_PROOF_ENCL_YN,STUD_SEX,STUD_HANDICAPPED_Y
48,063 230 209.0 2.1 20.27 20.76 1838125769
SELECT MRH_DT,MRH_FIRST_NAME,MRH_MIDDLE_NAME,MRH_SUR_NAME,MRH_ST
ATUS FROM OM_MEM_REG_HEAD WHERE DECODE(:b1,1,MRH_MRN,MRH_MFCH
SQL ordered by Executions for DB: ICAI Instance: icai Snaps: 70 -73
-> End Executions Threshold: 100
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
38,614 38,614 1.0 0.00 0.00 1741347688
SELECT SYSDATE FROM SYS.DUAL
12,490 12,488 1.0 0.00 0.00 2614155871
SELECT DECODE(:b1,'L',DECODE(:b2,'ENG',STATUS_NAME,STATUS_BL_NAM
E),DECODE(:b2,'ENG',STATUS_SHORT_NAME,STATUS_BL_SHORT_NAME)) STA
TUS_DESC,STATUS_FRZ_FLAG_NUM FROM OM_STATUS WHERE STATUS_CODE
= :b4
8,629 8,628 1.0 0.00 0.00 1644340447
SELECT DECODE(:b1,'L',DECODE(:b2,'ENG',FEE_TYPE_NAME,FEE_TYPE_BL
NAME),DECODE(:b2,'ENG',FEETYPE_SHORT_NAME,FEE_TYPE_BL_SHORT_NA
ME)) FEE_NAME,FEE_TYPE_FRZ_FLAG_NUM FROM OM_FEE_TYPE WHERE FE
E_TYPE_CODE = :b4
7,275 7,272 1.0 0.00 0.34 3716207873
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,
order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:
1
6,293 6,283 1.0 0.00 0.00 2804237544
SELECT DECODE(:b1,'L',DECODE(:b2,'ENG',CITY_NAME,CITY_BL_NAME),D
ECODE(:b2,'ENG',CITY_SHORT_NAME,CITY_BL_SHORT_NAME)) CITY_NAME,C
ITY_TALUK_CODE,CITY_DIST_CODE,CITY_STATE_CODE,CITY_REGION_CODE,C
ITY_FRZ_FLAG_NUM FROM OM_CITY WHERE CITY_CODE = :b4
6,221 6,221 1.0 0.00 0.00 484036617
SELECT DAH_SYS_ID.NEXTVAL FROM DUAL
6,221 6,221 1.0 0.00 0.00 2945494810
SELECT COUNT(DAH_SYS_ID) FROM OT_DAK_ACTV_HISTORY WHERE DAH_ACTV
TYPE = :B2 AND DAHTXN_SYS_ID = :B1
5,979 5,979 1.0 0.00 0.00 35936114
SELECT STUD_DOC_STATUS FROM OM_STUDENT_HEAD WHERE STUD_SYS_ID
= :b1
4,637 4,637 1.0 0.00 0.00 1237293873
SELECT DECODE(:b1,'L',DECODE(:b2,'ENG',COU_NAME,COU_BL_NAME),DEC
ODE(:b2,'ENG',COU_SHORT_NAME,COU_BL_SHORT_NAME)) COU_NAME,COU_FR
Z_FLAG_NUM FROM OM_COUNTRY WHERE COU_CODE = :b4
4,404 1,276 0.3 0.00 0.00 1829426463
SELECT NVL(TAU_FROM_VALUE,0), NVL(TAU_TO_VALUE,0) FROM IM_TXN_AU
TH_USER WHERE TAU_TA_TYPE = :B3 AND TAU_TXN_CODE = :B2 AND TAU_A
UTH_UID = :B1
4,220 4,220 1.0 0.00 0.00 1006906503
UPDATE OT_DAK_ACTV_HISTORY SET DAH_ACTV_ED_DT = :B3 , DAH_ACTV_S
TATUS = 'C' WHERE DAH_ACTV_TYPE = :B2 AND DAH_TXN_SYS_ID = :B1
3,874 66 0.0 0.00 0.00 4284733339
SELECT TIT_NAME ||' '|| AR_FIRST_NAME||' '||AR_MIDDLE_NAME||' '|
|AR_SUR_NAME FROM OT_ARTICLE_REGISTRATION, OM_TITLE WHERE TRIM(A
SQL ordered by Executions for DB: ICAI Instance: icai Snaps: 70 -73
-> End Executions Threshold: 100 -
Sync Services Log file very large
user>library>application support>sync services>local
My Sync Services log file was over 111 GB. It appears to grow in size every day. I have reset Sync Services per Apple: http://support.apple.com/kb/TS1627
I even deleted the contents of the local folder (not recommended per apple, but seems fine) and still my syncservices.log grows and is taking over my hard drive.
I use Entourage and Sync Services is used to Sync iCal/Address book (then for my iPhone sync).
Any suggestions to stop this out-of-control log?!?
Hi,
Increasing the redo logfile size may reduce the number of switches, since each file takes longer to fill, but your system's redo generation will still be the same. Reduce the frequent commits.
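For the figures in the original question, the average latency per write is easy to derive (a quick sketch, assuming TIME_WAITED is in centiseconds, which is how v$system_event reports it):

```python
# Back-of-envelope check on the v$system_event figures from the question.
# TIME_WAITED is in centiseconds, so multiply by 10 to get milliseconds.
total_waits = 74_144
time_waited_cs = 28_100

avg_wait_ms = time_waited_cs * 10 / total_waits
print(f"avg 'log file parallel write' = {avg_wait_ms:.2f} ms")
# prints: avg 'log file parallel write' = 3.79 ms
```

Roughly 3.8 ms per write; single-digit millisecond averages for this event are generally considered ordinary for redo on rotating disks, so the raw totals by themselves do not indicate a problem.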
Use the following notes to further narrow down the possible root cause.
WAITEVENT: "log file sync" Reference Note [ID 34592.1]
WAITEVENT: "log file parallel write" Reference Note [ID 34583.1] -
Too much redo log files...
Hi,
I have a very light application on Oracle 9.2.0.7 on 32-bit Linux that is generating 400 logfiles a day. I can't find out why all that redo is being generated!
The only thing of note in the application is a big table used purely for inserts (about 1,000 per hour), for audit purposes. But this table was created with the NOLOGGING option.
Redo Size: 4 groups of 40 Mb each.
The insert statement uses a sequence to generate a unique key. Is this sequence causing my big logfile generation?
Thanks,
Paulo.
Here is the statspack:
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
DB 378381468 DB 1 9.2.0.7.0 NO host
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 12 28-Jun-07 11:05:11 26 1,198.7
End Snap: 13 28-Jun-07 12:05:24 29 1,077.2
Elapsed: 60.22 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 512M Std Block Size: 8K
Shared Pool Size: 512M Log Buffer: 5,120K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 281,252.38 2,073.48
Logical reads: 73,113.76 539.02
Block changes: 3,133.29 23.10
Physical reads: 3.24 0.02
Physical writes: 21.39 0.16
User calls: 26.12 0.19
Parses: 145.64 1.07
Hard parses: 0.81 0.01
Sorts: 138.33 1.02
Logons: 0.69 0.01
Executes: 443.27 3.27
Transactions: 135.64
% Blocks changed per Read: 4.29 Recursive Call %: 98.97
Rollback per transaction %: 0.13 Rows per Sort: 17.26
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 100.00 In-memory Sort %: 99.99
Library Hit %: 99.66 Soft Parse %: 99.44
Execute to Parse %: 67.14 Latch Hit %: 99.93
Parse CPU to Parse Elapsd %: 55.03 % Non-Parse CPU: 99.22
Shared Pool Statistics Begin End
Memory Usage %: 91.06 91.23
% SQL with executions>1: 44.54 39.78
% Memory for SQL w/exec>1: 43.09 33.89
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 3,577 84.73
log file parallel write 854,726 359 8.51
row cache lock 56,780 104 2.47
process startup 172 91 2.16
SQL*Net message from dblink 5,001 22 .53
Wait Events for DB: DB Instance: DB Snaps: 12 -13
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 854,726 0 359 0 1.7
row cache lock 56,780 0 104 2 0.1
process startup 172 4 91 530 0.0
SQL*Net message from dblink 5,001 0 22 4 0.0
log file sync 3,015 3 19 6 0.0
enqueue 471 1 9 20 0.0
buffer busy waits 20,290 0 8 0 0.0
db file sequential read 3,853 0 6 2 0.0
SQL*Net more data from dblin 88,584 0 5 0 0.2
control file parallel write 1,704 0 5 3 0.0
latch free 1,404 748 4 3 0.0
single-task message 134 0 4 27 0.0
LGWR wait for redo copy 8,230 1 2 0 0.0
log file switch completion 60 0 2 32 0.0
log file sequential read 1,333 0 2 1 0.0
control file sequential read 4,530 0 1 0 0.0
db file scattered read 246 0 0 1 0.0
SQL*Net more data to client 7,292 0 0 0 0.0
SQL*Net break/reset to clien 72 0 0 1 0.0
db file parallel write 4,568 0 0 0 0.0
log file single write 62 0 0 0 0.0
async disk IO 3,410 0 0 0 0.0
SQL*Net message to dblink 5,001 0 0 0 0.0
direct path read (lob) 84 0 0 0 0.0
direct path read 318 0 0 0 0.0
direct path write 312 0 0 0 0.0
buffer deadlock 115 115 0 0 0.0
SQL*Net message from client 86,475 0 27,758 321 0.2
jobq slave wait 4,594 4,532 13,455 2929 0.0
SQL*Net more data from clien 602 0 1 2 0.0
SQL*Net message to client 86,481 0 0 0 0.2
Background Wait Events for DB: DB Instance: DB Snaps: 12 -13
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
log file parallel write 854,744 0 359 0 1.7
control file parallel write 1,704 0 5 3 0.0
LGWR wait for redo copy 8,230 1 2 0 0.0
log file sequential read 1,333 0 2 1 0.0
control file sequential read 1,849 0 1 1 0.0
db file parallel write 4,567 0 0 0 0.0
latch free 74 0 0 0 0.0
rdbms ipc reply 65 0 0 0 0.0
log file single write 62 0 0 0 0.0
async disk IO 3,410 0 0 0 0.0
db file sequential read 1 0 0 8 0.0
buffer busy waits 5 0 0 0 0.0
direct path read 248 0 0 0 0.0
direct path write 248 0 0 0 0.0
rdbms ipc message 868,357 6,776 30,095 35 1.8
pmon timer 1,204 1,204 3,529 2931 0.0
smon timer 154 0 3,514 22816 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
active txn count during cleanout 2,844 0.8 0.0
background checkpoints completed 31 0.0 0.0
background checkpoints started 31 0.0 0.0
background timeouts 7,956 2.2 0.0
branch node splits 15 0.0 0.0
buffer is not pinned count 324,721,116 89,875.8 662.6
buffer is pinned count 308,901,876 85,497.3 630.3
bytes received via SQL*Net from c 8,048,130 2,227.6 16.4
bytes received via SQL*Net from d 181,575,342 50,256.1 370.5
bytes sent via SQL*Net to client 33,964,494 9,400.6 69.3
bytes sent via SQL*Net to dblink 933,170 258.3 1.9
calls to get snapshot scn: kcmgss 9,900,434 2,740.2 20.2
calls to kcmgas 985,222 272.7 2.0
calls to kcmgcs 11,669 3.2 0.0
change write time 9,910 2.7 0.0
cleanout - number of ktugct calls 18,903 5.2 0.0
cleanouts and rollbacks - consist 33 0.0 0.0
cleanouts only - consistent read 932 0.3 0.0
cluster key scan block gets 289,955 80.3 0.6
cluster key scans 101,840 28.2 0.2
commit cleanout failures: block l 0 0.0 0.0
commit cleanout failures: buffer 113 0.0 0.0
commit cleanout failures: callbac 96 0.0 0.0
commit cleanout failures: cannot 3,095 0.9 0.0
commit cleanouts 1,966,376 544.3 4.0
commit cleanouts successfully com 1,963,072 543.3 4.0
commit txn count during cleanout 309,283 85.6 0.6
consistent changes 5,245,452 1,451.8 10.7
consistent gets 242,967,989 67,248.3 495.8
consistent gets - examination 135,768,580 37,577.8 277.0
CPU used by this session 357,659 99.0 0.7
CPU used when call started 344,951 95.5 0.7
CR blocks created 768 0.2 0.0
current blocks converted for CR 0 0.0 0.0
cursor authentications 886 0.3 0.0
data blocks consistent reads - un 1,760 0.5 0.0
db block changes 11,320,580 3,133.3 23.1
db block gets 21,192,200 5,865.5 43.2
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 69,649 19.3 0.1
DBWR checkpoints 31 0.0 0.0
DBWR free buffers found 0 0.0 0.0
DBWR lru scans 0 0.0 0.0
DBWR make free requests 0 0.0 0.0
DBWR revisited being-written buff 0 0.0 0.0
DBWR summed scan depth 0 0.0 0.0
DBWR transaction table writes 2,070 0.6 0.0
DBWR undo block writes 44,323 12.3 0.1
deferred (CURRENT) block cleanout 745,333 206.3 1.5
dirty buffers inspected 1 0.0 0.0
enqueue conversions 8,193 2.3 0.0
enqueue deadlocks 1 0.0 0.0
enqueue releases 2,002,960 554.4 4.1
enqueue requests 2,002,963 554.4 4.1
enqueue timeouts 3 0.0 0.0
enqueue waits 451 0.1 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
exchange deadlocks 115 0.0 0.0
execute count 1,601,528 443.3 3.3
free buffer inspected 30 0.0 0.0
free buffer requested 1,196,628 331.2 2.4
hot buffers moved to head of LRU 26,707 7.4 0.1
immediate (CR) block cleanout app 965 0.3 0.0
immediate (CURRENT) block cleanou 10,817 3.0 0.0
index fast full scans (full) 0 0.0 0.0
index fetch by key 131,028,270 36,265.8 267.4
index scans kdiixs1 17,868,907 4,945.7 36.5
leaf node splits 4,528 1.3 0.0
leaf node 90-10 splits 3,017 0.8 0.0
logons cumulative 2,499 0.7 0.0
messages received 859,631 237.9 1.8
messages sent 859,631 237.9 1.8
no buffer to keep pinned count 21,253 5.9 0.0
no work - consistent read gets 87,667,752 24,264.5 178.9
opened cursors cumulative 528,984 146.4 1.1
OS Involuntary context switches 0 0.0 0.0
OS Page faults 0 0.0 0.0
OS Page reclaims 0 0.0 0.0
OS System time used 0 0.0 0.0
OS User time used 0 0.0 0.0
OS Voluntary context switches 0 0.0 0.0
parse count (failures) 7 0.0 0.0
parse count (hard) 2,928 0.8 0.0
parse count (total) 526,209 145.6 1.1
parse time cpu 2,778 0.8 0.0
parse time elapsed 5,048 1.4 0.0
physical reads 11,690 3.2 0.0
physical reads direct 6,698 1.9 0.0
physical reads direct (lob) 102 0.0 0.0
physical writes 77,270 21.4 0.2
physical writes direct 7,620 2.1 0.0
physical writes direct (lob) 0 0.0 0.0
physical writes non checkpoint 33,360 9.2 0.1
pinned buffers inspected 0 0.0 0.0
prefetched blocks 799 0.2 0.0
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time 3,630 1.0 0.0
recursive calls 9,053,277 2,505.8 18.5
recursive cpu usage 255,973 70.9 0.5
redo blocks written 2,572,625 712.1 5.3
redo buffer allocation retries 50 0.0 0.0
redo entries 3,074,994 851.1 6.3
redo log space requests 60 0.0 0.0
redo log space wait time 193 0.1 0.0
redo ordering marks 0 0.0 0.0
redo size 1,016,164,852 281,252.4 2,073.5
redo synch time 1,956 0.5 0.0
redo synch writes 5,317 1.5 0.0
redo wastage 259,689,040 71,876.3 529.9
redo write time 37,488 10.4 0.1
redo writer latching time 242 0.1 0.0
redo writes 854,744 236.6 1.7
rollback changes - undo records a 1,098 0.3 0.0
Instance Activity Stats for DB: DB Instance: DB Snaps: 12 -13
Statistic Total per Second per Trans
rollbacks only - consistent read 747 0.2 0.0
rows fetched via callback 117,908,375 32,634.5 240.6
session connect time 0 0.0 0.0
session cursor cache count 16 0.0 0.0
session cursor cache hits 484,372 134.1 1.0
session logical reads 264,160,020 73,113.8 539.0
session pga memory 16,473,320 4,559.5 33.6
session pga memory max 16,914,080 4,681.5 34.5
session uga memory 17,216,514,728 4,765,157.7 35,130.3
session uga memory max 1,865,036,296 516,201.6 3,805.6
shared hash latch upgrades - no w 17,251,803 4,774.9 35.2
shared hash latch upgrades - wait 24,671 6.8 0.1
sorts (disk) 32 0.0 0.0
sorts (memory) 499,747 138.3 1.0
sorts (rows) 8,626,333 2,387.6 17.6
SQL*Net roundtrips to/from client 80,069 22.2 0.2
SQL*Net roundtrips to/from dblink 5,001 1.4 0.0
summed dirty queue length 0 0.0 0.0
switch current to new buffer 1 0.0 0.0
table fetch by rowid 238,882,317 66,117.4 487.4
table fetch continued row 4,436,670 1,228.0 9.1
table scan blocks gotten 5,066,302 1,402.2 10.3
table scan rows gotten 134,679,712 37,276.4 274.8
table scans (direct read) 0 0.0 0.0
table scans (long tables) 447 0.1 0.0
table scans (short tables) 152,382 42.2 0.3
transaction rollbacks 530 0.2 0.0
transaction tables consistent rea 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
user calls 94,382 26.1 0.2
user commits 489,423 135.5 1.0
user rollbacks 653 0.2 0.0
write clones created in backgroun 11 0.0 0.0
write clones created in foregroun 878 0.2 0.0
Tablespace IO Stats for DB: DB Instance: DB Snaps: 12 -13
->ordered by IOs (Reads + Writes) desc
Tablespace
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
T1_UNDO
31 0 0.0 1.0 46,535 13 344 0.4
T1
31 0 0.0 1.0 13,754 4 3,657 0.4
T2
3,308 1 0.8 1.1 2,973 1 0 0.0
T3
31 0 0.0 1.0 5,710 2 16,240 0.4
T4
555 0 4.0 1.0 600 0 0 0.0
SYSTEM
429 0 3.9 2.5 280 0 49 0.2
TEMP
134 0 0.4 48.1 238 0 0 0.0
T1_16K
31 0 0.0 1.0 31 0 0 0.0
T2_16K
31 0 0.0 1.0 31 0 0 0.0
Buffer Pool Statistics for DB: DB Instance: DB Snaps: 12 -13
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
D 49,625 100.0 263,975,320 4,909 69,666 0 0 20,290
16k 7,056 100.0 30 0 0 0 0 0
Instance Recovery Stats for DB: DB Instance: DB Snaps: 12 -13
-> B: Begin snapshot, E: End snapshot
Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
B 0 0 10518 10000 73728 186265 10000
E 0 0 13189 10000 73728 219498 10000
Buffer Pool Advisory for DB: DB Instance: DB End Snap: 13
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
D 32 .1 3,970 205.60 4,726,309,734
D 64 .2 7,940 111.86 2,571,419,284
D 96 .2 11,910 59.99 1,379,092,849
D 128 .3 15,880 32.24 741,224,090
D 160 .4 19,850 16.05 369,050,333
D 192 .5 23,820 1.28 29,352,221
D 224 .6 27,790 1.05 24,077,507
D 256 .6 31,760 1.03 23,723,389
D 288 .7 35,730 1.02 23,518,434
D 320 .8 39,700 1.01 23,328,106
D 352 .9 43,670 1.01 23,193,257
D 384 1.0 47,640 1.00 23,064,957
D 400 1.0 49,625 1.00 22,987,576
D 416 1.0 51,610 1.00 22,927,325
D 448 1.1 55,580 0.99 22,824,032
D 480 1.2 59,550 0.99 22,713,509
D 512 1.3 63,520 0.99 22,649,147
D 544 1.4 67,490 0.98 22,605,489
D 576 1.4 71,460 0.98 22,525,897
D 608 1.5 75,430 0.97 22,407,418
D 640 1.6 79,400 0.96 22,022,381
16k 16 .1 1,008 1.00 139,218,299
16k 32 .3 2,016 1.00 139,211,699
16k 48 .4 3,024 1.00 139,207,678
16k 64 .6 4,032 1.00 139,202,581
16k 80 .7 5,040 1.00 139,198,339
16k 96 .9 6,048 1.00 139,193,448
16k 112 1.0 7,056 1.00 139,188,446
16k 128 1.1 8,064 1.00 139,183,808
16k 144 1.3 9,072 1.00 139,179,598
16k 160 1.4 10,080 1.00 139,175,656
16k 176 1.6 11,088 1.00 139,170,607
16k 192 1.7 12,096 1.00 139,166,491
16k 208 1.9 13,104 1.00 139,162,487
16k 224 2.0 14,112 1.00 139,158,197
16k 240 2.1 15,120 1.00 139,153,797
16k 256 2.3 16,128 1.00 139,149,365
16k 272 2.4 17,136 1.00 139,144,252
16k 288 2.6 18,144 1.00 139,140,121
16k 304 2.7 19,152 1.00 139,135,435
16k 320 2.9 20,160 1.00 139,130,845
Buffer wait Statistics for DB: DB Instance: DB Snaps: 12 -13
-> ordered by wait time desc, waits desc
Tot Wait Avg
Class Waits Time (s) Time (ms)
data block 19,912 8 0
undo header 343 0 0
segment header 34 0 0
undo block 1 0 0
Enqueue activity for DB: DB Instance: DB Snaps: 12 -13
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by Wait Time desc, Waits desc
Avg Wt Wait
Eq Requests Succ Gets Failed Gets Waits Time (ms) Time (s)
TM 981,781 981,773 0 7 1,365.43 10
TX 983,944 983,906 0 412 .59 0
HW 4,645 4,645 0 32 .09 0
Rollback Segment Stats for DB: DB Instance: DB Snaps: 12 -13
->A high value for "Pct Waits" suggests more rollback segments may be required
->RBS stats may not be accurate between begin and end snaps when using Auto Undo
managment, as RBS may be dynamically created and dropped as needed
Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
0 155.0 0.00 0 0 0 0
1 202,561.0 0.00 31,178,710 40 2 3
2 191,044.0 0.00 30,067,156 23 2 6
3 195,891.0 0.00 30,470,548 39 1 3
4 203,928.0 0.00 31,822,638 38 2 5
5 196,386.0 0.00 -4,264,350,168 38 1 3
6 204,125.0 0.00 32,081,200 24 1 7
7 192,169.0 0.00 33,732,012 45 3 6
8 195,819.0 0.00 30,503,550 40 2 2
9 202,905.0 0.00 31,595,438 40 2 4
10 195,796.0 0.00 30,566,652 29 4 9
Rollback Segment Storage for DB: DB Instance: DB Snaps: 12 -13
->Optimal Size should be larger than Avg Active
RBS No Segment Size Avg Active Optimal Size Maximum Size
0 385,024 0 385,024
1 12,705,792 944,176 2,213,732,352
2 11,657,216 1,548,937 2,214,715,392
3 13,754,368 832,465 243,392,512
4 13,754,368 946,902 235,069,440
5 12,705,792 964,352 2,195,374,080
6 20,045,824 1,232,438 2,416,041,984
7 12,705,792 977,490 3,822,182,400
8 10,608,640 875,068 243,392,512
9 11,657,216 878,119 243,392,512
10 18,997,248 1,034,104 2,281,889,792
Undo Segment Summary for DB: DB Instance: DB Snaps: 12 -13
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed
Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
1 44,441 ########## 47 2 0 0 0/0/0/0/0/0
Undo Segment Stats for DB: DB Instance: DB Snaps: 12 -13
-> ordered by Time desc
Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
28-Jun 11:56 7,111 ######## 47 1 0 0 0/0/0/0/0/0
28-Jun 11:46 10,782 ######## 18 2 0 0 0/0/0/0/0/0
28-Jun 11:36 6,170 ######## 42 1 0 0 0/0/0/0/0/0
28-Jun 11:26 4,966 ######## 13 1 0 0 0/0/0/0/0/0
28-Jun 11:16 6,602 ######## 40 1 0 0 0/0/0/0/0/0
28-Jun 11:06 8,810 ######## 10 1 0 0 0/0/0/0/0/0
Latch Activity for DB: DB Instance: DB Snaps: 12 -13
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
active checkpoint queue 9,585 0.0 0.0 0 0
alert log latch 158 0.0 0 0
archive control 220 0.0 0 0
archive process latch 220 0.5 1.0 0 0
cache buffer handles 264,718 0.0 0.0 0 0
cache buffers chains 416,051,175 0.0 0.0 4 401,018 0.0
cache buffers lru chain 1,285,963 0.0 0.0 0 1,206,550 0.0
channel handle pool latc 4,927 0.0 0 0
channel operations paren 10,788 0.0 0 0
checkpoint queue latch 528,319 0.0 0.0 0 69,506 0.0
child cursor hash table 35,371 0.0 0 0
Consistent RBA 854,833 0.0 0.0 0 0
dml lock allocation 1,963,007 0.9 0.0 0 0
dummy allocation 4,995 0.0 0 0
enqueue hash chains 4,014,593 0.5 0.0 0 0
enqueues 94,666 0.0 0.0 0 0
event group latch 2,340 0.0 0 0
FAL request queue 72 0.0 0 0
FIB s.o chain latch 310 0.0 0 0
FOB s.o list latch 6,769 0.0 0 0
global tx hash mapping 10,388 0.0 0 0
hash table column usage 16 0.0 0 479 0.0
job workq parent latch 0 0 316 0.0
job_queue_processes para 116 0.0 0 0
ktm global data 200 0.0 0 0
lgwr LWN SCN 855,008 0.0 0.0 0 0
library cache 5,836,900 0.4 0.0 0 8,926 0.6
library cache load lock 468 0.0 0 0
library cache pin 3,510,695 0.0 0.0 0 0
library cache pin alloca 1,402,523 0.0 0.0 0 0
list of block allocation 6,115 0.0 0 0
loader state object free 620 0.0 0 0
message pool operations 262 0.0 0 0
messages 2,664,950 0.4 0.0 0 0
mostly latch-free SCN 856,000 0.1 0.0 0 0
multiblock read objects 3,184 0.0 0 0
ncodef allocation latch 57 0.0 0 0
object stats modificatio 8 0.0 0 0
post/wait queue 6,183 0.0 0 3,082 0.0
process allocation 4,677 0.0 0 2,340 0.0
process group creation 4,677 0.0 0 0
redo allocation 4,784,936 0.5 0.0 0 0
redo copy 0 0 3,081,261 0.3
redo writing 2,576,299 0.0 0.2 0 0
row cache enqueue latch 3,017,144 0.0 0.0 0 0
row cache objects 5,049,552 0.8 0.0 0 92 0.0
sequence cache 984,824 0.0 0.1 0 0
session allocation 110,417 0.0 0.0 0 0
session idle bit 205,319 0.0 0 0
session switching 57 0.0 0 0
Latch Activity for DB: DB Instance: DB Snaps: 12 -13
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
session timer 1,204 0.0 0 0
shared pool 2,409,725 0.1 0.1 0 0
simulator hash latch 7,439,429 0.0 0.0 0 0
simulator lru latch 202 0.0 0 128,961 0.2
sort extent pool 1,053 0.0 0 0
SQL memory manager worka 67 0.0 0 0
temp lob duration state 187 0.0 0 0
transaction allocation 7,290 0.0 0 0
transaction branch alloc 5,668 0.0 0 0
undo global data 3,002,808 0.4 0.0 0 0
user lock 8,642 0.0 0 0
Latch Sleep breakdown for DB: DB Instance: DB Snaps: 12 -13
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
cache buffers chains 416,051,175 197,296 750 196776/298/2
15/7/0
row cache objects 5,049,552 42,368 38 42330/38/0/0
/0
redo allocation 4,784,936 24,766 77 24697/61/8/0
/0
library cache 5,836,900 23,477 276 23207/264/6/
0/0
enqueue hash chains 4,014,593 21,061 26 21035/26/0/0
/0
dml lock allocation 1,963,007 17,887 16 17872/14/1/0
/0
undo global data 3,002,808 12,350 8 12342/8/0/0/
0
messages 2,664,950 10,131 5 10126/5/0/0/
0
shared pool 2,409,725 1,362 189 1175/185/2/0
/0
row cache enqueue latch 3,017,144 470 7 463/7/0/0/0
mostly latch-free SCN 856,000 434 1 433/1/0/0/0
library cache pin 3,510,695 345 4 341/4/0/0/0
sequence cache 984,824 53 4 49/4/0/0/0
library cache pin allocati 1,402,523 35 1 34/1/0/0/0
redo writing 2,576,299 5 1 4/1/0/0/0
archive process latch 220 1 1 0/1/0/0/0
Latch Miss Sources for DB: DB Instance: DB Snaps: 12 -13
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
archive process latch kcrrpa 0 1 0
cache buffers chains kcbgtcr: fast path 0 346 188
cache buffers chains kcbgtcr: kslbegin excl 0 163 239
cache buffers chains kcbrls: kslbegin 0 86 170
cache buffers chains kcbget: pin buffer 0 53 49
cache buffers chains kcbgcur: kslbegin 0 44 20
cache buffers chains kcbnlc 0 38 22
cache buffers chains kcbget: exchange 0 8 16
cache buffers chains kcbchg: kslbegin: call CR 0 3 21
cache buffers chains kcbget: exchange rls 0 3 2
cache buffers chains kcbnew 0 3 0
cache buffers chains kcbbxsv 0 2 0
cache buffers chains kcbchg: kslbegin: bufs not 0 1 23
dml lock allocation ktaiam 0 13 1
dml lock allocation ktaidm 0 3 15
enqueue hash chains ksqgtl3 0 22 2
enqueue hash chains ksqrcl 0 4 24
library cache kglic 0 55 4
library cache kglhdgn: child: 0 42 86
library cache kglobpn: child: 0 26 32
library cache kglpndl: child: after proc 0 14 0
library cache kglpndl: child: before pro 0 13 73
library cache kglpin: child: heap proces 0 12 29
library cache kgllkdl: child: cleanup 0 11 4
library cache kglupc: child 0 4 7
library cache kgldti: 2child 0 2 4
library cache kglpnp: child 0 1 4
library cache pin kglpnal: child: alloc spac 0 3 3
library cache pin kglpndl 0 1 1
library cache pin alloca kglpnal 0 1 0
messages ksaamb: after wakeup 0 3 2
messages ksarcv 0 2 2
mostly latch-free SCN kcslcu3 0 1 1
redo allocation kcrfwr 0 74 8
redo allocation kcrfwi: more space 0
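Returning to the original question of whether 74,144 waits and 28,100 time waited is "too much": raw totals from v$system_event mean little on their own, since they accumulate over the whole instance uptime. What matters is the average wait per 'log file parallel write'. Assuming TIME_WAITED is in centiseconds (its documented unit in v$system_event; TIME_WAITED_MICRO holds microseconds), the figures from the question work out as:

```python
# Figures from the original v$system_event query
total_waits = 74_144
time_waited_cs = 28_100  # centiseconds

avg_ms = (time_waited_cs / total_waits) * 10  # 1 cs = 10 ms
print(f"average log file parallel write: {avg_ms:.2f} ms")
```

That comes out to roughly 3.8 ms per redo write. A common rule of thumb is that averages up to about 10 ms are unremarkable for redo writes on spinning disk, so these numbers by themselves do not indicate an I/O problem.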