Block file sync

Hi, how can we block users from uploading files to cloud storage?
Murugan

Hi,
The number of commits can be retrieved from v$sysstat and v$sesstat.
There are no commit statements in any trace file; you need to look at lines starting with XCTEND in the raw trace data.
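For example, a minimal sketch (statistic names as in v$sysstat/v$sesstat; values are cumulative since instance or session start):
select name, value
from   v$sysstat
where  name in ('user commits', 'user rollbacks');

-- per session, highest committers first
select s.sid, n.name, s.value
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'user commits'
order by s.value desc;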
Also, the size of the log_buffer has nothing to do with the log file sync event, and setting a big log_buffer is the typical 'more is better' tuning which doesn't help.
You need to investigate the speed of the devices holding the online redo logs; do NOT locate online redo logs on RAID-5 devices.
Also, posting 'Any one ??' when you don't get a response immediately shows you don't seem to understand this is a volunteer forum.
Sybrand Bakker
Senior Oracle DBA

Similar Messages

  • Statspack: High log file sync timeouts and waits

    Hi all,
    Please see an extract from our statpack report:
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~
    Event                         Waits    Time (s)   % Total Ela Time
    log file sync               349,713     215,674              74.13
    db file sequential read  16,955,622      31,342              10.77
    CPU time                                 21,787               7.49
    direct path read (lob)       92,762       8,910               3.06
    db file scattered read    4,335,034       4,439               1.53

                                                   Total Wait   Avg wait   Waits
    Event                         Waits   Timeouts   Time (s)       (ms)    /txn
    log file sync               349,713    150,785    215,674        617     1.8
    db file sequential read  16,955,622          0     31,342          2    85.9
    I hope the above is readable. I'm concerned about the very high number of waits and timeouts, particularly around the log file sync event. From reading around, I suspect that the disk our redo log sits on isn't fast enough.
    1) Is this conclusion correct, and are these timeouts excessively high (70% seems high...)?
    2) I see high waits on almost every other event (but not timeouts); does this point towards an incorrect database setup (given our very high load of 160 executes per second)?
    Any help would be much appreciated.
    Jonathan

    What time frame does this report cover?
    It looks like your disk storage can't keep up with the volume of I/O requests from your database.
    The first few things to look at: what are the I/O-intensive SQL statements in your database? Are these SQLs doing unnecessary full table scans?
    Find out the hot blocks and the objects they belong to.
    Check the v$session_wait view.
    Is there any other suspicious activity going on on your server, such as programs other than Oracle doing heavy I/O? Are there any core dumps going on?
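    A sketch of the sort of v$session_wait check meant here (column names as in 9i/10g; the filters are just one reasonable choice):
    select sid, event, p1text, p1, p2text, p2, seconds_in_wait, state
    from   v$session_wait
    where  event not like 'SQL*Net message%'
    and    event not like '%timer%'
    order  by seconds_in_wait desc;
    -- for 'db file ...' waits, P1 = file# and P2 = block#, which can be mapped
    -- to the owning object via dba_extents to identify the hot blocks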

  • Offline Files sync gives Access Denied on Windows 8.1 Enterprise

    A small number of our staff have now been issued with Windows 8.1 Enterprise hybrid tablet computers; however, there is a problem with using Offline Files on them - when synchronising, it responds "Access Denied".
    The tablets have Windows 8.1 Enterprise with all the latest updates on them. Staff users have a home folder on the network under \\server\staff\homes\departmentname\username which gets mapped to U: and their My Documents is redirected there. The server is currently Windows Server 2003 R2 SP2.
    We have tried:
    Resetting the Offline Files cache using the FormatDatabase registry key
    Using Group Policy Objects to force Offline Files synchronisation at logon and logoff
    Clearing the local cached copy of the user's profile from the machine and getting them to log back on to recreate it
    Setting up Offline Files event logging to the event viewer - this provides no useful information as it only logs disconnect/reconnect and logoff/logon events
    Forcing Group Policy update using gpupdate /force
    Forcing synchronisation using PowerShell and https://msdn.microsoft.com/en-us/library/windows/desktop/bb309189%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
    As suggested by http://support.microsoft.com/kb/275461 we gave the All Staff security group Read permissions on F:\Staff (which is the one that is shared as \\server\staff) and then blocked inheritance for folders below that
    We also checked the following:
    The CSC cache has not been relocated
    No error 7023 or event 7023 errors relating to Offline Files are present in the event logs
    The Offline Files service is running
    The OS is already Windows 8.1 Enterprise, so installing the Pro Pack is not applicable
    In HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\UserState\UserStateTechnologies\ConfigurationControls  all the values are set to 0 and not 1
    We do not use System Center Configuration Manager
    No errors were found in the Folder Redirection event logs
    None of these solved the problem, does anyone have any suggestions?
    Here is the error we are seeing:
    Thanks,
    Dan Jackson (Lead ITServices Technician)
    Long Road Sixth Form College
    Cambridge, UK

    Hi,
    Generally speaking, this problem most probably occurs at the file server client.
    Firstly, please check the shared folder's caching (sync) settings:
    Shared folder properties\Sharing\Advanced Sharing\Caching
    Also check the shared folder's user list and make sure the problematic user accounts have full permission.
    On the other hand, are you able to access the shared folder directly in Windows Explorer?
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Yes, the user can access the shared folder in Windows Explorer. The user has the following permissions:
    Traverse Folder/Execute File
    List Folder/Read Data
    Read Attributes
    Read Extended Attributes
    Create Files/Write Data
    Create Folders/Append Data
    Write Attributes
    Write Extended Attributes
    Delete
    Read Permissions
    Here is a screenshot of how the Caching settings are set up on the top-level Staff share.

  • Log file sync question

    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the system 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU-related work on steps 1, 4, 5 and 6 (or to waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X', amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?
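    (For what it's worth, the two averages being compared can be pulled with something like the sketch below - the time_waited_micro column assumes 10g or later, and the figures are cumulative since instance startup, so deltas over an interval are more meaningful.)
    select event,
           total_waits,
           round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) avg_ms
    from   v$system_event
    where  event in ('log file sync', 'log file parallel write');

    -- the corresponding LGWR-side statistics
    select name, value
    from   v$sysstat
    where  name in ('redo writes', 'redo write time', 'redo synch writes', 'redo synch time');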

    Tony Hasler wrote:
    Can anybody point me in the direction of some facts?
    It depends on what you mean by facts - presumably only the people who wrote the code know what really happens; the rest of us have to guess.
    You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
    This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
    You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
    If you're stuck with Oracle diagnostics only then:
    redo size / redo synch writes for sessions will tell you the typical "commit size"
    (redo size + redo wastage) / redo writes for lgwr will tell you the typical redo write size
    If you have a significant number of small process "commit sizes" per write (more than the CPU count, say) then you may be looking at Kevin's storm.
    Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
    It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
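    (A rough instance-wide sketch of those two ratios and of the histogram check; v$sysstat figures are cumulative since startup, so take deltas over an interval, and the per-session / per-lgwr versions would use v$sesstat instead.)
    select round(rs.value / nullif(rsw.value, 0))              avg_commit_bytes,
           round((rs.value + rwa.value) / nullif(rw.value, 0)) avg_lgwr_write_bytes
    from   (select value from v$sysstat where name = 'redo size')         rs,
           (select value from v$sysstat where name = 'redo wastage')      rwa,
           (select value from v$sysstat where name = 'redo synch writes') rsw,
           (select value from v$sysstat where name = 'redo writes')       rw;

    -- wait-time distribution for the critical events (10g and later)
    select event, wait_time_milli, wait_count
    from   v$event_histogram
    where  event in ('log file sync', 'log file parallel write')
    order  by event, wait_time_milli;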
    Regards
    Jonathan Lewis

  • Log file sync top event during performance test -av 36ms

    Hi,
    During the performance test for our product before deployment into production, I see "log file sync" on top, with an average wait of 36 ms, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
    log buffer space                      4,161         558    134    3.5 Configurat
    Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" is having such a slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
        Transactions:               18.4
    Some of the top background wait events:
    Background Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
    Backup: sbtinfo2                      1     0          1     500      0.0     .0
    During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
    First test:
    Event                          Wait Class      Waits    Time(s)  Avg Time(ms)  %DB time
    SQL*Net more data from client  Network     2,133,486    1,270.7           0.6     61.24
    CPU time                       N/A                         487.1          N/A     23.48
    log file sync                  Commit          99,459      129.5          1.3      6.24
    log file parallel write        System I/O     100,732      126.6          1.3      6.10
    SQL*Net more data to client    Network        451,810      103.1          0.2      4.97
    -direct path write             User I/O       121,044       52.5          0.4      2.53
    -db file parallel write        System I/O         986       22.8         23.1      1.10

    Second test:
    Event                          Wait Class      Waits    Time(s)  Avg Time(ms)  %DB time
    log file sync                  Commit         208,355    7,407.6          35.6     46.57
    direct path write              User I/O       646,849    3,604.7           5.6     22.66
    log file parallel write        System I/O     208,564    2,528.4          12.1     15.90
    CPU time                       N/A                       1,599.3           N/A     10.06
    db file parallel write         System I/O       4,264      784.7         184.0      4.93
    -SQL*Net more data from client Network      7,407,435      279.7           0.0      1.76
    -SQL*Net more data to client   Network      2,714,916       64.6           0.0      0.41
    {code}
    To sum it up:
    1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
    2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer as the number of CPUs on the host is only 4.
    {code}
    select *from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.
    Edited by: Kunwar on May 18, 2013 2:20 PM

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
    latch: cache buffers lru c            3     0          0       1      0.0     .0
    2. The log file sync wait is a result of commits --> you are committing too often, maybe even for every individual record.
    Thanks for the explanation. Actually my question is WHY is it so slow (avg wait of 91 ms)?
    3. Your I/O subsystem hosting the online redo log files can be a limiting factor. We don't know anything about your online redo log configuration.
    Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
    04:13:26 perf_monitor@PERFDB1> select *from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
             3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
    Edited by: Kunwar on May 18, 2013 2:46 PM

  • File-RFC-File Sync-- BPM error..

    Hi All,
    My scenario is File -- RFC -- File, a synchronous scenario. The scenario needs to be designed using BPM (it's the client's requirement), and I have used a Receive step, a BLOCK, and in the block Transform, Sync Send, Transform; within the block I have used a Control step with Cancel Process, and outside the block a Send step. When I execute the scenario, in SXMB_MONI I can see the response message, but when I look in SXMB_MONI_BPE the flow is blocked in the BLOCK1 step and I am not getting the response file in my target FTP directory. So I request you to post any links on how to debug this issue.
    regards,
    sai

    BLOCK and in the block Transform, Sync Send, Transform and with in the block i have used control step with cancel
    process.
    when i execute the scenario, in sxmb_moni i can able to see the response message and when i see in SXMB_MONI_BPE the flow
    was blocked in the BLOCK1 Step and i am not getting the response file in my target ftp directory.
    The BPM is working perfectly as designed... you have maintained a Cancel Process step... how, in such a case, will the BPM proceed? The BPM is bound to stop inside the block itself, BLOCK1.
    What is the reason for having a Block over Transform --> SYNC Send --> Transform?
    Your BPM should be:
    Receive --> Transform --> Sync_Send --> Transform --> Send.
    If you want to have a block, then have it for the individual steps with an Exception Branch, and in this Exception Branch have a Control step cancelling the process.
    Regards,
    Abhishek.

  • 45 min long session of log file sync waits between 5000 and 20000 ms

    45 min long log file sync waits between 5000 and 20000 ms
    Encountering a rather unusual performance issue. Once every 4 hours I am seeing a 45 minute long log file sync wait event being reported using Spotlight on Oracle. For the first 30 minutes the event wait is approx 5000 ms, followed by an increase to around 20000 ms for the next 15 minutes, before rapidly dropping off; normal operation then continues for the next 3 hours and 15 minutes before the cycle repeats itself. The issue appears to maintain its schedule independently of restarting the database. Statspack reports do not show an increase in commits or executions or any new SQL running during the time the issue is occurring. We have two production environments, both running identical applications with similar usage, and we do not see the issue on the other system. I am leaning towards this being a hardware issue, but the 4 hour interval regardless of load on the database has me baffled. If it were a disk or controller cache issue one would expect to see the interval change with database load.
    I cycle my redo logs and archive them just fine with log file switches every 15-20 minutes. Even during this unusally long and high session of log file sync waits I can see that the redo log files are still switching and are being archived.
    The redo logs are on a RAID 10, we have 4 redo logs at 1 GB each.
    I've run statspack reports on hourly intervals around this event:
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~ Wait % Total
    Event Waits Time (cs) Wt Time
    log file sync 756,729 2,538,034 88.47
    db file sequential read 208,851 153,276 5.34
    log file parallel write 636,648 129,981 4.53
    enqueue 810 21,423 .75
    log file sequential read 65,540 14,480 .50
    And here is a sample while not encountering the issue:
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~ Wait % Total
    Event Waits Time (cs) Wt Time
    log file sync 953,037 195,513 53.43
    log file parallel write 875,783 83,119 22.72
    db file sequential read 221,815 63,944 17.48
    log file sequential read 98,310 18,848 5.15
    db file scattered read 67,584 2,427 .66
    Yes, I know I am already tight on I/O for my redo even during normal operations, yet my redo and archiving work just fine for 3 hours and 15 minutes (11 to 15 log file switches). These normal switches result in a log file sync wait of about 5000 ms for about 45 seconds while the 1GB redo log is being written and then archived.
    I welcome any and all feedback.
    Message was edited by:
    acyoung1
    Message was edited by:
    acyoung1

    Lee,
    log_buffer = 1048576 - we use a standard of 1 MB for our log buffer; we've not altered the setting. It is my understanding that Oracle typically recommends that you not exceed 1 MB for the log_buffer, stating that a larger buffer normally does not increase performance.
    I would agree that tuning the log_buffer parameter may be a place to consider; however, this issue lasts for ~45 minutes once every 4 hours regardless of database load. So for 3 hours and 15 minutes, during both peak and low usage, the buffer cache, redo log and archival processes run just fine.
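    (If log_buffer sizing is ever in doubt, the usual quick check is the buffer-pressure statistics in v$sysstat - a sketch, cumulative since startup:)
    select name, value
    from   v$sysstat
    where  name in ('redo buffer allocation retries',
                    'redo log space requests',
                    'redo log space wait time');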
    A bit more information from statspack reports:
    Here is a sample while the issue is occuring.
    Snap Id Snap Time Sessions
    Begin Snap: 661 24-Mar-06 12:45:08 87
    End Snap: 671 24-Mar-06 13:41:29 87
    Elapsed: 56.35 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608 log_buffer: 1048576
    db_block_size: 8192 shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 615,141.44 2,780.83
    Logical reads: 13,241.59 59.86
    Block changes: 2,255.51 10.20
    Physical reads: 144.56 0.65
    Physical writes: 61.56 0.28
    User calls: 1,318.50 5.96
    Parses: 210.25 0.95
    Hard parses: 8.31 0.04
    Sorts: 16.97 0.08
    Logons: 0.14 0.00
    Executes: 574.32 2.60
    Transactions: 221.21
    % Blocks changed per Read: 17.03 Recursive Call %: 26.09
    Rollback per transaction %: 0.03 Rows per Sort: 46.87
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 98.91 In-memory Sort %: 100.00
    Library Hit %: 98.89 Soft Parse %: 96.05
    Execute to Parse %: 63.39 Latch Hit %: 99.87
    Parse CPU to Parse Elapsd %: 90.05 % Non-Parse CPU: 85.05
    Shared Pool Statistics Begin End
    Memory Usage %: 89.96 92.20
    % SQL with executions>1: 76.39 67.76
    % Memory for SQL w/exec>1: 72.53 63.71
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~ Wait % Total
    Event Waits Time (cs) Wt Time
    log file sync 756,729 2,538,034 88.47
    db file sequential read 208,851 153,276 5.34
    log file parallel write 636,648 129,981 4.53
    enqueue 810 21,423 .75
    log file sequential read 65,540 14,480 .50
    And this is a sample during "normal" operation.
    Snap Id Snap Time Sessions
    Begin Snap: 671 24-Mar-06 13:41:29 88
    End Snap: 681 24-Mar-06 14:42:57 88
    Elapsed: 61.47 (mins)
    Cache Sizes
    ~~~~~~~~~~~
    db_block_buffers: 196608 log_buffer: 1048576
    db_block_size: 8192 shared_pool_size: 67108864
    Load Profile
    ~~~~~~~~~~~~ Per Second Per Transaction
    Redo size: 716,776.44 2,787.81
    Logical reads: 13,154.06 51.16
    Block changes: 2,627.16 10.22
    Physical reads: 129.47 0.50
    Physical writes: 67.97 0.26
    User calls: 1,493.74 5.81
    Parses: 243.45 0.95
    Hard parses: 9.23 0.04
    Sorts: 18.27 0.07
    Logons: 0.16 0.00
    Executes: 664.05 2.58
    Transactions: 257.11
    % Blocks changed per Read: 19.97 Recursive Call %: 25.87
    Rollback per transaction %: 0.02 Rows per Sort: 46.85
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 99.02 In-memory Sort %: 100.00
    Library Hit %: 98.95 Soft Parse %: 96.21
    Execute to Parse %: 63.34 Latch Hit %: 99.90
    Parse CPU to Parse Elapsd %: 96.60 % Non-Parse CPU: 84.06
    Shared Pool Statistics Begin End
    Memory Usage %: 92.20 88.73
    % SQL with executions>1: 67.76 75.40
    % Memory for SQL w/exec>1: 63.71 68.28
    Top 5 Wait Events
    ~~~~~~~~~~~~~~~~~ Wait % Total
    Event Waits Time (cs) Wt Time
    log file sync 953,037 195,513 53.43
    log file parallel write 875,783 83,119 22.72
    db file sequential read 221,815 63,944 17.48
    log file sequential read 98,310 18,848 5.15
    db file scattered read 67,584 2,427 .66

  • Log file sync waits

    10.2.0.2 aix 5.3 64bit archivelog mode.
    I'm going to attempt to describe the system first and then outline the issue. The database is about 1 GB in size, of which only about 400 MB is application data. There is only one table in the schema that is very active, with all transactions inserting and/or updating a row to log the user activity. The rest of the tables are used primarily for reads by the users and are periodically updated by the application administrator with application code. There's about 1.2 GB of archive logs generated per day, from three 50 MB redo logs, all on the same filesystem.
    The problem: We randomly have issues with users being kicked out of the application or hung up for a period of time. This application is used at a remote site, and many times we can attribute the users' issues to network delays or problems with a terminal server they are logging into. Today, however, they called and I noticed an abnormally high amount of 'log file sync' waits.
    I asked the application admin if there could have been more activity during that time frame and more frequent commits than normal, but he says there was not. My next thought was that there might be an issue with the I/O subsystem that the logs are on. So I went to our AIX admin to find out the activity of that filesystem during that time frame. She had an nmon report generated that shows the RAID-1 disk group peak activity during that time was only 10%.
    Now I took two AWR reports and compared some of the metrics to see if indeed there was the same amount of activity, and it does look like the load was the same. With the same amount of activity and commits during both time periods, wouldn't that lead to it being time spent waiting on writes to the disk that the redo logs are on? If so, why wouldn't the nmon report show a higher percentage of disk activity?
    I can provide more values from the awr reports if needed.
              per sec          per trx
    Redo size:     31,226.81     2,334.25
    Logical reads:     646.11          48.30
    Block changes:     190.80          14.26
    Physical reads:     0.65          0.05
    Physical writes:     3.19          0.24
    User calls:     69.61          5.20
    Parses:          34.34          2.57
    Hard parses:     19.45          1.45
    Sorts:          14.36          1.07
    Logons:          0.01          0.00
    Executes:     36.49          2.73
    Transactions:     13.38
    Redo size:     33,639.71      2,347.93
    Logical reads:     697.58          48.69
    Block changes:     215.83          15.06
    Physical reads:     0.86          0.06
    Physical writes:     3.26          0.23
    User calls:     71.06          4.96
    Parses:          36.78          2.57
    Hard parses:     21.03          1.47
    Sorts:          15.85          1.11
    Logons:          0.01          0.00
    Executes:     39.53          2.76
    Transactions:     14.33
                        Total          Per sec          Per Trx
    redo blocks written           252,046      70.52           5.27
    redo buffer allocation retries      7           0.00           0.00
    redo entries                167,349      46.82           3.50
    redo log space requests      7           0.00           0.00
    redo log space wait time      49           0.01           0.00
    redo ordering marks           2,765           0.77           0.06
    redo size                111,612,156      31,226.81      2,334.25
    redo subscn max counts      5,443           1.52           0.11
    redo synch time           47,910           13.40           1.00
    redo synch writes           64,433           18.03           1.35
    redo wastage                13,535,756      3,787.03      283.09
    redo write time                27,642           7.73           0.58
    redo writer latching time      2           0.00           0.00
    redo writes                48,507           13.57           1.01
    user commits                47,815           13.38           1.00
    user rollbacks                0           0.00           0.00
    redo blocks written           273,363      76.17           5.32
    redo buffer allocation retries      6           0.00           0.00
    redo entries                179,992      50.15           3.50
    redo log space requests      6           0.00           0.00
    redo log space wait time      18           0.01           0.00
    redo ordering marks           2,997           0.84           0.06
    redo size                120,725,932      33,639.71      2,347.93
    redo subscn max counts      5,816           1.62           0.11
    redo synch time           12,977           3.62           0.25
    redo synch writes           66,985           18.67           1.30
    redo wastage                14,665,132      4,086.37      285.21
    redo write time                11,358           3.16           0.22
    redo writer latching time      6           0.00           0.00
    redo writes                52,521           14.63           1.02
    user commits                51,418           14.33           1.00
    user rollbacks                0           0.00           0.00
    Edited by: PktAces on Oct 1, 2008 1:45 PM
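    (For reference, the per-write averages implied by those figures can be computed straight from v$sysstat - a sketch; the two time statistics are in centiseconds, hence the factor of 10 to get ms:)
    select round(10 * st.value / nullif(sw.value, 0), 1) avg_redo_synch_ms,
           round(10 * wt.value / nullif(w.value, 0), 1)  avg_redo_write_ms
    from   (select value from v$sysstat where name = 'redo synch time')   st,
           (select value from v$sysstat where name = 'redo synch writes') sw,
           (select value from v$sysstat where name = 'redo write time')   wt,
           (select value from v$sysstat where name = 'redo writes')       w;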

    Mr Lewis,
    Here's the results from the histogram query; the two sets of values were gathered about 15 minutes apart, during a slower than normal activity time.
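    (The exact statement isn't shown; presumably it was something along these lines against v$event_histogram:)
    select event#, event, wait_time_milli, wait_count
    from   v$event_histogram
    where  event = 'log file parallel write'
    order  by wait_time_milli;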
    105     log file parallel write     1     714394
    105     log file parallel write     2     289538
    105     log file parallel write     4     279550
    105     log file parallel write     8     58805
    105     log file parallel write     16     28132
    105     log file parallel write     32     10851
    105     log file parallel write     64     3833
    105     log file parallel write     128     1126
    105     log file parallel write     256     316
    105     log file parallel write     512     192
    105     log file parallel write     1024     78
    105     log file parallel write     2048     49
    105     log file parallel write     4096     31
    105     log file parallel write     8192     35
    105     log file parallel write     16384     41
    105     log file parallel write     32768     9
    105     log file parallel write     65536     1
    105     log file parallel write     1     722787
    105     log file parallel write     2     295607
    105     log file parallel write     4     284524
    105     log file parallel write     8     59671
    105     log file parallel write     16     28412
    105     log file parallel write     32     10976
    105     log file parallel write     64     3850
    105     log file parallel write     128     1131
    105     log file parallel write     256     316
    105     log file parallel write     512     192
    105     log file parallel write     1024     78
    105     log file parallel write     2048     49
    105     log file parallel write     4096     31
    105     log file parallel write     8192     35
    105     log file parallel write     16384     41
    105     log file parallel write     32768     9
    105     log file parallel write     65536     1

  • Log file sync event

    Hi all,
    We are using Oracle 9.2.0.4 on SUSE Linux 10. In our statspack report, one of the top timed events we are getting is log file sync. We are not using any storage. Is this a bug in 9.2.0.4, or what is the solution for it?
    STATSPACK report for
    DB Name         DB Id    Instance     Inst Num Release     Cluster Host
    ai          1495142514 ai                1 9.2.0.4.0   NO      ai-oracle
                Snap Id     Snap Time      Sessions Curs/Sess Comment
    Begin Snap:     241 03-Sep-09 12:17:17      255      63.2
      End Snap:     242 03-Sep-09 12:48:50      257      63.4
       Elapsed:               31.55 (mins)
    Cache Sizes (end)
    ~~~~~~~~~~~~~~~~~
                   Buffer Cache:     1,280M      Std Block Size:         8K
               Shared Pool Size:       160M          Log Buffer:     1,024K
    Load Profile
    ~~~~~~~~~~~~                            Per Second       Per Transaction
                      Redo size:              7,881.17              8,673.87
                  Logical reads:             14,016.10             15,425.86
                  Block changes:                 44.55                 49.04
                 Physical reads:              3,421.71              3,765.87
                Physical writes:                  8.97                  9.88
                     User calls:                254.50                280.10
                         Parses:                 27.08                 29.81
                    Hard parses:                  0.46                  0.50
                          Sorts:                  8.54                  9.40
                         Logons:                  0.12                  0.13
                       Executes:                139.47                153.50
                   Transactions:                  0.91
      % Blocks changed per Read:    0.32    Recursive Call %:    42.75
    Rollback per transaction %:   13.66       Rows per Sort:   120.84
    Instance Efficiency Percentages (Target 100%)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                Buffer Nowait %:  100.00       Redo NoWait %:  100.00
                Buffer  Hit   %:   75.59    In-memory Sort %:   99.99
                Library Hit   %:   99.55        Soft Parse %:   98.31
             Execute to Parse %:   80.58         Latch Hit %:  100.00
    Parse CPU to Parse Elapsd %:   67.17     % Non-Parse CPU:   99.10
    Shared Pool Statistics        Begin   End
                 Memory Usage %:   95.32   96.78   
        % SQL with executions>1:   74.91   74.37
      % Memory for SQL w/exec>1:   68.59   69.14
    Top 5 Timed Events
    ~~~~~~~~~~~~~~~~~~                                                     % Total
    Event                                               Waits    Time (s) Ela Time
    log file sync                                      11,558      10,488    67.52
    db file sequential read                           611,828       3,214    20.69
    control file parallel write                           436         541     3.48
    buffer busy waits                                     626         522     3.36
    CPU time                                                          395     2.54
    Wait Events for DB: ai  Instance: ai  Snaps: 241 -242
    -> s  - second
    -> cs - centisecond -     100th of a second
    -> ms - millisecond -    1000th of a second
    -> us - microsecond - 1000000th of a second
    -> ordered by wait time desc, waits desc (idle events last)
                                                                       Avg
                                                         Total Wait   wait    Waits
    Event                               Waits   Timeouts   Time (s)   (ms)     /txn
    log file sync                      11,558      9,981     10,488    907      6.7
    db file sequential read           611,828          0      3,214      5    355.7
    control file parallel write           436          0        541   1241      0.3
    buffer busy waits                     626        518        522    834      0.4
    control file sequential read          661          0        159    241      0.4
    BFILE read                            734          0        110    151      0.4
    db file scattered read            595,462          0         81      0    346.2
    enqueue                                15          5         19   1266      0.0
    latch free                            109         22          1      8      0.1
    db file parallel read                 102          0          1      6      0.1
    log file parallel write             1,498      1,497          1      0      0.9
    BFILE get length                      166          0          0      3      0.1
    SQL*Net break/reset to clien          199          0          0      1      0.1
    SQL*Net more data to client         5,139          0          0      0      3.0
    BFILE open                             76          0          0      0      0.0
    row cache lock                          5          0          0      0      0.0
    BFILE internal seek                   734          0          0      0      0.4
    BFILE closure                          76          0          0      0      0.0
    db file parallel write                173          0          0      0      0.1
    direct path read                       18          0          0      0      0.0
    direct path write                       4          0          0      0      0.0
    SQL*Net message from client       480,888          0    284,247    591    279.6
    virtual circuit status                 64         64      1,861  29072      0.0
    wakeup time manager                    59         59      1,757  29781      0.0

    Your elapsed time is roughly 2,000 seconds (31.55 minutes, rounded up) - and your log file sync time is roughly 10,000 seconds - which is 5 seconds per second for the duration. Alternatively, your session count is roughly 250 at the start and end of the snapshot - so if we assume that the number of sessions was steady for the duration, every session has suffered 40 seconds of log file sync in the interval. You've recorded roughly 1,500 transactions in the interval (0.91 per second, of which about 13% were rollbacks) - so your log file sync time has averaged more than 6.5 seconds per commit.
    Whichever way you look at it, this suggests that either the log file sync figures are wrong, or you have had a temporary hardware failure. Given that you've had a few buffer busy waits and control file write waits of about 900 ms each, the hardware failure seems likely.
    Check the log file parallel write times to see if this helps to confirm the hypothesis. (Unfortunately some platforms don't report log file parallel write times correctly for earlier versions of 9.2 - so this may not help.)
    You also have 15 enqueue waits averaging 1.2 seconds - check the enqueue stats section of the report to see which enqueue this was: if it was (e.g. CF - control file) then this also helps to confirm the hardware hypothesis.
    It's possible that you had a couple of hardware resets or something of that sort in the interval that stopped your system quite dramatically for a minute or two.
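    For example, a quick check along these lines (a sketch; in 9.2 the average_wait column is reported in centiseconds):
    select event, total_waits, total_timeouts, time_waited, average_wait
    from   v$system_event
    where  event in ('log file sync', 'log file parallel write',
                     'control file parallel write', 'enqueue');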
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Log File Sync

    Hi all, I am using Oracle 10gR2 on Solaris 10.
    I did a SQL trace and came up with the following result:
    Misses in library cache during parse: 591
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                            241        0.06          0.61
      KJC: Wait for msg sends to complete             2        0.00          0.00
      SQL*Net message to client                    1768        0.00          0.00
      SQL*Net message from client                  1768        0.14          7.94
      row cache lock                                  7        0.00          0.00
      gc cr grant 2-way                               1        0.00          0.00
      db file sequential read                        67        0.87          6.73
      gc current grant 2-way                         19        0.00          0.01
      gc current grant busy                          58        0.01          0.08
      log file sync                                3055        0.98       2592.00
      gc current block 2-way                         14        0.00          0.02
      gc cr block 2-way                              77        0.00          0.06
      log file switch completion                     12        0.98          8.80
      gc current request                              5        1.23          6.15
      gc current block lost                           1        0.45          0.45
      lock deadlock retry                             1        0.00          0.00
      latch free                                      1        0.00          0.00
      enq: TM - contention                            1        0.00          0.00
      gc cr request                                   5        1.23          6.14
      gc cr block lost                                1        0.31          0.31
      cr request retry                                1        0.00          0.00
      latch: session allocation                       1        0.00          0.00
      gc buffer busy                                  2        0.98          1.96
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      237      0.08       0.05          0          0         38           0
    Execute   2184      0.51       6.17          1        200        585         364
    Fetch     1884      0.18       6.96         27       3234        195        2127
    total     4305      0.77      13.19         28       3434        818        2491
    Misses in library cache during parse: 21
    Misses in library cache during execute: 19
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             21        0.00          0.01
      row cache lock                                248        0.01          0.08
      gc cr grant 2-way                               3        0.00          0.00
      db file sequential read                        28        1.01          3.22
      gc current grant busy                           8        0.00          0.00
      gc current block 2-way                          5        0.00          0.00
      gc cr block 2-way                               1        0.00          0.00
      log file switch completion                      4        0.98          3.55
      gc current request                              1        1.22          1.22
      latch: KCL gc element parent latch              1        0.00          0.00
      latch: redo allocation                          1        0.00          0.00
      gc current block busy                           1        0.64          0.64
    1181  user  SQL statements in session.
      314  internal SQL statements in session.
    1495  SQL statements in session.
    There are a lot of log file sync waits. There were a lot of INSERTs in the SQL but I did not find any commits. For example:
    INSERT INTO CM_WORKFLOW_AUDIT (AUDIT_TRAIL_ID, CASE_HISTORY_ID,USER_ID,
    GROUP_ID,ACTION_ID,DESCRIPTION,DATE_TIME)
    VALUES
    ('080504001809',2154515,19,2,23,'Ticket[2157817] added to Super Ticket',
    TO_DATE('04-05-2008 13:14:38','dd-mm-yyyy HH24:MI:SS'))
    But there is no commit at the end; there are a lot of INSERTs like this one but no commit at the end of them. So log file sync can't be waiting to flush the buffer into a redo log (well, that is what I think at least). Can someone please tell me what is causing the log file sync waits? By the way, my log_buffer is 12 MB.
    Regards.....

    Hi,
    The number of commits can be retrieved from v$sysstat and v$sesstat.
    There are no commit statements in any trace file; you need to look at lines starting with XCTEND in the raw trace data.
    Also, the size of the log_buffer has nothing to do with the log file sync event, and setting a big log_buffer is the typical 'more is better' tuning which doesn't help.
    You need to investigate the speed of the devices holding the online redo logs; do NOT locate online redo logs on RAID-5 devices.
    Also, posting 'Any one ??' when you don't get a response immediately shows you don't seem to understand this is a volunteer forum.
    Sybrand Bakker
    Senior Oracle DBA

  • Log file sync spike

    We have just deployed a 4-node RAC cluster on 10gR2. We force a log switch every 5 minutes to ensure our Data Guard standby site is relatively up to date; we use ARCH to ship the logs. We are running on a very fast HP XP 12000 with massive amounts of write cache, so we never actually write straight to disk. However, every time we do a log switch and archive the log, we see a massive spike in the log file sync event. This is a real-time billing system, so we monitor transaction response times in ms. Our response time for a transaction can go from 8ms to around 500ms.
    I can't understand why this is happening; not only are our disks fast, but we are also using asynch I/O and ASM. Surely with asynch I/O you should never wait for a write to complete.

    The log file sync event happens when the client waits for LGWR to finish writing to the log file after the client says 'commit'. The way to reduce the number of 'log file sync' events is to increase the speed of the LGWR process or not to commit so often.
    You've described your disk system as very fast - what is the amount of data you write on every log switch? How does the performance of this write relate to your disk system tests? What block size did you use when testing the disk system? As far as I remember, LGWR uses the OS block size and not the DB block size to write data to disk. Try to experiment on your test system - put your log files on a virtual disk created in RAM and run the test case - do you see the delays?
    With such restrictions on transaction time you may want to look at the Oracle TimesTen database (http://www.oracle.com/database/timesten.html)
    Since you've mentioned 10gR2, you could probably use the new feature - asynchronous commit - in this case your transaction will not wait for the LGWR process. Be aware that using the NOWAIT commit opens a small possibility of data loss - the doc describes it quite clearly.
    http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_sqlproc.htm#CIHEDGBF
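    A sketch of the 10gR2 syntax (verify against the documentation linked above before relying on it):
    -- per transaction: the session does not wait for LGWR, and the write may be batched
    commit write batch nowait;

    -- or as a session/system default
    alter session set commit_write = 'BATCH,NOWAIT';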
    Mike

  • Log file sync vs log file parallel write probably not bug 2669566

    This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
    Version : 9.2.0.8
    Platform : Solaris
    Application : Oracle Apps
    The number of commits per second ranges between 10 and 30.
    When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
    Below are just two samples where the ratio is even about 20.
    snap_time              log file parallel write avg    log file sync avg    ratio
    11/05/2008 10:38:26                          8,142              156,343    19.20
    11/05/2008 10:08:23                          8,434              201,915    23.94
    So the wait time for a 'log file sync' is 10 times the wait time for a 'log file parallel write'.
    First I thought that I was hitting bug 2669566.
    But then Jonathan Lewis's blog pointed me to Tanel Poder’s snapper tool.
    And I think that it proves that I am NOT hitting this bug.
    Below is a sample of the output for the log writer.
    -- End of snap 3
    HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
    DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
    DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
    DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
    DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
    DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
    DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
    When adding up the DELTA/SEC values (which are in microseconds) for the wait events, they always add up to roughly a million microseconds.
    In the example above, 781036 + 210432 = 991468 microseconds.
    This is the case for all the snaps taken by snapper.
    So I think that the reported ‘log file parallel write’ time must be more or less correct.
    So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
    Any clues?

    Yes that is true!
    But that is the way I calculate the average wait time: total wait time / total waits.
    So the average wait time per wait for the 'log file sync' event should be near the wait time for the 'log file parallel write' event.
    I use the query below:
    select snap_id
    , snap_time
    , event
    , time_waited_micro
    , (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
    , total_waits
    , (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
    , trunc(((time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24))/((total_waits - p_total_waits)/((snap_time - p_snap_time) * 24))) average
    from (
    select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
    lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
    lag(sn.snap_time) over (partition by se.event order by sn.snap_time) p_snap_time,
    lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
    lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
    row_number() over (partition by event order by sn.snap_id) r
    from perfstat.stats$system_event se, perfstat.stats$snapshot sn
    where se.SNAP_ID = sn.SNAP_ID
    and se.EVENT = 'log file sync'
    order by snap_id, event
    )
    where time_waited_micro - p_time_waited_micro > 0
    order by snap_id desc;
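    For a quick instance-wide sanity check, the same averages can also be read straight from v$system_event (a sketch; these figures are cumulative since instance startup rather than per statspack snapshot):
    select event, total_waits, time_waited_micro,
    round(time_waited_micro / nullif(total_waits, 0)) avg_wait_micro
    from v$system_event
    where event in ('log file sync', 'log file parallel write');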

  • Performance Issue: Wait event "log file sync" and "Execute to Parse %"

    In one of our test environments users are complaining about slow response.
    In the statspack report the following are the top 5 wait events:
    Event                          Waits    Time (cs)    % Wt Time
    log file parallel write        1,046    988          37.71
    log file sync                  775      774          29.54
    db file scattered read         4,946    248          9.47
    db file parallel write         66       248          9.47
    control file parallel write    188      152          5.80
    And after running the same application 4 times, we are getting Execute to Parse % = 0.10. Cursor sharing is forced and query rewrite is enabled.
    When I view v$sql, the following command is parsed frequently:
    EXECUTIONS    PARSE_CALLS    SQL_TEXT
    93380         93380          select SEQ_ORDO_PRC.nextval from DUAL
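    For reference, a sketch of the kind of v$sql query that surfaces such statements (the parse-call threshold is an arbitrary illustration):
    select executions, parse_calls, sql_text
    from v$sql
    where parse_calls > 10000
    order by parse_calls desc;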
    Please suggest how I should troubleshoot this and whether I need to check any more information.
    Regards,
    Sudhanshu Bhandari

    Well, of course, you probably can't eliminate this sort of thing entirely: a setup such as yours is inevitably a compromise. What you can do is make sure your log buffer is a good size (say 10MB or so); that your redo logs are large (at least 100MB each, and preferably large enough to hold one hour or so of redo produced at the busiest time for your database without filling up); and finally set ARCHIVE_LAG_TARGET to something like 1800 seconds or more to ensure a regular, routine, predictable log switch.
    It won't cure every ill, but that sort of setup often means the redo subsystem ceases to be a regular driver of foreground waits.
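    A rough sketch of those settings (the numbers are illustrative, following the suggestion above, not universal recommendations):
    -- force a regular, predictable log switch at least every 30 minutes
    alter system set archive_lag_target = 1800 scope=both;
    -- log_buffer is static, so this only takes effect after an instance restart
    alter system set log_buffer = 10485760 scope=spfile;
    -- add larger online log groups (file spec omitted here, which assumes OMF) and drop the small ones afterwards
    alter database add logfile group 5 size 512m;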

  • I recently purchased a movie (Prometheus) on my computer through iTunes and every time I try to put it on my iPhone, it fails. How can I successfully sync this video? Mind you, every other file syncs flawlessly.

    Can you connect to a wifi source on the device? Enable wifi in settings on the device itself. If you don't have wifi at home, there may be a local hotspot you can use for this.
    The reason this happens is that carriers don't want people downloading huge files that would slow their network as well as eat up your data plan.

  • If I turn on Adobe File Sync will that duplicate files in my Dropbox? (Unnecessary and I can't afford the bandwidth).

    I have gigabytes of Adobe CC files arranged in appropriate client folders in Dropbox (and mirrored on my local hard drive).
    It looks like if I turn File Sync on (needed to access Assets), some or all of this will be duplicated and kept separate from the non-Adobe data in each client folder.
    Messy, unnecessary and 100GB of bandwidth I don't want to pay for! (I'm on 4G so it's expensive.)

    MichaelGli2,
    The Creative Cloud desktop application will only sync files that are in your "Creative Cloud Files" folder on your local machine.
    By default, the "Creative Cloud Files" folder is in your user's home folder (/Users/<yourusername>/Creative Cloud Files/ on Mac, or C:\Users\<yourusername>\Creative Cloud Files\ on Windows).
    Unless you have deliberately changed the location of your Creative Cloud Files folder yourself so that it is inside your local Dropbox folder, there will be no duplication.
