Log file sync vs log file parallel write: probably not bug 2669566

This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
Version : 9.2.0.8
Platform : Solaris
Application : Oracle Apps
The number of commits per second ranges between 10 and 30.
When querying the statspack performance data, the calculated average wait time for the 'log file sync' event is on average 10 times the average wait time for the 'log file parallel write' event.
Below are just two samples where the ratio is even about 20:
"snap_time"     " log file parallel write avg"     "log file sync avg"     "ratio
11/05/2008 10:38:26      8,142     156,343     19.20
11/05/2008 10:08:23     8,434     201,915     23.94
So the wait time for a 'log file sync' is roughly 10 times the wait time for a 'log file parallel write'.
First I thought that I was hitting bug 2669566.
But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
And I think that it proves that I am NOT hitting this bug.
Below is a sample of the output for the log writer.
-- End of snap 3
HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
When adding up the DELTA/SEC figures (which are in microseconds per second) for the wait events, the total always comes to roughly a million microseconds, i.e. virtually the whole of the log writer's second is accounted for.
In the example above: 781036 + 210432 = 991468 microseconds.
This is the case for all the snaps taken by snapper.
So I think that the reported wait time for 'log file parallel write' must be more or less correct.
So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
Any clues?
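
As a cross-check outside statspack, the same ratio can be computed directly from v$system_event. This is a minimal sketch using instance-lifetime totals rather than interval deltas, so treat it only as a rough confirmation:

select round(lfs.time_waited_micro  / lfs.total_waits)  avg_lfs_us,
       round(lfpw.time_waited_micro / lfpw.total_waits) avg_lfpw_us,
       round( (lfs.time_waited_micro  / lfs.total_waits)
            / (lfpw.time_waited_micro / lfpw.total_waits), 2) ratio
from   v$system_event lfs, v$system_event lfpw
where  lfs.event  = 'log file sync'
and    lfpw.event = 'log file parallel write';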

Yes that is true!
But that is the way I calculate the average wait time: average wait time = total wait time / total waits.
So the average wait time per wait for the 'log file sync' event should be near the wait time for the 'log file parallel write' event.
I use the query below:
select snap_id
,      snap_time
,      event
,      time_waited_micro
,      (time_waited_micro - p_time_waited_micro) / ((snap_time - p_snap_time) * 24) corrected_wait_time_h
,      total_waits
,      (total_waits - p_total_waits) / ((snap_time - p_snap_time) * 24) corrected_waits_h
       -- the "average" column reduces to delta wait time / delta waits, in microseconds per wait
,      trunc( ((time_waited_micro - p_time_waited_micro) / ((snap_time - p_snap_time) * 24))
            / ((total_waits       - p_total_waits)       / ((snap_time - p_snap_time) * 24)) ) average
from (
      select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
             lag(sn.snap_id)           over (partition by se.event order by sn.snap_id) p_snap_id,
             lag(sn.snap_time)         over (partition by se.event order by sn.snap_id) p_snap_time,
             lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
             lag(se.total_waits)       over (partition by se.event order by sn.snap_id) p_total_waits,
             row_number()              over (partition by se.event order by sn.snap_id) r
      from   perfstat.stats$system_event se, perfstat.stats$snapshot sn
      where  se.snap_id = sn.snap_id
      and    se.event   = 'log file sync'
     )
where time_waited_micro - p_time_waited_micro > 0
order by snap_id desc;

Similar Messages

  • Wait Events "log file parallel write" / "log file sync" during CREATE INDEX

    Hello guys,
    At my current project I am performing some performance tests for Oracle Data Guard. The question is: "How does a LGWR SYNC transfer influence the system performance?"
    To get some performance values that I can compare, I first built up a normal Oracle database.
    Now I am performing different tests like creating "large" indexes, massive parallel inserts/commits, etc. to get the benchmark.
    My database is an Oracle 10.2.0.4 with multiplexed redo log files on AIX.
    I am creating an index on a "normal" table. I execute "dbms_workload_repository.create_snapshot()" before and after the CREATE INDEX to get an equivalent timeframe for the AWR report.
    After the index is built (roughly 9 GB), I run awrrpt.sql to get the AWR report.
    And now take a look at these values from the AWR
                                                                       Avg
                                                 %Time  Total Wait    wait     Waits
    Event                                 Waits  -outs    Time (s)    (ms)      /txn
    log file parallel write              10,019     .0         132      13      33.5
    log file sync                           293     .7           4      15       1.0
    How can this be possible?
    According to the documentation:
    -> log file sync: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3120
       "Wait Time: The wait time includes the writing of the log buffer and the post."
    -> log file parallel write: http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/waitevents003.htm#sthref3104
       "Wait Time: Time it takes for the I/Os to complete. Even though redo records are written in parallel, the parallel write is not complete until the last I/O is on disk."
    This was also my understanding: the "log file sync" wait time should be higher than the "log file parallel write" wait time, because it includes the I/O and the response time to the user session.
    I could accept it if the values were close to each other (maybe about 1 second in total), but the difference between 132 seconds and 4 seconds is too noticeable.
    Is the behavior of log file sync/write different when performing a DDL like CREATE INDEX (maybe async, like you can influence it with the initialization parameter COMMIT_WRITE)?
    Do you have any idea how these values come about?
    Any thoughts/ideas are welcome.
    Thanks and Regards

    Surachart Opun (HunterX) wrote:
    Thank you for Nice Idea.
    In this case, how can we reduce the "log file parallel write" and "log file sync" wait time? CREATE INDEX with NOLOGGING can help, can't it?
    Yes - if you create the index nologging then you wouldn't be generating that 10GB of redo log, so the waits would disappear.
    Two points on nologging, though:
    - It's "only" an index, so you could always rebuild it in the event of media corruption; but if you had lots of indexes created nologging, this might cause an unreasonable delay before the system was usable again - so you should decide on a fallback option, such as taking a new backup of the tablespace as soon as all the nologging operations have completed.
    - If the database, or that tablespace, is in "force logging" mode, the nologging will not work.
    Don't get too alarmed by the waits, though. My guess is that the "log file sync" waits are mostly from other sessions, and since there aren't many of them, the other sessions are probably not seeing a performance issue. The "log file parallel write" waits are caused by your create index, but they are happening to lgwr in the background, which is running concurrently with your session - so your session is not (directly) affected by them and may not be seeing a performance issue.
    The other sessions are seeing relatively high sync times because their log file syncs have to wait for one of the large writes that you have triggered to complete, and then the log writer includes their (little) writes with your next (large) write.
    There may be a performance impact, though, from the pure volume of I/O. Apart from the I/O to write the index, you have LGWR writing (N copies of) the redo for the index, and ARCH is reading and writing the completed log files caused by the index build. So the 9GB of index could easily be responsible for vastly more I/O than the initial 9GB.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
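    For reference, a minimal sketch of the nologging approach discussed above (the index, table and column names are hypothetical):

    -- nologging is silently ignored if the database forces logging
    select force_logging from v$database;
    -- build the index without generating full redo
    create index big_tab_ix on big_tab (col1) nologging;
    -- optionally switch back so later index maintenance is logged
    alter index big_tab_ix logging;
    -- then take a fresh backup of the affected tablespace, as suggested above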

  • Log file parallel write

    Hi,
    on 11g R2. I have the following :
    SQL> select total_waits, time_waited from v$system_event where event='log file parallel write';
    TOTAL_WAITS TIME_WAITED
    ----------- -----------
          74144       28100
    Is it too much or not? To what should these values be compared?
    Thank you.
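    For context, TIME_WAITED in v$system_event is reported in centiseconds, so the figures above work out to roughly 281 seconds over 74,144 waits, i.e. about 3.8 ms per write. A minimal sketch of that conversion:

    select event,
           total_waits,
           time_waited,                               -- centiseconds
           round(time_waited * 10 / total_waits, 2) avg_ms
    from   v$system_event
    where  event = 'log file parallel write';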

    Hi,
    thanks to all.
    Here is checkpoint frequency from my alertlog :
    Wed May  9 23:01:14 2012
    Thread 1 advanced to log sequence 14 (LGWR switch)
      Current log# 2 seq# 14 mem# 0: /index01/bases/MYDB/redo02.log
    Thu May 10 02:14:27 2012
    Thread 1 cannot allocate new log, sequence 15
    Private strand flush not complete
      Current log# 2 seq# 14 mem# 0: /index01/bases/MYDB/redo02.log
    Thu May 10 02:14:29 2012
    Thread 1 advanced to log sequence 15 (LGWR switch)
      Current log# 3 seq# 15 mem# 0: /index01/bases/MYDB/redo03.log
    Thu May 10 02:14:29 2012
    ALTER SYSTEM ARCHIVE LOG
    Thu May 10 02:14:29 2012
    Thread 1 advanced to log sequence 16 (LGWR switch)
      Current log# 1 seq# 16 mem# 0: /index01/bases/MYDB/redo01.log
    No, I do not know the log advisor. Where is it?
    Karan,
    thanks for your interesting remarks.
    I have usually created DBs without paying enough attention to redo log file size, so I decided to verify this on the most recent of my DBs: I found that query and ran it. For me it is not enough, but I wanted to understand.
    Regards.
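
    Regarding the "log advisor": the redo logfile size advisor is exposed through v$instance_recovery (it is populated when fast_start_mttr_target is set), and the switch frequency can be read from v$log_history. A minimal sketch:

    -- suggested redo log file size, in MB
    select optimal_logfile_size from v$instance_recovery;
    -- log switches per hour
    select to_char(first_time, 'YYYY-MM-DD HH24') hr, count(*) switches
    from   v$log_history
    group by to_char(first_time, 'YYYY-MM-DD HH24')
    order by 1;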

  • Control file parallel write

    Hi,
    From my statspack report one of the top wait events is control file parallel write.
    Event                          Time      Percentage   Avg. wait
    control file parallel write    11,000    3.61%        11.68
    How can I tune the control file parallel write event?
    Right now for this instance I have the control file multiplexed onto
    3 different drives: L, M, N.
    Thanks
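
    As a first step, it helps to confirm where the multiplexed copies actually live and how much time the event is costing, since the wait lasts until the slowest copy has been written. A minimal sketch:

    -- list the control file copies
    select name from v$controlfile;
    -- lifetime totals for the wait event
    select total_waits, time_waited
    from   v$system_event
    where  event = 'control file parallel write';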

    If you are doing excessive log switches, you could be generating too many checkpoints. See how many log switches you have done in v$loghist. It is also possible to reduce the number of writes to the log files by adding the /*+ APPEND */ hint and the "nologging" attribute to insert statements, to reduce the amount of redo generated.
    I have also combined update and delete statements to generate fewer writes to the log files.
    You can recreate the log files larger with:
    alter database drop logfile group 3;
    ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '/oracle/oradata/CURACAO9/redo03.log' size 500m reuse;
    ALTER DATABASE ADD LOGFILE member '/oracle/oradata/CURACAO9/redo03b.log' reuse to GROUP 3;
    alter system switch logfile;
    alter system checkpoint;
    alter database drop logfile group 2; -- and do group 2 etc.
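
    One caveat on the drop/recreate sequence above: a group can only be dropped once it is no longer needed for crash recovery, so check its status first. A minimal sketch:

    -- only groups with status INACTIVE can be dropped safely
    select group#, status, bytes/1024/1024 mb from v$log;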

  • Performance issues - Log file parallel write

    Hi there,
    Since a few months I have had big performance issues with my Oracle 11.2.0.1.0 database.
    If I look in Enterprise Manager (in blocking sessions) I see a lot of "log file parallel write" and a lot of "log file sync" waits.
    We have configured an active Data Guard environment and are using ASM.
    We are not stressing the database with heavy queries or commits or anything, but sometimes during the day this happens at no specific time...
    We've investigated everything (performance to the SAN, heavy queries, Oracle problems, etc.) and we really don't know what to do anymore, so I thought: let's try a post on the forum...
    Perhaps someone has seen similar things?
    Thanks,
    BR
    Mark

    mwevromans wrote:
    See below a tail of the alert log.
    Tue Apr 24 15:12:17 2012
    Thread 1 cannot allocate new log, sequence 194085
    Checkpoint not complete
    Current log# 1 seq# 194084 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194084 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194085
    LGWR: Standby redo logfile selected for thread 1 sequence 194085 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194085 (LGWR switch)
    Current log# 2 seq# 194085 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194085 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    Tue Apr 24 15:12:21 2012
    Archived Log entry 388061 added for thread 1 sequence 194084 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:14:09 2012
    Thread 1 cannot allocate new log, sequence 194086
    Checkpoint not complete
    Current log# 2 seq# 194085 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194085 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194086
    LGWR: Standby redo logfile selected for thread 1 sequence 194086 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194086 (LGWR switch)
    Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    Tue Apr 24 15:14:14 2012
    Archived Log entry 388063 added for thread 1 sequence 194085 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:16:46 2012
    Thread 1 cannot allocate new log, sequence 194087
    Checkpoint not complete
    Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    Thread 1 cannot allocate new log, sequence 194087
    Private strand flush not complete
    Current log# 3 seq# 194086 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194086 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194087
    LGWR: Standby redo logfile selected for thread 1 sequence 194087 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194087 (LGWR switch)
    Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    Tue Apr 24 15:16:54 2012
    Archived Log entry 388065 added for thread 1 sequence 194086 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:18:59 2012
    Thread 1 cannot allocate new log, sequence 194088
    Checkpoint not complete
    Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    Thread 1 cannot allocate new log, sequence 194088
    Private strand flush not complete
    Current log# 1 seq# 194087 mem# 0: +DATA/kewillprd/onlinelog/group_1.262.712516155
    Current log# 1 seq# 194087 mem# 1: +FRA/kewillprd/onlinelog/group_1.438.756466165
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194088
    LGWR: Standby redo logfile selected for thread 1 sequence 194088 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194088 (LGWR switch)
    Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    Tue Apr 24 15:19:06 2012
    Archived Log entry 388067 added for thread 1 sequence 194087 ID 0x90d7aa62 dest 1:
    Tue Apr 24 15:22:00 2012
    Thread 1 cannot allocate new log, sequence 194089
    Checkpoint not complete
    Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    Thread 1 cannot allocate new log, sequence 194089
    Private strand flush not complete
    Current log# 2 seq# 194088 mem# 0: +DATA/kewillprd/onlinelog/group_2.264.712516155
    Current log# 2 seq# 194088 mem# 1: +FRA/kewillprd/onlinelog/group_2.418.756466215
    LGWR: Standby redo logfile selected to archive thread 1 sequence 194089
    LGWR: Standby redo logfile selected for thread 1 sequence 194089 for destination LOG_ARCHIVE_DEST_2
    Thread 1 advanced to log sequence 194089 (LGWR switch)
    Current log# 3 seq# 194089 mem# 0: +DATA/kewillprd/onlinelog/group_3.266.712516155
    Current log# 3 seq# 194089 mem# 1: +FRA/kewillprd/onlinelog/group_3.435.756466241
    Tue Apr 24 15:19:06 2012
    Archived Log entry 388069 added for thread 1 sequence 194088 ID 0x90d7aa62 dest 1:
    Hi
    1st switch time ==> Tue Apr 24 15:18:59 2012
    2nd switch time ==> Tue Apr 24 15:19:06 2012
    3rd switch time ==> Tue Apr 24 15:19:06 2012
    Redo log file switching has a significant impact on the performance of the database. Frequent log switches may lead to slowness, and Oracle documentation suggests sizing the redo log files so that switches happen more like every 15-30 minutes (roughly, depending on the architecture and recovery requirements).
    As I check the alert log file, I find that the logs are switching very frequently, which is one of the reasons you are getting the "checkpoint not complete" message. I have faced this issue many times, and I generally increase the size of the log files and set the archive_lag_time parameter, as I suggested above (see the sketch below). If you want to dig further into the root cause and more details, the guys above can help you more, because I don't have much experience in database tuning. If you are looking for a workaround, you should go through it.
    Good Luck
    --neeraj
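
    As a sketch of the workaround neeraj describes (the 1800-second value is only an example; pick a lag that matches the desired 15-30 minute switch interval):

    -- force a log switch at most every 30 minutes (value is in seconds)
    alter system set archive_lag_time = 1800 scope=both;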

  • 'log file sync' versus 'log file parallel write'

    I have been asked to run an artificial test that performs a large number of small insert-only transactions with a high degree (200) of parallelism. The COMMITs were not inside a PL/SQL loop, so a 'log file sync' (LFS) event occurred on each COMMIT. I have measured the average 'log file parallel write' (LFPW) time by running the following queries at the beginning and end of a 10-second period:
    SELECT time_waited,
    total_waits
    INTO wait_start_lgwr,
    wait_start_lgwr_c
    FROM v$system_event e
    WHERE event LIKE 'log%parallel%';
    SELECT time_waited,
    total_waits
    INTO wait_end_lgwr,
    wait_end_lgwr_c
    FROM v$system_event e
    WHERE event LIKE 'log%parallel%';
    I took the difference in TIME_WAITED and divided it by the difference in TOTAL_WAITS.
    I did the same thing for LFS.
    What I expected was that the LFS time would be just over 50% more than the LFPW time: when a thread commits, it has to wait for the previous LFPW to complete (on average, half way through) and then for its own.
    Now, I know there is a lot of CPU-related work that goes on in LGWR, but I 'reniced' it to a higher priority and could observe that it was then spending 90% of its time in LFPW, 10% on CPU and no time idle. Total system CPU time averaged only 25% on this 64-'processor' machine.
    What I saw was that the LFS time was substantially more than the LFPW time. For example, on one test LFS was 18.07ms and LFPW was 6.56ms.
    When I divided the number of bytes written each time by the average 'commit size', it seemed that LGWR was writing out data for only about one third of the average number of transactions in the LFS state (rather than the two thirds that I would have expected). When the COMMIT was changed to COMMIT WORK NOWAIT, the size of each LFPW increased substantially.
    These observations are at odds with my understanding of how LGWR works. My understanding is that when LGWR completes one LFPW it begins a new one with the entire contents of the log buffer at that time.
    Can anybody tell me what I am missing?
    P.S. Same results in database versions 10.2 Sun M5000 and 11.2 HP G7s.
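
    A self-contained version of the measurement described above, written as an anonymous PL/SQL block (it assumes execute privilege on dbms_lock and uses the microsecond column rather than TIME_WAITED):

    set serveroutput on
    declare
      t0 number; c0 number;
      t1 number; c1 number;
    begin
      select time_waited_micro, total_waits into t0, c0
      from   v$system_event where event = 'log file parallel write';
      dbms_lock.sleep(10);                    -- the 10 second sample period
      select time_waited_micro, total_waits into t1, c1
      from   v$system_event where event = 'log file parallel write';
      if c1 > c0 then
        dbms_output.put_line('avg LFPW (ms): ' || round((t1 - t0) / (c1 - c0) / 1000, 2));
      end if;
    end;
    /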


  • I am using Adobe Acrobat 9 Standard version in Windows 8.1 and when I try to create a .pdf file, I receive the following error message "Acrobat could not open "file name.log" because it is either not a supported file type or because the file has been damaged"

    I am using Adobe Acrobat 9 Standard version in Windows 8.1, and when I try to create a .pdf file, I receive the following error message: "Acrobat could not open "file name.log" because it is either not a supported file type or because the file has been damaged. To create a PDF document, go to the source application, then print the document to .pdf." I am going to the source application and printing the document to .pdf, yet it's saving the file as a .log file. After reinstalling the software, I initially didn't encounter this problem, but on my second and third attempts to convert files to .pdf format, this error message reappeared. How do I resolve this problem?

    I have a similar problem which I did not have before... and it exists only in some PowerPoint files which I want to print as a PDF file... and I get the same message as above.
    The log shows the details below. What is the problem and how can I resolve it? Thanks.
    %%[ ProductName: Distiller ]%%
    %%[Page: 1]%%
    %%[Page: 2]%%
    Cambria not found, using Courier.
    %%[ Error: invalidfont; OffendingCommand: show ]%%
    Stack:
      %%[ Flushing: rest of job (to end-of-file) will be ignored ]%%

  • The LOG file \work\dev_jcontrol is not present

    The LOG file \work\dev_jcontrol is not present, even though I have restarted the server:
    stopsap
    startsap <j2ee_instance>
    Any idea?

    Hi,
    the cluster ID is just a combination of the parameters below.
    In our case, my source system (ABC) was refreshed from another system (XYZ) recently,
    so while installing the target system (DEF) I changed the source system details from ABC to XYZ in the file below and retried the
    SAPinst screen. The system copy completed successfully.
    Open the file <installation directory>/jmt/cluster_id_switch.properties and edit the line
    src.ci.sid=
    src.ci.instance.number=
    src.ci.instance.name=
    src.ci.host=
    If in your case the source system was not refreshed recently, you may try the functional host name or OS host name etc. for the above parameters.
    If this does not work, check the details of SAP Note 966752 - "Java system copy problems with the Java
    Migration Toolkit", which says almost the same thing, but I could not follow it as the statements related to
    the box number are a bit confusing and contradictory.
    Cheers !!!
    Ashish

  • Error: License File Exception (check the log file for details): ENT_PE_NODE not found in datastore

    Hi;
    I have installed 2 MCS 7816 servers (publisher and subscriber) with Call Manager version 6.1. I have loaded the license files for the publisher and the phones, but when I load the license file for the subscriber on the publisher server, I get the error message:
    Error: License File Exception (check the log file for details): ENT_PE_NODE not found in datastore
    What is the solution?
    Thanks for your help

    Hello,
    I have seen this issue before; all you have to do is contact the Cisco Licensing team in order to generate new licenses for your server.
    Thanks,

  • How to avoid db file parallel read for nestloop?

    After upgrading to 11gR2, one job took more than twice as long as before on 10g and 11gR1 with compatibility set to 10.2.0.
    Same hardware (see the AWR summary below). My analysis points to the nested loop doing an index range scan on the inner table's index segment,
    and then using 'db file parallel read' to read data from the table segment; for reasons I don't know, the parallel read is very slow.
    The average wait is more than 300 ms. How can I influence the optimizer to choose 'db file sequential read' to fetch data blocks from the inner table by tweaking
    parameters? Thanks. YD
    Begin Snap: 13126 04-Mar-10 04:00:44 60 3.9
    End Snap: 13127 04-Mar-10 05:00:01 60 2.8
    Elapsed: 59.27 (mins)
    DB Time: 916.63 (mins)
    Report Summary
    Cache Sizes
    Begin End
    Buffer Cache: 4,112M 4,112M Std Block Size: 8K
    Shared Pool Size: 336M 336M Log Buffer: 37,808K
    Load Profile
    Per Second Per Transaction Per Exec Per Call
    DB Time(s): 15.5 13.1 0.01 0.01
    DB CPU(s): 3.8 3.2 0.00 0.00
    Redo size: 153,976.4 130,664.3
    Logical reads: 17,019.5 14,442.7
    Block changes: 848.6 720.1
    Physical reads: 4,149.0 3,520.9
    Physical writes: 16.0 13.6
    User calls: 1,544.7 1,310.9
    Parses: 386.2 327.7
    Hard parses: 0.1 0.1
    W/A MB processed: 1.8 1.5
    Logons: 0.0 0.0
    Executes: 1,110.9 942.7
    Rollbacks: 0.2 0.2
    Transactions: 1.2
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 99.99 Redo NoWait %: 100.00
    Buffer Hit %: 75.62 In-memory Sort %: 100.00
    Library Hit %: 99.99 Soft Parse %: 99.96
    Execute to Parse %: 65.24 Latch Hit %: 99.95
    Parse CPU to Parse Elapsd %: 91.15 % Non-Parse CPU: 99.10
    Shared Pool Statistics
    Begin End
    Memory Usage %: 75.23 74.94
    % SQL with executions>1: 67.02 67.85
    % Memory for SQL w/exec>1: 71.13 72.64
    Top 5 Timed Foreground Events
    Event Waits Time(s) Avg wait (ms) % DB time Wait Class
    db file parallel read 106,008 34,368 324 62.49 User I/O
    DB CPU 13,558 24.65
    db file sequential read 1,474,891 9,468 6 17.21 User I/O
    log file sync 3,751 22 6 0.04 Commit
    SQL*Net message to client 4,170,572 18 0 0.03 Network
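
    Since the regression appeared with the 11gR2 upgrade, one low-risk diagnostic experiment (not a fix) is to re-run the job in a session with the optimizer reverted to the previous release's behaviour and compare the plan and the wait profile:

    alter session set optimizer_features_enable = '10.2.0.4';
    -- re-run the job, then check whether the inner-table access
    -- reverts to 'db file sequential read' in the session waits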

    It's not possible to say anything just by looking at the events. You must understand that statspack and AWR actually aggregate the data and then show the results. There may well be other areas that need to be looked at rather than just focusing on one event.
    You have not mentioned any other information about the wait events, like their timings and so on. Please provide that too.
    And if I understood your question correctly, you asked:
    How can I avoid these wait events?
    What may be the cause?
    I am afraid it is not possible to discuss each of these wait events here in complete detail, nor everything to do when you see them. Please read the Performance Tuning guide, which describes these wait events and the corresponding actions.
    Please read and follow this link,
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/instance_tune.htm#i18202
    Aman....

  • File Adapter Writing and Reading strategy

    Hi All,
    When I am trying to write data to a file while it is opened by another person, I get an error in the receiver communication channel saying the source is in use by another application. Which is well and good.
    But in the case of the sender channel, even though the file is opened by another person while data is being written to it, the sender file adapter still picks up the file.
    I don't understand why it behaves like this. Has anyone had the same issue?
    Regards

    Hi Neetesh,
    >> hmm ... that's weird ... you mean to say that the polling frequency is now (20 + 100) = 120 secs? Unless the file is being modified or the file is "open", this parameter shouldn't come into play ...
    Examples, with a poll interval of 20 sec and Msecs to Wait set to 100000 (1 min 40 sec):
    1) If the file is there and is not opened or being modified:
       the adapter waits 1 min 40 sec before picking up the file.
    2) If the file is there and is being modified:
       the adapter waits 1 min 40 sec when the file has been changed.
    3) If the file is there and just opened, with nothing being written to it:
       the adapter waits 1 min 40 sec before picking up the file.
    >>Can you share the adapter log?
    Processing Details for Cluster Node Server 0 10_31463
         5/20/10 3:49:56 PM           Polling interval started. Length: 20.0 seconds
         5/20/10 3:49:56 PM           Processing finished successfully
         5/20/10 3:49:56 PM           Processing started
         5/20/10 3:49:56 PM            4f1e0c5b-9553-4397-30ef-d3bb55a13a48     Processing finished successfully
         5/20/10 3:49:55 PM            4f1e0c5b-9553-4397-30ef-d3bb55a13a48     Message with ID 4f1e0c5b-9553-4397-30ef-
                                                   d3bb55a13a48 processed
         5/20/10 3:48:15 PM           Processing started
         5/20/10 3:47:55 PM           Polling interval started. Length: 20.0 seconds
         5/20/10 3:47:55 PM           Processing finished successfully
         5/20/10 3:47:55 PM           Processing started
         5/20/10 3:47:35 PM           Polling interval started. Length: 20.0 seconds
    Regards
    Edited by: Vamsi Krishna on May 20, 2010 10:50 PM

  • File Adapter Write missing files

    Hello
    I have some trouble with a BPEL process containing a file adapter that should write files to disk. Each BPEL process is started when a read file adapter reads a new matching file (*.xml).
    I have tested the process with a few files (5-20); a new process is started for each file, and at the end of the process a new file is written to disk by the file adapter (write). But when I am testing with a large number of files (40-50), something strange happens.
    A new process is started for each file, but it looks like the file adapter that should write the files to disk is not capable of handling the pressure. 10-20% of the processes don't complete as expected. In the BPEL Console everything looks OK; it seems like the file adapter has written a file to disk and no error is thrown. But when I look in the directory, the file is not there...
    Has this something to do with performance? Are there any parameters or settings I can tune to make the file adapters work better? Sometimes it also looks like the file adapter (read) doesn't manage to start a new process for each incoming file, even if they are deleted and archived...

    I ran into a similar issue. As a workaround, we ended up just using Java to write the files out. When the file adapter attempts to write out a file, it first writes that file to a temp file and then copies that to the appropriate directory.
    I believe that when two or more threads were attempting to write at the same time, the write was failing for one (but appearing to work in the console logs). I think a race condition may be created when two threads attempt to write using the file adapter and compete for access to the temp file. I contacted my Oracle rep about it, but they are always pretty worthless, so I haven't ever heard anything back concerning the issue.

  • File read write

    Hi guys,
    it's an easy question, so I will explain it as simply and as elaborately as possible:
    I am reading from a file.
    public void appendFile() {
        try {
            // reading from and appending to the SAME file
            BufferedReader in = new BufferedReader(new FileReader("temp\\text.txt"));
            // the second argument 'true' opens the file in append mode
            BufferedWriter out = new BufferedWriter(new FileWriter("temp\\text.txt", true));
            String str;
            String res;
            while ((str = in.readLine()) != null) {
                res = process(str);
                if (res != null) {
                    try {
                        out.write(res);
                    } catch (IOException e) {
                        log.info(e.toString());
                    }
                }
            }
            out.close(); // close once, after the loop, not inside it
            in.close();
        } catch (IOException e) {
            log.info(e.toString());
        }
    }
    What I am trying to do is: read through a file, and where I find a certain string, replace it with another string.
    Now, looking at this code, tell me if I will be writing at the same position or not... I don't really remember if the cursor for readLine and write is the same.
    For example: I read through ten lines, find the string I am looking for, and at that moment write a new string:
    1) will it write on the same line and write over the text that is already written,
    OR
    2) will it write on the same line and move the existing text (that I am intending to replace) to the next line,
    OR
    3) will it write at the beginning of the file?

    Found the error, I think: in FileWriter("", true),
    the 'true' defines that writing happens at the end of the file, and not at the beginning.
    Now I will see if 'false' writes to the beginning, or to where the readLine was done...

  • At what point does Oracle write "Checkpoint not complete" in the alert log?

    DB version: 10.2.0.4/RHEL 5.8
    During a batch run, we encountered lots of 'Checkpoint not complete' messages in the alert log.
    Later we discovered that the ORLs were sized at only 50 MB and this DB had only 3 redo log groups. Since this is the likely cause, we are going to create at least 10 redo log groups and increase the redo log size to 500 MB.
    But I want to know: what exactly causes Oracle to write "Checkpoint not complete" in the alert log?
    For the purpose of this discussion, I am assuming we have only 1 ORL per redo log group. Is my assumption below correct?
    ORL1
    |----------------> ORL1 file got full, so, LGWR starts writing to ORL2 file. Checkpoint occurs at log switch
    |                    DBWR writes modified blocks associated with the redo entries in ORL1 to datafiles
    |
    V
    ORL2
    |----------------> ORL2 file got full, LGWR wants to start writing to ORL3 file. Checkpoint is initiated at log switch.
    |                    But the checkpoint can't be finished due to unknown reasons
    |
    V
    ORL3
    |---------------->

    Your assumption is only partly right.
    I would illustrate it like this:
    ORL1
    |
    |----------------> ORL1 file got full, so, LGWR starts writing to ORL2 file. Checkpoint occurs at log switch
    |                    DBWR starts writing modified blocks associated with the redo entries in ORL1 to datafiles
    |
    V
    ORL2
    |----------------> ORL2 file got full, LGWR starts writing to ORL3 file.
    |                    Checkpoint for ORL2 is initiated at log switch.
    |
    V
    ORL3
    |
    |----------------> ORL3 file (the last member) also got full very quickly. LGWR wants to start the 'new cycle' by
                        writing (reusing) ORL1. But the checkpoint initiated by the log switch out of ORL1 in the previous cycle is
                        not complete yet!
    Basically you get this message when LGWR attempts to reuse an online redo log file (ORL1 in the above example) and finds that it cannot.
    This is because the remaining ORL files (ORL2 and ORL3) were fully written before DBWR finished checkpointing the modified blocks associated with ORL1.
    Until the checkpoint of ORL1 is complete, the DB effectively hangs and user sessions have to wait until LGWR can safely reuse ORL1.
    Yes, a larger redo log size and 10 groups can help. But make sure the I/O subsystem where the ORLs are stored has no latency issues.
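
    A minimal sketch of the resize plan mentioned in the question (paths and group numbers are hypothetical; repeat for the remaining groups):

    -- add larger groups alongside the existing 50 MB ones
    alter database add logfile group 4 ('/u01/oradata/ORCL/redo04.log') size 500m;
    alter database add logfile group 5 ('/u01/oradata/ORCL/redo05.log') size 500m;
    -- switch until an old group shows INACTIVE in v$log, then drop it
    alter system switch logfile;
    alter database drop logfile group 1;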

  • Error 5 occurred at Open/Create/Replace File in Write spreadsheet String.vi

    Hi everyone,
    can anyone help me with this problem?
    "Error 5 occurred at Open/Create/Replace File in Write Spreadsheet String.vi"
    I've been using this part of the program for over a year and suddenly this error occurs. But not always, mainly at the very beginning of my tests, when the file should not be open.
    Info: I'm using a real-time PXI system. Maybe the amount of data causes the problem? (about 2 MB)
    Regards,
    Meike
    Attachments:
    writeResults.jpg ‏345 KB
    error5.jpg ‏52 KB

    Hi Meike,
    is the file opened by a different program? Do you try to access it by FTP in parallel to your VI?
    You could use basic file functions instead of WriteSpreadsheetFile. That way you could open the file before starting the loop, keep it open all the time and close it once you're finished - with the added benefit of easier error handling…
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
