Recursive call with commit not written to redo log

In my DBA training I was taught that a commit causes the log writer to write the redo log buffer to the redo log file, but I find this is not true if the commit is inside recursive code.
I believe this is intentional, as a way of reducing I/O, but it does raise data integrity questions.
Apparently, if you have some PL/SQL (either SQL*Plus code or a stored procedure) with a loop containing a commit,
the commit does not actually ensure that the transaction is written to the redo log.
Instead, Oracle only ensures everything is written to the redo log when control is returned to the application/SQL*Plus.
You can see this by checking the redo writes in v$sysstat:
it will be less than the expected number of commits.
Thus the old rule of expecting all committed transactions to be there after a database recovery is not necessarily true.
Does anyone know how to force the commit to ensure the redo is written,
either inside PL/SQL or perhaps via a setting in the calling environment?
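For what it's worth, later Oracle releases added explicit commit write options; this is a sketch only, since the COMMIT WRITE clause was introduced in 10g and is not available in the 9.2 release used for the tests below (the table `t` is hypothetical):

```sql
-- Sketch (Oracle 10g+ only): WRITE IMMEDIATE WAIT forces LGWR to flush
-- the redo for this commit and waits for confirmation before continuing,
-- even inside a PL/SQL loop.
BEGIN
  FOR i IN 1 .. 100 LOOP
    INSERT INTO t VALUES (i);          -- t is a hypothetical test table
    COMMIT WRITE IMMEDIATE WAIT;       -- durable before the next iteration
  END LOOP;
END;
/
```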
Thanks

Thanks for your input
The trouble is that I believe if you stopped in a debugger, the log writer would catch up.
And if you killed your instance in the middle of this test, you wouldn't be sure how many commits the database thought it had done,
i.e. the database would recover to the last known commit in the redo logs.
Maybe I should turn on tracing?
Since my question I have found a site that seems to back up the results I am getting:
http://www.ixora.com.au/notes/redo_write_triggers.htm
see the note under point 3
Have a look at the stats below and you will see
redo writes 19026
user commits 100057
How I actually tested was to run
the utlbstat script,
then run some PL/SQL that
- mimicked a business transaction (4 select lookup validations, 2 inserts, and 1 insert-or-update, followed by a commit)
- looped 100,000 times,
then ran utlestat.sql.
i.e. test script
@C:\oracle\ora92\rdbms\admin\utlbstat.sql
connect test/test
@c:\mig\Load_test.sql
@C:\oracle\ora92\rdbms\admin\utlestat.sql
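Instead of diffing the full utlbstat/utlestat reports, the two statistics at issue can also be compared directly; a minimal sketch, assuming SELECT access on v$sysstat:

```sql
-- Compare commit-triggered log writes against the commit count.
-- 'redo synch writes' counts commits that actually waited for LGWR.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo writes', 'redo synch writes', 'user commits');
```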
Statistic Total Per Transact Per Logon Per Second
CPU used by this session 37433 .37 935.83 79.31
CPU used when call started 37434 .37 935.85 79.31
CR blocks created 62 0 1.55 .13
DBWR checkpoint buffers wri 37992 .38 949.8 80.49
DBWR checkpoints 6 0 .15 .01
DBWR transaction table writ 470 0 11.75 1
DBWR undo block writes 22627 .23 565.68 47.94
SQL*Net roundtrips to/from 4875 .05 121.88 10.33
background checkpoints comp 5 0 .13 .01
background checkpoints star 6 0 .15 .01
background timeouts 547 .01 13.68 1.16
branch node splits 4 0 .1 .01
buffer is not pinned count 4217 .04 105.43 8.93
buffer is pinned count 649 .01 16.23 1.38
bytes received via SQL*Net 1027466 10.27 25686.65 2176.83
bytes sent via SQL*Net to c 5237709 52.35 130942.73 11096.84
calls to get snapshot scn: 1514482 15.14 37862.05 3208.65
calls to kcmgas 303700 3.04 7592.5 643.43
calls to kcmgcs 215 0 5.38 .46
change write time 4419 .04 110.48 9.36
cleanout - number of ktugct 1875 .02 46.88 3.97
cluster key scan block gets 101 0 2.53 .21
cluster key scans 49 0 1.23 .1
commit cleanout failures: b 27 0 .68 .06
commit cleanouts 1305175 13.04 32629.38 2765.2
commit cleanouts successful 1305148 13.04 32628.7 2765.14
commit txn count during cle 3718 .04 92.95 7.88
consistent changes 752 .01 18.8 1.59
consistent gets 1514852 15.14 37871.3 3209.43
consistent gets - examinati 1005941 10.05 25148.53 2131.23
data blocks consistent read 752 .01 18.8 1.59
db block changes 3465329 34.63 86633.23 7341.8
db block gets 3589136 35.87 89728.4 7604.1
deferred (CURRENT) block cl 1068723 10.68 26718.08 2264.24
enqueue releases 805858 8.05 20146.45 1707.33
enqueue requests 805852 8.05 20146.3 1707.31
execute count 1004701 10.04 25117.53 2128.6
free buffer requested 36371 .36 909.28 77.06
hot buffers moved to head o 3801 .04 95.03 8.05
immediate (CURRENT) block c 3894 .04 97.35 8.25
index fast full scans (full 448 0 11.2 .95
index fetch by key 201128 2.01 5028.2 426.12
index scans kdiixs1 501268 5.01 12531.7 1062.01
leaf node splits 1750 .02 43.75 3.71
logons cumulative 2 0 .05 0
messages received 19465 .19 486.63 41.24
messages sent 19465 .19 486.63 41.24
no work - consistent read g 3420 .03 85.5 7.25
opened cursors cumulative 201103 2.01 5027.58 426.07
opened cursors current -3 0 -.08 -.01
parse count (hard) 4 0 .1 .01
parse count (total) 201103 2.01 5027.58 426.07
parse time cpu 2069 .02 51.73 4.38
parse time elapsed 2260 .02 56.5 4.79
physical reads 6600 .07 165 13.98
physical reads direct 75 0 1.88 .16
physical writes 38067 .38 951.68 80.65
physical writes direct 75 0 1.88 .16
physical writes non checkpo 34966 .35 874.15 74.08
prefetched blocks 2 0 .05 0
process last non-idle time 1029203858 10286.18 25730096.45 2180516.65
recursive calls 3703781 37.02 92594.53 7846.99
recursive cpu usage 35210 .35 880.25 74.6
redo blocks written 1112273 11.12 27806.83 2356.51
redo buffer allocation retr 21 0 .53 .04
redo entries 1843462 18.42 46086.55 3905.64
redo log space requests 17 0 .43 .04
redo log space wait time 313 0 7.83 .66
redo size 546896692 5465.85 13672417.3 1158679.43
redo synch time 677 .01 16.93 1.43
redo synch writes 63 0 1.58 .13
redo wastage 4630680 46.28 115767 9810.76
redo write time 64354 .64 1608.85 136.34
redo writer latching time 42 0 1.05 .09
redo writes 19026 .19 475.65 40.31
rollback changes - undo rec 10 0 .25 .02
rollbacks only - consistent 122 0 3.05 .26
rows fetched via callback 1040 .01 26 2.2
session connect time 1029203858 10286.18 25730096.45 2180516.65
session logical reads 5103988 51.01 127599.7 10813.53
session pga memory -263960 -2.64 -6599 -559.24
session pga memory max -788248 -7.88 -19706.2 -1670.02
session uga memory -107904 -1.08 -2697.6 -228.61
session uga memory max 153920 1.54 3848 326.1
shared hash latch upgrades 501328 5.01 12533.2 1062.14
sorts (memory) 1467 .01 36.68 3.11
sorts (rows) 38796 .39 969.9 82.19
switch current to new buffe 347 0 8.68 .74
table fetch by rowid 1738 .02 43.45 3.68
table scan blocks gotten 424 0 10.6 .9
table scan rows gotten 4164 .04 104.1 8.82
table scans (short tables) 451 0 11.28 .96
transaction rollbacks 5 0 .13 .01
user calls 5912 .06 147.8 12.53
user commits 100057 1 2501.43 211.99
user rollbacks 56 0 1.4 .12
workarea executions - optim 1676 .02 41.9 3.55
write clones created in bac 5 0 .13 .01
write clones created in for 745 .01 18.63 1.58
99 rows selected.

Similar Messages

  • LOB changes not written in redo logs?

    Hi there,
    I'm evaluating a replication software product for synchronizing Oracle databases. We use a 9.2 Database Standard Edition One.
    I was told by a consultant that LOB tables cannot be synchronized, because the changes are not written into the redo logs. Is this true?
    Thanks, Sven

    Sven,
    I believe there is no such restriction in Oracle.
    consider the test case (Oracle 9.2.0.6 Ent.Edition, Windows 2003):
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    Database altered.
    SQL> create table test_lob(lob clob) ;
    Table created.
    SQL> alter system switch logfile;
    System altered.
    SQL> declare
    2 l_lob CLOB ;
    3 begin
    4 insert into test_lob values (empty_clob()) returning lob into l_lob ;
    5 dbms_lob.writeappend(l_lob, 4, 'test') ;
    6 end ;
    7 /
    PL/SQL procedure successfully completed.
    SQL> alter system switch logfile;
    System altered.
    SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME =>'j:\orant\archive_logs\Arc1_731.arc', OPTIONS => DBMS_LOGMNR.NEW);
    PL/SQL procedure successfully completed.
    SQL>
    SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    PL/SQL procedure successfully completed.
    SQL> select scn, sql_redo from V$LOGMNR_CONTENTS;
    returns
    1101810606178,set transaction read write;
    1101810606178,insert into "OBIDEM00"."TEST_LOB"("LOB") values (EMPTY_CLOB());
    1101810606178,update "OBIDEM00"."TEST_LOB" set "LOB" = 'test' where "LOB" = EMPTY_CLOB() and ROWID = 'AAAV7JAABAAAacJAAA';
    as you can see, the LOB-related statements are in the archived log, which means they were written to the redo log.
    The question is how the LOB gets created in your case. If it is loaded through a direct-path load, or you are using the NOLOGGING option, then it is possible that the LOB data would not appear in the redo log. In that case it is not an Oracle restriction, but a peculiarity of your application.
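    A quick way to check that last point is the LOGGING attribute of the LOB segment itself; a sketch against the poster's test table:

    ```sql
    -- If LOGGING is 'NO', changes to the LOB data may bypass the redo
    -- stream entirely and would be invisible to LogMiner-based replication.
    SELECT table_name, column_name, logging
    FROM   user_lobs
    WHERE  table_name = 'TEST_LOB';
    ```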
    Mike

  • Upon commit, lgwr writes to redo logs but dbwr does not write to datafiles

    Guys,
    Upon issuing a commit statement, in which scenarios does LGWR write to the redo logs but DBWR not write to the datafiles at all?
    Thanx.

    The default behaviour is: on commit, LGWR writes to the redo logs immediately, but the changes may not be written immediately to the datafiles by DBWR; sooner or later they will be (based on certain conditions). The only situation I can think of in which DBWR may not be able to write to the datafiles is when the database crashes after the commit and before DBWR could write to the datafiles.
    Not sure what exactly you are looking for, but hope this helps.
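    The gap between commit durability (redo) and datafile freshness can be observed directly; a sketch, assuming 10g+ for V$DATABASE.CURRENT_SCN:

    ```sql
    -- Committed changes with SCNs beyond a file's CHECKPOINT_CHANGE# exist
    -- only in the redo stream until DBWR/checkpointing catches up.
    SELECT d.current_scn, f.file#, f.checkpoint_change#
    FROM   v$database d, v$datafile f;
    ```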
    Thanks
    Chandra Pabba

  • GoldenGate Extract Process will not read from redo log with manual help

    Here is my issue.
    I have GoldenGate replication successfully set up one-way from 1 source to many targets. There is 1 source extract on the DB and many pumps that push the trail file data to the targets. Replication does work, but only after manually helping the Source Extract process start.
    If I execute the command:
    GGSCI> alter extract <source extract name> begin now
    GGSCI> view report <source extract name>
    The extract starts and reads the source trail file but will not process data; I continually see in the ggserr.log file "OGG-01515: Positioning to begin time MMM DD, YYYY, HH:MM:SS". The date and time are irrelevant for this problem.
    When I see this, I SQL*Plus into the database and look in the v$log table for the current log and sequence #.
    I return to GGSCI and issue the following command:
    GGSCI> alter extract <source extract name> thread 1 extseqno <sequence # from v$log query>
    GGSCI> start <source extract name>
    It then works as expected. Why is this so? I thought the alter extract <source extract name> begin now would produce the same result.
    We do use ASM but like I said when I issue the:
    GGSCI> alter extract <source extract name> thread 1 extseqno <sequence # from v$log query>
    It works like it should.
    Very weird.
    - Jason

    Yes, supplemental logging is enabled on both the source and the targets, but why would supplemental logging on the targets have any effect on why the Source Extract on the source can't read from the source redo log?
    This is not a RAC database, rather single-instance with one thread. Also, we are using DBLOGREADER functionality as it is an 11.2.0.3 database.
    My issue is simply, when I start the source extract from being down, meaning it isn't running, I issue this command:
    alter <source extract> begin now
    start <source extract>
    view report <source extract>
    OGG-01515 Positioning to begin time <today's date and time> ie Mar 4, 2013, 3:26:39 PM. (this is repeated over and over and over)
    If I perform a
    info <source extract> detail---> I see the following:
    Log Read Checkpoint Oracle Redo Logs 2013-03-04 15:26:39 Thread 1, Seqno 0, RBA 0 (why is it showing 0? Because it can't read the redo. WHY NOT?)
    Extract Source BEGIN END
    Not Available <today's date> <today's date> (repeat....)
    However, if I retrieve the redo log number and I issue:
    alter spe thread 1 extseqno (redo log sequence #)
    start spe.
    Then it works okay. I have to manually tell it what redo log to begin reading from. Why?
    - Jason
    Edited by: 924317 on Mar 4, 2013 9:03 AM

  • Trigger call procedure - Commit not allow

    Hi all,
    I need help.
    I have one table named EVENTS. I have created two triggers on that table.
    Trigger 1 is to add a running number to the events table in the "id" column.
    Trigger 2 is to pass that running number to another procedure after the new row in events is completely filled in the database.
    The error is: commit not allowed in trigger.
    The commit is only in my procedure. How can I make it run? Help me.

    TRIGGER 1
    CREATE OR REPLACE TRIGGER AUTONUM_EVENTS_ID1
    BEFORE INSERT
    ON EVENTS
    REFERENCING NEW AS NEW OLD AS OLD
    FOR EACH ROW
    begin
        select EVENTS_ID_SEQ.nextval into :new.ID from dual;
    end;
    /

    TRIGGER 2
    CREATE OR REPLACE TRIGGER PROCESSINFO
    AFTER INSERT
    ON EVENTS
    REFERENCING NEW AS NEW OLD AS OLD
    FOR EACH ROW
    DECLARE
    BEGIN
        UpdateInfo(:new.ID);
    EXCEPTION
    WHEN OTHERS THEN
        -- Consider logging the error and then re-raise
        RAISE;
    END PROCESSINFO;
    /

    Because of the following :
    Restrictions on Trigger Implementation: the implementation of a trigger is subject to the following restrictions:
          The PL/SQL block of a trigger cannot contain transaction control SQL statements (COMMIT, ROLLBACK, SAVEPOINT, and SET CONSTRAINT) if the block is executed within the same transaction. If you really want to do a commit, then search for autonomous transactions.
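    A minimal sketch of the autonomous-transaction approach, assuming UpdateInfo is the poster's own procedure: declaring it autonomous lets its COMMIT apply to its own transaction only, so the trigger restriction no longer fires.

    ```sql
    CREATE OR REPLACE PROCEDURE UpdateInfo (p_id IN NUMBER) AS
      PRAGMA AUTONOMOUS_TRANSACTION;  -- runs in its own transaction
    BEGIN
      -- ... do the work against other tables here ...
      COMMIT;  -- commits only this autonomous transaction
    END;
    /
    ```

    One caveat: the autonomous transaction cannot see the still-uncommitted row from the triggering statement, so it should work only from the :new.ID value passed in.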
    Hope that helps
    Regards
    Raj

  • TS4079 I can only make international calls with siri, not local calls

    I live in Burundi, with the country code +257. I can make calls to the US, Canada and Europe but I can't make local calls. I have tried all the troubleshooting tips and tricks, checked my settings etc. Any ideas?

    You also posted this on an iPhone forum. Not sure why you posted it here on an iPad forum.

  • Authenticated user name not written to access log file

    Our users are authenticated with certificates to an LDAP directory server. The Access log file is using the Extended-2 format.
    The name of the authenticated user is not written in the access log.
    10.4.57.44 - - [19/May/2004:14:07:21 -0400] "GET /somepage HTTP/1.1" 200 10382
    Anybody know why?

    I've got the same problem with Sun Java System Web Proxy Server 4.02 for Linux.
    When I installed it, authenticated users were visible in the access log. Then I tried to change the format, and since that moment this information has disappeared.
    Any news about this bug?
    Thanx.
    G.

  • Why not use Redo log for consistent read

    Oracle 11.1.0.7:
    This might be a stupid question.
    As I understand it, if a select was issued at 7:00 AM and the data that the select is going to read changed at 7:10 AM, Oracle will still return the data as it existed at 7:00 AM. For this, Oracle needs the data in undo segments.
    My question is: since redo also has past and current information, why can't the redo logs be used to retrieve that information? Why is undo required when redo already has all that information?

    user628400 wrote:
    Thanks. I get that piece but isn't it the same problem with UNDO? It's written as it expires and there is no guranteee until we specifically ask oracle to gurantee the UNDO retention? I guess I am trying to understand that UNDO was created for effeciency purposes so that there is less performance overhead as compared to reading and writing from redo.And this also you said,
    >
    If data was changed to 100 to 200 wouldn't both the values be there in redo logs. As I understand:
    1. Insert row with value 100 at 7:00 AM and commit. 100 will be writen to redo log
    2. update row to 200 at 8:00 AM and commit. 200 will be written to redo log
    So in essence 100 and 200 both are there in the redo logs and if select was issued at 7:00 data can be read from redo log too. Please correct me if I am understanding it incorrectly.I guess you didnt understand the explaination that I did. Its not the old data that is kept. Its the changed vector of Undo that is kept which is useful to "recover" it when its gone but not useful as such for a select statement. Whereas in an Undo block, the actual value is kept. You must remember that its still a block only which can contain data just like your normal block which may contain a table like EMP. So its not 100,200 but the change vectors of these things which is useful to recover the transaction based on their SCN numbers and would be read in that order as well. And to read the data from Undo, its quite simple for oracle to do so using an Undo block as the transaction table which holds the entry for the transaction, knows where the old data is kept in the Undo Segment. You may have seen XIDSEQ, XIDUSN, XIDSLOT in the tranaction id which are nothing but the information that where the undo data is kept. And to read it, unlke redo, undo plays a good role.
    About the expiry of Undo, you must know that only INACTIVE Undo extents are marked for expiry. The Active Extents which are having an ongoing tranaction records, are never marked for it. You can come back after a lifetime and if undo is there, your old data would be kept safe by oracle since its useful for the multiversioning. Undo Retention is to keep the old data after commit, something which you need not to do if you are on 11g and using Total Recall feature!
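    The undo-based multiversioning described above can be seen directly with a flashback query; a sketch, assuming a table `emp` and sufficient undo retention:

    ```sql
    -- Reads the rows as they existed ten minutes ago, reconstructed from
    -- undo -- the same mechanism a long-running SELECT uses internally
    -- for read consistency.
    SELECT *
    FROM   emp
    AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);
    ```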
    HTH
    Aman....

  • Brarchive not backing up offline redo log on RAC

    Hi all Oracle and SAP experts,
    After running brarchive on my newly set up RAC system, the program reports that there are 'No offline redo log files found for processing'. Hence, none of the archive log files are being backed up.
    Tons of archive log files are generated in the archive log directory. I do not understand why brarchive is unable to locate these files.
    There are 2 Windows RAC nodes. I used the command 'brarchive -u / -c force -p initTST.sap -sd' to trigger brarchive. It uses RMAN to back up to a Windows shared folder, and brbackup runs successfully.
    The output from brarchive is below:
    BR0002I BRARCHIVE 7.00 (16)
    BR0006I Start of offline redo log processing: aebbxosu.svd 2009-07-24 14.07.36
    BR0477I Oracle pfile F:\oracle\TST\102\database\initTST1.ora created from spfile F:\oracle\TST\102\database\spfile.ora
    BR0101I Parameters
    Name                           Value
    oracle_sid                     TST1
    oracle_home                    F:\oracle\TST\102
    oracle_profile                 F:\oracle\TST\102\database\initTST1.ora
    sapdata_home                   F:\oracle\TST
    sap_profile                    F:\oracle\TST\102\database\initTST.sap
    backup_dev_type                disk
    archive_copy_dir              
    10.11.0.101\backup\RAC_test
    compress                       no
    disk_copy_cmd                  rman
    cpio_disk_flags                -pdcu
    rman_compress                  no
    archive_dupl_del               only
    parallel_instances             TST2:F:\oracle\TST\102@TST2
    system_info                    tstadm RACNODE1 Windows 5.2 Build 3790 Service Pack 2 AMD64
    oracle_info                    TST 10.2.0.4.0 8192 1492 10504542 RACNODE1 UTF8 UTF8
    sap_info                       620 SAPTST TST D1583565402 R3_ORA 0020087949
    make_info                      NTAMD64 OCI_10201_SHARE Aug 22 2006
    command_line                   brarchive -u / -c force -p initTST.sap -sd
    BR0013W No offline redo log files found for processing
    BR0007I End of offline redo log processing: aebbxosu.svd 2009-07-24 14.07.54
    BR0280I BRARCHIVE time stamp: 2009-07-24 14.07.55
    BR0004I BRARCHIVE completed successfully with warnings

    I had set BR_TRACE to 15 and receive the following in my trace file:
    BR0249I BR_TRACE: level 3, function BrCurrRedoGet exit with 0
    BR0249I BR_TRACE: level 2, function BrInstCheck exit with -10
    BR0248I BR_TRACE: level 2, function BrDiskStatGet entry with '
    10.11.0.101\backup\RAC_test'
    BR0250I BR_TRACE: level 2, function BrDiskStatGet exit with '19999863332864 9780518486016 9780518486016 9770967198432'
    BR0248I BR_TRACE: level 2, function arch_last_get entry with 'F:\oracle\TST\saparch\archTST1.log'
    BR0249I BR_TRACE: level 2, function arch_last_get exit with 0
    BR0248I BR_TRACE: level 2, function BrArchNameGet entry with '0 TST1'
    BR0250I BR_TRACE: level 2, function BrArchNameGet exit with 'G:\oracle\TST\oraarch\681026106_1_0.dbf'
    BR0248I BR_TRACE: level 2, function BrNameBuild entry with '41 G:\oracle\TST\oraarch\681026106_1_0.dbf NULL'
    BR0250I BR_TRACE: level 2, function BrNameBuild exit with 'G:\oracle\TST\oraarch'
    BR0248I BR_TRACE: level 2, function BrFileStatGet entry with 'G:\oracle\TST\oraarch'
    BR0250I BR_TRACE: level 2, function BrFileStatGet exit with '39171616256 0'
    BR0248I BR_TRACE: level 2, function BrArchExist entry with 'TST1'
    BR0248I BR_TRACE: level 3, function BrArchNameGet entry with '987656789 TST1'
    BR0250I BR_TRACE: level 3, function BrArchNameGet exit with 'G:\oracle\TST\oraarch\1_1_987656789.dbf'
    BR0249I BR_TRACE: level 2, function BrArchExist exit with -3
    BR0248I BR_TRACE: level 2, function BrDiskStatGet entry with '
    10.11.0.101\backup\RAC_test'
    BR0248I BR_TRACE: level 2, function BrDbDisconnect entry with 'void'
    BR0280I BRARCHIVE time stamp: 2009-07-27 09.41.42
    BR0644I BR_TRACE: location BrDbDisconnect-1, SQL statement:
    'COMMIT RELEASE'
    BR0300I BR_TRACE: SQL code: 0, number of processed rows: 0
    BR0248I BR_TRACE: level 3, function BrZombieKill entry with 'void'
    BR0250I BR_TRACE: level 3, function BrZombieKill exit with 'void'
    BR0249I BR_TRACE: level 2, function BrDbDisconnect exit with 0
    BR0013W No offline redo log files found for processing
    My current database incarnation is 681026106, but brarchive is searching for archive logs from incarnation 987656789 (1_1_987656789.dbf). As a workaround, I created dummy files (e.g. 1_1_987656789.dbf) for each node and managed to trick brarchive into believing these are the real files. Subsequent backups work fine. Thanks Michael!

  • Using "Single Pass" entry point with sequences not named "MainSequence"

    Hi,
    The object model of TestStand allows clustering of multiple sequences in one sequence file. The editor itself always knows which sequence the user has actually selected (the Run ... entry in the Execute menu is always filled with the correct sequence name).
    Why isn't there any support for using the model's entry points with this selected sequence? These sequences use the default "MainSequence" if they are called by the corresponding menu entry of the Execute menu.
    I am looking for a solution to use the entry points (by menu calls) with sequences not named MainSequence as well.
    Any ideas?
    Regards,
    Sunny

    Sunny -
    In the process model entry point "Single Pass", if you look at the sequence context during an execution, you will notice that the sequence that was visible or "selected" at the time the execution was started is specified under "RunState.InitialSelection.SelectedSequences[0]". You could alter the MainSequence call step in the process model to call that sequence instead. Keep in mind that you will have to decide how to handle the case when no sequence is selected: do you call MainSequence, or do nothing?
    Scott Richardson (NI)
    Scott Richardson
    National Instruments

  • When is anything written to standby redo logs on standby database?

    I am on Oracle 10.2.0.4 on HP-UX. I have read the Oracle 10.2 Concepts guide on technet.oracle.com and many articles on Metalink and the internet, yet I am unable to verify when anything is written to the standby redo logs on the standby database.
    I have a simple configuration: a primary database and one standby database.
    I created primary database and set up log_archive_dest_2 to use LGWR SYNC AFFIRM
    I have created standby redo logs on primary.
    alter database add standby logfile GROUP
    I create standby control file on primary.
    I copied all the primary information to create the standby database. I have put the standby database in managed recovery.
    I did archive log switches, created a table, and inserted data into the table.
    I never saw the standby redo logs on the standby database updated, judging by the timestamps of the standby redo log files.
    I then setup database in maximum availability mode by running following on primary:
    Alter database set standby database to maximize availability
    When I do inserts into my tables, I do see the standby redo log files on the primary database being updated, but I have never seen the standby redo logs on the standby database updated. Why?
    I am still at a loss as to when the standby redo logs are actually updated on the standby database.
    When I read the Oracle 9i Data Guard documentation, it says that you do not need standby redo logs on the primary; instead you need them on the standby. The only reason you need them on the primary is for when the primary changes role to standby; it is the standby redo logs on the standby database that should be updated, not the standby redo logs on the primary.

    What are the PROTECTION_MODE and PROTECTION_LEVEL values of your database?
    As per metalink:--
    Create standby redo log files, if necessary:
    Standby redo logs are necessary for the higher protection levels such as
    Guaranteed, Instant, and Rapid. In these protection modes LGWR from the
    Primary host writes transactions directly to the standby redo logs.
    This enables no data loss solutions and reduces the amount of data loss
    in the event of failure. Standby redo logs are not necessary if you are using
    the delayed protection mode.
    If you configure standby redo on the standby then you should also configure
    standby redo logs on the primary database. Even though the standby redo logs
    are not used when the database is running in the primary role, configuring
    the standby redo logs on the primary database is recommended in preparation
    for an eventual switchover operation.
    Standby redo logs must be archived before the data can be applied to the
    standby database. The standby archival operation occurs automatically, even if
    the standby database is not in ARCHIVELOG mode. However, the archiver process
    must be started on the standby database. Note that the use of the archiver
    process (ARCn) is a requirement for selection of a standby redo log.
    METALINK ID:- Doc ID: Note:219344.1
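    A sketch of the checks implied above (both views are standard; run the second on the standby):

    ```sql
    -- Confirm whether the configuration is actually running at a
    -- protection level that ships redo via LGWR to standby redo logs.
    SELECT protection_mode, protection_level FROM v$database;

    -- On the standby: an ACTIVE group here means redo is being received
    -- into the standby redo logs.
    SELECT group#, thread#, status FROM v$standby_log;
    ```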
    Edited by: Anand... on Sep 15, 2008 2:15 AM

  • Backup vs redo log files

    Hi,
    Can anyone tell me the real difference between a backup file and a redo log file / archived redo log file, and the scenarios (examples) in which each of them can be used (for example, "......")? Both are used for database/instance recovery. I have read the concepts of these 2 terms, but I need some additional information about the difference.
    Thanks!

    Roger25 wrote:
    What I still don't understand is how the redo mechanism works. I know the redo logs record changes made to the database and are used to re-do those changes after a system failure/crash, for example. OK, let's say I have a huge update, and when the system crash occurred the transaction was neither committed nor rolled back. Then how is this redo useful? If a system crash occurs, aren't all those updates rolled back automatically, so that the database state is as it was before executing that huge update? So in this case, what is there to redo?
    No, with the system's crash the transaction never gets a chance to be committed (or even rolled back), because a commit only happens when an explicit or implicit commit statement is issued. As for the redo: with a commit, a commit marker is entered at the end of that transaction's redo stream, denoting that it is finally over, and the transaction's status is cleared from the transaction table stored in the undo segment that holds that transaction's undo information. If you have run a huge update, the redo is very much required, because the dirty buffers are not written to the data files all the time, unlike the redo log buffer, which is flushed at least every 3 seconds. This means there is a fair chance that many changes have not yet propagated to the data files even though their change vectors are already in the redo log files. Now, for the sake of clarity, assume that the transaction was actually committed. Without those changes applied to the data files, how could Oracle show the committed results the next time the database is opened? For this purpose the redo change vectors are required.
    And for the uncommitted transactions, the application of the redo change vectors keeps updating the checkpoint numbers of the data files and eventually gets them synchronized with the control file and redo log files, without which the database would never open; the uncommitted changes are then rolled back from undo afterwards.
    HTH
    Aman....
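
    One way to watch the synchronization described above is to compare the checkpoint SCN recorded in the control file with the SCN stamped in each data file header; after a clean checkpoint they agree. This is only an observation query, not part of the recovery mechanism itself:

    ```sql
    -- Checkpoint SCN as recorded in the control file:
    SELECT checkpoint_change# FROM v$database;

    -- Checkpoint SCN stamped in each data file header; a file lagging
    -- behind v$database is the one crash recovery would roll forward.
    SELECT file#, checkpoint_change# FROM v$datafile_header;
    ```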

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery area, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with duplicate-write overhead that is not necessary)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
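
    For reference, multiplexing into a dedicated diskgroup as described above might be sketched like this. The diskgroup names follow the post; the group numbers and control file paths are hypothetical examples:

    ```sql
    -- Add a second member in +ONLINE to each online redo log group:
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 2;

    -- Control file multiplexing is driven by the CONTROL_FILES parameter;
    -- the change takes effect at the next instance restart:
    ALTER SYSTEM SET control_files =
      '+DATA/prod/controlfile/current.261.1',
      '+ONLINE/prod/controlfile/current.262.1'
      SCOPE = SPFILE;
    ```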

  • Redo log switch result...

    Environment: 8.1.7.3.0 (noarchivelog mode)
    log_checkpoint_timeout = 0
    log_checkpoint_interval = 999999999
    redo log size = 200M
    With these settings, checkpoints appear to happen only on a log switch.
    Perhaps because transaction volume is low, a log switch occurs about every 30 hours.
    What I did: while the 4th log below was CURRENT, I ran ALTER SYSTEM CHECKPOINT, and a little later
    ALTER SYSTEM SWITCH LOGFILE, so the 1st log became CURRENT.
    As of 14:00 on March 16 it is still in ACTIVE status...
    1. Is something wrong??? Any help is appreciated.
    2. In noarchivelog mode, does shortening the log switch interval help with recovery?
    ===========================================
    STATUS  , FIRST_CHANGE#, FIRST_TIME
    CURRENT , 8846777646687, 2007-03-15 16:57:55
    INACTIVE, 8846777587798, 2007-03-14 10:34:40
    INACTIVE, 8846777609448, 2007-03-14 17:17:38
    ACTIVE  , 8846777643690, 2007-03-15 16:01:22

    Are you hoping for normal recovery in noarchivelog mode?
    That seems like a flawed policy to me.
    In noarchivelog mode, if the smallest FIRST_CHANGE# in v$log is greater than or equal to the CHANGE# in v$recover_file, recovery is impossible.
    If even a batch job drives the log switches through one full cycle, the previous backup can no longer be used for recovery. I would switch to archivelog mode right away.
    LOG_CHECKPOINT_TIMEOUT specifies a timeout value for checkpoints.
    LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
    As you know, a checkpoint synchronizes the SCNs of the data files, redo logs, and control file; the main work is DBWR writing dirty buffers to the data files.
    Checkpoints are of course related to instance recovery: with a reasonable checkpoint timeout, instance recovery completes faster and the DB opens sooner. With your current settings, if the DB is brought down with ABORT and then opened, instance recovery will need considerably more time.
    Moreover, if transactions drive a log switch in less than 30 hours, the timeout would have no effect anyway, since a checkpoint is triggered automatically before each log switch when a redo log fills up.
    Note, however, that checkpoints and physical/logical (media) recovery are different concepts. Checkpoints relate to instance recovery as described above, whereas for media recovery, whether recovery is possible depends on whether the archived logs and the current redo log are available.
    As for the ACTIVE status: by the documented definition it means (in archive mode) that the log may still be being archived, and that it contains information needed when redo is applied during complete recovery.
    Applying a recovery policy in noarchivelog mode is a risky idea. Admittedly, some DSS systems deliberately run in noarchivelog mode as a matter of policy and take an offline backup every weekend. But a DSS system can do more than 300 log switches a day, so no matter how good the backup is, complete recovery is impossible; you can only recover to the point of the offline backup.
    V$LOG
    This view contains log file information from the control files.
    Column    | Datatype     | Description
    GROUP#    | NUMBER       | Log group number
    THREAD#   | NUMBER       | Log thread number
    SEQUENCE# | NUMBER       | Log sequence number
    BYTES     | NUMBER       | Size of the log (in bytes)
    MEMBERS   | NUMBER       | Number of members in the log group
    ARCHIVED  | VARCHAR2(3)  | Archive status (YES|NO)
    STATUS    | VARCHAR2(16) | Log status:
    UNUSED - Online redo log has never been written to. This is the state of a redo log that was just added, or just after a RESETLOGS, when it is not the current redo log.
    CURRENT - Current redo log. This implies that the redo log is active. The redo log could be open or closed.
    ACTIVE - Log is active but is not the current log. It is needed for crash recovery. It may be in use for block recovery. It might or might not be archived.
    CLEARING - Log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.
    CLEARING_CURRENT - Current log is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch such as an I/O error writing the new log header.
    INACTIVE - Log is no longer needed for instance recovery. It may be in use for media recovery. It might or might not be archived.
    There are, in fact, various ways to recover even in noarchivelog mode; for example, methods for recovering when the current redo log is corrupted are described in the documentation. But you will find it hard to locate a documented procedure for restoring a backup and rolling forward in noarchivelog mode. As stated above, in noarchivelog mode a data file is recoverable if v$recover_file's CHANGE# > the minimum FIRST_CHANGE# in v$log;
    if CHANGE# <= the minimum FIRST_CHANGE#, recovery is impossible. That is why documents on restoring a backup and recovering are so hard to find; only notes on advanced methods mention things like using adjust_scn.
    Edited by:
    Mincheonsa (Min Yeonhong)
    I must have been dozing off while writing: INTERVAL and TIMEOUT are of course different things, and I confused them above.
    LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks.
    LOG_CHECKPOINT_TIMEOUT specifies (in seconds) the amount of time that has passed since the incremental checkpoint at the position where the last write to the redo log (sometimes called the tail of the log) occurred. This parameter also signifies that no buffer will remain dirty (in the cache) for more than integer seconds.
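
    The recoverability check described above can be sketched as two queries; this is only an illustration of the comparison, run from SQL*Plus:

    ```sql
    -- Oldest SCN still covered by the online redo logs:
    SELECT MIN(first_change#) AS oldest_online_scn FROM v$log;

    -- SCN from which each damaged file needs recovery:
    SELECT file#, change#, error FROM v$recover_file;

    -- In noarchivelog mode: if change# <= oldest_online_scn, the redo the
    -- file needs has already been overwritten and recovery is impossible.
    ```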

  • How to change redo log size in oracle 10g

    Hi Experts,
    Can anybody tell me how to change the redo log size in Oracle 10g?
    Amit

    Dear Amit,
    You can enlarge the online redo log files by adding new groups with larger files (origlog$/mirrlog$) and then carefully dropping the old groups along with their associated inactive files.
    Please refer SAP Note 309526 - Enlarging redo log files to perform the activity.
    Steps to perform:
    STEP-1. Analyze the existing situation and prepare an action plan.
    A. You have to ensure that no more than one log switch per minute occurs during peak times.
    It may also be necessary to increase the size of the online redo logs until they are large enough.
    Too many log switches lead to too many checkpoints, which in turn lead to a high write load on the I/O subsystem.
    Use ST04 -> Additional Functions -> Display GV$-Views. There you can select:
    GV$LOG_HISTORY -> for determining your existing log switch frequency
    GV$LOG -> lists the status (INACTIVE/CURRENT/ACTIVE), size, and sequence no. of the existing online redo log files
    GV$LOGFILE -> lists the existing online redo log files with their storage paths
    Document the existing online redo log file configuration before enlarging the redo log files.
    It will be helpful if something goes wrong while performing the activities.
    B. Based on the above analysis, plan your new redo log groups and their members with the new optimal size.
    e.g.
    Group No. | Redo log file locations under "/oracle/<SID>/"  | Size
              | /origlogA        | /mirrlogA                    |
    15        | log_g15m1.dbf    | log_g15m2.dbf                | 100 MB
    17        | log_g17m1.dbf    | log_g17m2.dbf                | 100 MB
              | /origlogB        | /mirrlogB                    |
    16        | log_g16m1.dbf    | log_g16m2.dbf                | 100 MB
    18        | log_g18m1.dbf    | log_g18m2.dbf                | 100 MB
    Continue to next.....
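
    The add-then-drop procedure described above might look like the following sketch. The group numbers, paths, and sizes follow the example layout in the post and must be adapted to your system (see SAP Note 309526); the drop is only safe once v$log shows the old group as INACTIVE.

    ```sql
    -- Add the new, larger groups (repeat for groups 16, 17, 18):
    ALTER DATABASE ADD LOGFILE GROUP 15
      ('/oracle/<SID>/origlogA/log_g15m1.dbf',
       '/oracle/<SID>/mirrlogA/log_g15m2.dbf') SIZE 100M;

    -- Switch and checkpoint until an old group is no longer needed:
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;

    -- Confirm the old group is INACTIVE before dropping it:
    SELECT group#, status FROM v$log;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    -- Note: DROP LOGFILE does not delete the files at OS level;
    -- remove the old files manually afterwards.
    ```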
