Adding logfile to primary/standby

Hi: I am on 10.2.0.3.
I have a physical standby for my primary. Now, for performance reasons, I'm adding a few redo log groups (and increasing the size of each one) on the primary.
As far as I understand:
- I need to add one standby log file group on the primary for each online log group I am adding.
- I need to add an online log and a standby log on the standby for each primary online log I am adding.
Question: since adding logs needs to be done in open mode, would read only be good enough for the standby? Or do I need to do something else?
TIA.

Standby Redo Logs Creation:
On primary database, execute --
sql> SELECT max(group#) from v$logfile;
sql> SELECT bytes from v$log;
sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/app/oracle/oradata/abc/redo4.log' SIZE 52428800;
sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/app/oracle/oradata/abc/redo5.log' SIZE 52428800;
sql> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/app/oracle/oradata/abc/redo6.log' SIZE 52428800;
You can create standby redo log files while the primary database is open as well.
:)
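
To the original question: the standby does not need to be open (read only or otherwise); mounted is enough, but redo apply has to be stopped first. A minimal sketch of the standby-side sequence, reusing the hypothetical paths and group numbers from the example above:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
-- standby redo logs, sized like the primary's online logs
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/app/oracle/oradata/abc/stbyredo4.log' SIZE 52428800;
-- online logs on the standby, used only after a role change
SQL> ALTER DATABASE ADD LOGFILE GROUP 7 '/u01/app/oracle/oradata/abc/redo7.log' SIZE 52428800;
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;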

Similar Messages

  • Physical standby doesn't apply redo, or primary didn't transport redo

    I set up a physical standby (OEL 5.0, DB 11.2) and everything seems right. But when I create a table on the primary DB, it does not appear on the standby database. Either the physical standby doesn't apply redo or the primary didn't transport redo to the standby database.
    Please consider my configuration:
    ----Standby :
    SQL> select database_role, protection_mode, log_mode from v$database;
    DATABASE_ROLE PROTECTION_MODE LOG_MODE
    PHYSICAL STANDBY MAXIMUM PERFORMANCE ARCHIVELOG
    SQL> archive log list;
    Database log mode Archive Mode
    Automatic archival Enabled
    Archive destination USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence 38
    Next log sequence to archive 40
    Current log sequence 40
    ------Primary :
    SQL> select database_role, protection_mode, open_mode from v$database;
    DATABASE_ROLE PROTECTION_MODE OPEN_MODE
    PRIMARY MAXIMUM PERFORMANCE READ WRITE
    SQL> create table test_stby (
    2 cot_1 number,
    3 cot_2 number);
    Table created.
    SQL> insert into test_stby values (10,11);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> alter system switch logfile;
    System altered.
    ---- Standby :
    SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
    no rows selected
    SQL> SELECT PROCESS, STATUS,SEQUENCE#,BLOCK#,BLOCKS,DELAY_MINS FROM V$MANAGED_STANDBY;
    PROCESS STATUS SEQUENCE# BLOCK# BLOCKS DELAY_MINS
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    ARCH CONNECTED 0 0 0 0
    MRP0 WAIT_FOR_LOG 40 0 0 0
    - On the standby I cancelled redo apply, opened read only and queried for table test_stby; it does not have this table.
    Please help
    Ch

    I did solve this problem :)
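
    The fix itself isn't recorded in the thread, but with MRP0 stuck at WAIT_FOR_LOG and no rows in V$ARCHIVED_LOG on the standby, the usual first check is whether the primary can ship redo at all; a sketch of the diagnostic queries (run on the primary, assuming the standby destination is dest_id 2):

    SQL> SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;
    SQL> SELECT dest_id, status FROM v$archive_dest_status WHERE dest_id = 2;

    A status other than VALID, or an ORA- error in the ERROR column, points at the transport side (service name, listener, password file) rather than at redo apply.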

  • Resizing the logfile in the primary database

    Hi,
    If I increase the size of the redo log files on my primary site, would this propagate the new size of redo across to the standby?
    Thanks & Regards
    Manoj

    Redo log sizes are not propagated automatically. If you are not using standby redo logs at your standby database, simply go ahead. If you do maintain standby redo logs at the standby site, then increase the logfile size at the standby site as well. A mismatch won't cause a real problem, but you will receive messages in the alert log of the standby database when the primary redo size and the standby redo size differ. Oracle's tendency is to expect the same redo size at both ends if standby redo logs are in place.
    Jaffar
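
    Since redo logs cannot be resized in place, the usual approach at each site is to add new, larger groups and drop the old ones once they go inactive; a minimal sketch (group numbers, paths, and the 200M size are hypothetical):

    SQL> ALTER DATABASE ADD LOGFILE GROUP 4 '/u01/oradata/db/redo04.log' SIZE 200M;
    SQL> ALTER DATABASE ADD LOGFILE GROUP 5 '/u01/oradata/db/redo05.log' SIZE 200M;
    SQL> ALTER SYSTEM SWITCH LOGFILE;   -- repeat until the old groups show INACTIVE
    SQL> SELECT group#, status FROM v$log;
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;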

  • Primary & standby db on same OS plaform but having different OS version

    Hi,
    I have primary & standby database 10.2.0.4 on AIX 5.3 Operating System platform .
    I would like to upgrade the Operating System (AIX) of my standby database server from 5.3 to 6.0.
    Are different Operating System versions for the primary & standby databases supported?
    Thanks.

    so, if my primary db is on AIX 5.3 & standby db on AIX 6.1 platform, then it will not work, or Oracle will not support these different OS versions. Is that right?
    Yes. Along with that, I'm asking why you want to choose different versions/releases of the OS.
    I have seen issues even when just some RPMs are missing, so think carefully before changing the complete version.
    Moreover, check the certification on support.oracle.com.
    Recently I checked for Linux 6; it's not yet certified. I'm not sure about unix here, so please do refer to http://support.oracle.com
    Thanks.

  • How to Refresh UAT Primary/standby from Production primary/standby

    Hi ,
    We have the following setup :
    Primary/standby - Production
    Primary/Standby - UAT
    I need to know the process on how to refresh the UAT primary/standby .
    I'm thinking along the following lines:
    1] If we have the export dump of production, can we go ahead and drop the schemas to be refreshed on both the UAT primary and UAT standby, then perform the schema import on the UAT primary?
    2] In case I have to do a full refresh of UAT, do I need to rebuild the UAT environment from the production backups? I.e.,
    (i) Drop both the UAT primary and UAT standby databases.
    (ii) Using the production backup, build the UAT primary. Take a backup of the UAT primary and build the UAT standby.
    Appreciate it if anyone can provide some best practices to refresh UAT from production.

    That setup seems to be rare. Usually the standby database itself is used for testing, and there is a feature for that called Snapshot Standby.
    But in your scenario, you need to develop your own techniques for "refreshment of the UAT Primary".
    An easy way (if your primary is not too large): throw away your UAT env each time the "Production Primary" changes, and RMAN-clone it to "UAT Primary" then.
    Then RMAN duplicate "UAT Standby" again. Could be relatively easily scripted.
    But again: Why not just create an ordinary Standby on your UAT hardware and use it for testing with Snapshot Standby feature? Much easier to maintain.
    Kind regards
    Uwe Hesse
    http://uhesse.wordpress.com
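
    A sketch of the clone-and-rebuild cycle Uwe describes, assuming 11g active duplication and hypothetical net service names (prodprim, uatprim):

    RMAN> CONNECT TARGET sys@prodprim
    RMAN> CONNECT AUXILIARY sys@uatprim
    RMAN> DUPLICATE TARGET DATABASE TO uatprim FROM ACTIVE DATABASE NOFILENAMECHECK;

    followed by DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE run against the UAT standby, the same command used to build the standbys elsewhere in this thread. On 10g, the backup-based form of DUPLICATE works the same way, without the FROM ACTIVE DATABASE clause.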

  • Switchover_status "FAILED DESTINATION" on both primary-standby databases

    Hi,
    I set up primary-standby databases and am trying to test the switchover functionality between them using the following commands:
    alter database commit to switchover to primary with session shutdown;
    alter database commit to switchover to physical standby with session shutdown;
    Currently, both DBs are showing
    open_mode = READ WRITE for "select open_mode from v$database;" and
    switchover_status = FAILED DESTINATION for "select switchover_status from v$database;"
    When both databases get into these states, how can I return them back to primary and standby roles? Could you please provide me a sequence of steps that I can execute to return them back to normal primary-standby states?
    Thanks in advance for your suggestion.

    The OS is Red hat 4.1.2-500.
    Oracle version is 11g.
    Here is the sequence of steps that lead to this issue:
    1. switchover_status on primary = to standby, switchover_status on secondary = not allowed
    So on primary, I execute
    alter database commit to switchover to physical standby with session shutdown;
    and possibly shutdown / startup mount (I don't remember exactly)
    2. switchover_status on secondary = to primary
    So on secondary, I execute
    alter database commit to switchover to primary with session shutdown;
    alter database open;
    3. Now on secondary, the switchover_status = FAILED DESTINATION
    On primary, I execute
    alter database commit to switchover to primary with session shutdown;
    and possibly "alter database open" (I don't remember exactly)
    The switchover_status values in steps 1 and 2 above should indicate that Data Guard was working. At the end of the above steps, I caused both databases to get into switchover_status = "FAILED DESTINATION".
    I hope to get one of the DBs back into the standby role.
    If I execute "alter database commit to switchover to physical standby with session shutdown", I get
    "ORA-16416: No viable Physical Standby switchover targets available"
    In this situation, is there any recommendation for rescue?
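
    For reference, the order matters in a switchover: the primary is converted first and restarted in mount, and the old standby is converted only once its switchover_status shows TO PRIMARY. A sketch of the normal sequence on a healthy configuration:

    -- on the primary (switchover_status should be TO STANDBY)
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    -- on the old standby (wait for switchover_status = TO PRIMARY)
    SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
    SQL> ALTER DATABASE OPEN;
    -- on the new standby
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

    Once both sides are open READ WRITE with FAILED DESTINATION, there is no switchover target left; the usual way out is to rebuild (or flash back) one of the two as a standby of the other, since a clean switchover is no longer possible from that state.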

  • How to know that primary & standby are synchronized (SCN)

    Dears,
    If I have a Data Guard solution (primary and physical standby databases),
    how do I make sure that both primary & standby are synchronized (SCN)?
    Please help.

    Hi,
    Compare the sequences applied on the standby against the archive log generation on the primary:
    select max(sequence#) from v$archived_log where applied='YES';
    - Pavan Kumar N
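
    A slightly fuller sketch of that comparison: run the first query on the primary, the second on the standby, and for an SCN-level answer compare CURRENT_SCN on both sides (the standby's value trails by exactly the redo not yet applied):

    -- on the primary
    SQL> SELECT max(sequence#) FROM v$archived_log;
    -- on the standby
    SQL> SELECT max(sequence#) FROM v$archived_log WHERE applied = 'YES';
    -- on both
    SQL> SELECT current_scn FROM v$database;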

  • Primary/standby and 3rd archive log destination?

    Running Oracle EE 10.2.0.4, Linux 64-bit. I have a primary and standby configuration using Data Guard, with appropriate LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 for the primary and standby; all works as expected. I want to multiplex the log files by adding a 3rd archive log destination on a different disk. I'm not clear on whether specifying a second 'primary' archive destination will write to both DEST_1 and DEST_3 or just alternate between the two. I want to make sure that both DEST_1 and DEST_3 are written to (there can be a lag for DEST_3).
    Is the following all I need or are there additional parameters to LOG_ARCHIVE_DEST_n I'm missing?
    LOG_ARCHIVE_DEST_3 = 'valid_for=(ONLINE_LOGFILE,ALL_ROLES)', 'location="/myThirdLoc"'
    Thanks -

    The database hangs because the redo log area is full. My test database is 16.7G and I gave the recovery filesystem 19G for the redo logs; apparently that is not enough. After I set up the backup with daily incrementals and ran it for 5 days, the recovery area filled up and the db hung. How do you free up the space, or set up the backup so that it can recycle the space? My DB is 10gR2 on RHEL3.
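
    On the actual question: local LOCATION destinations are written in parallel, never alternated (alternation only happens when a destination carries the ALTERNATE attribute), and all attributes belong inside one quoted string. A sketch of the syntax, with the hypothetical path from the question:

    SQL> ALTER SYSTEM SET log_archive_dest_3 =
      2  'LOCATION=/myThirdLoc VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)' SCOPE=BOTH;
    SQL> ALTER SYSTEM SET log_archive_dest_state_3 = 'ENABLE';

    OPTIONAL and REOPEN/MAX_FAILURE are the attributes to look at if occasional slowness or unavailability of the third disk must not hold up archiving.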

  • Why is the redo logfile in my standby db under flash_recovery\dbsid\backupset\?

    1) I used the RMAN script "duplicate target database for standby nofilenamecheck dorecover".
    Why are the redo logfiles automatically generated under flash_recovery_area\sid\backupset in my standby db, while they are in the oradata\sid folder on the primary-role database?
    2) Does Data Guard apply archived logs transmitted from the primary db directly, or via the standby redo logs?
    3) How and why are standby redo logfiles used?

    sequence# 1 has a first_change# greater than
    the next_change#; is there any reason for that?
    SEQUENCE# FIRST_CHANGE# NEXT_CHANGE#
    1 534907 551724
    I was confused?????
    SQL> select 534907-551724  from dual;
    534907-551724
           -16817
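
    On question 3: standby redo logs are where RFS writes the redo arriving from the primary, which is what allows real-time apply instead of waiting for each archived log to be complete. A sketch of how to check whether they exist and are in use (run on the standby):

    SQL> SELECT group#, thread#, sequence#, bytes, status FROM v$standby_log;
    -- an ACTIVE row means redo is currently being received into that group

    They should match the size of the primary's online redo logs, with at least one extra group per thread.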

  • Archiving errors from primary & standby server, please help

    Hi all.
    I'm currently getting error messages from both my primary and standby databases. Can anyone help me identify what these errors mean?
    Primary Database:
    Tue Apr 21 09:40:13 2009
    Creating archive destination LOG_ARCHIVE_DEST_2: 'Standby'
    ARC0: FAL archive, error 20 creating remote archivelog file 'Standby'
    Tue Apr 21 09:40:14 2009
    Errors in file /export/oradata/log/bdump/arc0_20901.trc:
    ORA-00020: maximum number of processes () exceeded
    ARC0: FAL archive failed, see trace file.
    ARCH: FAL archive failed. Archiver continuing
    Tue Apr 21 09:40:14 2009
    ORACLE Instance - Archival Error. Archiver continuing.
    ARCH: Connecting to console port...
    Tue Apr 21 09:40:14 2009
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    Tue Apr 21 09:40:14 2009
    Errors in file /export/oradata/log/bdump/arc0_20901.trc:
    ORA-16055: FAL request rejected
    Tue Apr 21 09:40:40 2009
    Thread 1 advanced to log sequence 41971
    Current log# 1 seq# 41971 mem# 0: /export/oradata/logfile/redoMIBS01a.log
    Current log# 1 seq# 41971 mem# 1: /export/oradata/logfile/redoMIBS01b.log
    Tue Apr 21 09:40:40 2009
    ARC1: Evaluating archive log 4 thread 1 sequence 41970
    ARC1: Beginning to archive log 4 thread 1 sequence 41970
    Creating archive destination LOG_ARCHIVE_DEST_1: '/export/oradata/MIBS/archive/log00010001S0000041970.ARC'
    ARC1: Completed archiving log 4 thread 1 sequence 41970
    Tue Apr 21 09:41:40 2009
    Errors in file /export/oradata/log/bdump/arc0_20901.trc:
    ORA-00020: maximum number of processes () exceeded
    Standby Database:
    Tue Apr 21 09:36:10 2009
    RFS: Possible network disconnect with primary database
    Closing latent archivelog for thread 1 sequence 41687
    EOF located at block 40958 low SCN 0:123821645 next SCN 0:123821645
    Latent archivelog '/opt/app/oracle/product/9.2.0/dbs/archlog00010001S0000041687.ARC'
    If you wish to failover to this standby database, you should use the
    following command to manually register the archivelog for recovery:
    ALTER DATABASE REGISTER LOGFILE '/opt/app/oracle/product/9.2.0/dbs/archlog00010001S0000041687.ARC';
    Tue Apr 21 09:36:10 2009
    Errors in file /export/oradata/log/udump/mibs_rfs_3802.trc:
    ORA-00367: checksum error in log file header
    ORA-00311: cannot read header from archived log
    ORA-00334: archived log: '/opt/app/oracle/product/9.2.0/dbs/archlog00010001S0000041687.ARC'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27072: skgfdisp: I/O error
    SVR4 Error: 22: Invalid argument
    Additional information: 1
    Tue Apr 21 09:36:19 2009
    RFS: Possible network disconnect with primary database
    Closing latent archivelog for thread 1 sequence 41688
    EOF located at block 2677 low SCN 0:123821835 next SCN 0:123821835
    Latent archivelog '/opt/app/oracle/product/9.2.0/dbs/archlog00010001S0000041688.ARC'
    If you wish to failover to this standby database, you should use the
    following command to manually register the archivelog for recovery:
    ALTER DATABASE REGISTER LOGFILE '/opt/app/oracle/product/9.2.0/dbs/archlog00010001S0000041688.ARC';
    Tue Apr 21 09:36:19 2009
    Errors in file /export/oradata/MIBS/log/udump/mibs_rfs_3807.trc:
    ORA-00367: checksum error in log file header
    ORA-00311: cannot read header from archived log
    ORA-00334: archived log: '/opt/app/oracle/product/9.2.0/dbs/archlog00010001S0000041688.ARC'
    ORA-27091: skgfqio: unable to queue I/O
    ORA-27072: skgfdisp: I/O error
    SVR4 Error: 2: No such file or directory
    Additional information: 1
    Really needing anyone's help at this point.

    Hi..
    > Errors in file /export/oradata/log/bdump/arc0_20901.trc:
    > ORA-00020: maximum number of processes () exceeded
    What is the value of the parameter PROCESSES? You need to increase it, which requires bouncing the database. Increasing the PROCESSES parameter requires an increase in the SESSIONS parameter too:
    Sessions = (1.1 * PROCESSES) + 5
    [http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams169.htm]
    [http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams191.htm]
    HTH
    Anand
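
    A sketch of the check and fix described above, assuming an spfile (the values are examples only):

    SQL> SHOW PARAMETER processes
    SQL> SHOW PARAMETER sessions
    SQL> ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
    SQL> ALTER SYSTEM SET sessions = 335 SCOPE = SPFILE;   -- (1.1 * 300) + 5
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP

    If SESSIONS was never set explicitly, it is re-derived from PROCESSES at startup, so setting PROCESSES alone is often enough.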

  • Backup and Primary/Standby

    I have a 1 TB primary database and a 1 TB standby database. I am considering backing up the primary database and just deleting the standby database archive logs.
    I know that I should be able to use "rman delete archivelog all;" and have it delete the archive logs. My question, besides whether I should back up both databases: will RMAN delete even archivelogs that have not yet been applied, or will it balk?

    Hi,
    >
    I have a 1 TB primary database and a 1 TB standby database. I am considering backing up the primary database and just deleting the standby database archive logs.
    I know that I should be able to use "rman delete archivelog all;" and have it delete the archive logs. My question, besides whether I should back up both databases: will RMAN delete even archivelogs that have not yet been applied, or will it balk?
    >
    You can take the RMAN backup from the primary database. Instead of doing DELETE ARCHIVELOG ALL, I suggest you delete the archivelogs with the UNTIL TIME clause or the COMPLETED BEFORE clause.
    refer [http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta008.htm#RCMRF106]
    [http://download.oracle.com/docs/cd/B19306_01/backup.102/b14194/rcmsynta014.htm#i95042]
    HTH
    Anand
    Edited by: Anand... on Mar 3, 2009 7:22 PM
    Added 2 Links
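
    A sketch of the COMPLETED BEFORE form (the 7-day window is only an example). Note that a plain DELETE ARCHIVELOG does not by itself check whether a log has been applied on the standby, so it is worth confirming the applied sequence first:

    RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 7';

    -- on the standby, before deleting, check what has been applied:
    SQL> SELECT max(sequence#) FROM v$archived_log WHERE applied = 'YES';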

  • Create ONLINE logfile in physical standby database

    We created a physical standby database with the RMAN duplicate command on a remote server:
    "duplicate target database for standby dorecover nofilenamecheck"
    When I look at the standby server, the online logfiles were not created; however, their entries are there in the V$LOG and V$LOGFILE views.
    I guess this is the default behaviour of the duplicate command in RMAN, and we cannot specify a LOGFILE clause when we create a standby database.
    Now the problem is we cannot drop the online logfiles on the standby database, since their status is "CURRENT" or "ACTIVE".
    Since the online logfiles were not actually created, the "ALTER DATABASE CLEAR LOGFILE GROUP" command returns an error, as it cannot find the files on the server.
    So how can we drop the current/active online logfiles and add new ones on the standby db?

    I'm assuming you have a physical standby. Here are the steps I did in the past:
    1) create a backup controlfile trace
    2) bring the database back using the "CREATE CONTROLFILE" statement in the trace file, BUT you need to remove or comment out the line that has the corrupt or missing redo log file. And don't forget to add the tempfile after you recreate the controlfile.
    example:
    alter database backup controlfile to trace;
    STARTUP NOMOUNT
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
    LOGFILE
    GROUP 1 '/oracledata/orcl/redo01.log' SIZE 200M,
    GROUP 2 '/oracledata/orcl/redo02.log' SIZE 200M,
    GROUP 3 '/oracledata/orcl/redo03.log' SIZE 200M
    -- GROUP 3 '/oracledata/orcl/redo03.log' SIZE 200M   (comment out a missing/corrupt group like this)
    -- STANDBY LOGFILE
    -- GROUP 10 '/oracledata/orcl/redostdby04.log' SIZE 200M,
    -- GROUP 11 '/oracledata/orcl/redostdby05.log' SIZE 200M
    DATAFILE
    '/oracledata/orcl/system01.dbf',
    '/oracledata/orcl/undotbs01.dbf',
    '/oracledata/orcl/sysaux01.dbf',
    '/oracledata/orcl/users01.dbf'
    CHARACTER SET WE8ISO8859P1;
    If you just want to add the standby redo log, then use this command:
    alter database add standby logfile
    '/<your_path>/redostdby01.log' size 200M reuse;
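
    For what it's worth, recreating the controlfile may be heavier than needed here: CLEAR LOGFILE recreates the missing file itself, and on a physical standby it typically only fails while redo apply is still running or while STANDBY_FILE_MANAGEMENT=AUTO blocks file operations. A hedged sketch of the lighter path:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = MANUAL;
    SQL> ALTER DATABASE CLEAR LOGFILE GROUP 1;   -- repeat per group; recreates the file on disk
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = AUTO;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;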

  • Need to re-organize the primary/standby servers

    Our QA environment has a two-node RAC as primary (let's say in Chicago) and a two-node RAC as standby (in Boston).
    Now, during the testing procedures, we found application connections to the Chicago primary too slow, so they want to use the Boston standby as primary.
    How do I reverse them? This is an environment that has not been put into use yet.
    Do I have to use dbca to create a db on the Boston standby site, and then create a standby on the Chicago primary site?
    What is the best way?
    Thank you in advance.

    > Our QA environment has a two-node RAC as primary (let's say in Chicago) and a two-node RAC as standby (in Boston).
    > Now, during the testing procedures, we found application connections to the Chicago primary too slow, so they want to use the Boston standby as primary.
    > How do I reverse them? This is an environment that has not been put into use yet.
    What is the DB version?
    Switchover is the standard procedure, but you have some alternate options (only for QA); see the sketch after this list:
    1) disconnect the standby from the primary: "log_archive_dest_state_2=defer"
    2) create a snapshot/restore point on the standby
    3) perform a failover on the standby (only the standby's conversion to primary)
    4) users can connect to the new primary
    5) once you are done with that, restore that snapshot; flashback will take the database back to the position where you created the restore point
    6) then re-enable "log_archive_dest_2"
    7) make sure all the archives generated while the standby was disconnected still exist on the primary.
    > Do I have to use dbca to create a db on the Boston standby site, and then create a standby on the Chicago primary site?
    Do you want the same data as the primary, or will any new database work for you? If you are clear on this, then you have your answer. If there is no concern about the data, then create a dummy database and give access to that, instead of performing these operations on the DR & RAC site.
    HTH.
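
    A sketch of steps 2, 3, and 5 above, assuming Flashback Database is enabled on the standby, a 10.2+ database, and a hypothetical restore point name:

    -- on the standby, before the test
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> CREATE RESTORE POINT before_qa_test GUARANTEE FLASHBACK DATABASE;
    SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;   -- failover: standby becomes primary
    SQL> ALTER DATABASE OPEN;
    -- after the test, wind it back and make it a standby again
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> FLASHBACK DATABASE TO RESTORE POINT before_qa_test;
    SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    SQL> DROP RESTORE POINT before_qa_test;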

  • Dataguard.... [the archivelog not transported between primary & standby db]

    Dear all Data Guard gurus [sorry if I placed this message wrongly],
    I tried to install Oracle 10g Data Guard on 2 machines with CentOS 4.6 for development.
    After finishing the installation, I got something "STRANGE" when I do a query at dataguard1:
    dataguard1 is the primary database machine [hostname]
    dataguard2 is the standby database machine [hostname]
    sys@dataguard1> select sequence#, first_time, next_time from v$archived_log;
    SEQUENCE# FIRST_TIM NEXT_TIME
    16 24-JUL-09 24-JUL-09
    17 24-JUL-09 24-JUL-09
    18 24-JUL-09 24-JUL-09
    19 24-JUL-09 24-JUL-09
    20 24-JUL-09 24-JUL-09
    21 24-JUL-09 24-JUL-09
    22 24-JUL-09 11-AUG-09
    23 11-AUG-09 12-AUG-09
    24 12-AUG-09 12-AUG-09
    25 12-AUG-09 12-AUG-09
    26 12-AUG-09 12-AUG-09
    sys@dataguard1>select sequence#,applied
    2 from v$archived_log
    3 order by sequence#;
    SEQUENCE# APP
    16 NO
    17 NO
    18 NO
    19 NO
    20 NO
    21 NO
    22 NO
    23 NO
    24 NO
    but if I do the same query on the standby database:
    sys@dataguard2> select sequence#, first_time, next_time from v$archived_log;
    nothing is archived... and nothing is applied.
    here is my parameter file [not all parameters included]
    [primary db] on the dataguard1 machine [dbname: dgtest]
    *.db_file_name_convert='/u01/app/oracle/oradata/dgtest','/u01/app/oracle/oradata/dgtest'
    *.db_name='dgtest'
    *.DB_UNIQUE_NAME='dgtest'
    *.fal_client='dgtest2'
    *.fal_server='dgtest'
    *.job_queue_processes=10
    *.log_archive_config='DG_CONFIG=(dgtest,dgtest2)'
    *.log_archive_dest_1='LOCATION=/u02/oradata/archive/dgtest/
    valid_for=(all_logfiles,all_roles)
    db_unique_name=dgtest'
    *.log_archive_dest_2='service=dgtest2 lgwr async
    valid_for=(online_logfiles,primary_roles)
    db_unique_name=dgtest2'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='enable'
    *.log_archive_format='%s_%t_%r.arc'
    *.log_file_name_convert='/u01/app/oracle/oradata/dgtest','/u01/app/oracle/oradata/dgtest',
    '/u01/app/oracle/flash_recovery_area/DGTEST/onlinelog/','/u01/app/oracle/flash_recovery_area/DGTEST/onlinelog/'
    *.service_names='dgtest'
    *.standby_file_management='auto'
    [standby] on the dataguard2 machine [dbname: dgtest2]
    *.db_file_name_convert='/u01/app/oracle/oradata/dgtest','/u01/app/oracle/oradata/dgtest'
    *.db_name='dgtest'
    *.DB_UNIQUE_NAME='dgtest2'
    *.fal_client='dgtest2'
    *.fal_server='dgtest'
    *.log_archive_config='DG_CONFIG=(dgtest,dgtest2)'
    *.log_archive_dest_1='LOCATION=/u02/oradata/archive/dgtest/
    valid_for=(all_logfiles,all_roles)
    db_unique_name=dgtest2'
    *.log_archive_dest_2='service=dgtest lgwr async
    valid_for=(online_logfiles,primary_roles)
    db_unique_name=dgtest'
    *.log_archive_dest_state_1='enable'
    *.log_archive_dest_state_2='enable'
    *.log_archive_format='%s_%t_%r.arc'
    *.log_file_name_convert='/u01/app/oracle/oradata/dgtest','/u01/app/oracle/oradata/dgtest',
    '/u01/app/oracle/flash_recovery_area/DGTEST/onlinelog/','/u01/app/oracle/flash_recovery_area/DGTEST/onlinelog/'
    *.remote_login_passwordfile='EXCLUSIVE'
    *.service_names='dgtest'
    *.standby_file_management='auto'
    what could the problem be????
    thanks for your attention & help
    maybe hunterX could help with the problem ;)
    Edited by: kang dadang on Aug 13, 2009 12:09 AM
    Edited by: kang dadang on Aug 13, 2009 12:10 AM

    thanks for the reply amit..
    when I do that query:
    select * from V$ARCHIVE_DEST where dest_id=2
    Status = ERROR
    ERROR = ORA-01031: insufficient privileges
    ORA-01031: insufficient privileges????
    what could the problem be??
    nb: the password file between the dbs is the same
    but I configured direct I/O; could that be the problem?
    thanks amit
    Edited by: kang dadang on Aug 17, 2009 10:08 PM
    Edited by: kang dadang on Aug 17, 2009 10:11 PM
    Edited by: kang dadang on Aug 17, 2009 10:16 PM
    Edited by: kang dadang on Aug 17, 2009 10:25 PM
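
    ORA-01031 on dest_id=2 almost always means the redo transport session cannot log in as SYS on the remote side, i.e. the password files do not really match (direct I/O is unrelated). A sketch of the usual fix, with the file name following the orapw<SID> convention and a placeholder password:

    $ orapwd file=$ORACLE_HOME/dbs/orapwdgtest password=MySysPassword force=y
    -- copy this same file to the standby host as its orapw<SID>, then verify on both sides:
    SQL> SHOW PARAMETER remote_login_passwordfile   -- should be EXCLUSIVE
    SQL> SELECT * FROM v$pwfile_users;              -- SYS should be listed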
