Logical Standby 'CURRENT' log files

I have an issue with a Logical Standby implementation: although everything on the Grid Control 'Data Guard' page is Normal, when I view Log File Details I have 62 files listed with a status of 'Committed Transactions Applied'.
The oldest of these files (2489) is over 2 days old and the newest (2549) is 1 day old. The most recently applied log is 2564 (the current log is 2565).
As for the actual APPLIED_SCN on the standby, it is greater than the highest NEXT_CHANGE# of the newest logfile in the list (2549). The READ_SCN is less than the NEXT_CHANGE# of the oldest log in the list (2489), which is why all these log files appear on the list.
I am confused as to why the READ_SCN is not advancing. The documentation states that once the NEXT_CHANGE# of a logfile falls below READ_SCN, the information in that log has been applied or 'persistently stored in the database'.
Is it possible that there is a transaction spanning all of these log files? More recent logfiles have dropped off the list, i.e. they have been applied or 'persistently stored'.
Basically I'm unsure how to proceed, clean up this list of files, and ensure that everything has been applied.
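For anyone looking at the same thing, the watermarks and the stuck logs can be listed with queries along these lines (a sketch against the standard DBA_LOGSTDBY_PROGRESS and DBA_LOGSTDBY_LOG views):
-- SCN watermarks for SQL Apply
SELECT applied_scn, read_scn, newest_scn
FROM   dba_logstdby_progress;
-- Registered logs whose NEXT_CHANGE# has not yet fallen below READ_SCN,
-- i.e. the files that keep showing up in the Grid Control list
SELECT sequence#, first_change#, next_change#
FROM   dba_logstdby_log
WHERE  next_change# >= (SELECT read_scn FROM dba_logstdby_progress)
ORDER  BY sequence#;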
Regards
Graeme King

Thank you Larry. I have actually already reviewed this document. We are not getting the error it lists for long-running transactions, though.
I wonder if it is related to the RMAN restore we did, where I restored the whole standby database but the standby redo log files were, obviously, not restored and were therefore 'newer' than the restored database?
After the restore I did see lots of trace files with this message:
ORA-00314: log 5 of thread 1, expected sequence# 2390 doesn't match 2428
ORA-00312: online log 5 thread 1: 'F:\ORACLE\PRODUCT\10.2.0\PR0D_SRL0.F'
ORA-00314: log 5 of thread 1, expected sequence# 2390 doesn't match 2428
ORA-00312: online log 5 thread 1: 'F:\ORACLE\PRODUCT\10.2.0\PR0D_SRL0.F'
I just stopped and restarted SQL Apply, and sure enough it cycled through all the log files in the list (from the READ_SCN onwards), but they are still in the list. Also, there is very little activity on this non-production database.
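For the record, the stop/restart was just the standard pair of SQL Apply commands (IMMEDIATE enables real-time apply from the standby redo logs):
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;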
regards
Graeme

Similar Messages

  • Use of standby redo log files in primary database

    Hi All,
    What is the exact use of setting up standby redo log files in the primary database in a Data Guard setup?
    Any good documents?

    A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
    You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
    Refer to the link below and perform the steps there to configure the standby redo log:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
    If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
    Refer to the link:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/log_apply.htm#i1023371
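    As a concrete illustration of the above (a sketch only; the group numbers, file paths, and sizes are assumptions and should match your own online redo logs):
    -- Create standby redo log groups (on the standby, and ideally also on the primary so they are ready after a switchover)
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/srl04.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/oradata/srl05.log') SIZE 50M;
    -- Enable real-time apply on a physical standby (10.2)
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;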

  • Standby redo log file

    Hi,
    From the Oracle documentation I know that on the standby side the RFS process writes to the standby redo log files -> archived logs, and the MRP process applies the archived logs to the standby database.
    My question is: if we don't create standby redo log files, what happens?

    Hello;
    When redo is received by an RFS on the standby, the RFS process writes the redo data into archived redo logs, or optionally to the SRLs.
    Standby redo logs are where the RFS process at your standby database writes incoming redo; they help performance because the RFS does not have to create the archive log file.
    Standby redo logs are a component of the Data Guard setup. They should be the same size as the redo logs on the primary.
    Standby redo logs do not need to be multiplexed.
    I would create SRLs on both the primary and the standby. Think of it as one database in either standby or primary mode: if you have to switch over, you still need them.
    If you have SRLs set up, a failover or switchover should occur faster and more safely.
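    A sketch of the sizing point above (the group number, path, and size are assumptions): check the online redo log size on the primary first, then create SRLs of the same size:
    -- On the primary: size of the online redo logs
    SELECT group#, thread#, bytes/1024/1024 AS mb FROM v$log;
    -- Create a matching standby redo log group (repeat per group; the usual advice is one more SRL group per thread than online groups)
    ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 ('/u01/oradata/prod/srl11.log') SIZE 50M;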
    Best Regards
    mseberg

  • What is the purpose of standby redo log files

    Hi,
    What is the purpose of the standby redo log files in DR?
    What happens if the standby redo log files are created, and what if they are not?
    Please explain
    Thanks

    3.1.3 Configure a Standby Redo Log
    A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
    You should plan the standby redo log configuration and create all required log groups and group members when you create the standby database. For increased availability, consider multiplexing the standby redo log files, similar to the way that online redo log files are multiplexed.
    Reference http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#i1225703
    HTH
    Anand

  • Unassigned Status Of Standby Redo Log Files

    I created 2 standby redo log groups and use LGWR on the primary site to
    transfer redo data; all is good. But when I query the V$STANDBY_LOG
    view, I find that the STATUS column of both my standby redo logs is 'UNASSIGNED'.
    Also, SEQUENCE#, THREAD#, and all the other columns are 0.
    Any explanation?

    Thanks for the reply Sophie. I did perform a log switch at my primary site but the status of the standby redo log files remained unassigned. I am pasting here the messages from my alert log file; maybe that can help you diagnose the problem.
    ALTER DATABASE SET STANDBY DATABASE PROTECTED
    Tue Jul 26 15:35:18 2005
    Completed: ALTER DATABASE SET STANDBY DATABASE PROTECTED
    Tue Jul 26 15:35:22 2005
    ALTER DATABASE OPEN
    Tue Jul 26 15:35:23 2005
    LGWR: Primary database is in CLUSTER CONSISTENT mode
    LGWR: Primary database is in MAXIMUM PROTECTION mode
    LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR
    LNS0 started with pid=18
    Tue Jul 26 15:35:28 2005
    LGWR: Error 16086 verifying archivelog destination LOG_ARCHIVE_DEST_2
    LGWR: Continuing...
    Tue Jul 26 15:35:28 2005
    Errors in file e:\oracle\admin\test\bdump\test_lgwr_1864.trc:
    ORA-16086: standby database does not contain available standby log files
    LGWR: Error 16086 disconnecting from destination LOG_ARCHIVE_DEST_2 standby host 'TESTstdb'
    LGWR: Minimum of 1 applicable standby database required
    Tue Jul 26 15:35:28 2005
    Errors in file e:\oracle\admin\test\bdump\test_lgwr_1864.trc:
    ORA-16072: a minimum of one standby database destination is required
    LGWR: terminating instance due to error 16072
    Instance terminated by LGWR, pid = 1864
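    ORA-16086 generally means the standby has no usable standby redo log for the incoming redo. A quick check (a sketch; run the first query on the primary and the second on the standby) is to compare sizes and counts, since a standby redo log must be the same size as the online log it receives redo from:
    -- On the primary: online redo logs
    SELECT group#, thread#, bytes FROM v$log;
    -- On the standby: standby redo logs
    SELECT group#, thread#, bytes, status FROM v$standby_log;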

  • Dataguard lost both Primary redo log and standby redo log files

    Hi,
    I am new to Data Guard. I came across a scenario where we lose both the primary redo log files and the standby redo log files.
    Can someone please help me understand how to recover from this situation.
    Thanks!

    >lose both primary redo log file and standby redo log files
    We have to be very clear.
    There are (set A) online redo log files and (set B) standby redo log files, at (location 1) the Primary and (location 2) the Standby.
    The standby redo log files, depending on the configuration, aren't strictly mandatory. The standby can be applying redo without online redo log files present as well, depending on how it was set up.
    So, the question is: did you lose the online redo log files at the primary? Didn't the primary shut itself down then? If so, you have to do an incomplete recovery at the primary, OR switch over to the standby (which may or may not have received the last transactions, depending on how it was configured and operating), OR restore from the standby to the primary (again, with possible loss of transactions).
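    For the failover branch on a physical standby, the classic manual 10g sequence (a sketch, assuming no Data Guard broker) is:
    -- Apply whatever redo is still available, then fail over
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
    ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    -- Then open (or restart) the new primary
    ALTER DATABASE OPEN;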
    Hemant K Chitale

  • Do I need to create new group for standby redo log files?

    I have 10 groups of redo log files with 2 members per group for my primary database. Do I need to create new groups of standby redo log files for the standby database?
    Group#    Members
    ==================
         1          2
         2          2
         3          2
         4          2
         5          2
         6          2
         7          2
         8          2
         9          2
        10          2
    If so, is the following statement correct or not?
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 1 ('D:\Databases\epprod\StandbyRedoLog\REDO01.LOG','D:\Databases\epprod\StandbyRedoLog\REDO01_1.LOG');
    Please correct me if I am making a mistake,
    because when I issue the statement I get an error message saying the group is already created.

    Thanks John,
    I just found the answer.
    Yes, it's recommended to add new groups; for instance, if I have groups 1 to 10 then the standby groups should be numbered 11 to 20.
    Thanks, I found the answer.
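    So the statement from the question, corrected for the group numbering (a sketch; the member names are adjusted to match, and the SIZE is an assumption that should equal your online logs):
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
      ('D:\Databases\epprod\StandbyRedoLog\REDO11.LOG',
       'D:\Databases\epprod\StandbyRedoLog\REDO11_1.LOG') SIZE 50M;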

  • Dataguard Solution for standby redo log file groups

    Respected Experts,
    My database version is 10.2.0.1.0 on Red Hat 5 OS. I want to create a standby database using RMAN.
    Can anyone help me with the full steps? I'm also confused about the number of standby redo log file members that need to be created.
    Thanks and Regards
    Monoj Das

    >My database version is 10.2.0.1.0 and Red Hat 5 os. I want to create a standby database using RMAN.
    To configure the standby you can either use 'duplicate target database for standby', or:
    1) restore the standby controlfile
    2) mount the standby database
    3) restore the database
    and configure the standby parameters, then start MRP; that will do.
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ps.htm
    >Can anyone help me with the full steps? I'm also confused about the number of standby redo log file members that need to be created.
    It depends on which parameter you want to use. If you specify log_archive_dest_2='service=... ARCH', then there is no need to create any standby redo log groups.
    If you use log_archive_dest_2='service=... LGWR', transport is in terms of redo and you need standby redo log files on the standby database, which is real-time.
    When you use LGWR, less data is lost if an online redo log file is lost, which is why it is recommended.
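    A minimal sketch of the 'duplicate' route (the connect strings are assumptions; see the linked doc for the full procedure):
    RMAN> CONNECT TARGET sys@prod          # the primary
    RMAN> CONNECT AUXILIARY sys@prodstby   # the standby instance, started NOMOUNT
    RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER NOFILENAMECHECK;
    -- Then on the standby, start managed recovery (MRP):
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;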
    HTH.

  • How do I check if I have standby redo log files

    How do I check if I have standby redo log files?
    And an example of creating them?

    To check the existence of standby redo log files:
    SQL> DESC v$standby_log
    Name                                      Null?    Type
    ----------------------------------------- -------- ----------------------------
    GROUP#                                             NUMBER
    DBID                                               VARCHAR2(40)
    THREAD#                                            NUMBER
    SEQUENCE#                                          NUMBER
    BYTES                                              NUMBER
    USED                                               NUMBER
    ARCHIVED                                           VARCHAR2(3)
    STATUS                                             VARCHAR2(10)
    FIRST_CHANGE#                                      NUMBER
    FIRST_TIME                                         DATE
    LAST_CHANGE#                                       NUMBER
    LAST_TIME                                          DATE
    SQL> select * from v$standby_log;
    no rows selected
    To create a standby redo log file:
    SQL> alter database add standby logfile group 11 ('/u01/app/test.log') size 5m;
    Database altered.
    SQL> set line 10000
    SQL> select * from v$standby_log;
        GROUP# DBID                                        THREAD#  SEQUENCE#      BYTES       USED ARC STATUS     FIRST_CHANGE# FIRST_TIM LAST_CHANGE# LAST_TIME
            11 UNASSIGNED                                        0          0    5242880        512 YES UNASSIGNED              0                      0
    And this is how you drop it:
    SQL> alter database drop standby logfile group 11;
    Database altered.
    SQL> ! rm /u01/app/test.log
    Asif Momen
    http://momendba.blogspot.com

  • Current log file is lost

    Hi all,
    I am using Oracle 9i on Windows; my database is in archivelog mode and I have 2 log groups. Now I have lost my current log file. I know it can be solved by incomplete recovery.
    Kindly, can anyone tell me the command? How can I recover it? Shall I use the backup controlfile?
    senthil

    Plan A. Is your log group multiplexed? Drop the lost logfile member and add a new one.
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE MEMBER 'lost redo log';
    ALTER DATABASE ADD LOGFILE MEMBER 'new redo log file' TO GROUP <n>;
    Plan B. It is not multiplexed, but the database is still open.
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP <n>;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER DATABASE DROP LOGFILE MEMBER 'lost redo log';
    ALTER DATABASE ADD LOGFILE MEMBER 'new redo log file' TO GROUP <n>;
    Plan C. Not multiplexed, and the database is shut down.
    Perform incomplete recovery.
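    For Plan C, incomplete recovery runs along these lines (a sketch; whatever redo existed only in the lost current log is gone, and RECOVER ... USING BACKUP CONTROLFILE is only needed if the current controlfile is also lost):
    STARTUP MOUNT
    RECOVER DATABASE UNTIL CANCEL;
    -- apply the available archived logs, then type CANCEL at the missing sequence
    ALTER DATABASE OPEN RESETLOGS;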

  • ACS -current log file CSMonLog Active.csv is showing blank

    Under the ACS Service Monitoring tab, the current log file CSMonLog Active.csv is showing blank.
    Could anyone let me know why this happens?

    CSMon: the CSMon service is responsible for the monitoring, recording, and notification of Cisco Secure CS ACS performance, and includes automatic responses to some scenarios. For instance, if the TACACS+ or RADIUS service dies, CS ACS by default restarts all the services, unless otherwise configured. Monitoring includes monitoring the overall status of Cisco Secure ACS and the system on which it is running. CSMon actively monitors three basic sets of system parameters:
        Generic host system state: monitors disk space, processor utilization, and memory utilization.
        Application-specific performance: periodically performs a test login each minute using a special built-in test account by default.
        System resource consumption by Cisco Secure ACS: CSMon periodically monitors and records the usage by Cisco Secure ACS of a small set of key system resources. Handle counts, memory utilization, processor utilization, threads used, and failed log-on attempts are compared to predetermined thresholds for indications of atypical behavior.
    CSMon works with CSAuth to keep track of user accounts that are disabled for exceeding their failed attempts count maximum. If configured, CSMon provides immediate warning of brute force attacks by alerting the administrator that a large number of accounts have been disabled.
    By default CSMon records exception events in logs, both in the CSV file and the Windows Event Log, that you can use to diagnose problems. Optionally you can configure event notification via e-mail, so that notification for exception events and outcomes includes the current state of Cisco Secure ACS at the time of the message transmission. The default notification method is simple mail-transfer protocol (SMTP) e-mail, but you can create scripts to enable other methods. However, if the event is a failure, CSMon takes the actions that are hard-coded for when the triggering event is detected. If the event is a warning event, it is logged, the administrator is notified if so configured, and no further action is taken. After a sequence of retries, CSMon also attempts to fix the cause of the failure by restarting individual services. It is possible to integrate custom-defined actions with the CSMon service, so that a user-defined action can be taken based on specific events.
    Answering your query: this may be a brand-new installation, OR none of the ACS services have been restarted lately (so there are no logs), OR CSMon logging might have been disabled under System Configuration > Logging.
    ~BR
    Jatin Katyal
    **Do rate helpful posts**

  • File system task - Move file issue, skip the current log file thats open by another process - IIS logs

    Hi,
    I have created a package to move IIS log files from their source directory to another directory called 'processing'. However, when the package tries to move the current log file it errors, because the file is in use by another process (IIS, as it is still writing to it). I would like the package to just ignore the in-use file, process the rest, and show the package completing successfully. The file in use will be rolled by IIS at midnight and will be picked up by the next package run.
    However, when the file is rolled and becomes free, a new log file is created which will in turn be marked as in-use... and should be ignored at the next run.
    Is there some way I can tell the File System Task to ignore in-use files, or can I add an event handler to do the same sort of thing?
    Any assistance is appreciated.

    Hi Arthur,
    Thank you for your reply.
    I have resolved the issue with the following example:
    http://www.timmitchell.net/post/2010/08/23/ssis-conditional-file-processing-in-a-foreach-loop/
    What I realised was that I needed to just ignore the current log file, which gets created daily, so using the above example (and changing the last-written date to the created date) I am able to ignore/skip the log that's in use.
    Adam

  • Logical standby | archive log deleted | how to remove gap ???

    Hi gurus,
    I have a problem on a logical standby.
    By mistake, a log shipped to the logical standby has been deleted. Now how do I fill the gap?
    ON STANDBY
    SEQUENCE#  FIRST_CHANGE#  NEXT_CHANGE#  APPLIED
          228         674847        674872  YES
          229         674872        674973  CURRENT
          230         674973        674997  NO
          231         674997        675023  NO
          232         675023        675048  NO
          233         675048        675109  NO
          234         675109        675135  NO
          235         675135        675160  NO
          236         675160        675183  NO
          237         675183        675208  NO
          238         675208        675232  NO
          239         675232        675257  NO
          240         675257        675282  NO
          241         675282        675382  NO
          242         675382        675383  NO
          243         675383        675650  NO
          244         675650        675652  NO
          245         675652        675670  NO
          246         675670        675688  NO
          247         675688        675791  NO
          248         675791        678524  NO
    The archive logs are shipping to the standby location and getting registered.
    ALERT LOG OF STANDBY
    Fri May 7 12:25:36 2010
    Primary database is in MAXIMUM PERFORMANCE mode
    RFS[21]: Successfully opened standby log 5: '/u01/app/oracle/oradata/BEST/redo05.log'
    Fri May 7 12:25:37 2010
    RFS LogMiner: Registered logfile [u01/app/oracle/flash_recovery_area/BEST/archivelog/archBEST_248_1_715617824.dbf] to LogMiner session id [1]
    But I don't have the standby log after sequence 229...
    ON PRIMARY
    SYS@TEST AS SYSDBA> archive log list
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            /u01/app/oracle/flash_recovery_area/TEST/standlog
    Oldest online log sequence     247
    Next log sequence to archive   249
    Current log sequence           249
    What do I do next to apply the sequences and bring both back in sync?
    Please help me.

    Thanks for the response.
    No, it's a pure logical standby.
    I have tried to FTP the archive logs from the primary to the standby and register them manually:
    SYS@BEST AS SYSDBA> alter database register logfile '/u01/app/oracle/flash_recovery_area/BEST/archivelog/archBEST_230_1_715617824.dbf';
    alter database register logfile '/u01/app/oracle/flash_recovery_area/BEST/archivelog/archBEST_230_1_715617824.dbf'
    ERROR at line 1:
    ORA-01289: cannot add duplicate logfile
    SYS@BEST AS SYSDBA> alter database register logfile '/u01/app/home/archTEST_230_1_715617824.dbf';
    alter database register logfile '/u01/app/home/archTEST_230_1_715617824.dbf'
    ERROR at line 1:
    ORA-01289: cannot add duplicate logfile
    Any other way?

  • Transaction log shipping restore with standby failed: log file corrupted

    The transaction log restore failed and I get the error below. It fails only for this database; the remaining databases with log shipping on the same SQL Server are working fine.
    Date:               9/10/2014 6:09:27 AM
    Log:                Job History (LSRestore_DATA_TPSSYS)
    Step ID:            1
    Server:             DATADR
    Job Name:           LSRestore_DATA_TPSSYS
    Step Name:          Log shipping restore log job step.
    Duration:           00:00:03
    Sql Severity:       0
    Sql Message ID:     0
    Operator Emailed:
    Operator Net sent:
    Operator Paged:
    Retries Attempted:  0
    Message
    2014-09-10 06:09:30.37  *** Error: Could not apply log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' to secondary database 'TPSSYS'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred while processing the log for database 'TPSSYS'. 
    If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'TPSSYS' (13:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database 'TPSSYS', file 'TPSSYS' on file 1.
    Processed 1 pages for database 'TPSSYS', file 'TPSSYS_log' on file 1.(.Net SqlClient Data Provider) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Skipping log backup file '\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn' for secondary database 'TPSSYS' because the file could not be verified.
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: An error occurred restoring the database access mode.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteScalar requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  Deleting old log backup files. Primary Database: 'TPSSYS'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  The restore operation completed with errors. Secondary ID: 'dd25135a-24dd-4642-83d2-424f29e9e04c'
    2014-09-10 06:09:30.37  *** Error: Could not log history/error message.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.37  *** Error: Could not cleanup history.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-10 06:09:30.37  *** Error: ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.(System.Data) ***
    2014-09-10 06:09:30.38  ----- END OF TRANSACTION LOG RESTORE    
    Exit Status: 1 (Error)

    I have restored the database to a new server and checked with new log shipping, but it gives this same error again. If it were a network issue, I believe the issue would need to occur on every database on that server with a log shipping configuration.
    Error:
    Message
    2014-09-12 10:50:03.18    *** Error: Could not apply log backup file 'E:\LsSecondery\EAPDAT\EAPDAT_20140912051511.trn' to secondary database 'EAPDAT'.(Microsoft.SqlServer.Management.LogShipping) ***
    2014-09-12 10:50:03.18    *** Error: An error occurred while processing the log for database 'EAPDAT'.  If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    An error occurred during recovery, preventing the database 'EAPDAT' (8:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
    RESTORE LOG is terminating abnormally.
    Can this happen due to database or log file corruption? If so, how can I check on that to verify the issue?
    It's not necessarily the case that a network issue would happen every day; IMO it basically happens when the load on the network is high and you transfer a log file which is big in size.
    As per the message, the database engine was not able to restore the log backup, and said that you must rebuild the log because it did not find the log to be consistent. From here it looks like log corruption.
    Is it the same log file you restored? If that is the case, since the log file was corrupt, it would of course give an error on whatever server you restore it to.
    Can you try creating log shipping on a new server by taking a fresh full and log backup, and see if you get the issue there as well? I would also suggest you raise a case with Microsoft and let them tell you the root cause of this problem.
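    One quick way to test the corruption theory (a sketch; the path is taken from the error above) is to verify the log backup file directly on the secondary:
    RESTORE VERIFYONLY
    FROM DISK = N'\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';
    RESTORE HEADERONLY
    FROM DISK = N'\\10.227.32.27\LsSecondery\TPSSYS\TPSSYS_20140910003724.trn';
    If VERIFYONLY fails on the secondary but the same file verifies fine on the primary, the file is being corrupted in transit.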
    Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it

  • Standby Redo Log Files and Directory Structure in Standby Site

    Hi Gurus,
    I just want to confirm: I know that if the directory structure is different I need to set these 2 parameters in the pfile.
    on primary site:
    DB_CONVERT_DATAFILE='standby','primary'
    LOG_CONVERT_DATAFILE='standby','primary'
    On secondary Site:
    DB_CONVERT_DATAFILE='primary','standby'
    LOG_CONVERT_DATAFILE='primary','standby'
    But I want to confirm whether I need to give the complete path of the directory in both of the above parameters,
    like:
    DB_CONVERT_DATAFILE='/u01/oracle/app/oracle/oradata/standby','/u01/oracle/app/oracle/oradata/primary'
    LOG_CONVERT_DATAFILE='/u01/oracle/app/oracle/oradata/standby','/u01/oracle/app/oracle/oradata/primary'
    Second confusion:
    If the standby redo log files created on the primary are transferred to the standby under the directory structure mentioned above, am I right that restoring the backup of the primary DB along with the standby controlfile will not impact the standby redo logs placed in that location?
    Thanks in advance for your help

    Hello,
    Regarding your 1st question, you need to provide the complete path and not just the directory name.
    On the standby:
    db_file_name_convert='<Full path of the datafiles on primary server>','<full path of the datafiles to be stored on the standby server>';
    log_file_name_convert='<Full path of the redo logfiles on primary server>','<full path of the redo logfiles on the standby server>';
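    For example, with the paths from the question (a sketch; note the trailing slashes, since these parameters do a simple prefix substitution on file names):
    db_file_name_convert='/u01/oracle/app/oracle/oradata/primary/','/u01/oracle/app/oracle/oradata/standby/'
    log_file_name_convert='/u01/oracle/app/oracle/oradata/primary/','/u01/oracle/app/oracle/oradata/standby/'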
    >Second confusion: if the standby redo log files created on the primary are transferred to the standby under the directory structure mentioned above, am I right that restoring the backup of the primary DB along with the standby controlfile will not impact the standby redo logs placed in that location?
    How are you creating the standby database? Using RMAN duplicate or through the restore/recovery options?
    You can create the standby redo logs later.
    Regards,
    Shivananda
