Delete expired (Data Guard)

Hi,
I have a question regarding the primary db and the Data Guard db. I had to reduce the number of files retained due to space limitations on the primary. When I click the Delete Expired button and the job runs, are the files that are deleted on the primary also deleted on the Data Guard db? If not, will this cause a problem when a failover or switchover is needed?
Thanks in advance for your help.
al

Good Morning Kam Singh
I just have one additional question that occurred to me last night at home. ADDM reported an issue with hard parsing and advised increasing my SGA by 2.5 GB. Someone on the forum here was kind enough to point me to the view V$SGAINFO to find out how much free memory was available in my system. Well, when I interrogated this view it came up with a big fat zero for availability. Now, from what I understood from my reading, the 10g SGA is dynamic: the memory components are adjusted to need based on information taken from the AWR. On the page where ADDM listed a button to implement the change (the increase), I hesitated until I could find out how much free memory was available. Right now I am glad that I did. My question is: if I had clicked on that implementation button, would the OEM tool have taken memory not being used from some other component and allocated it to this spot, or would it have returned an error of some kind?
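(For reference, the V$SGAINFO check mentioned above is a single query against that view's standard NAME/BYTES columns, e.g.:
SQL> SELECT name, ROUND(bytes/1024/1024) AS mb FROM v$sgainfo WHERE name = 'Free SGA Memory Available';
A zero here essentially means every granule of the SGA is already allocated to some component.)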
You may have more experience with this and may have come across this situation before. Do you recall what you did?
Does anyone else recall what they did in this situation?
Please advise.
Regards to all,
al

Similar Messages

  • HT1657 How can I delete a rented movie from my iPad before its expiration date?

    How can I delete a watched rented movie from my iPad before its expiration date?

    Hello, K M Cartwright. 
    Thank you for visiting Apple Support Communities. 
    You definitely can delete the movie prior to the expiration date.  However, the movie will automatically disappear from your iTunes library 24 (or 48) hours after you've begun viewing it.  Here are the steps on how to delete iTunes media. 
    How to delete content you've downloaded from the iTunes Store, App Store, iBooks Store, or Mac App Store
    http://support.apple.com/kb/HT5772
    Cheers,
    Jason H. 

  • Data guard and RMAN deletion policy.

    Dear experts,
    I have a Data Guard setup with a physical standby. I have scheduled archivelog backups to tape from both the primary and standby servers. I need archivelogs to be deleted only after they have been shipped to the standby and backed up once to the tape device, so I set the following RMAN CONFIGURE command.
    I am using separate RMAN catalogs for primary and secondary.
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE' SHIPPED TO ALL STANDBY;
    Then I ran a test:
    1. Stop standby
    2. Generate archivelogs in Primary (so to fill 100% the FRA )
    3. Backup archivelogs
    4. Start standby
    The logs didn't get shipped to the standby because Oracle deleted them to free up space in the FRA. So the second part of my deletion policy doesn't seem to work. My goal is that when the FRA is full the primary hangs instead of deleting archivelogs that have not yet been shipped.
    Please share with me your experience.
    Thanks in advance.

    Hello;
    Test on 11.2.0.3
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE' SHIPPED TO ALL STANDBY;
    new RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE' SHIPPED TO ALL STANDBY;
    new RMAN configuration parameters are successfully stored
    starting full resync of recovery catalog
    full resync complete
    RMAN-08591: WARNING: invalid archived log deletion policy
    But this is OK
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    old RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE' SHIPPED TO ALL STANDBY;
    new RMAN configuration parameters:
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
    new RMAN configuration parameters are successfully stored
    starting full resync of recovery catalog
    full resync complete
    These should work:
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO ALL STANDBY;
    RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
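    The currently configured policy can be confirmed with the standard SHOW command:
    RMAN> SHOW ARCHIVELOG DELETION POLICY;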
    Best Regards
    mseberg

  • RMAN, Data Guard and Archive log deletion

    Our DG environment is running Oracle 11g R2
    we have a 3 node DG environment with
    A being the Primary
    B and C being Active Data Guard Standbys
    Backups are taken off of B and go directly to tape.
    Standby Redo Logs and Fast Recovery Area are being used
    Taking recommendation from "Using Recovery Manager with Oracle Data Guard in Oracle Database 10g"
    RMAN Setting on Primary ("A")
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
    RMAN Setting on Standby ("B") where Backup is done
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE
    RMAN Setting on other Standby ("C")
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
    How can we know what archive logs are eligible to be deleted from "A" and "C" ?
    When does the delete take place?
    How can we tell when the archive logs are being deleted from "A" and "C" ?

    Dear user10260925,
    The documentation that you have read is reliable but insufficient.
    Oracle can manage the archivelog directory and knows which logs are eligible for deletion. What you have posted has been taken from the online documentation; it is supported and can be used when Oracle itself knows about and manages the archivelogs. That is exactly what the flash recovery area provides, so please read up on the FRA right away.
    In practice, people in the industry use scripts to achieve archivelog deletion on the standby system.
    Here is a useful example for you:
    # Remove old archivelogs (cron entry: runs on the hour and at half past)
    00,30 * * * * /home/oracle/scripts/delete_applied_redo_logs_OPTSTBY.sh
    vals3:/home/oracle#cat /home/oracle/scripts/delete_applied_redo_logs_OPTSTBY.sh
    # Point the environment at the standby instance
    export ORACLE_SID=optstby
    export ORACLE_HOME=/oracle/product/10.2.0/db_1
    cd /db/optima/archive/OPTPROD/archivelog
    # Spool 'rm -f' commands for all applied logs (SQL script shown below)
    /oracle/product/10.2.0/db_1/bin/sqlplus "/ as sysdba" @delete_applied_redo_logs.sql
    # Keep only the lines naming archivelog files, then execute and clean up
    grep arc delete_applied_redo_logs.lst > delete_applied_redo_logs_1.sh
    chmod 755 delete_applied_redo_logs_1.sh
    sh delete_applied_redo_logs_1.sh
    rm delete_applied_redo_logs_1.sh
    rm delete_applied_redo_logs.lst
    vals3:/home/oracle#cd /db/optima/archive/OPTPROD/archivelog
    vals3:/db/optima/archive/OPTPROD/archivelog#cat delete_applied_redo_logs.sql
    set echo off
    set heading off
    spool /db/optima/archive/OPTPROD/archivelog/delete_applied_redo_logs.lst
    select 'rm -f ' || name from v$archived_log where applied = 'YES';
    spool off
    exit
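    (One caveat worth noting: removing files at the OS level like this leaves stale records behind in the controlfile/catalog, so a periodic RMAN crosscheck is the usual companion to such a script:
    RMAN> CROSSCHECK ARCHIVELOG ALL;
    RMAN> DELETE EXPIRED ARCHIVELOG ALL;
    )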
    Hope that helps.
    Ogan

  • Clarification on Data Guard (Physical Standby db)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using standard DBCA; hence the Oracle service (oradgp) and password file are also created, along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the harddisk are all available and archived redo are created (e:\archlogs)
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on machine 2 (the standby db), I did an RMAN backup as:-
    RMAN> run
    {allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;
    }
    5. I then copied over the standby~.bak files created from machine1 to machine2 to the same directory (M:\DBBackup) since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener also, and set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the SID one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    11. Used RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
    13. Also added a tempfile: though the temp tablespace was created as part of the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    Redo apply seems to have started: I've checked the alert log and noticed that the sequence#s were all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So I copied over the REDO logs from the primary machine and placed them in the same directory structure on the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log switch on primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
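    (For reference, the usual no-gap check is run on the standby against V$ARCHIVE_GAP; no rows returned means no gap:
    SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
    )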
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    18. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage: I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database, where I could see the whole process happening on its own: generation of a new logfile, it being transported to the standby, and then applied on the standby.
    Could this be due to inactivity on the primary database, as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath
    Edited by: Bharath3 on Jan 22, 2010 2:13 AM

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
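    As a sketch of adding these, assuming an spfile is in use; DB_UNIQUE_NAME is static (SCOPE=SPFILE plus a restart) while LOG_ARCHIVE_CONFIG is dynamic. On the primary:
    SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='oradgp' SCOPE=SPFILE;
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(oradgp,oradgs)';
    and the same on the standby with DB_UNIQUE_NAME='oradgs'.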
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see that at the start of the MRP because it tries to open them, and if it gets the error it will create them itself based on their file definition in the controlfile, combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL columns as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same command on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has to have been applied.
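    (As a quick sketch of that check on the standby:
    SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';
    Every sequence at or below the returned value has been applied.)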
    You said: After reading several documents I am confused at this stage because I have read that you can setup standby databases using 'standby' logs but is there another method without using standby logs?
    Yes. If you do not have standby redo log files on the standby then we write directly to an archive log, which means potential large data loss at failover and no real time apply. That was the old 9i method for ARCH. Don't do that. Always have standby redo logs (SRLs).
    You said: Q5. The log switch isn't happening automatically on the primary database where I could see the whole process happening on it own, such as generation of a new logfile, that being transported to the standby and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you say ALTER SYSTEM SWITCH LOGFILE (or use the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
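    (ARCHIVE_LAG_TARGET takes seconds and is dynamic; for example, to force a log switch at least every 30 minutes:
    SQL> ALTER SYSTEM SET ARCHIVE_LAG_TARGET=1800;
    )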
    You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.

  • Setting up the standby side after a crash (Data Guard)

    Hi,
    I hope this is the right area to publish my problem...
    I can't find something like code tags for the system output, so I'm sorry for the bad formatting.
    I have a problem. I use Oracle 11.2.0.3.0 with a Data Guard environment. My primary database crashed and I activated the standby to be the new primary.
    After the old primary was repaired I wanted to define it as the new standby. This didn't work because we have disabled flashback logging.
    We created the new standby:
    rman target sys/password@prim auxiliary sys/password@stby
    duplicate target database for standby from active database;
    After this we do this on the new standby:
    alter database recover managed standby database disconnect from session;
    It looks like there is now a working physical standby.
    Now I look at the Data Guard broker on the primary:
    DGMGRL> show database verbose stby;
    Database - stby
      Role:            PHYSICAL STANDBY
      Intended State:  APPLY-ON
      Transport Lag:   (unknown)
      Apply Lag:       (unknown)
      Real Time Query: OFF
      Instance(s):
        dbuc4
      Properties:
        DGConnectIdentifier             = 'stby'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'optional'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'AUTO'
        ArchiveLagTarget                = '900'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = ''
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName                         = 'dbuc4'
        StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=stby)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=stby_DGMGRL.blubb.de)(INSTANCE_NAME=stby)(SERVER=DEDICATED)))'
        StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
        AlternateLocation               = ''
        LogArchiveTrace                 = '0'
        LogArchiveFormat                = '%t_%s_%r.arc'
        TopWaitEvents                   = '(monitor)'
    Database Status:
    ORA-16795: the standby database needs to be re-created
    DGMGRL> show database verbose prim;
    Database - prim
      Role:            PRIMARY
      Intended State:  TRANSPORT-ON
      Instance(s):
        dbuc4
        dbuc4stby
      Properties:
        DGConnectIdentifier             = 'prim'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'OPTIONAL'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'AUTO'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = ''
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName(*)
        StaticConnectIdentifier(*)
        StandbyArchiveLocation(*)
        AlternateLocation(*)
        LogArchiveTrace(*)
        LogArchiveFormat(*)
        TopWaitEvents(*)
        (*) - Please check specific instance for the property value
    Database Status:
    SUCCESS
    DGMGRL> show database verbose dbuc4stby;
    Database - dbuc4stby
      Role:            PRIMARY
      Intended State:  TRANSPORT-ON
      Instance(s):
        dbuc4
        dbuc4stby
      Properties:
        DGConnectIdentifier             = 'dbuc4stby'
        ObserverConnectIdentifier       = ''
        LogXptMode                      = 'ASYNC'
        DelayMins                       = '0'
        Binding                         = 'OPTIONAL'
        MaxFailure                      = '0'
        MaxConnections                  = '1'
        ReopenSecs                      = '300'
        NetTimeout                      = '30'
        RedoCompression                 = 'DISABLE'
        LogShipping                     = 'ON'
        PreferredApplyInstance          = ''
        ApplyInstanceTimeout            = '0'
        ApplyParallel                   = 'AUTO'
        StandbyFileManagement           = 'AUTO'
        ArchiveLagTarget                = '0'
        LogArchiveMaxProcesses          = '4'
        LogArchiveMinSucceedDest        = '1'
        DbFileNameConvert               = ''
        LogFileNameConvert              = ''
        FastStartFailoverTarget         = ''
        InconsistentProperties          = '(monitor)'
        InconsistentLogXptProps         = '(monitor)'
        SendQEntries                    = '(monitor)'
        LogXptStatus                    = '(monitor)'
        RecvQEntries                    = '(monitor)'
        SidName(*)
        StaticConnectIdentifier(*)
        StandbyArchiveLocation(*)
        AlternateLocation(*)
        LogArchiveTrace(*)
        LogArchiveFormat(*)
        TopWaitEvents(*)
        (*) - Please check specific instance for the property value
    Database Status:
    SUCCESS
    DGMGRL> show configuration
    Configuration - dg
      Protection Mode: MaxPerformance
      Databases:
        prim - Primary database
        stby     - Physical standby database (disabled)
          ORA-16795: the standby database needs to be re-created
    Fast-Start Failover: DISABLED
    Configuration Status:
    SUCCESS
    On the stby side it looks like:
    DGMGRL> show configuration
    ORA-16795: Die Standby-Datenbank muss neu erstellt werden (The standby database needs to be recreated)
    Do I have to create a new Data Guard configuration?
    I don't know how to get this to work.
    Thx fuechsin

    Hi,
    first of all: big thanks for your answer!
    I think this was a bad idea, too. But there was not enough space, so we decided to turn it off without thinking of the consequences. :/
    When the new hardware arrives I will enable flashback and never turn it off.
    The dataguard-log on the stby says:
    01/23/2014 11:56:13
    >> Starting Data Guard Broker bootstrap <<
    Broker Configuration File Locations:
          dg_broker_config_file1 = "/Daten/stby/stby_dgbroker1.dat"
          dg_broker_config_file2 = "/Daten/stby/stby_dgbroker2.dat"
    01/23/2014 11:56:17
    Database needs to be reinstated or re-created, Data Guard broker ready
    I want to try to delete the configuration on the prim and stby sides and reconfigure it. But I don't know if there are side effects on the working prim side - it is a production system.
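    (A minimal broker sketch for that, using the names above and assuming the standby has already been rebuilt; REMOVE and ADD only touch the broker metadata, not the databases themselves:
    DGMGRL> REMOVE DATABASE stby;
    DGMGRL> ADD DATABASE stby AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
    DGMGRL> ENABLE DATABASE stby;
    )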
    Best regards,
    fuechsin

  • Stock transfers are not showing the correct expiry date for shelf life subcontracting

    Hi Gurus,
    We have shelf life active for PPDS. We added 5 characteristics to have min and max shelf life in APO so that the data will be considered during pegging.
    Shelf life works perfectly fine for the production plant, where min shelf life is considered based on the manufacture date.
    LOBM_APO_SL_MIN
    LOBM_APO_SL_MAX
    LOBM_APO_SL_UTC
    LOBM_VERAB
    LOBM_HSDAT
    The question is about the subcontracting process: the stock or inventory date is coming back back-dated, so when we run the planning run the stock is getting expired.
    Can anyone help in getting the correct stock expiry date instead of something like 1982----?
    Thanks in advance
    Thanks & Regards,
    Rajesh Kumar

    When you plan subcontracting in SNP following is generated.
    Object                      Source                    Destination
    Sub Con PR (Header)         Subcontractor             Plant
    PR/STR (Child)              Plant                     Subcontractor
    PR (Child)                  Child Mat Vendor          Plant (or direct to Subcontractor)
    Usually the first and third items will get converted to purchase orders and will vanish as and when goods receipts are posted against them.
    The second order may not get converted in SAP. This will remain in the system. But in the next planning run, these will be deleted if the quantities have already been fulfilled.
    Regards
    Nitin Thatte

  • Application custom expiration date

    I have an iOS project (deployed as Ad Hoc) with these requirements/restrictions:
    1. "Custom expiration date": the app shouldn't launch after a period of time following installation.
    The requirement "custom expiration date" works like the usual provisioning profile, but instead of a ***specific expiry date***, it needs to be 3 months or 90 days after ***app installation (or the app's first launch)***.
    2. It has to be an ***"offline app"*** (no internet).
    Other than the above, the app needs some user-abuse prevention, such as:
    1. If the user changes the device time to extend the expiration date, the app can detect it, display an error message, then exit.
    2. Let's say I detect the app's first launch by checking the existence of a secret .plist:
    how do I prevent the user from jailbreaking the device and deleting this secret .plist?
    Given the scenario that the user did delete this secret .plist, the next time the app launches it will consider it a "first launch".
    3. My baseline for prevention is that the user doesn't know how to decompile/recompile or do memory hacks.
    The only "agreement" with the client is that, once I release the TestFlight link to them, they will install on each device only once and will not try to download the .ipa itself for later installation.
    What is the most elegant solution for doing this?

    I am not submitting to the App Store; it's a project with Ad Hoc deployment.

  • Treasury Memo Records - Expiration Date FF7B and FF63

    Hi,
    When I create a memo record in transaction FF63, or directly in transaction FF7B, the expiration date that I enter has no effect in the cash management and forecast report. That is, even after the expiration date, when I analyze my forecast the memo records do not disappear. The value remains in the report, which means that we must archive or delete the memo record manually.
    How can I make the memo record disappear automatically on the expiration date?
    Can you help, please?
    Best regards,
    Lénia

    Chaikaru,
    I understand the setting in the archival type where you can say how long it can stay in the system. But in our situation we cannot wait 6 months to archive the memo records.
    We get current-day bank statements multiple times during the day. We get them at 8.00 AM, 10.00 AM, 12.00 noon and 1.00 PM. These are uploaded into SAP and memo records are created. These memo records are displayed in FF7A to view the cash position.
    The cash manager comes into the office at 8.00 AM and runs FF7A. Based on this report, they take their investment decisions and enter into transactions with counterparties.
    The next statement comes at 10.00 AM. We first archive all the memo records created at 8.00 AM using FF6A, upload the new statement received at 10.00 AM using FF_5, and create memo records using FPS3. Then when we run FF7A we get the latest cash position report. Based on this report, the cash manager makes adjustments to the investment decisions he made at 8.00 AM. This process goes on until 1.00 PM. If we don't archive every 2 hours, there will be double counting in the FF7A report.
    So even though the archival type says you can archive after 6 months or 12 months, we cannot keep the records for more than 2 hours in the system; we archive them every two hours, as each new statement comes in.
    In this situation, how can you get a past-dated cash position report? Can we retrieve the memo records from the archive and run the report for a specific date in the past?
    Thanks
    Kalyan

  • Data Guard Scenario

    Hi,
    We have a Data Guard configuration with 1 primary and 2 standby databases. Two of the databases are local, meaning physically located next to each other, and one is across the WAN. The 2 local databases are candidates for fast-start failover and we also have an observer on a separate box. For testing purposes, when we pull the network cables from the 2 local databases to make the third one primary, the whole Data Guard broker hangs and doesn't allow us to run any command using DGMGRL to manually fail over to the third database running across the WAN. So we have to disable the DG broker process on the third database manually and convert it to primary without the broker. Then when we put the network cables back for those 2 local servers, the observer thinks that one of them is primary and tries to start that database as primary, and also tries to convert the third database to a standby by flashing it back to the previous SCN, and it never succeeds. In order to make everything work, we have to manually convert the roles of all the databases, get them in sync, and delete the broker configuration and re-add it. It's too messy to deal with; I am wondering if there is any clean way to fail back to the local databases.
    Database Version: 10.2.0.4
    Platform: Linux
    Thanks
    Daljit Singh

    Thanks for the reply!
    Yes, all 3 databases are being observed by the observer, which is running on a 4th box, but the third database is not a candidate for fast-start failover. When we pull the cable from the 2 servers, the observer box is still able to ping the third box. Both of the standbys are in sync with the primary; there is no problem converting the third database to primary manually by shutting down the DG broker and doing it using ALTER DATABASE commands. But the problem is that when we do this the broker configuration doesn't get updated, and when we try to put the other 2 databases back online, the broker thinks that the local database is primary, and then all the problems start. Before all this testing, everything was working, the configuration was perfect, and both the standbys were catching up.
    Mainly I want to find out if there is any way to fail over to the third database using the broker when we pull the network cables from the local stores. Right now, as I mentioned, it hangs when we pull the cables from the local servers and it doesn't allow us to run any command from DGMGRL; all commands fail with a timeout error.
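    (For reference, the broker command for a manual failover is simply:
    DGMGRL> FAILOVER TO <standby-name>;
    with <standby-name> being the third, WAN database here. The catch is that it requires DGMGRL to still be able to reach the configuration, which is exactly what hangs in this scenario.)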
    Thanks
    Daljit Singh
    Edited by: Daljit on Sep 30, 2009 11:17 AM

  • Data Guard is not working

    I have configured Oracle Data Guard on an 11gR2 RAC database.
    On the primary database I get an "ORA-16198: Timeout incurred on internal channel during remote archival" error.
    In the physical standby database log file the following is shown consistently:
    "Possible network disconnect with primary database"
    and
    "Deleted Oracle managed file +RECOVERY/bddipdrs/archivelog/2010_09_02/thread_2_seq_90.289.728688311"
    The primary database log file consistently shows:
    "Reclaiming FAL entry from dead process"
    "ARCk: Standby redo logfile selected for thread 1 sequence 120 for destination LOG_ARCHIVE_DEST_2"
    But no redo apply is happening on the standby database.
    Any feedback is really welcome.
    Thank you very much

    After setting log_archive_dest_2 on the primary I see a different error:
    SQL> SELECT DEST_ID "ID",STATUS "DB_status",DESTINATION "Archive_dest",ERROR "Error" FROM V$ARCHIVE_DEST WHERE DEST_ID =2;
    ID  DB_status  Archive_dest   Error
    2   ERROR      bddipdrs       ORA-12160: TNS:internal error: Bad error number
    1 row selected.
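    (With a TNS-level error like ORA-12160 on the remote destination, a quick first sanity check from the primary host is name resolution and connectivity for the alias, e.g.:
    $ tnsping bddipdrs
    )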
    From trace file,
    [oracle@DC-DB-01 ~]$ tail -n 200 /u01/app/oracle/diag/rdbms/bddipdc/bddipdc1/alert/log.xml
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.412+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt OS err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.445+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.445+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    Fatal NI connect error 12160, connecting to:
    (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=DRS-DB-01-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=bddipdc.world)(CID=(PROGRAM=oracle)(HOST=DC-DB-01)(USER=oracle))))
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.445+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    VERSION INFORMATION:
    TNS for Linux: Version 11.2.0.1.0 - Production
    TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.1.0 - Production
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.445+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> Time: 03-SEP-2010 11:17:45
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> Tracing not turned on.
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> Tns error struct:
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> ns main err code: 1
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>TNS-00001: INTCTL: error while getting command line from the terminal
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> ns secondary err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt main err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt secondary err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.446+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt OS err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.479+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.479+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    Fatal NI connect error 12160, connecting to:
    (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=DRS-DB-01-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=bddipdc.world)(CID=(PROGRAM=oracle)(HOST=DC-DB-01)(USER=oracle))))
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.479+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    VERSION INFORMATION:
    TNS for Linux: Version 11.2.0.1.0 - Production
    TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.1.0 - Production
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.479+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> Time: 03-SEP-2010 11:17:45
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> Tracing not turned on.
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> Tns error struct:
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> ns main err code: 1
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt>TNS-00001: INTCTL: error while getting command line from the terminal
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> ns secondary err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt main err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt secondary err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    type='UNKNOWN' level='16' host_id='DC-DB-01'
    host_addr='192.168.100.101'>
    <txt> nt OS err code: 0
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='DC-DB-01' host_addr='192.168.100.101' module=''
    pid='24310'>
    <txt>Error 12160 received logging on to the standby
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='DC-DB-01' host_addr='192.168.100.101' module=''
    pid='24310'>
    <txt>Errors in file /u01/app/oracle/diag/rdbms/bddipdc/bddipdc1/trace/bddipdc1_arcq_24310.trc:
    ORA-12160: TNS:internal error: Bad error number
    </txt>
    </msg>
    <msg time='2010-09-03T11:17:45.480+06:00' org_id='oracle' comp_id='rdbms'
    client_id='' type='UNKNOWN' level='16'
    host_id='DC-DB-01' host_addr='192.168.100.101' module=''
    pid='24310'>
    <txt>PING[ARCq]: Heartbeat failed to connect to standby &apos;bddipdrs&apos;. Error is 12160.
    </txt>
    </msg>
    [oracle@DC-DB-01 ~]$
    [oracle@DC-DB-01 ~]$
    [oracle@DC-DB-01 ~]$
    [oracle@DC-DB-01 ~]$ less /u01/app/oracle/diag/rdbms/bddipdc/bddipdc1/trace/bddipdc1_arcq_24310.trc
    *** 2010-09-03 11:03:44.868 869 krsu.c
    Error 12160 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'bddipdrs'
    Error 12160 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'bddipdrs'
    *** 2010-09-03 11:03:44.868 4132 krsh.c
    PING[ARCq]: Heartbeat failed to connect to standby 'bddipdrs'. Error is 12160.
    *** 2010-09-03 11:03:44.868 2747 krsi.c
    krsi_dst_fail: dest:2 err:12160 force:0 blast:1
    Redo shipping client performing standby login
    *** 2010-09-03 11:12:45.149
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12160 and errmsg is 'ORA-12160: TNS:internal error: Bad error number
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12160 and errmsg is 'ORA-12160: TNS:internal error: Bad error number
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12160 and errmsg is 'ORA-12160: TNS:internal error: Bad error number
    *** 2010-09-03 11:12:45.213 4132 krsh.c
    Error 12160 received logging on to the standby
    *** 2010-09-03 11:12:45.213 869 krsu.c
    Error 12160 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'bddipdrs'
    Error 12160 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'bddipdrs'
    ORA-12160: TNS:internal error: Bad error number
    *** 2010-09-03 11:12:45.214 4132 krsh.c
    PING[ARCq]: Heartbeat failed to connect to standby 'bddipdrs'. Error is 12160.
    *** 2010-09-03 11:12:45.214 2747 krsi.c
    krsi_dst_fail: dest:2 err:12160 force:0 blast:1
    *** 2010-09-03 11:17:45.377
    Redo shipping client performing standby login
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12160 and errmsg is 'ORA-12160: TNS:internal error: Bad error number
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12160 and errmsg is 'ORA-12160: TNS:internal error: Bad error number
    OCIServerAttach failed -1
    .. Detailed OCI error val is 12160 and errmsg is 'ORA-12160: TNS:internal error: Bad error number
    *** 2010-09-03 11:17:45.480 4132 krsh.c
    Error 12160 received logging on to the standby
    *** 2010-09-03 11:17:45.480 869 krsu.c
    Error 12160 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'bddipdrs'
    Error 12160 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'bddipdrs'
    ORA-12160: TNS:internal error: Bad error number
    *** 2010-09-03 11:17:45.481 4132 krsh.c
    PING[ARCq]: Heartbeat failed to connect to standby 'bddipdrs'. Error is 12160.
    *** 2010-09-03 11:17:45.481 2747 krsi.c
    krsi_dst_fail: dest:2 err:12160 force:0 blast:1
    It looked very interesting!

  • Data Guard and restart-database

    Hi experts,
    I did the following activities:
    + first defined a standalone Restart database dg1 on the system dwh, a Linux RH 5.4 / Oracle 11.2.0.3 environment with Clusterware.
    + then created a standby database dg2 on the system stb, a Linux RH 5.4 / Oracle 11.2.0.3 environment with Clusterware.
    + and made a switchover from dg1 to dg2. Now dg2 is primary and dg1 is standby.
    My problems:
    1/ dg1 does not restart automatically now after the system reboots.
    2/ dg2 is not automatically restarted, because there is no resource defined for it, locally or in the cluster.
    I tried, unsuccessfully, to make them Restart-managed databases:
    a/ defining dg2 as a Restart DB on stb:
    [oracle@stb ~]$ srvctl add instance -d dg2 -i dg -n stb
    PRCD-1120 : The resource for database dg2 could not be found.
    PRCR-1001 : Resource ora.dg2.db does not exist
    b/ adding ora.dg2.db as a cluster resource on dwh:
    [grid@dwh ~]$ crsctl add resource ora.dg2.db -type ora.database.type -file /home/grid/dg2
    CRS-0245:  User doesn't have enough privilege to perform the operation
    CRS-4000: Command Add failed, or completed with errors.
    Where the file /home/grid/dg2 contains:
    TYPE=ora.database.type
    STATE=ONLINE
    TARGET=ONLINE
    ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
    ACTION_FAILURE_TEMPLATE=
    ACTION_SCRIPT=
    ACTIVE_PLACEMENT=1
    AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
    AUTO_START=restore
    CARDINALITY=1
    CARDINALITY_ID=0
    CHECK_INTERVAL=1
    CHECK_TIMEOUT=30
    CLUSTER_DATABASE=false
    CREATION_SEED=222
    DATABASE_TYPE=SINGLE
    DB_UNIQUE_NAME=dg2
    DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)
    DEGREE=1
    DESCRIPTION=Oracle Database resource
    ENABLED=1
    FAILOVER_DELAY=0
    FAILURE_INTERVAL=60
    FAILURE_THRESHOLD=1
    GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/dg2/adump
    GEN_START_OPTIONS=
    GEN_START_OPTIONS@SERVERNAME(stb)=open
    GEN_USR_ORA_INST_NAME=
    GEN_USR_ORA_INST_NAME@SERVERNAME(stb)=dg
    HOSTING_MEMBERS=stb
    ID=ora.dg2.db
    INSTANCE_FAILOVER=1
    LOAD=1
    LOGGING_LEVEL=1
    MANAGEMENT_POLICY=AUTOMATIC
    NLS_LANG=
    NOT_RESTARTING_TEMPLATE=
    OFFLINE_CHECK_INTERVAL=0
    ONLINE_RELOCATION_TIMEOUT=0
    ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
    ORACLE_HOME_OLD=
    PLACEMENT=restricted
    PROFILE_CHANGE_TEMPLATE=
    RESTART_ATTEMPTS=2
    ROLE=PRIMARY
    SCRIPT_TIMEOUT=60
    SERVER_POOLS=
    SPFILE=
    START_DEPENDENCIES=weak(type:ora.listener.type,uniform:ora.ons)
    START_TIMEOUT=600
    STATE_CHANGE_TEMPLATE=
    STOP_DEPENDENCIES=
    STOP_TIMEOUT=600
    TYPE_VERSION=3.2
    UPTIME_THRESHOLD=1h
    USR_ORA_DB_NAME=dg
    USR_ORA_DOMAIN=
    USR_ORA_ENV=
    USR_ORA_FLAGS=
    USR_ORA_INST_NAME=dg
    USR_ORA_OPEN_MODE=open
    USR_ORA_OPI=false
    USR_ORA_STOP_MODE=immediate
    VERSION=11.2.0.3.0
    My questions:
    1/ What have I done wrong?
    2/ Are there simple solutions?
    Thanks and regards
    hqt200475

    Hello;
    PRCD-1120 and PRCR-1001: the reason for these is that raccheck is not able to list this database.
    Double-check your name:
    Select name from v$database;
    Use crs_stat to see if your name is correct and whether the database is offline.
    I'm thinking it is registered with the cluster but in an OFFLINE state.
    crsctl setperm resource <resource_name> -o oracle
    crsctl setperm resource <resource_name> -g oinstall
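    (If, on the other hand, the resource really was never created, a minimal sketch of registering the database with Oracle Restart first, using the names from the post and assuming 11.2 srvctl syntax:
    srvctl add database -d dg2 -n dg -o /u01/app/oracle/product/11.2.0.3/dbhome_1 -r PRIMARY
    srvctl start database -d dg2
    For a single instance under Oracle Restart the separate "srvctl add instance" step should then no longer be necessary.)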
    Not really a Data Guard issue, more of a Clusterware issue. ( I say this because Clusterware is getting out of my area )
    I think it is a bug, but I cannot find the bug number.
    May help :
    http://oracleexamples.wordpress.com/tag/srvctl-add-service/
    Reconfiguring & Recreating The 11gR2 Restart/OHAS/SIHA Stack Configuration (Standalone). [ID 1422517.1]
    Resources in OCR Are not Cleaned up Completely After Database ORACLE_HOME De-Install [ID 1108023.1]
    How to Delete From or Add Resource to OCR in Oracle Clusterware [ID 1069369.1]
    Best Regards
    mseberg
    Edited by: mseberg on Apr 23, 2012 7:43 AM

  • Data guard performance problem (rac to single instance)

    I have a table with GPS data, and the GPS table has a lot of data: 5 million rows.
    I am using RAC (2 nodes, 11gR2). The standby database is a single-instance Data Guard database.
    The single-instance (standby) database's hardware CPU is lower than the RAC machines'. The RAC nodes have 15k disks; the standby has 7200 rpm.
    So I don't want to use the GPS tables on the Data Guard system; I don't want to run the GPS table's DML commands (delete, insert) there. I think it may increase performance.
    Is it possible? What is your advice?
    Any feedback makes me happy,
    best regards

    It's not possible with Data Guard, but you can use Streams or GoldenGate for this purpose. Have a look at the Data Guard performance tuning guides; maybe there is something you can fix in the configuration to be faster.
    [Data Guard Redo Apply and Media Recovery Best Practices|http://www.oracle.com/technetwork/database/features/availability/maa-wp-10grecoverybestpractices-129577.pdf]
    [Redo Transport and Network Best Practices|http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr1-activedataguard-1-128199.pdf]
    I don't know of an 11g version of these docs, but they should still help.

  • Oracle9i Data Guard - Filtering for a Logical Standby DB?

    Hello All
    1) When using Oracle9i Data Guard with a logical standby database, is it possible to "screen" the SQL statements that are executed? For example, if I don't want any "delete" commands to be replicated on the standby box, can I filter them out?
    2) Are there any unlogged transactions that don't get "replicated"? Can I get a list of these commands/transactions?
    Any insight would be greatly appreciated
    Thanks in advance
    ...anik

    You said: I want to make db_name=boston, not keep it as the same as primary=chicago. Is this valid configuration?
    NO. DB_NAME must be the same ("chicago") at both sites. The Standby will be using a different DB_UNIQUE_NAME (e.g. "boston") and can be using a different instance name / SID (e.g. "boston").
    You said: can I rename datafiles?
    Yes. The database file names can be changed.
    You said: If I don't use primary DB backup. Instead, I copy all datafiles, redo_log files (no control files) to standby.
    What is the difference between the first sentence (a backup of the primary) and the second sentence (a copy of the primary)? A copy is a backup.
    Are you intending to differentiate between an RMAN backup and a user-managed (aka "scripted") backup?
    Normally, for Data Guard, you can use non-RMAN methods to copy the database, but there's no value added in this.
    You'd still have to set up Data Guard! (And I wonder if you'd have complications setting up Active Data Guard.)
    But remember that you MUST create the Standby controlfile from the Primary and copy it over to your Standby -- particularly as you are planning to use Data Guard. This is not created by 'alter database backup controlfile to trace', but by 'ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'filename''.
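    (For reference, with a hypothetical filename, run on the Primary and then copy the resulting file to the Standby:
    SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';
    )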
    Hemant K Chitale

  • Data Guard -- v$archive_log applied column shows wrong info

    I'm playing with 10g Data Guard. Both the primary and the physical standby are in Maximum Availability mode. When I query the v$archived_log column APPLIED for dest_id=2 (which is the physical standby), for some files it shows the value NO, but the alert logs on both primary and standby show the file-transferred info. Even the physical standby's v$archived_log shows the log applied with value YES. My question is: so why does the primary database's v$archived_log show the value NO?
    I am trying to set up a crontab job so that once I see the value YES in the primary's v$archived_log for dest_id = 2, I can back up the archived log file and delete it from the primary database machine. But my perl script won't work because the primary's v$archived_log shows the value NO in the applied column for dest_id = 2.
    Thanks.

    Hi OrionNet,
    I think I am looking at the wrong column and also at the wrong database for what I need to do. Let me explain what I am trying to achieve.
    I have a shell script to check whether archived logs are shipped from the primary to the standby AND whether the standby successfully applied them or not. My shell script was looking at the primary database using the following query:
    select sequence#, archived, applied
    from v$archived_log
    where dest_id = 2 -- running on Primary BUT looking at standby archived log destination
    order by sequence# ;
    SEQUENCE# ARCHIVED APPLIED
    =====================
    58 YES YES
    59 YES YES
    60 YES NO
    61 YES YES
    After reading [v$archived_log reference entry in manual|http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/dynviews_1016.htm#REFRN30011]
    APPLIED Indicates whether the archivelog has been applied to its corresponding standby database (YES) or not (NO). The value is always NO for local destinations.
    This column is meaningful at the physical standby site for the ARCHIVED_LOG entries with REGISTRAR='RFS' (which means this log is shipped from the primary to the standby database). If REGISTRAR='RFS' and APPLIED is NO, then the log has arrived at the standby but has not yet been applied. If REGISTRAR='RFS' and APPLIED is YES, the log has arrived and been applied at the standby database.
    You can use this field to identify archivelogs that can be backed up and removed from disk.
    I think I should use the following query on the standby database and not on the primary database:
    select sequence#, registrar, applied
    from v$archived_log
    where dest_id = 1 -- query running on standby so dest_id = 1 which is standby archive log destination
    and registrar = 'RFS'
    order by sequence# ;
    SEQUENCE# REGISTRAR APPLIED
    =====================
    58 RFS YES
    59 RFS YES
    60 RFS YES
    61 RFS YES
    So, my shell script should connect from the primary machine to the standby database and evaluate which archive logs can be deleted (after backup) from the primary machine.
    Now I'll generate some gaps on the standby and check the query again to make sure what I understand and expect is correct.
    Hope I am clear now. Thanks for your help. My bad, I didn't read the manual correctly the first time.
