Data Guard performance problem (RAC to single instance)

I have a table holding GPS data, and this table is very large (about 5 million rows).
I am using RAC (2 nodes, 11gR2). The standby database is a single-instance Data Guard standby,
and the standby server's CPU is slower than the RAC machines; the RAC nodes have 15k rpm disks, the standby has 7200 rpm disks.
So I don't want the GPS table on the Data Guard system at all: I don't want its DML (deletes, inserts) to be applied there, because I think skipping it would improve performance.
Is it possible? What is your advice?
Any feedback makes me happy.
Best regards

It's not possible with Data Guard (a physical standby applies all redo from the primary), but you can use Streams or GoldenGate for this purpose. Have a look at the Data Guard performance tuning guides; maybe there is something you can fix in the configuration to make it faster.
[Data Guard Redo Apply and Media Recovery Best Practices|http://www.oracle.com/technetwork/database/features/availability/maa-wp-10grecoverybestpractices-129577.pdf]
[Redo Transport and Network Best Practices|http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr1-activedataguard-1-128199.pdf]
I don't know of 11g versions of these docs, but they would still help.
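Before changing anything, it is worth checking whether redo apply on the standby is actually the bottleneck. A minimal sketch (standard views and commands, run on the standby):
-- How fast is media recovery applying redo?
SELECT item, units, sofar
FROM   v$recovery_progress
WHERE  item IN ('Active Apply Rate', 'Average Apply Rate', 'Apply Time per Log');
-- Restart managed recovery with real-time apply if it is not already in use
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
If the apply rate is far below the primary's redo generation rate, the slow 7200 rpm disks on the standby are a more likely culprit than the GPS table's DML as such.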

Similar Messages

  • Benefit of RAC over single instance

    Dear,
    Our main system will be upgraded from Oracle 10.2.0.4 to 11.2.0.4. Management wants to stay on single instance as it is now, but Oracle RAC is considered a must for critical systems, so I need to present the benefits of RAC over single instance given its high cost. Can you show me the benefits of RAC that would make people choose it instead of single instance?

    Hi,
    The Benefits of Oracle Real Application Clusters.
    High Availability - Oracle Real Application Clusters 11g provides the foundation for data center high availability. It is also
    an integral component of Oracle's Maximum Availability Architecture, which provides best practices to
    achieve the highest availability for your data center. Oracle Real Application Clusters provides the following key
    characteristics essential for highly available data management:
    Reliability – The Oracle Database is known for its reliability. Oracle Real Application Clusters takes this
    a step further by removing the database server as a single point of failure. If an instance fails, the
    remaining instances in the server pool remain open and active. Oracle Clusterware monitors all Oracle
    processes and immediately restarts any failed component.
    Recoverability – The Oracle Database includes many features that make it easy to recover from all
    types of failures. If an instance fails in an Oracle RAC database, it is recognized by another instance in
    the server pool and recovery will start automatically. Fast Application Notification (FAN) and Fast
    Connection Failover (FCF) or Transparent Application Failover (TAF) make it easy for applications to
    mask component failures from the user.
    Error Detection – Oracle Clusterware automatically monitors Oracle RAC databases as well as other
    Oracle processes (ASM, listener, etc) and provides fast detection of problems in the environment. It also
    automatically recovers from failures, often before users notice that a failure has occurred. Fast
    Application Notification (FAN) provides the ability for applications to receive immediate notification of
    cluster component failures in order to re-issue the transaction before the failure surfaces.
    Continuous Operations – Oracle Real Application Clusters provides continuous service for both
    planned and unplanned outages. If a server (or an instance) fails, the database remains open and the
    application is able to access data. Most database maintenance operations can be completed without
    downtime and are transparent to the user. Many other maintenance tasks can be done in a rolling
    fashion so application downtime is minimized or removed. Fast Application Notification and Fast
    Connection Failover assist applications in meeting service levels.
    Scalability - Oracle Real Application Clusters provides a unique technology for scaling applications. Traditionally,
    when database servers ran out of capacity, they were replaced with new and larger servers. As servers
    grow in capacity, they are more expensive. For databases using Oracle RAC, there are alternatives for
    increasing the capacity. Applications that have traditionally run on large SMP servers can be migrated to
    run on pools of small servers. Alternatively, you can maintain the investment in the current hardware and
    add new servers to the pool (or create a server pool) to increase the capacity. Adding servers to a
    server pool with Oracle Clusterware and Oracle RAC does not require an outage and as soon as the new
    instances are started, the application can take advantage of the extra capacity. All servers in the server pool
    must run the same operating system and the same version of Oracle, but they do not have to be of exactly
    the same capacity. Customers today run server pools that fit their needs often using servers of (slightly)
    different characteristics.
    http://www.oracle.com/technetwork/database/clustering/overview/twp-rac11gr2-134105.pdf

  • Moving from Oracle RAC to single instance

    Hi,
    We are running EBS R12 on Windows 2008 R2. It's a 2-node setup, and we want to move it to a single instance.
    I heard that moving from RAC to single instance is not supported by Oracle. Is that true?
    Can someone kindly guide me to documentation for the same?

    user10243788 wrote:
    Hi,
    We are running EBS R12 on Windows 2008 R2. It's a 2-node setup, and we want to move it to a single instance.
    I heard that moving from RAC to single instance is not supported by Oracle. Is that true?
    Can someone kindly guide me to documentation for the same?
    It is totally possible technically to move from RAC to a single node, and it is even documented. I don't know where you heard that it is not supported.
    See the process of removing a node from RAC:
    http://docs.oracle.com/cd/B28359_01/rac.111/b28254/adddelunix.htm
    http://docs.oracle.com/cd/B19306_01/rac.102/b14197/adddelunix.htm
    http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_23.shtml
    For the performance-related discussion, see:
    Moving RAC to single instance
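    For the database itself, the conversion usually boils down to a handful of parameter and thread changes once the extra node is removed. A rough sketch of the typical statements (log group numbers and the undo tablespace name are illustrative; for an EBS R12 database, follow the documentation linked above for the application-tier steps):
    -- Run from the one instance you intend to keep
    ALTER SYSTEM SET cluster_database=FALSE SCOPE=SPFILE SID='*';
    SHUTDOWN IMMEDIATE
    STARTUP
    -- Remove the second thread's redo and undo
    ALTER DATABASE DISABLE THREAD 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;
    ALTER DATABASE DROP LOGFILE GROUP 4;
    DROP TABLESPACE undotbs2 INCLUDING CONTENTS AND DATAFILES;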

  • Db context file creation for rac to single instance cloning

    DOC ID: 559518.1 Section 6: RAC to Single Instance Cloning mentions that the context file creation should be done as in the case of Single Instance cloning
    what would be the command syntax?

    Thanks Hussein. However, section 6 of doc 559518.1 says that step 5.1.3, when cloning from RAC to a single node, should be done as in the case of single-instance cloning.
    the syntax for rac to rac cloning (which is in 5.1.3) is
    perl adclonectx.pl \
    contextfile=[PATH to OLD Source RAC contextfile.xml] \
    template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
    pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
    initialnode
    So what is the syntax for RAC to single instance? I reckon I will still use adclonectx.pl, but what would be the complete syntax for single-instance cloning?
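    A hedged guess, reusing the same adclonectx.pl arguments shown above: for a non-RAC target the usual difference is simply dropping the RAC-specific initialnode argument, but please verify the exact syntax against note 559518.1 / the Rapid Clone documentation:
    # Assumed single-instance variant (verify against MOS note 559518.1)
    perl adclonectx.pl \
    contextfile=[PATH to OLD Source RAC contextfile.xml] \
    template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
    pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt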

  • RMAN backup restore from RAC to single instance ASM

    Hi,
    We are using oracle 11gR2 on AIX 6.1,
    We need to restore an RMAN backup from RAC to a single-instance ASM database.
    I'm new to RAC and ASM. What changes are required,
    and what are the steps involved?
    Thanks

    Hello,
    Refer this MOS doc *HowTo Restore RMAN Disk backups of RAC Database to Single Instance On Another Node [ID 415579.1]*
    On the Single Instance ASM, you need to specify the diskgroup name for the parameters "control_files, db_create_file_dest"
    If you feel that your questions have been answered, then please consider closing the threads by assigning appropriate points. Please keep the forum clean!
    Regards,
    Shivananda
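    For reference, the broad shape of the restore described in that note is roughly as follows; this is only a sketch with illustrative diskgroup and path names, and the MOS note has the authoritative steps:
    -- SQL*Plus on the single-instance host: point the instance at the diskgroup
    ALTER SYSTEM SET control_files='+DATA' SCOPE=SPFILE;       -- '+DATA' is an example diskgroup
    ALTER SYSTEM SET db_create_file_dest='+DATA' SCOPE=SPFILE;
    -- RMAN, connected to the new instance as target
    RMAN> STARTUP NOMOUNT;
    RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_piece';        -- example path to the copied backup piece
    RMAN> ALTER DATABASE MOUNT;
    RMAN> CATALOG START WITH '/backup/';                       -- where the RAC backup pieces were copied
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE;                                    -- archive logs from BOTH RAC threads are needed
    RMAN> ALTER DATABASE OPEN RESETLOGS;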

  • Data Guard configuration for RAC database disappeared from Grid control

    Primary Database Environment - Three node cluster
    RAC Database 10.2.0.1.0
    Linux Red Hat 4.0 2.6.9-22 64bit
    ASM 10.2.0.1.0
    Management Agent 10.2.0.2.0
    Standby Database Environment - one Node database
    Oracle Enterprise Edition 10.2.0.1.0 Single standby
    Linux Red Hat 4.0 2.6.9-22 64bit
    ASM 10.2.0.1.0
    Management Agent 10.2.0.2.0
    Grid Control 10.2.0.1.0 - Node separate from standby and cluster environments
    Oracle 10.1.0.1.0
    Grid Control 10.2.0.1.0
    Red Hat 4.0 2.6.9-22 32bit
    After adding a logical standby database through Grid Control for a RAC database, I noticed some time later that the Data Guard configuration had disappeared from Grid Control. I am not sure why, but it is gone. I did notice that something went wrong with the standby creation, but I did not get much feedback from Grid Control. The last thing I did was to view the configuration; see the output below.
    Initializing
    Connected to instance qdcls0427:ELCDV3
    Starting alert log monitor...
    Updating Data Guard link on database homepage...
    Data Protection Settings:
    Protection mode : Maximum Performance
    Log Transport Mode settings:
    ELCDV.qdx.com: ARCH
    ELXDV: ARCH
    Checking standby redo log files.....OK
    Checking Data Guard status
    ELCDV.qdx.com : ORA-16809: multiple warnings detected for the database
    ELXDV : Creation status unknown
    Checking Inconsistent Properties
    Checking agent status
    ELCDV.qdx.com
    qdcls0387.qdx.com ... OK
    qdcls0388.qdx.com ... OK
    qdcls0427.qdx.com ... OK
    ELXDV ... WARNING: No credentials available for target ELXDV
    Attempting agent ping ... OK
    Switching log file 672.Done
    WARNING: Skipping check for applied log on ELXDV : disabled
    Processing completed.
    Here are the steps followed to add the standby database in Grid Control
    Maintenance tab
    Setup and Manage Data Guard
    Logged in as sys
    Add standby database
    Create a new logical standby database
    Perform a live backup of the primary database
    Specify backup directory for staging area
    Specify standby database name and Oracle home location
    Specify file location staging area on standby node
    At the end am presented with a review of the selected options and then the standby database is created
    Has anybody come across a similar issue?
    Thanks,

    Any resolution on this?
    I just created a Logical Standby database and I'm getting the same warning (WARNING: No credentials available for target ...) when I do a 'Verify Configuration' from the Data Guard page.
    Everything else seems to be working fine. Logs are being applied, etc.
    I can't figure out what credentials it's looking for.

  • Data Guard Summary problem using Grid Control.

    I set up Data Guard using Grid Control and, after completion, the console of the standby database shows "Unable to determine Data Guard information." under the Data Guard Summary section. The primary database is not showing the standby. I'm using Red Hat Linux Server 5.7 (64-bit) and Oracle 10.2.0.5. This is a standalone setup (no RAC or ASM involved). When I run show configuration, the following comes up:
    DGMGRL> SHOW CONFIGURATION;
    Configuration
    Name: PRODDB_ghph@ora01
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    ghph - Primary database
    gsbh - Physical standby database (disabled)
    Current status for "PRODDB_ghph@ora01":
    SUCCESS
    I tried searching online and MetaLink but found nothing. Any help in solving this problem would be appreciated. TIA

    Gensis2001 wrote:
    Does this make any sense?
    SYS@gsbh> SELECT * FROM V$ARCHIVE_GAP;
    no rows selected
    SYS@gsbh> select process, status, sequence# from v$managed_standby;
    PROCESS  STATUS        SEQUENCE#
    ARCH     CLOSING       60353
    ARCH     CLOSING       60352
    RFS      IDLE          60354
    MR(fg)   WAIT_FOR_GAP  58673
    4 rows selected.
    Currently the standby is waiting for sequence 58673, but later archives in the 60353 series have already been archived.
    Can you confirm whether only archive sequence 58673 is missing, or more? If only a few archives are missing, then review how you configured parameters such as FAL_SERVER, LOG_ARCHIVE_CONFIG, LOG_ARCHIVE_DEST_n, and so on.
    Run the command below and check for any errors with the remote destinations:
    select severity, error_code, to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') "timestamp", message from v$dataguard_status where dest_id=2;
    Source: http://www.oracle-ckpt.com/dataguard_troubleshoot_snapper/
    If many archives are missing and you have no backup of them, then you will have to use an incremental roll-forward to synchronize the standby with the primary database; only then will the Broker configuration status show as valid. You can refer to this article to perform the incremental roll-forward: http://www.oracle-ckpt.com/rman-incremental-backups-to-roll-forward-a-physical-standby-database-2/
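    Once the gap is resolved, also note that the broker output above shows the standby as disabled, so it has to be re-enabled before the configuration will verify cleanly; a minimal sketch using the database name from that output:
    DGMGRL> ENABLE DATABASE 'gsbh';
    DGMGRL> SHOW CONFIGURATION;
    DGMGRL> SHOW DATABASE VERBOSE 'gsbh';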

  • Dataguard configuration from 2-node rac to single instance with out ASM

    Hi Gurus,
    Oracle Version : 11.2.0.1
    Operating system:linux.
    Here I am trying to configure Data Guard from a 2-node RAC to a single-instance standby database. I have made all the changes in the parameter files for both the primary and the standby database, and when I try to duplicate my target database it gives the error shown below.
    [oracle@rac1 dbs]$ rman target / auxiliary sys/qfundracdba@poorna
    Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jul 21 14:49:01 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: QFUNDRAC (DBID=3138886598)
    connected to auxiliary database: QFUNDRAC (not mounted)
    RMAN> duplicate target database for standby from active database;
    Starting Duplicate Db at 21-JUL-11
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=63 device type=DISK
    contents of Memory Script:
       backup as copy reuse
       targetfile  '/u01/app/oracle/product/11.2.0/db_1/dbs/orapwqfundrac1' auxiliary format
    '/u01/app/oracle/product/11.2.0/db_1//dbs/orapwpoorna'   ;
    executing Memory Script
    Starting backup at 21-JUL-11
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=10 instance=qfundrac1 device type=DISK
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 07/21/2011 14:49:29
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-03009: failure of backup command on ORA_DISK_1 channel at 07/21/2011 14:49:29
    ORA-17629: Cannot connect to the remote database server
    ORA-17627: ORA-01017: invalid username/password; logon denied
    ORA-17629: Cannot connect to the remote database server
    Here I was able to connect to my auxiliary database as shown below:
    [oracle@rac1 dbs]$ rman target /
    Recovery Manager: Release 11.2.0.1.0 - Production on Thu Jul 21 15:00:10 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: QFUNDRAC (DBID=3138886598)
    RMAN> connect auxiliary sys/qfundracdba@poorna
    connected to auxiliary database: QFUNDRAC (not mounted)
    Can anyone please help me?
    Thanks & Regards
    Poorna Prasad.S

    Hi All,
    Can anyone please look through both of my parameter files and tell me if anything is wrong?
    Primary Database parameters.
    qfundrac1.__db_cache_size=2818572288
    qfundrac2.__db_cache_size=3372220416
    qfundrac1.__java_pool_size=16777216
    qfundrac2.__java_pool_size=16777216
    qfundrac1.__large_pool_size=16777216
    qfundrac2.__large_pool_size=16777216
    qfundrac1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    qfundrac2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    qfundrac1.__pga_aggregate_target=4294967296
    qfundrac2.__pga_aggregate_target=4294967296
    qfundrac1.__sga_target=4294967296
    qfundrac2.__sga_target=4294967296
    qfundrac1.__shared_io_pool_size=0
    qfundrac2.__shared_io_pool_size=0
    qfundrac1.__shared_pool_size=1375731712
    qfundrac2.__shared_pool_size=855638016
    qfundrac1.__streams_pool_size=33554432
    qfundrac2.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/qfundrac/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='+ASM_DATA2/qfundrac/controlfile/current.256.754410759'
    *.db_block_size=8192
    *.db_create_file_dest='+ASM_DATA1'
    *.db_create_online_log_dest_1='+ASM_DATA2'
    *.db_domain=''
    *.DB_FILE_NAME_CONVERT='/u02/poorna/oradata/','+ASM_DATA1/','/u02/poorna/oradata','+ASM_DATA2/'
    *.db_name='qfundrac'
    *.db_recovery_file_dest_size=40770732032
    *.DB_UNIQUE_NAME='qfundrac'
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=qfundracXDB)'
    *.fal_client='QFUNDRAC'
    *.FAL_SERVER='poorna'
    qfundrac2.instance_number=2
    qfundrac1.instance_number=1
    *.LOG_ARCHIVE_CONFIG='DG_CONFIG=(qfundrac,poorna)'
    *.LOG_ARCHIVE_DEST_1='LOCATION=+ASM_FRA VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=qfundrac'
    *.LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=poorna'
    *.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
    *.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
    *.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
    *.LOG_ARCHIVE_MAX_PROCESSES=30
    *.LOG_FILE_NAME_CONVERT='/u02/poorna/oradata/','+ASM_DATA1/','/u02/poorna/oradata','+ASM_DATA2/'
    *.open_cursors=300
    *.pga_aggregate_target=4294967296
    *.processes=300
    *.remote_listener='racdb-scan.qfund.net:1521'
    *.REMOTE_LOGIN_PASSWORDFILE='EXCLUSIVE'
    *.sec_case_sensitive_logon=FALSE
    *.sessions=335
    *.sga_target=4294967296
    *.STANDBY_FILE_MANAGEMENT='AUTO'
    qfundrac2.thread=2
    qfundrac1.thread=1
    qfundrac1.undo_tablespace='UNDOTBS1'
    qfundrac2.undo_tablespace='UNDOTBS2'
    And my standby database parameter file:
    poorna.__db_cache_size=314572800
    poorna.__java_pool_size=4194304
    poorna.__large_pool_size=4194304
    poorna.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    poorna.__pga_aggregate_target=343932928
    poorna.__sga_target=507510784
    poorna.__shared_io_pool_size=0
    poorna.__shared_pool_size=176160768
    poorna.__streams_pool_size=0
    *.audit_file_dest='/u01/app/oracle/admin/poorna/adump'
    *.audit_trail='db'
    *.compatible='11.2.0.0.0'
    *.control_files='/u01/app/oracle/oradata/poorna/control01.ctl','/u01/app/oracle/flash_recovery_area/poorna/control02.ctl'
    *.db_block_size=8192
    *.db_domain=''
    #*.db_name='poorna'
    #*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=4039114752
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=poornaXDB)'
    *.local_listener='LISTENER_POORNA'
    *.memory_target=849346560
    *.open_cursors=300
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sec_case_sensitive_logon=FALSE
    *.undo_tablespace='UNDOTBS1'
    ############### STAND By PARAMETERS ########
    DB_NAME=qfundrac
    DB_UNIQUE_NAME=poorna
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(poorna,qfundrac)'
    #CONTROL_FILES='/arch1/boston/control1.ctl', '/arch2/boston/control2.ctl'
    DB_FILE_NAME_CONVERT='+ASM_DATA1/','/u02/poorna/oradata/','+ASM_DATA2/','/u02/poorna/oradata'
    LOG_FILE_NAME_CONVERT= '+ASM_DATA1/','/u02/poorna/oradata/','+ASM_DATA2/','/u02/poorna/oradata'
    LOG_ARCHIVE_FORMAT=log%t_%s_%r.arc
    LOG_ARCHIVE_DEST_1= 'LOCATION=/u02/ARCHIVE/poorna  VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=poorna'
    LOG_ARCHIVE_DEST_2= 'SERVICE=qfundrac ASYNC  VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)  DB_UNIQUE_NAME=qfundrac'
    LOG_ARCHIVE_DEST_STATE_1=ENABLE
    LOG_ARCHIVE_DEST_STATE_2=ENABLE
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    STANDBY_FILE_MANAGEMENT=AUTO
    FAL_SERVER=qfundrac
    FAL_CLIENT=poorna
    Thanks & Regards,
    Poorna Prasad.S
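    A common cause of the ORA-17627 / ORA-01017 above during DUPLICATE ... FROM ACTIVE DATABASE is that the standby host does not yet have a password file matching the primary's SYS password. A minimal sketch of one way to fix it (paths are taken from the error output above; the standby hostname and password are placeholders):
    # From the primary node (rac1), copy the primary's password file to the standby host
    scp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwqfundrac1 \
        oracle@<standby_host>:/u01/app/oracle/product/11.2.0/db_1/dbs/orapwpoorna
    # Or recreate it on the standby host with the same SYS password as the primary
    orapwd file=/u01/app/oracle/product/11.2.0/db_1/dbs/orapwpoorna password=<sys_password> ignorecase=y
    Then retry the duplicate.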

  • Streams Setup from RAC to Single instance

    Does anyone have a document on setting up Streams from RAC to non-RAC? I successfully set up Streams between two single instances, but I am having issues replicating here: capture is set up on node 1 of the RAC and the apply process is set up on the single-instance node, but data is not replicating.
    Appreciate any suggestions.

    From Metalink Note 418755.1:
    Additional Configuration for RAC Environments for a Source Database Archive Logs
    The archive log threads from all instances must be available to any instance
    running a capture process. This is true for both local and downstream capture.
    Queue Ownership
    When Streams is configured in a RAC environment, each queue table has an
    "owning" instance. All queues within an individual queue table are owned by
    the same instance. The Streams components (capture/propagation/apply) all
    use that same owning instance to perform their work. This means that
    + a capture process is run at the owning instance of the source queue.
    + a propagation job must run at the owning instance of the queue
    + a propagation job must connect to the owning instance of the target queue.
    Ownership of the queue can be configured to remain on a specific instance,
    as long as that instance is available, by setting the PRIMARY_INSTANCE
    and/or SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE.
    If the primary_instance is set to a specific instance (ie, not 0), the queue
    ownership will return to the specified instance whenever the instance is up.
    Capture will automatically follow the ownership of the queue. If the ownership
    changes while capture is running, capture will stop on the current instance
    and restart at the new owner instance.
    For queues created with Oracle Database 10g Release 2, a service will be
    created with the service name= schema.queue and the network name
    SYS$schema.queue.global_name for that queue. If the global_name of the
    database does not match the db_name.db_domain name of the database, be sure
    to include the global_name as a service name in the init.ora.
    For propagations created with the Oracle Database 10g Release 2 code with
    the queue_to_queue parameter set to TRUE, the propagation job will deliver only
    to the specific queue identified. Also, the source dblink for the target
    database connect descriptor must specify the correct service (global name of
    the target database ) to connect to the target database. For example, the
    tnsnames.ora entry for the target database should include the CONNECT_DATA
    clause in the connect descriptor for the target database. This clause should
    specify (CONNECT_DATA=(SERVICE_NAME='global_name of target database')).
    Do NOT include a specific INSTANCE in the CONNECT_DATA clause.
    For example, consider the tnsnames.ora file for a database with the global name
    db.mycompany.com. Assume that the alias name for the first instance is db1 and
    that the alias for the second instance is db2. The tnsnames.ora file for this
    database might include the following entries:
    db.mycompany.com=
    (description=
    (load_balance=on)
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)))
    db1.mycompany.com=
    (description=
    (address=(protocol=tcp)(host=node1-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)
    (instance_name=db1)))
    db2.mycompany.com=
    (description=
    (address=(protocol=tcp)(host=node2-vip)(port=1521))
    (connect_data=
    (service_name=db.mycompany.com)
    (instance_name=db2)))
    Use the italicized tnsnames.ora alias in the target database link USING clause.
    DBA_SERVICES lists all services for the database. GV$ACTIVE_SERVICES identifies
    all active services for the database. In non-RAC configurations, the service
    name will typically be the global_name. However, it is possible for users to
    manually create alternative services and use them in the TNS connect_data
    specification . For RAC configurations, the service will appear in these views
    as SYS$schema.queue.global_name.
    Propagation Restart
    Use the procedures START_PROPAGATION and STOP_PROPAGATION from
    DBMS_PROPAGATION_ADM to enable and disable the propagation schedule.
    These procedures automatically handle queue_to_queue propagation.
    Example:
    exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation'); or
    exec DBMS_PROPAGATION_ADM.stop_propagation('name_of_propagation',force=>true);
    exec DBMS_PROPAGATION_ADM.start_propagation('name_of_propagation');
    If you use the lower level DBMS_AQADM procedures to manage the propagation schedule,
    be sure to explicitly specify the destination_queue name when queue_to_queue propagation has been configured.
    Example:
    DBMS_AQADM.UNSCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.SCHEDULE_PROPAGATION('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.ENABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.DISABLE_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    DBMS_AQADM.ALTER_PROPAGATION_SCHEDULE('source_queue_name','destination',destination_queue=>'specific_queue');
    Changing the GLOBAL_NAME of the Source Database
    See the OPERATION section on Global_name below. The following are some
    additional considerations when running in a RAC environment.
    If the GLOBAL_NAME of the database is changed, ensure that any propagations
    are dropped and recreated with the queue_to_queue parameter set to TRUE.
    In addition, if the GLOBAL_NAME does not match the db_name.db_domain of the
    database, include the global_name for the queue (NETWORK_NAME in DBA_QUEUES)
    in the list of services for the database in the database parameter
    initialization file.
    Section 4. Target Site Configuration
    The following recommendations apply to target databases, ie, databases in which
    Streams apply is configured.
    1. Privileges
    Grant Explicit Privileges to APPLY_USER for the user tables
    Examples:
    Privileges for table level DML: INSERT/UPDATE/DELETE,
    Privileges for table level DDL: CREATE (ANY) TABLE , CREATE (ANY) INDEX,
    CREATE (ANY) PROCEDURE
    2. Instantiation
    Set Instantiation SCNs manually if not using export/import. If manually
    configuring the instantiation scn for each table within the schema, use the
    RECURSIVE=>TRUE option on the DBMS_STREAMS_ADM.SET_SCHEMA_INSTANTIATION_SCN
    procedure
    For DDL Set Instantiation SCN at next higher level(ie,SCHEMA or GLOBAL level).
    3. Conflict Resolution
    If updates will be performed in multiple databases for the same shared
    object, be sure to configure conflict resolution. See the Streams
    Replication Administrator's Guide Chapter 3 Streams Conflict Resolution,
    for more detail.
    To simplify conflict resolution on tables with LOB columns, create an error
    handler to handle errors for the table. When registering the handler using
    the DBMS_APPLY_ADM.SET_DML_HANDLER procedure, be sure to specify the
    ASSEMBLE_LOBS parameter as TRUE.
    In Streams Concepts manual 10.2 chapter 22: Monitoring Apply
    Displaying detailed information about Apply errors.
    4. Apply Process Configuration
    A. Rules
    If the maintain_* procedures are not suitable for your environment,
    please use the ADD_*_RULES procedures (ADD_TABLE_RULES, ADD_SCHEMA_RULES,
    ADD_GLOBAL_RULES (for DML and DDL), ADD_SUBSET_RULES (DML only)).
    These procedures minimize the number of steps required to configure Streams
    processes. Also, it is possible to create rules for non-existent objects,
    so be sure to check the spelling of each object specified in a rule carefully.
    APPLY can be configured with or without a ruleset. The ADD_GLOBAL_RULES can
    be used to apply all changes in the queue for the database. If no ruleset is
    specified for the apply process, all changes in the queue are processed by the apply process.
    A single Streams apply can process rules for multiple tables or schemas
    located in a single queue that are received from a single source database .
    For best performance, rules should be simple. Rules that include LIKE clauses are
    not simple and will impact the performance of Streams.
    To eliminate changes for particular tables or objects, specify the
    include_tagged_lcr clause along with the table or object name in the
    negative rule set for the Streams process. Setting this clause will
    eliminate all changes, tagged or not, for the table or object.
    B. Parameters
    Set the following parameters after an apply process is created:
    + DISABLE_ON_ERROR=N Default: Y
    If Y, then the apply process is disabled on the first unresolved error,
    even if the error is not fatal.
    If N, then the apply process continues regardless of unresolved errors.
    + PARALLELISM = 3 * (number of CPUs)    Default: 1
    Apply parameters can be set using the SET_PARAMETER procedure from the
    DBMS_APPLY_ADM package. For example, to set the DISABLE_ON_ERROR parameter
    of the streams apply process named APPLY_EX, use the following syntax while
    logged in as the Streams Administrator:
    exec dbms_apply_adm.set_parameter('apply_ex','disable_on_error','n');
    Change the apply parallelism parameter recommendation to a lower number.
    In general, try 4 or 8 and increase or decrease as necessary for your workload.
    In some cases, performance can be improved by setting the following hidden
    parameter. This parameter should be set when the major workload is UPDATEs
    and the updates are performed on just a few columns of a many-column table.
    + _DYNAMIC_STMTS=Y    Default: N
    If Y, then for UPDATE statements, the apply process will optimize the
    generation of SQL statements based on required columns.
    + _CHECKPOINT_FREQUENCY=1000
    Increase the frequency of logminer checkpoints especially in a
    database with significant LOB or DDL activity.
    exec dbms_capture_adm.set_parameter('capture_ex','_checkpoint_frequency','1000');
    5. Additional Configuration for RAC Environments for a Apply Database
    Queue Ownership
    When Streams is configured in a RAC environment, each queue table has an
    "owning" instance. All queues within an individual queue table are owned
    by the same instance. The Streams components (capture/propagation/apply)
    all use that same owning instance to perform their work. This means that
    the database link specified in the propagation must connect to the owning
    instance of the target queue, and the apply process is run at the owning instance
    of the target queue.
    Ownership of the queue can be configured to remain on a specific instance,
    as long as that instance is available, by setting the PRIMARY_INSTANCE and
    SECONDARY_INSTANCE parameters of DBMS_AQADM.ALTER_QUEUE_TABLE. If the
    primary_instance is set to a specific instance (ie, not 0), the queue
    ownership will return to the specified instance whenever the instance is up.
    Apply will automatically follow the ownership of the queue. If the ownership
    changes while apply is running, apply will stop on the current instance and
    restart at the new owner instance.
    Changing the GLOBAL_NAME of the Database
    See the OPERATION section on Global_name below. The following are some
    additional considerations when running in a RAC environment.
    If the GLOBAL_NAME of the database is changed, ensure that the queue is
    empty before changing the name and that the apply process is dropped and
    recreated with the apply_captured parameter = TRUE. In addition, if the
    GLOBAL_NAME does not match the db_name.db_domain of the database, include
    the GLOBAL_NAME in the list of services for the database in the database
    parameter initialization file.
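    When data is not flowing, it usually pays to find which Streams component is stuck before tuning anything; a minimal diagnostic sketch using the standard dictionary views (run the first two queries on the source database, the last two on the target):
    -- Source: capture and propagation state
    SELECT capture_name, status, captured_scn, applied_scn FROM dba_capture;
    SELECT propagation_name, status FROM dba_propagation;
    -- Target: apply state and any apply errors
    SELECT apply_name, status FROM dba_apply;
    SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;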

  • Converting from RAC to Single Instance - Memory

    hi,
    we are moving forward with virtualizing our database environment and want to use 11g RACOne. We are currently using a 3-node, 10g RAC. In coming up with specifications I am wondering what general rule there is for sizing the SGA. As an example, if one of my databases has three 500 MB SGAs, do I simply end up with a single instance with a 1500 MB SGA? I'm not sure what the approach would be.
    Any info appreciated. Thanks in advance...ron

    RonHeeb wrote:
    thanks for the response. I have been regularly taking current sizes of each SGA from gv$sgastat to see what's being allocated. My thinking is that this is a minimum and that I should add to it for peak loads, ensuring that it's not set below any minimum that RACOne requires.
    Beyond that, going to RACOne seems to be a direction for virtualized DB servers, and for us 24x7 is not needed (although more than a few minutes' outage would be an issue). In any case, if needed we could go RAC on our most critical environments. I'm attracted to how patching/server maintenance can be achieved with RACOne.
    Okay, you either need high availability or you don't. If having a DB go down for more than a few minutes is a problem, then don't you really need 24x7? And in that case, isn't the high availability offered by RAC your only option? For me, having a mission-critical database (and it looks like this qualifies) on anything virtualized is a disaster waiting to happen. I find it lunacy to have, say, 4 virtual failover servers (RACOne) on the same physical hardware. When that server crashes, so does your entire failover scenario.
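    For the sizing question itself, a common starting point is to measure what each RAC instance actually uses today: the consolidated SGA usually needs to be larger than any single instance's (the buffer cache must hold the combined working set) but can be smaller than the straight sum, and is then adjusted from observation. A minimal sketch against the standard gv$ views:
    -- Current SGA allocation per instance
    SELECT inst_id, ROUND(SUM(bytes)/1024/1024) AS sga_mb
    FROM   gv$sgastat
    GROUP  BY inst_id;
    -- What the advisor estimates for larger/smaller SGA sizes, per instance
    SELECT inst_id, sga_size, estd_db_time_factor
    FROM   gv$sga_target_advice
    ORDER  BY inst_id, sga_size;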

  • Failover from RAC to Single Instance db

    Hi
    I know that I can have a single-instance database as the failover option in my Data Guard setup.
    Do I need to change some parameters while I am failing over from the RAC DB to the single-instance DB,
    something like cluster_database = false?
    Thanks
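    A hedged note on the parameter question: because the standby is already a single-instance database, it runs with cluster_database=FALSE (the default) from the start, so nothing like that needs to change at failover time; the RAC-specific settings (instance threads, per-instance undo, and so on) exist only on the primary. A minimal sketch of the relevant lines a single-instance standby spfile typically carries (values are illustrative):
    *.cluster_database=FALSE        # default for a non-RAC instance, often not even set explicitly
    *.db_name='proddb'              # same DB_NAME as the RAC primary (example value)
    *.db_unique_name='proddb_stby'  # its own DB_UNIQUE_NAME (example value)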


  • Data Guard Related Problem

    We are using Oracle 9.2, and I am facing a problem with Data Guard: I want to know whether the logs have been applied or not. Below are the outputs.
    We are using a manual (manually recovered) standby.
    SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;
    no rows selected
    When I run the above query it does not show any result.
    Please suggest.
    SQL> show parameter stand
    NAME                                 TYPE        VALUE
    standby_archive_dest                 string      /arch/log
    standby_file_management              string      MANUAL
    SQL> SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG"
      2  FROM V$LOG_HISTORY
      3  GROUP BY THREAD#;
       THREAD# LAST_APPLIED_LOG
             1             1724
             2             1537
    SELECT THREAD#, SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;
    no rows selected

    We are using a manual standby database.
    SQL> select  DATABASE_ROLE, SWITCHOVER_STATUS,DATAGUARD_BROKER from v$database;
    DATABASE_ROLE    SWITCHOVER_STATUS  DATAGUAR
    PHYSICAL STANDBY SESSIONS ACTIVE    DISABLED
    SQL> show parameter stand
    NAME                                 TYPE        VALUE
    standby_file_management              string      MANUAL
    So please suggest how I can know whether the archives have been applied or not;
    I have already posted the query I was using for this.
    One more thing: I am also getting an error in the alert log file:
    Sun Aug 10 12:28:09 2008
    ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
    Sun Aug 10 12:28:09 2008
    ALTER DATABASE RECOVER    CONTINUE DEFAULT
    Sun Aug 10 12:28:09 2008
    Media Recovery Log /arch/log/1_1724.dbf
    Sun Aug 10 12:31:09 2008
    ORA-279 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
    Sun Aug 10 12:31:09 2008
    ALTER DATABASE RECOVER    CONTINUE DEFAULT
    Sun Aug 10 12:31:09 2008
    Media Recovery Log /arch/log/1_1725.dbf
    Errors with log /arch/log/1_1725.dbf
    ORA-308 signalled during: ALTER DATABASE RECOVER    CONTINUE DEFAULT  ...
    Sun Aug 10 12:31:09 2008
    ALTER DATABASE RECOVER CANCEL
    Sun Aug 10 12:31:09 2008
    Media Recovery Cancelled
    Completed: ALTER DATABASE RECOVER CANCEL
    Sun Aug 10 12:33:09 2008
    alter database open read only
    Sun Aug 10 12:33:09 2008
    SMON: enabling cache recovery
    Sun Aug 10 12:33:09 2008
    Database Characterset is WE8ISO8859P1
    replication_dependency_tracking turned off (no async multimaster replication found)
    Completed: alter database open read only
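    On a manually recovered 9.2 standby, V$ARCHIVED_LOG on the standby stays empty because the applied logs are not registered there; the practical check is to compare the highest sequence in V$LOG_HISTORY on the standby (already shown above) with the primary. A minimal sketch; note also that the ORA-308 in the alert log simply means the file /arch/log/1_1725.dbf was not found when recovery asked for it:
    -- On the primary: latest archived sequence per thread
    SELECT thread#, MAX(sequence#) AS last_archived
    FROM   v$archived_log
    GROUP  BY thread#;
    -- On the standby: latest applied sequence per thread (same query as in the post)
    SELECT thread#, MAX(sequence#) AS last_applied
    FROM   v$log_history
    GROUP  BY thread#;
    -- The gap between the two is the apply lag, in log sequences.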

  • Oracle 11gR2 RAC (convert single instance to RAC)

    Hi,
    Using MetaLink Doc [ID 747457.1] I have converted a single-instance database running on 11gR2 to a 2-node RAC 11gR2 database with ASM. It's running fine, and I am able to see the instances running on both nodes. But I am unable to log in to the instance: SQL*Plus connects to an idle instance, even though the instance is already running.
    oracle@hublhp1:/home/oracle$ export ORACLE_SID=cadtest1
    oracle@hublhp1:/home/oracle$ sqlplus "/as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Nov 2 11:23:43 2010
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL>
    oracle@hublhp1:/home/oracle$ srvctl status database -d cadtest
    Instance cadtest1 is not running on node hublhp1
    Instance cadtest2 is not running on node hublhp3
    oracle@hublhp1:/home/oracle$ ps -ef | grep pmon
    oracle 2407 1 0 15:28:21 ? 0:27 asm_pmon_+ASM1
    oracle 4125 1 0 15:51:18 ? 0:36 ora_pmon_cadtest1
    oracle 4973 3295 0 14:31:13 pts/1 0:00 grep pmon
    oracle@hublhp1:/home/oracle$
    I am able to stop/start the database using SRVCTL, but I am not able to log in to this instance. Can anyone please help me find the issue, or tell me where and what to look at?
    - Mano

    Thank you so much. I did the following, but I still have the same issue: the instance is running on both nodes, but I am unable to stop/start the database using SRVCTL, and I am unable to log in using SQL*Plus.
    oracle@hublhp1:/home/oracle$ srvctl modify database -d cadtest -n cadtest -o $ORACLE_HOME -p +asmcdb01/cadtest/spfilecadtest.ora -a ASMCDB01,ASMCFR01
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$ srvctl config database -d cadtest -a
    Database unique name: cadtest
    Database name: cadtest
    Oracle home: /app/oracle/rdbms/product/11.2.0
    Oracle user: oracle
    Spfile: +asmcdb01/cadtest/spfilecadtest.ora
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Server pools: cadtest
    Database instances: cadtest1,cadtest2
    Disk Groups: ASMCDB01,ASMCFR01
    Services:
    Database is enabled
    Database is administrator managed
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$ srvctl stop database -d cadtest
    PRCC-1016 : cadtest was already stopped
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$ srvctl status database -d cadtest
    Instance cadtest1 is not running on node hublhp1
    Instance cadtest2 is not running on node hublhp3
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$ echo $ORACLE_SID
    cadtest1
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$ sqlplus "/as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Nov 2 15:31:55 2010
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> exit
    Disconnected
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
    oracle@hublhp1:/home/oracle$
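    "Connected to an idle instance" while ora_pmon_cadtest1 is clearly running usually means the shell's environment does not match the environment the instance was started with (typically a different ORACLE_HOME, or an ORACLE_SID differing in case). A minimal sketch of what to compare, using the home already shown by srvctl above:
    # What the shell is using
    echo $ORACLE_SID
    echo $ORACLE_HOME
    # What Clusterware thinks the database uses (compare "Oracle home:" with $ORACLE_HOME)
    srvctl config database -d cadtest
    # If the homes differ, point the shell at the srvctl one and retry
    export ORACLE_HOME=/app/oracle/rdbms/product/11.2.0
    export PATH=$ORACLE_HOME/bin:$PATH
    sqlplus / as sysdba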

  • Datagurad setup from 2 Node RAC to single instance (DR site)

    Dear Expert,
    I have a request from management to set up a DR site for our current production RAC database using Active Data Guard. I have a two-node RAC database, 11.2.0.3, running on Sun Solaris machines. I need proper steps, or a good document I can refer to, for setting up a single-instance standby database at the DR site for the production RAC database. I have only ever set up single instance to single instance. I would appreciate it if an expert could provide me some links.
    Regard
    liang

    Hello;
    This will provide a good start and overview :
    Creating a Single Instance Physical Standby for a RAC Primary : ( please note parameter changes for Oracle 11 )
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimarysingleinstance-131970.pdf
    As will this :
    http://oracleinstance.blogspot.com/2012/01/create-single-instance-standby-database.html
    Oracle 11
    Rapid Oracle RAC Standby Deployment: Oracle Database 11g Release 2
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-rac-standby-133152.pdf
    Best Regards
    mseberg
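    Once the standby is instantiated (for example with RMAN DUPLICATE ... FOR STANDBY FROM ACTIVE DATABASE, as shown earlier in this thread list), enabling Active Data Guard on the single-instance standby is essentially two statements; a minimal sketch, assuming standby redo logs are already in place:
    -- On the standby: open read-only, then restart redo apply with real-time apply
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    -- Confirm it is open read-only while applying
    SELECT open_mode, database_role FROM v$database;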

  • RMAN + restore rac on single instance...

    Hi all,
    Can I restore a backup of a RAC database on a single instance?
    For example: DB1 is a RAC Database 10.2
    I have a RMAN backup of DB1 on host A, copy all files to host B on same directory structure.
    Can I restore this backup ??
    If someone has any documentation or a link, I would appreciate it very much.
    Thanks

    Yes, you can. Follow the scenario 'restore the database to a different host'. Be aware you'll need BOTH archive threads!!!
    Sybrand Bakker
    Senior Oracle DBA
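    For reference, the "restore to a different host" scenario looks roughly like this; a sketch only, with placeholder values, and the archive logs from BOTH RAC threads must be available on host B:
    RMAN> SET DBID 1234567890;                 -- DBID of DB1 (placeholder value)
    RMAN> STARTUP NOMOUNT;                     -- using a minimal init.ora for DB1 on host B
    RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
    RMAN> ALTER DATABASE MOUNT;
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE;                    -- reads archive logs from thread 1 AND thread 2
    RMAN> ALTER DATABASE OPEN RESETLOGS;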
