Streams Plus Data Guard

Hi,
Looking at http://otn.oracle.com/deploy/availability/htdocs/DataGuardStreams.html I see that Data Guard and Streams can be deployed on the same primary database.
Has anyone done this, and if so, are there any pitfalls?
Also, how much of the Streams environment is propagated by Data Guard, and how much needs to be re-instantiated once the standby is brought into the primary role after a switchover/failover? This is for Oracle 9.
Any help/hints/pointers much appreciated.

Graham,
Streams can coexist with Data Guard in physical standby (Redo Apply) mode, but Data Guard SQL Apply mode and Streams are not supported on the same database.
I have not implemented such a setup. You can review the "Streams High Availability Environments" chapter in the Oracle9i Streams Release 2 (9.2) [Part Number A96571-02] documentation for details on what to do when a failover occurs. This chapter also provides advice on when to use Streams and when to use Data Guard.
Note: The Streams manual was updated with the 9.2.0.2 release, so check the part number of the book you reference. If it doesn't match the one listed previously, view the more current documentation on TechNet.
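As a rough starting point (this is not from the original reply), the Streams configuration on the primary can be inventoried before and after a role transition with the standard Streams dictionary views, to see which capture, propagation, and apply components need to be re-verified or re-instantiated; a minimal sketch (check view availability in your release):
SQL> SELECT capture_name, status FROM dba_capture;
SQL> SELECT propagation_name, destination_dblink FROM dba_propagation;
SQL> SELECT apply_name, status FROM dba_apply;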

Similar Messages

  • Can streams and data guard run together

    I need to set up a staging database to run streams from.
    Can I use data guard from the production database to the staging database and
    streams on top of that?
    Has anyone done this?

    Correction:
    Data Guard Redo Apply can be used to provide high availability/disaster recovery for a Streams database. For best practices on using Data Guard Redo Apply with Streams, see http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gDataGuardRoleTransitionsStreams.pdf
    Data Guard SQL Apply cannot be used to provide high availability/disaster recovery for a Streams database.

  • Supplemental logging with Oracle 10gR2 Streams and Data Guard

    Hello,
    I have an environment with Oracle DB 10gR2 and a physical standby in a Data Guard DR configuration. This environment is now going to be extended to a replication setup using 2-way Oracle Streams replication (for replication from this branch office to the central office; other branches will be added soon). The primary DB will be replicated to the other primary DB (in the remote central office).
    So here is my question: is it strictly necessary to specify supplemental logging on the source databases (primaries) when setting up 2-way Streams replication? And if it is necessary, can I enable supplemental logging on the primaries without affecting their physical standbys, or do I need to do something special?
    Thanks in advance.

    Sorry, it's a duplicate post caused by a browser connection problem.
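    For what it's worth (this is not part of the original replies), Streams capture generally requires supplemental logging on the source tables, and it is enabled on the primary with ordinary DDL; a minimal sketch (the table name is made up for illustration):
    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    SQL> ALTER TABLE hr.employees ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    Because these statements are simply recorded in the redo on the primary, a physical standby applies them like any other change; no special handling should be needed, but verify against your release's documentation.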

  • Can I create streams on data guard?

    I just want to know: can the Streams source database be a physical standby or a logical standby?
    11g r2
    thanks
    Edited by: user482784 on 2011-2-26 6:25 PM


  • Data Guard Logical Standby vs. Streams Apply (10gR2, RAC to RAC)

    Greetings -
    We currently have a 3 node cluster, doing a 'Schema' capture, with a 2 node cluster serving as the apply side. Both are on 10gR2 (solaris 10)
    We have been experiencing apply latency due to large transactions. The way LogMiner/Streams evaluates the archived logs, it converts each updated row into its own statement within a transaction set, using 'decode(' statements.
    I am under the impression a physical standby will do the same thing. But what about a logical standby?
    (this from the Oracle Documentation)
    [10g Concepts|http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#SBYDB00010]
    ...Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.
    Does anyone know firsthand if SQL Apply re-issues the original SQL, or if it relies on the 'decode' approach as well?
    Thanks
    The Nets Edge

    A physical standby does not do anything with SQL. It is running media recovery under the covers. A logical standby uses LogMiner to read the redo, converts it to SQL and data, and applies those transactions to the logical standby.
    Both build logical change records, figure out ordering, dependencies, etc., and apply the transactions. Both are susceptible to long-running transactions on the primary (source) database.
    Streams uses LogMiner, and a logical standby uses part of the Streams apply engine, so as Mr. Morgan said, they are very similar :^)
    If you are having apply performance issues you may want to look at Active Data Guard.
    Larry
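    As an aside (not from the thread), long-running transactions on the source can be spotted with a query along these lines; an illustrative sketch only:
    SQL> SELECT s.sid, s.username, t.start_time, t.used_ublk
           FROM v$transaction t, v$session s
          WHERE t.addr = s.taddr
          ORDER BY t.start_time;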

  • Data Guard logical standby Versus Streams

    I'm referring to both Oracle 10g/9i
    If a Data Guard logical standby database uses similar technology to Streams (log mining and SQL apply), why can't you stand up a standby database on a different platform? At least I have found nothing on the subject.
    But in 11g Oracle Data Guard (physical standby database) is a solution for same endianness platform migration.
    I will appreciate any insight on the subject.

    Yes, that's true, both use the same technology:
    SQL --[redo]--> block-level changes --[LogMiner]--> SQL
    But there are serious implementation differences:
    1) Oracle Data Guard is designed for protecting from data failure and disasters.
    Streams is designed for information sharing and distribution but can also provide a very efficient high availability solution.
    2) Streams is configured from the bottom up — individual tables, schemas, capture processes, apply processes, queues.
    Logical Standby is configured from the top down — start with entire database, then specify only what you don’t want.
    Because a logical standby is top-down and changes are captured at the remote location (the logical standby database), the archived logs must be shipped there using the FAL client/server, and to ship archived logs in a Data Guard configuration all members must be running on the same platform.
    As said before, Streams is configured from the bottom up: it starts with tables --> schemas --> database, and changes can be captured at a local or remote location. If we capture changes locally, the target Streams database can be on a different platform. But downstream capture requires the same platform, just as a logical standby does, so that archived logs can be transported from the source to the downstream database.
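    To illustrate the bottom-up point (a sketch only; the table, queue, capture, and database names are made up), a single table is added to a Streams capture with DBMS_STREAMS_ADM, whereas a logical standby always starts from the whole database:
    SQL> BEGIN
           DBMS_STREAMS_ADM.ADD_TABLE_RULES(
             table_name      => 'hr.employees',
             streams_type    => 'capture',
             streams_name    => 'capture_branch',
             queue_name      => 'strmadmin.streams_queue',
             include_dml     => TRUE,
             include_ddl     => FALSE,
             source_database => 'branch.example.com');
         END;
         /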

  • Data Guard vs. Streams

    What is the difference between Data Guard and Streams?
    Which one should I choose so that two database systems on different servers are synchronized with each other? There are two situations:
    1. One database system is in Tru64 UNIX operating system, the other in Windows 2000 Server OS.
    2. Both database systems are in Tru64 UNIX OS.
    Thanks,

    Sorry, I missed that wrinkle.
    You'll need to have machines with the same O/S to have Data Guard work. If your requirements are to have the databases completely synced, you'll need to use Data Guard, so you'll need a new machine.
    You can roll your own process using streams without buying a new machine, but it won't be as robust as the Oracle solution and won't handle all the possible failover scenarios, etc. A lot depends on the strictness of your requirements.
    Justin

  • Oracle Streams VS Oracle Data Guard

    Hello,
    Could you please explain the difference between Oracle Streams and Oracle Data Guard?
    Are they for completely different purposes or similar ones?
    Thanks.

    812322 wrote:
    Hello,
    Could you please explain the difference between Oracle Streams and Oracle Data Guard?
    Are they for completely different purposes or similar ones?
    Thanks.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:14672061404704

  • Data guard vs. oracle stream

    Our database is a 10g two-node RAC database and the database server is Windows 2003 Server. Our standby database was being fed from the primary using Data Guard. Since our standby database was not up to date, we are now planning to rebuild it. We are debating whether we should use Data Guard or Oracle Streams to feed data to this new standby database. Can anybody give us some insight into which one is better for this purpose?
    Thanks a lot in advance!!
    Shirley

    One has
    1 physical standby
    2 logical standby
    3 streams
    Only 1 is a zero-loss, high-availability solution.
    2 and 3 do not support all data types, and will automatically skip unsupported datatypes.
    3 apart from that is also asynchronous, where 1 and 2 can be set up to be synchronous.
    3 will also be much more difficult to troubleshoot. Basically, when you are out of sync you have to rebuild; you can't re-ship redo log files.
    1+2 ship redolog files to the standby server, 1 uses them to recover the database, 2 uses them to mine them and to re-execute the transaction.
    3 mines redolog files at the source, and sends statements to re-execute them.
    Only 1 is a true HA solution.
    You can not use Streams to build a standby database.
    The purpose of Streams is replication, not standby.
    Sybrand Bakker
    Senior Oracle DBA
    It is just what you want
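    On point 2 of the list above (unsupported datatypes), the data dictionary can be checked before deciding; an illustrative sketch only: DBA_LOGSTDBY_UNSUPPORTED lists what a logical standby would skip, and DBA_STREAMS_UNSUPPORTED (available from 10.2 onwards) does the same for Streams.
    SQL> SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported;
    SQL> SELECT owner, table_name, reason FROM dba_streams_unsupported;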

  • Will Oracle Data Guard be replaced by Oracle Stream soon?

    Will Oracle Data Guard be replaced by Oracle Stream soon?
    In my opinion Oracle Stream can replace Oracle Data Guard completely.
    Message was edited by:
    frank.qian

    While some of the technologies that underpin Streams are being increasingly incorporated into Data Guard, it's quite unlikely that Data Guard will go away.
    Streams is the successor to Advanced Replication, and it is designed to allow a source database to propagate data to a distinct database in a different environment without necessarily having the two databases tightly coupled. With Streams (or Advanced Replication before it) you can have different databases in different regions managed by different DBA groups who don't necessarily care whether any of the other systems are up. Failing over between these systems is certainly possible, but it requires a fair amount of custom scripting.
    Data Guard, on the other hand, is designed to allow you to have multiple copies of the same database that are tightly coupled for high availability. Similar in concept, but there are very different trade-offs in the design.
    That said, Streams and Logical Standby both use very similar technologies to mine the redo information for change records. As Data Guard uses Logical Standby more and more, potentially as a replacement for physical standby, they'll use more and more of the same underlying technologies. They'll still be very different products.
    Justin

  • What is the major plus with Data Guard compared to a standby

    Hi,
    We don't need active-active replication between our prod and DR; our DR can have a 20-minute RPO (Recovery Point Objective), so what is the main advantage of configuring and installing Oracle Data Guard compared to a simple standby server?
    My understanding of Data Guard is that Oracle will ship your logs automatically to the DR site and apply them there, instead of me doing it with two scripts (one on the primary server that ships the logs over, the second on the standby that applies the archived logs).

    1. If your primary site gets hit by something, you can fail over to the standby.
    2. If you have a large group of "Reader" users on the Primary you can switch them to the Standby using Active Data Guard.
    3. You can do a switchover and avoid an outage if your Primary server needs work of any kind.
    4. You can perform backups at either site, taking even more load off your primary if needed.
    or as the book says:
    Disaster recovery, data protection, and high availability.
    Down Side
    1. Cost
    2. Network Load (or additional load)
    Edited by: mseberg on Feb 7, 2011 10:23 AM
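    On point 2 in the list above (offloading readers), from 11g onwards a physical standby can be opened read-only while redo apply continues (Active Data Guard, a separately licensed option); a sketch of the commands, assuming managed recovery is currently running on the standby:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE OPEN READ ONLY;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;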

  • Clarification on Data Guard (Physical Standby DB)

    Hi guys,
    I have been trying to set up Data Guard with a physical standby database for the past few weeks, and I think I have managed to set it up and also perform a switchover. I have been reading a lot of websites and even the Oracle docs for this.
    However I need clarification on the setup and whether or not it is working as expected.
    My environment is Windows 32bit (Windows 2003)
    Oracle 10.2.0.2 (Client/Server)
    2 Physical machines
    Here is what I have done.
    Machine 1
    1. Create a primary database using the standard DBCA; hence the Oracle service (oradgp) and password file are also created along with the listener service.
    2. Modify the pfile to include the following:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgp'
    *.fal_server='oradgs'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgp'
    *.log_archive_dest_2='SERVICE=oradgs LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgs'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgp
    The locations on the hard disk are all available and archived redo logs are created (e:\archlogs).
    3. I then add the necessary (4) standby logs on primary.
    4. To replicate the db on the machine 2(standby db), I did an RMAN backup as:-
    RMAN> run
    {allocate channel d1 type disk format='M:\DGBackup\stby_%U.bak';
    backup database plus archivelog delete input;}
    5. I then copied over the standby~.bak files created on machine1 to machine2 into the same directory (M:\DGBackup), since I maintained the directory structure exactly the same between the 2 machines.
    6. Then created a standby controlfile. (At this time the db was in open/write mode).
    7. I then copied this standby ctl file to machine2 under the same directory structure (M:\oracle\product\10.2.0\oradata\oradgp) and replicated the same ctl file into 3 different files such as: CONTROL01.CTL, CONTROL02.CTL & CONTROL03.CTL
    Machine2
    8. I created an Oracle service called the same as primary (oradgp).
    9. Created a listener, and set the Oracle Home & SID to the same name as primary (oradgp) <<<-- I am not sure about the SID one.
    10. I then copied over the pfile from the primary to standby and created an spfile with this one.
    It looks like this:-
    oradgp.__db_cache_size=436207616
    oradgp.__java_pool_size=4194304
    oradgp.__large_pool_size=4194304
    oradgp.__shared_pool_size=159383552
    oradgp.__streams_pool_size=0
    *.audit_file_dest='M:\oracle\product\10.2.0\admin\oradgp\adump'
    *.background_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\bdump'
    *.compatible='10.2.0.3.0'
    *.control_files='M:\oracle\product\10.2.0\oradata\oradgp\control01.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control02.ctl','M:\oracle\product\10.2.0\oradata\oradgp\control03.ctl'
    *.core_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\cdump'
    *.db_block_size=8192
    *.db_domain=''
    *.db_file_multiblock_read_count=16
    *.db_name='oradgp'
    *.db_recovery_file_dest='M:\oracle\product\10.2.0\flash_recovery_area'
    *.db_recovery_file_dest_size=21474836480
    *.fal_client='oradgs'
    *.fal_server='oradgp'
    *.job_queue_processes=10
    *.log_archive_dest_1='LOCATION=E:\ArchLogs VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=oradgs'
    *.log_archive_dest_2='SERVICE=oradgp LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=oradgp'
    *.log_archive_format='ARC%S_%R.%T'
    *.log_archive_max_processes=30
    *.nls_territory='IRELAND'
    *.open_cursors=300
    *.pga_aggregate_target=203423744
    *.processes=150
    *.remote_login_passwordfile='EXCLUSIVE'
    *.sga_target=612368384
    *.standby_file_management='auto'
    *.undo_management='AUTO'
    *.undo_tablespace='UNDOTBS1'
    *.user_dump_dest='M:\oracle\product\10.2.0\admin\oradgp\udump'
    *.service_names=oradgs
    log_file_name_convert='junk','junk'
    11. Use RMAN to restore the db as:-
    RMAN> startup mount;
    RMAN> restore database;
    Then RMAN created the datafiles.
    12. I then added the same number (4) of standby redo logs to machine2.
    13. Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    14. Ensuring the listener and Oracle service were running and that the database on machine2 was in MOUNT mode, I then started the redo apply using:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    It seems to have started the redo apply as I've checked the alert log and noticed that the sequence# was all "YES" for applied.
    ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    So copied over the REDO logs from the primary machine and placed them in the same directory structure of the standby.
    ########Q1. I understand that the standby database does not need online REDO Logs but why is it reporting in the alert log then??########
    I wanted to enable realtime apply so, I cancelled the recover by :-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    and issued:-
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    This too was successful and I noticed that the recovery mode is set to MANAGED REAL TIME APPLY.
    Checked this via the primary database also and it too reported that the DEST_2 is in MANAGED REAL TIME APPLY.
    Also performed a log switch on the primary and it got transported to the standby and was applied (YES).
    Also ensured that there are no gaps via some queries where no rows were returned.
    15. I now wanted to perform a switchover, hence issued:-
    Primary_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
    All the archivers stopped as expected.
    16. Now on machine2:
    Stdby_SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
    17. On machine1:
    Primary_Now_Standby_SQL>SHUTDOWN IMMEDIATE;
    Primary_Now_Standby_SQL>STARTUP MOUNT;
    Primary_Now_Standby_SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
    18. On machine2:
    Stdby_Now_Primary_SQL>ALTER DATABASE OPEN;
    Checked by switching the logfile on the new primary and ensured that the standby received this logfile and was applied (YES).
    However, here are my questions for clarifications:-
    Q1. There is a question about ONLINE REDO LOGS within "#" characters.
    Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
    MRP0 APPLYING_LOG 1 47 452 1024000
    but :
    SQL> select max(sequence#) from v$archived_log;
    46
    Why is that? Also I have noticed that one of the sequence#s is NOT applied but the later ones are:-
    SQL> SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
    42 NO
    43 YES
    44 YES
    45 YES
    46 YES
    What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
    Q5. The log switch isn't happening automatically on the primary database, so I don't see the whole process happening on its own, such as generation of a new logfile, it being transported to the standby, and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Sorry if I have missed out something guys but I tried to put in as much detail as I remember...
    Thank you very much in advance.
    Regards,
    Bharath
    Edited by: Bharath3 on Jan 22, 2010 2:13 AM

    Parameters:
    Missing on the Primary:
    DB_UNIQUE_NAME=oradgp
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
    Missing on the Standby:
    DB_UNIQUE_NAME=oradgs
    LOG_ARCHIVE_CONFIG=DG_CONFIG=(oradgp, oradgs)
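    For example (a sketch only, assuming an spfile is in use; DB_UNIQUE_NAME is static, so it needs SCOPE=SPFILE and a restart), on the primary:
    SQL> ALTER SYSTEM SET db_unique_name='oradgp' SCOPE=SPFILE;
    SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(oradgp,oradgs)' SCOPE=BOTH;
    and the corresponding db_unique_name='oradgs' on the standby.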
    You said: Also added a tempfile though the temp tablespace was created per the restore via RMAN, I think the actual file (temp01.dbf) didn't get created, so I manually created the tempfile.
    RMAN should have also added the temp file. Note that as of 11g RMAN duplicate for standby will also add the standby redo log files at the standby if they already existed on the Primary when you took the backup.
    You said: ****However I noticed that in the alert log the standby was complaining about the online REDO log not being present****
    That is just the weird error that the RDBMS returns when the database tries to find the online redo log files. You see it at the start of the MRP because it tries to open them; if it gets the error, it will automatically create them based on their file definition in the controlfile, combined with LOG_FILE_NAME_CONVERT if they are in a different place from the Primary.
    Your questions (Q1 answered above):
    You said: Q2. Do you see me doing anything wrong in regards to naming the directory structures? Should I have renamed the dbname directory in the Oracle Home to oradgs rather than oradgp?
    Up to you. Not a requirement.
    You said: Q3. When I enabled real time apply does that mean, that I am not in 'MANAGED' mode anymore? Is there an un-managed mode also?
    You are always in MANAGED mode when you use the RECOVER MANAGED STANDBY DATABASE command. If you use manual recovery "RECOVER STANDBY DATABASE" (NOT RECOMMENDED EVER ON A STANDBY DATABASE) then you are effectively in 'non-managed' mode although we do not call it that.
    You said: Q4. After the switchover, I have noticed that the MRP0 process is "APPLYING LOG" status to a sequence# which is not even the latest sequence# as per v$archived_log. By this I mean:-
    Log 46 (in your example) is the last FULL and ARCHIVED log hence that is the latest one to show up in V$ARCHIVED_LOG as that is a list of fully archived log files. Sequence 47 is the one that is current in the Primary online redo log and also current in the standby's standby redo log and as you are using real time apply that is the one it is applying.
    You said: What could be the possible reasons why sequence# 42 didn't get applied but the others did?
    42 was probably a gap. Select the FAL column as well and it will probably say 'YES' for FAL. We do not update the Primary's controlfile every time we resolve a gap. Try the same query on the standby and you will see that 42 was indeed applied. Redo can never be applied out of order, so the max(sequence#) from v$archived_log where applied = 'YES' tells you that every sequence before that number has been applied.
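    A quick way to see this on the standby (an illustrative query only):
    SQL> SELECT sequence#, applied, fal FROM v$archived_log ORDER BY sequence#;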
    You said: After reading several documents I am confused at this stage, because I have read that you can set up standby databases using 'standby' logs, but is there another method without using standby logs?
    Yes. If you do not have standby redo log files on the standby then we write directly to an archived log, which means potentially large data loss at failover and no real-time apply. That was the old 9i method for ARCH. Don't do that; always have standby redo logs (SRLs).
    You said: Q5. The log switch isn't happening automatically on the primary database, so I don't see the whole process happening on its own, such as generation of a new logfile, it being transported to the standby, and then being applied on the standby.
    Could this be due to inactivity on the primary database as I am not doing anything on it?
    Log switches on the Primary happen when the current log gets full, when a log switch has not happened for the number of seconds you specified in the ARCHIVE_LAG_TARGET parameter, or when you issue ALTER SYSTEM SWITCH LOGFILE (or use one of the other methods for switching log files). The heartbeat redo will eventually fill up an online log file, but it is about 13 bytes, so you do the math on how long that would take :^)
    You are shipping redo with ASYNC, so we send the redo as it is committed; there is no wait for the log switch. And we are in real-time apply, so there is no wait for the log switch to apply that redo. In theory you could create an online log file large enough to hold an entire day's worth of redo and never switch for the whole day, and the standby would still be caught up with the primary.
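    If regular switches are wanted despite low activity, ARCHIVE_LAG_TARGET can be set on the primary (a sketch; 1800 seconds is just an example value):
    SQL> ALTER SYSTEM SET archive_lag_target=1800 SCOPE=BOTH;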

  • Location of log directory with respect to data guard

    I am working with Oracle 10g in Linux platform
    I was doing a switchover from primary to standby, which failed due to incorrect settings of the log_file_name_convert parameter. To debug this case I was going through the Oracle Data Guard documentation: http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14239/troubleshooting.htm
    There I came across this para
    When the switchover to change the role from primary to physical standby was
    initiated, a trace file was written in the log directory. This trace file contains the
    SQL statements required to re-create the original primary control file. Locate the
    trace file and extract the SQL statements into a temporary file. Execute the
    temporary file from SQL*Plus. This will revert the new standby database back to
    the primary role.
    I was not able to understand what the trace file and log directory mean. What is the location of the log directory?
    Can anyone please explain?

    It is talking about a trace file that will be placed under your background dump destination, just as you would get by using:
    alter database backup controlfile to trace;
    Check your background dump destination, where your alert log is.
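    To find that directory (a trivial check, for illustration):
    SQL> SHOW PARAMETER background_dump_dest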

  • Data Guard with Oracle 9i and 10g -- have you done it?

    We have an ERP system (JDEdwards, db size approx 600 GB) and we'd like to copy the data in this system to a database on another computer. We'd use the remote database for reporting, so we need it to be available. We are looking at using Data Guard running in Logical Standby mode.
    The ERP database is Oracle 9i, and we are considering having the Logical Standby database be 10g. Have you done this and if so, what was your experience?
    Here's why we want to do this. There are many tables in the ERP database that contain descending indexes. We can't change this, and it is not supported with Data Guard under Oracle 9.2. Tables with descending indexes do not copy to the logical standby database.
    We were told that this is corrected in a newer release of Oracle, and we were wondering if a 10g database would properly copy tables with descending indexes.
    Thanks in advance for any experiences you can share.
    Best Regards,
    Mike

    What you are trying to do might not be possible, because when you create a logical standby you have to create a physical standby first and convert it, and the primary and standby have to be the same version.
    In future it might be possible, because 10g will support rolling upgrades of a logical standby.
    However, even if it is possible, you would have to go through a lot of pain to set it up and maintain it, because Oracle doesn't support the setup.
    What you could try are Streams and replication; I would say Streams is the interesting one, because Oracle says: "Oracle Streams and Oracle Data Guard (including Data Guard SQL Apply) are independent features based on some common underlying infrastructure and technology."
    http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardStreams.html

  • Grid control agent and data guard in mount mode

    Hello,
    I would like to know how you manage your Data Guard standbys with the Grid Control agents when you do not have the license for Active Data Guard. The standby database is in mount mode, so the agents cannot query the database.
    What do you guys do in such cases? Remove the agent? Or wait till a switchover?

    Check this out with a mounted database
    [lo***p02].oracle:/home/oracle > sqlplus dbsnmp
    SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 15:52:11 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Enter password:
    ERROR:
    ORA-01033: ORACLE initialization or shutdown in progress
    Process ID: 0
    Session ID: 0 Serial number: 0
    It cannot connect.
    And then
    [lo****p02].oracle:/home/oracle > sqlplus / as sysdba
    SQL*Plus: Release 11.2.0.3.0 Production on Mon Jun 4 15:57:49 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> select open_mode from v$database;
    OPEN_MODE
    MOUNTED
