Data Guard Administration Question.... (10gR2)

After considerable trial and error, I have a running logical standby between 2 10gR2 databases.
1) During the install of the primary database, I didn't comply fully with the OFA standard (I was slightly off on the placement of my database files). During the Data Guard configuration, the "convert to OFA" option was selected (per a Metalink article I read about a problem with keeping the primary's filenames the same). Of course, now I have an issue when creating a tablespace on the primary while keeping the non-OFA directory structure: when Data Guard attempts to do the same on the standby, I get an error that it cannot create the datafile. That makes sense, but what should I do in the future? Create the non-OFA directory structure on the standby (assuming it would then create the file)? Isn't there a filename conversion parameter that handles this as well?
2) I got myself into a pinch this afternoon, partly due to #1. I am importing a file from another instance into the primary to begin testing reports on the secondary. Prior to the import I created a tablespace (which is what led to problem #1), then created the owner of the schema to be imported, then performed the import. Now the apply process is erroring and going offline every few seconds as it works its way through the "cannot create table" errors that the import is hitting on the secondary. How do I handle a large batch of transactions like this? Ultimately I would like to get back to square one: no user and no imported data on the primary, and the apply process online.
Thanks,
Chris

So what I finally did was take DG apply offline, create the tablespace on the secondary, then the user, and then turn apply back online. The import proceeded fairly smoothly. Problem resolved.
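For anyone hitting the same wall, that recovery sequence can be sketched roughly as follows (the tablespace name, datafile name, and user are illustrative, not from my actual setup):

```sql
-- On the logical standby: pause SQL Apply before making compensating changes
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- The database guard normally blocks changes to replicated objects;
-- lift it for this session only
ALTER SESSION DISABLE GUARD;

-- Pre-create the objects the replicated DDL was failing on
CREATE TABLESPACE reports_ts
  DATAFILE '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/reports01.dbf'
  SIZE 500M;

CREATE USER report_owner IDENTIFIED BY changeme
  DEFAULT TABLESPACE reports_ts;

ALTER SESSION ENABLE GUARD;

-- Resume apply; IMMEDIATE enables real-time apply when standby
-- redo logs are configured
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
```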
However, I still need some insight into exactly how the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters work. I have LOG_FILE_NAME_CONVERT set up (correctly, I think), but I get a warning message in DG that says the configuration is inconsistent with the actual setup.
Here's the way things are setup:
I have 3 redo logs:
primary (non-ofa):
/opt/oracle10/product/oradata/ICCORE10G2/redo01.log
... redo02.log
... redo03.log
secondary (ofa):
/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/redo01.log
... redo02.log
... redo03.log
LOG_FILE_NAME_CONVERT=('/opt/oracle10/product/oradata/ICCORE10G2/', '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/')
Is the above parameter set correctly?
DB_FILE_NAME_CONVERT is unset as of now, but the directory structure above is the same for datafiles. I assume the parameter needs to be set just like LOG_FILE_NAME_CONVERT above.
Thanks
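For reference, both convert parameters take pairs of 'primary pattern', 'standby pattern' and are set on the standby side; a sketch using the paths from the post (neither parameter is dynamically modifiable, so SCOPE=SPFILE plus a restart):

```sql
-- On the standby; both parameters are static, so set them in the
-- spfile and bounce the instance
ALTER SYSTEM SET DB_FILE_NAME_CONVERT =
  '/opt/oracle10/product/oradata/ICCORE10G2/',
  '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/'
  SCOPE=SPFILE;

ALTER SYSTEM SET LOG_FILE_NAME_CONVERT =
  '/opt/oracle10/product/oradata/ICCORE10G2/',
  '/opt/oracle10/product/10.2.0.1.0/oradata/ICCDG2/'
  SCOPE=SPFILE;
```

One caveat worth checking in the docs for your release: on a logical standby, SQL Apply does not honor DB_FILE_NAME_CONVERT when replaying DDL such as CREATE TABLESPACE, and a DBMS_LOGSTDBY skip handler is the documented workaround there; the convert parameters mainly matter during standby creation and for physical standbys.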

Similar Messages

  • Upgrading 9i Data Guard configuration to 10gR2

    I have a question on upgrading to 10gR2 with respect to Data Guard. We currently have physical standby databases for our production databases on 9i (9.2.0.7). We are about to upgrade to 10g within the next quarter. At this time we are not using RMAN for backups. I recently attended a 10g Grid Control class, and I believe the instructor mentioned that to use Data Guard in 10g/Grid Control we must use RMAN. Is that correct? Is RMAN required to use Data Guard in 10gR2?

    I've never done an upgrade like that, so I can't say for certain, but RMAN may well be required.
    SS

  • 10g Data Guard Install Questions... Solaris 9

    Firstly, I've done several Failsafe installs on Wintel platforms, but I'm having a tough time getting started installing Data Guard on Solaris. According to the manual:
    Oracle® Data Guard Broker
    10g Release 1 (10.1)
    Part Number B10822-01
    "The Oracle Data Guard graphical user interface (GUI), which you can use to manage broker configurations, is installed with the Oracle Enterprise Manager software."
    I don't see any link or other access to Data Guard in 10g Enterprise Manager. Is there something I missed during the install that will allow me to access the Data Guard GUI?
    I'm stuck
    http://download-east.oracle.com/docs/cd/B14117_01/server.101/b10822/concepts.htm#sthref14

    rajeysh wrote:
    refer the link:- hope this will help you.
    http://blogs.oracle.com/AlejandroVargas/gems/DataGuardBrokerandobserverst.pdf
    http://oracleinstance.blogspot.com/2010/01/configuration-of-10g-data-guard-broker.html
    http://gjilevski.wordpress.com/2010/03/06/configuring-10g-data-guard-broker-and-observer-for-failover-and-switchover/
    Good luck.
    SQL> show parameter broker
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    dg_broker_config_file1               string      /u03/KMC/db/tech_st/10.2.0/dbs
                                                     /dr1KMC_PROD.dat
    dg_broker_config_file2               string      /u03/KMC/db/tech_st/10.2.0/dbs
                                                     /dr2KMC_PROD.dat
    dg_broker_start                      boolean     FALSE
    SQL>
    So I need only:
    ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;
    to be able to act in DGMGRL. Please confirm ......
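    Yes, with the broker configuration files already in place, starting the broker is all that remains before DGMGRL will talk to the database. A minimal check might look like this (the connect string is illustrative):

    ```
    SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;

    $ dgmgrl sys/password@KMC_PROD
    DGMGRL> show configuration
    ```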

  • Best Practice for monitoring database targets configured for Data Guard

    We are in the process of migrating our DB targets to 12c Cloud Control. 
    In our current 10g environment the Primary Targets are monitored and administered by OEM GC A, and the Standby Targets are monitored by OEM GC B.  Originally, I believe this was because of proximity and network speed, and over time it evolved to a Primary/Standby separation.  One of the greatest challenges in this configuration is keeping OEM jobs in sync on both sides (in case of switchover/failover).
    For our new OEM CC environment we are setting up CC A and CC B. However, I would like to determine whether it would be smarter to monitor all DB targets (primary and standby) from the same CC console; in other words, monitor and administer the primary and standby DBs from the same OEM CC console. I am trying to determine the best practice. I am not sure whether administering a switchover from Cloud Control from primary to standby requires that both targets be monitored in the same environment.
    I am interested in feedback.   I am also interested in finding good reference materials (I have been looking at Oracle documentation and other documents online).   Thanks for your input and thoughts.  I am deliberately trying to keep this as concise as possible.

    OMS is a tool; it is not *required* for monitoring your primary and standby, which is what I meant by the comment.
    The reason you want the same OMS to monitor both the primary and the standby is that the Data Guard administration screen will then show both targets. You will also have the option of performing switchovers and failovers, as well as converting the primary or standby. One of the options is also to move all the jobs scheduled against the primary over to the standby during a switchover or failover.
    There is no document stating that you must have all targets on one OMS, but it is the best method, given the point of having OMS in the first place: a central repository for all targets. If you start having different OMS servers and repositories, you will need to log into each OMS separately to administer its targets.

  • Data guard installation notes required.

    Can anybody help me with Data Guard notes for Oracle 11g?

    Dear user3482841,
    Welcome to the OTN forums.
    Please refer to http://tahiti.oracle.com and search for the Data Guard administration topic. You can also search Google and find some important white papers about Data Guard in 11g.
    Regards.
    Ogan

  • Question related to Physical Data Guard (Oracle 10gR2)

    Hi,
    I have a question regarding Physical Data Guard in a RAC environment (Oracle 10g Release 2).
    Say we have a 4-node RAC in production, and DG is also configured for RAC, but the number of nodes differs between production and DG. Which node on the DG side will apply the logs from production:
    1) if there is a 2-node RAC setup in DG?
    2) if there is a 4-node RAC setup in DG?
    3) if there is a 5/6/7/... node RAC setup in DG?
    Probably, this is a very simple and basic question but your expertise would be of great help.
    Regards

    Hi - Only one instance performs the recovery, but more than one standby node can be an archive log destination for the primary instances.
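    To make that concrete: managed recovery is started on exactly one standby instance (whichever you choose), while the other standby instances stay mounted. A sketch for a physical standby:

    ```sql
    -- Run on the one standby instance that will perform recovery;
    -- if that node fails, recovery can be restarted on another node
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    ```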

  • Supplemental logging with Oracle 10gR2 Streams and Data Guard

    Hello,
    I have an environment with Oracle DB 10gR2 and a physical standby in a Data Guard DR configuration. This environment is now going to be extended to a replication setup using two-way Oracle Streams replication (for replication from this branch office to the central office; other branches will be added soon). The primary DB will be replicated to the other primary DB (in the remote central office).
    So, here is my question: is it strictly necessary to specify supplemental logging on the source (primary) databases to set up two-way Streams replication? And, if it is, can I enable supplemental logging on the primaries without affecting their physical standbys, or do I need to do something special?
    Thanks in advance.

    Sorry, this was posted twice because of a browser connection problem.
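    For anyone finding this thread with the same question: minimal database-wide supplemental logging is enabled on a source (primary) with the statement below. It only adds extra information to the redo stream, which a physical standby simply applies, so it should not normally require anything special on the standby side; verify against the Streams documentation for your version.

    ```sql
    -- On each source (primary) database
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    -- Confirm it is enabled
    SELECT supplemental_log_data_min FROM v$database;
    ```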

  • Administrating Data Guard without complexity of Grid Control.  Possible?

    I wonder if someone can shed some wisdom about implementing and administrating Data Guard without the complexity of Grid Control. Don't get me wrong, I love the Data Guard feature provided by Grid Control, but installing Grid Control just for the sake of administrating Data Guard sounds a bit overkilling. Not to mention that I still have hard time getting Grid Control properly installed on a Windows Server 2003 box (keeps getting 503 Service Unavailable and the Servlet error).
    I was told by a friend that Oracle 9 has something called EMCA (Control Assistant) that allows you to administrate Data Guide. Searching for any file containing the phrase "emca" under the Oracle directory ("c:\Oracle\product\10.2.0\db_1\BIN"), I found emca.bat and some related files. Does it mean the EMCA is actually existing in Oracle 10.2G (for Microsoft Windows Server)?
    Any comment? Feeling clueless right now. :-I ....
    Deecay

    I have set up Data Guard 9iR2 on Linux SLES8 and use the Data Guard Broker to manage switchover and failover operations. It comes with the database and is command-line based.
    The documentation walks you through the setup phases quite nicely.
    http://www.oracle.com/pls/db92/db92.to_toc?pathname=server.920%2Fa96629%2Ftoc.htm&remark=docindex
    I would suggest reading some of the documentation on Metalink surrounding Data Guard and the broker before attempting to use either ;)
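    A command-line broker session looks roughly like this (database names and connect identifiers are made up for illustration):

    ```
    $ dgmgrl sys/password@prim
    DGMGRL> CREATE CONFIGURATION 'dgconf' AS
    >   PRIMARY DATABASE IS 'prim' CONNECT IDENTIFIER IS prim;
    DGMGRL> ADD DATABASE 'stby' AS
    >   CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
    DGMGRL> ENABLE CONFIGURATION;
    DGMGRL> SWITCHOVER TO 'stby';
    ```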

  • DATA Guard Logical Standby v.s. Streams Apply (10gR2, rac -- rac )

    Greetings -
    We currently have a 3-node cluster doing a 'schema' capture, with a 2-node cluster serving as the apply side. Both are on 10gR2 (Solaris 10).
    We have been experiencing apply latency due to large transactions. The way Logminer/Streams evaluates the archive logs, it converts each updated row into a statement within a transaction set, using 'DECODE(' statements.
    I am under the impression a physical standby will do the same thing. But what about a logical standby?
    (this from the Oracle Documentation)
    [10g Concepts|http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm#SBYDB00010]
    ...Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.
    Does anyone know firsthand whether SQL Apply re-issues the original SQL, or whether it relies on the 'DECODE' approach as well?
    Thanks
    The Nets Edge

    A physical standby does not do anything with SQL; it is running media recovery under the covers. A logical standby uses Logminer to read the redo, converts it to SQL and data, and applies those transactions to the logical standby.
    Both build logical change records, figure out ordering, dependencies, etc., and apply the transactions. Both are susceptible to long-running transactions on the primary (source) database.
    Streams uses Logminer, and Logical Standby uses part of the Streams apply engine, so as Mr. Morgan said, they are very similar :^)
    If you are having apply performance issues, you may want to look at Active Data Guard.
    Larry
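    Either way, apply latency on a logical standby can be watched from the standby itself; a couple of illustrative queries against the documented views:

    ```sql
    -- How far SQL Apply has progressed relative to the redo it has received
    SELECT applied_scn, latest_scn, mining_scn
      FROM v$logstdby_progress;

    -- Recent apply events, including errors on long or skipped transactions
    SELECT event_time, status, event
      FROM dba_logstdby_events
     ORDER BY event_time DESC;
    ```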

  • A Data Guard Question

    Dear experts,
    This time I have a question regarding Data Guard.
    Database:      Oracle 10g Release 2 (10.2.0.3)
    OS:          IBM - AIX 5.3 - ML-5
    Data Guard:     Physical Standby
    We have multiple Data Guard configurations in place, all of them in "MAXIMUM PERFORMANCE" mode.
    Currently, we have a separate mount point for archive logs (say /dbarch) on both primary and standby servers.
    Once log is archived on primary, it is shipped to standby server and applied.
    I think we are wasting space by allocating /dbarch on the standby server; instead, we could share the primary's /dbarch with the standby using NFS.
    I remember reading a document about this. I tried searching the Oracle documentation, Google, and Metalink for it, but failed :((
    Any help in this regard will be very helpful.
    Thanks in advance.
    Regards

    From a DR perspective, this sounds like a recipe for losing data.
    If your primary site has a disaster, and there are logs that have not been applied to the standby then you will never be able to apply them as they will have been lost in the crash.
    The point of having the standby is to eliminate a single point of failure - and this mechanism is reintroducing it!
    jason.
    http://jarneil.wordpress.com

  • Data Guard Logical Standby DB Questions

    Hi,
    I am creating an Oracle 9i logical standby database on AIX 5 on the same server (for testing only).
    In the Data Guard manual (PDF page 86) it says:
    Step 4: Create a backup copy of the control file for the standby database. On the primary database, create a backup copy of the control file for the standby database:
    SQL> ALTER DATABASE BACKUP CONTROLFILE TO
      2  '/disk1/oracle/oradata/payroll/standby/payroll3.ctl';
    My Questions regarding this are
    1. Is the "BACKUP STANDBY CONTROLFILE" keyword missing here? If not, what is the use of this control file? Because, per section 4.2.2 steps 1 and 2 of this manual, I have stopped the primary database and copied the datafiles, logfiles, and controlfiles as specified, all in a consistent state.
    2. On the primary database I mirror 2 controlfiles at two different locations, and I am going to keep the same policy on the standby. On the standby, do I have to copy this backup controlfile to the two locations and rename the datafiles, or do I have to use the controlfiles that I backed up along with the datafiles?
    3. Suppose my primary database is "infod" and the standby database is "standby". What should the value of the LOG_ARCHIVE_FORMAT parameter be on the standby database? On the primary I have defined LOG_ARCHIVE_FORMAT=infod_log%s_%t.arc. Do I have to keep it the same or change it? If I change the value on the standby, do I need to set other parameters?
    regards & thanks
    pjp

    Q/A 1) That's correct, you don't need the STANDBY keyword. I'm not sure of the reason why, but we have created a logical standby running in production using the Oracle doc.
    Q/A 2) You can specify any location and name for your controlfiles, as this is instance-specific and does not depend on the primary database.
    Q/A 3) You can set any format for your archive logs on the standby side. It doesn't affect the primary DB or the DG configuration.
    Regards,
    http://askyogesh.com
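    To illustrate Q/A 3: the standby's local archive naming is set independently of the primary's. The value below is a made-up example (in 9i, %s and %t suffice; later releases also require %r in the format):

    ```sql
    -- On the standby; LOG_ARCHIVE_FORMAT is a static parameter
    ALTER SYSTEM SET LOG_ARCHIVE_FORMAT = 'standby_log%s_%t.arc' SCOPE=SPFILE;
    ```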

  • Data Guard questions

    Hello all,
    I am a newbie to data guard.
    1) Unless the real-time apply feature is used, standby redo logs must be archived before the data can be applied to the standby database. Am I correct in my understanding?
    2) Can we keep the standby database on a higher version (11g) and the primary database on 10g?
    3) Do I need to leave the LOG_ARCHIVE_DEST_2 parameter blank in the standby database's init parameters?
    4) Can we face any problems if we have different LOG_ARCHIVE_FORMAT settings on the primary and standby sites?
    Waiting for your valuable reply.
    With regards

    994269 wrote:
    Hello all,
    I am a newbie to data guard.
    Welcome to Data Guard!
    1) Unless the real-time apply feature is used, standby redo logs must be archived before the data can be applied to the standby database. Am I correct in my understanding?
    Sorry, I don't quite understand the question.
    2) Can we keep the standby database on a higher version (11g) and the primary database on 10g?
    No. Both versions should be the same.
    3) Do I need to leave the LOG_ARCHIVE_DEST_2 parameter blank in the standby database's init parameters?
    Yes, unless you need to send archived logs from the standby database to another database. If you have a cascaded setup, you will need to set the parameter as required.
    4) Can we face any problems if we have different LOG_ARCHIVE_FORMAT settings on the primary and standby sites?
    Yes. You should keep the default log format, or at least use the same format on both sites.
    Waiting for your valuable reply.
    With regards
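    On question 1: the distinction being asked about is the real-time apply option. Without it, redo sitting in a standby redo log is applied only once that log has been archived on the standby. On a 10g physical standby, real-time apply is enabled like this (it requires standby redo logs to be configured):

    ```sql
    -- Without USING CURRENT LOGFILE, apply waits for each standby
    -- redo log to be archived before applying its contents
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
      USING CURRENT LOGFILE DISCONNECT FROM SESSION;
    ```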

  • Data Guard, Grid Control 11, GI 11gR2, and DB 10gR2

    Hi,
    I'm seeing a rather odd behavior, and I'm hoping someone else has already been through this and knows what I'm doing wrong.
    I have an installation of Grid Control 11.1 (recently patched) which I am using to create Data Guard replication of some 10gR2 database instances. For some confusing reason, sometimes when I use the GC wizard to add the standby database, it creates the listener for the new database in the EM Agent's OH/network/admin/listener.ora. Other times, it creates it in the 10gR2 DB's OH/network/admin/listener.ora. When it does the former, creation of the standby DB seems to work fine, but when it does the latter, not so much.
    If it matters, my primary DB is 10.2.0.4 on 10.2 grid infrastructure, and my standbys are 10.2.0.4 on GI 11.2.0.2.
    I have applied the latest PSU to my agents, so they are all the same version, so I'm confused as to why this might be happening. Has anyone else seen this issue?
    Thanks,
    Bill

    My recommendation would be to not use OEM Grid Control to create your standby.
    Even when it is working properly, it does not create what I would consider a good implementation, just as I don't consider the vanilla OUI installation of the Oracle database (all control files and redo logs in the same directory as the datafiles) very good.
    You'd be far better off hand crafting your standby using two listeners ... one for public and one for replication.
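    A hypothetical listener.ora fragment for the two-listener layout suggested above (host names and ports are invented for illustration):

    ```
    # One listener for ordinary client connections
    LISTENER_PUBLIC =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1521)))

    # A second listener dedicated to redo transport / Data Guard traffic
    LISTENER_DG =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbyhost)(PORT = 1526)))
    ```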

  • Oracle 10gR2 Data guard physical or logical standby server?

    Hi
    We are planning to implement an Oracle 10gR2 Data Guard standby server for DR purposes. I found out that there are two types of standby server: logical and physical. I want to know which one is preferable in terms of complexity of setup and maintenance?
    regards

    Well, it depends on what you mean by maintenance. I found the physical standby to be very little trouble at all; however, the logical standby has restrictions on it that the physical standby does not. In essence, the physical standby merely digests archive logs, whereas the logical standby uses Logminer-like functionality to process SQL statements, much like Oracle Streams.
    Hope that helps,
    -JR jr.

  • Should e-book Data Guard Concepts and Administration be useless for us?

    I think the Data Guard Broker is an easy and efficient way to manage a Data Guard configuration. The Data Guard Broker provides us with two efficient tools: the EM console and DGMGRL. We can do everything we need for a Data Guard system with them.
    Why should we spend so much time reading the Oracle e-book Data Guard Concepts and Administration? I think it is useless. Is the Broker alone enough?

    Frank,
    The thing is, a GUI tool is itself a program that needs someone to manage it. The developers of such tools cannot take care of the unlimited possibilities in the real world.
    Just check how many threads in this forum are about why emca, dbca, and the EM console are not working.
    Being DBAs, we should understand the underlying technology that drives the system. It's like driving a car. A regular customer could say, "I don't care about the mechanics of the car; as long as it's running, that's enough." But we are not regular customers; we are supposed to be the auto repair mechanics of the Oracle world. We can enjoy driving the car, but at the same time we should know why and how it works.
