Data Guard in a replicated environment

Folks,
Has anyone implemented Data Guard (standby database) in a replicated environment,
or
worked in an environment where replication (updateable snapshots) was already in place alongside Data Guard?
Are there any complications I need to be aware of while setting up Data Guard with replication enabled?
Thanks
Amit

That is entirely due to the checkpoint delay. Depending on variations in hardware,
I/O configuration and workload, it can take clients longer to flush their caches than
the master. You can adjust the delay, which is 30 seconds by default, by calling
the DB_ENV->rep_set_timeout API with the DB_REP_CHECKPOINT_DELAY flag.
If you set it to 0, there will be no delay.
Sue LoVerso
Oracle
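
For illustration, a minimal C++ sketch of that call (the DbEnv equivalent of the C API named above; it assumes env is an already-created, replication-enabled environment, which is not shown in the post):

#include <db_cxx.h>

// Adjust the client checkpoint delay. Timeout values passed to
// rep_set_timeout are in microseconds; 0 disables the delay entirely
// (the default corresponds to the 30 seconds mentioned above).
void set_checkpoint_delay(DbEnv &env, db_timeout_t delay_usecs)
{
    env.rep_set_timeout(DB_REP_CHECKPOINT_DELAY, delay_usecs);
}

// e.g. set_checkpoint_delay(env, 0);  // no checkpoint delay on clients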

Similar Messages

  • Problem accessing a JSP in a clustered session-replicated environment

              Hi all!
              When I tried to access Index.jsp in a clustered, in-memory session-replicated environment,
              it threw the following exception.
              What might be the reason?
              <Aug 1, 2001 12:11:19 PM GMT+05:30> <Error> <HTTP> <[WebAppServletContext(6493555,DefaultWebApp_ClusterServerA)]
              Servlet failed with Exception java.lang.ClassCastException: weblogic.servlet.internal.session.MemorySessionContext
              at weblogic.rmi.internal.AbstractOutboundRequest.sendReceive(AbstractOutboundRequest.java:90)
              at weblogic.cluster.replication.ReplicationManager_WLStub.create(ReplicationManager_WLStub.java:192)
              at weblogic.cluster.replication.ReplicationManager.trySecondary(ReplicationManager.java:587)
              at weblogic.cluster.replication.ReplicationManager.createSecondary(ReplicationManager.java:565)
              at weblogic.cluster.replication.ReplicationManager.register(ReplicationManager.java:344)
              at weblogic.servlet.internal.session.ReplicatedSessionData.<init>(ReplicatedSessionData.java:128)
              at weblogic.servlet.internal.session.ReplicatedSessionContext.getNewSession(ReplicatedSessionContext.java:123)
              at weblogic.servlet.internal.session.SessionContext.getNewSessionInstance(SessionContext.java:121)
              at weblogic.servlet.internal.ServletRequestImpl.getNewSession(ServletRequestImpl.java:1552)
              at weblogic.servlet.internal.ServletRequestImpl.getSession(ServletRequestImpl.java:1415)
              at jsp_servlet._index._jspService(_index.java:80) at weblogic.servlet.jsp.JspBase.service(JspBase.java:27)
              at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:213)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:1265)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:1631)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137) at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              >
              Can anybody suggest what went wrong?
              TIA Rgds Manohar
              


  • Multi-master replicated environment

    hi,
    what does "multi-master replicated environment" mean? How could I benefit from it?
    thanks in advance.

    Hello,
    This is an environment in which inserts, updates and deletes on objects
    at any node included in the environment are replicated to the remaining
    nodes that are defined as part of the replicated environment.
    Since changes at any node are replicated to all other nodes, they all act
    as masters, which gives it the name master-to-master replication (also known as multi-master or advanced replication).
    Tahir.

  • Problem using dbxml shell with a replicated environment

    I'm having problems opening a container from the dbxml shell when in a replicated environment. When I issue the openContainer command I get the following error.
    dbxml> openContainer LocalConfigView.dbxml
    Non-replication DB_ENV handle attempting to modify a replicated environment
    stdin:1: openContainer failed, Error: Invalid argument
    I've read the documentation and the command-line arguments for dbxml but I didn't see anything related to using the shell in a replicated database environment. Everything works fine in a non-replicated environment. I'm using dbxml-2.4.16 on a Windows Vista platform.
    Thanks for any information you can provide.
    Tom Perry

    You cannot use the dbxml shell directly because it does not open its environment with the DB_INIT_REP flag.
    You need to write a program that opens the environment with DB_INIT_REP, or modify the dbxml shell.
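
    For illustration only, a minimal C++ sketch of the kind of environment open such a program needs (the home path and the surrounding replication setup, such as the transport callback and rep_start or the replication manager, are assumptions and not part of the original answer):

    #include <db_cxx.h>

    // Open an existing replicated environment with the DB_INIT_REP flag,
    // which the stock dbxml shell does not pass. Without it the open is
    // rejected as a "Non-replication DB_ENV handle".
    int open_replicated_env(DbEnv &env, const char *home)
    {
        u_int32_t flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                          DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_REP | DB_THREAD;
        return env.open(home, flags, 0);
    }

    // The resulting DbEnv can then be handed to an XmlManager in the usual way.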

  • Use of Data Guard in a testing environment

    Hi all,
    whenever an incident occurs on our production system, we import a dump from our production database into our test database to reproduce the incident and find a solution for the issue.
    Taking a dump and importing it into a test database takes several hours.
    Now I'm thinking about a solution using Data Guard to speed up this scenario. It is quite important for us to resolve production incidents as soon as possible (banking environment).
    So I think about the following:
    DB PROD => master
    DB TEST1 => standby
    When an incident occurs I want to
    - remove DB TEST1 from Data Guard and use this DB as an independent DB where we can reproduce the incident
    - add a new standby to Data Guard to have a test DB for the next incidents
    Does anyone use Data Guard in this way, or can anyone tell me whether this would be a feasible approach?
    Many thanks in advance for your support.
    Andreas

    It's a thought, but it's not the best use of Data Guard (in my humble opinion). When I think Data Guard, I think:
    Recovery Plan
    Read-only services
    Minimize downtime
    You are looking for a quick way to refresh your test environment, and I believe RMAN Duplicate might be a better answer. You can have everything set up in advance, just like Data Guard.
    If you do decide to use Data Guard for this, sooner or later you will probably trash your standby setup and have to use RMAN Duplicate to fix it.
    However, if your database is really huge, it might be a good idea to have the standby the way you describe, with a short apply delay on the standby end.
    Best Regards
    mseberg

  • DB_REP_JOIN_FAILURE when adding a new replicated environment

    Hello,
    I'm developing a replicated system using the Base Replication API. At the moment this works quite well, except for the case where I add a new replicated site (Site N).
    On Site N I start with rep_start(NULL, DB_REP_CLIENT). After some messages have been transferred between the master and Site N, rep_process_message's return value is DB_REP_JOIN_FAILURE. Right before that I get "Client was never part of master's environment" on stderr. If I shut down Site N and restart it while the master is still alive, synchronization works without problems.
    Here are the relevant db debugging messages:
    ----CLIENT----
    CLIENT: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient, LSN [0][0] nogroup nobuf
    CLIENT: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient, LSN [0][0] nogroup nobuf
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type newsite, LSN [0][0]
    CLIENT: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type master_req, LSN [0][0] nogroup nobuf
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type newmaster, LSN [1][328836]
    CLIENT: Election done; egen 1
    CLIENT: Updating gen from 0 to 1 from master 500238513
    CLIENT: Egen: 2. RepVersion 4
    CLIENT: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid 500238513, type verify_req, LSN [1][361051] any nobuf
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type newsite, LSN [0][0]
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type newmaster, LSN [1][328836]
    CLIENT: Election done; egen 2
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type newmaster, LSN [1][328836]
    CLIENT: Election done; egen 2
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type verify, LSN [1][361051]
    CLIENT: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid 500238513, type verify_req, LSN [1][179440] any nobuf
    CLIENT: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 500238513, type verify, LSN [1][179440]
    Client was never part of master's environment
    debug: rep_process_message() threw an exception: DbEnv::rep_process_message: DB_REP_JOIN_FAILURE: Unable to join replication group
    ------MASTER------
    MASTER: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 0 eid 17802195, type newclient, LSN [0][0] nogroup
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid -1, type newsite, LSN [0][0] nobuf
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid -1, type newmaster, LSN [1][328836] nobuf
    MASTER: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 0 eid 17802195, type newclient, LSN [0][0] nogroup
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid -1, type newsite, LSN [0][0] nobuf
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid -1, type newmaster, LSN [1][328836] nobuf
    MASTER: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 0 eid 17802195, type master_req, LSN [0][0] nogroup
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid -1, type newmaster, LSN [1][328836] nobuf
    MASTER: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 17802195, type verify_req, LSN [1][361051]
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid 17802195, type verify, LSN [1][361051] nobuf
    MASTER: ../../../runtime/knowledge rep_process_message: msgv = 4 logv 13 gen = 1 eid 17802195, type verify_req, LSN [1][179440]
    MASTER: ../../../runtime/knowledge rep_send_message: msgv = 4 logv 13 gen = 1 eid 17802195, type verify, LSN [1][179440] nobuf
    Any help is greatly appreciated.
    Regards Jan

    Hello Jan,
    I have two questions for you to start.
    1. What version of BDB are you using? Since 4.6.21, we have changed the code so
    that the client will discard its data and restart. If you try a more recent version
    of BDB you may find it behaves the way you expect. I'm assuming you expect
    it to discard whatever is on the client.
    2. The verbose log you posted indicates that the client has substantial amounts
    of data and log that are completely disjoint from the replication group
    to which you are adding this client. Where does that log come from? If
    this is a brand-new client, why aren't its environment and log clean and empty?
    Sue LoVerso
    Oracle
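
    A minimal C++ sketch of how a brand-new client is usually brought up with a clean environment (illustrative only, not from the thread; it assumes an empty home directory and that a transport callback has already been registered on the environment):

    #include <db_cxx.h>

    // Start a fresh base-API replication client. The environment home should
    // be empty so the new client carries no pre-existing log that is disjoint
    // from the master's group (the condition behind DB_REP_JOIN_FAILURE above).
    void start_new_client(DbEnv &env, const char *empty_home)
    {
        u_int32_t flags = DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                          DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_REP | DB_THREAD;
        env.open(empty_home, flags, 0);
        // A transport callback (DbEnv::rep_set_transport) must be in place
        // before messages start flowing.
        env.rep_start(NULL, DB_REP_CLIENT);
    }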

  • Replica environment corrupted - LOG_INCOMPLETE error reported.

    This is the error reported in the je.info.0
    2015-01-26 13:12:58.682 UTC SEVERE [flatterie28-partition-203-1421930195047] flatterie28-partition-203-1421930195047(11):/mnt/data/flatterie/bdb/partition-203:DETACHED flatterie28-partition-203-1421930195047(11) exited unexpectedly with exception com.sleepycat.je.EnvironmentFailureException: (JE 5.0.97) Environment must be closed, caused by: com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 5.0.97) flatterie28-partition-203-1421930195047(11):/mnt/data/flatterie/bdb/partition-203 Replicated operation could  not be applied. Status= OperationStatus.NOTFOUND DEL_LN_TX/8 vlsn=2,479,982,654 isReplicated="1"  txn=-575360887 LOG_INCOMPLETE: Transaction logging is incomplete, replica is invalid. Environment is invalid and must be closed. Problem seen replaying entry DEL_LN_TX/8 vlsn=2,479,982,654 isReplicated="1"  txn=-575360887com.sleepycat.je.EnvironmentFailureException: (JE 5.0.97) Environment must be closed, caused by: com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 5.0.97) flatterie28-partition-203-1421930195047(11):/mnt/data/flatterie/bdb/partition-203 Replicated operation could  not be applied. Status= OperationStatus.NOTFOUND DEL_LN_TX/8 vlsn=2,479,982,654 isReplicated="1"  txn=-575360887 LOG_INCOMPLETE: Transaction logging is incomplete, replica is invalid. Environment is invalid and must be closed. Problem seen replaying entry DEL_LN_TX/8 vlsn=2,479,982,654 isReplicated="1"  txn=-575360887
      at com.sleepycat.je.EnvironmentFailureException.wrapSelf(EnvironmentFailureException.java:210)
      at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:1594)
      at com.sleepycat.je.dbi.CursorImpl.checkEnv(CursorImpl.java:2853)
      at com.sleepycat.je.Cursor.checkEnv(Cursor.java:4181)
      at com.sleepycat.je.Cursor.close(Cursor.java:517)
      at com.sleepycat.je.rep.impl.node.Replay.applyLN(Replay.java:890)
      at com.sleepycat.je.rep.impl.node.Replay.replayEntry(Replay.java:557)
      at com.sleepycat.je.rep.impl.node.Replica$ReplayThread.run(Replica.java:945)
    Caused by: com.sleepycat.je.EnvironmentFailureException: Environment invalid because of previous exception: (JE 5.0.97) flatterie28-partition-203-1421930195047(11):/mnt/data/flatterie/bdb/partition-203 Replicated operation could  not be applied. Status= OperationStatus.NOTFOUND DEL_LN_TX/8 vlsn=2,479,982,654 isReplicated="1"  txn=-575360887 LOG_INCOMPLETE: Transaction logging is incomplete, replica is invalid. Environment is invalid and must be closed.
      at com.sleepycat.je.rep.impl.node.Replay.applyLN(Replay.java:883)
      ... 2 more

    For the record, this user contacted us separately and we were unable to determine the cause of the problem. If anyone else runs into it, please let us know.
    --mark

  • Do I need DFSR in a single server environment?

    I have a 2012 Host running a single 2012 Guest. The Guest is running as a DC with AD, DNS, DHCP, and File Services. DFSR is running, and it gives a warning every time my backup runs (the backup runs on the Host). The warning is "The DFS Replication
    service stopped replication on volume F:..." followed by a long message about the database, yada yada yada.
    Do I need to run DFSR?  Again, single server, no file replication to different offices.  I'm not finding a clear answer to that question.
    Second, Server Manager should, according to TechNet, have under the Tools option the ability to turn off DFSR.  I cannot find that option.  So, IF I can turn it off, can I simply disable the DFS Namespace and DFS Replication services?  
    I would prefer eliminating rather than ignoring warnings.
    Thanks

    Sorry, one more time. I have a single-server environment; there is NO upstream domain controller and no replication between DCs. There is ONE DC. So this is digressing into two questions. One, why do I need to run DFSR (again, lots
    of articles talk about how to turn it off, but not as a discussion of turning it off temporarily: https://msdn.microsoft.com/en-us/library/cc753144.aspx) in a single-server, single-domain, non-replicating environment?
    Second, how do I address the warning I receive during my backup? It appears to be caused by a replication error to downstream servers; since there is no downstream server, I should be able to resolve it by turning DFSR off. I would like some
    documentation discussing the issue of turning it off in a non-DFS environment.
    The DFS Replication service stopped replication on volume F:. This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.
    Additional Information:
    Volume: F:
    GUID: 65E46942-B9D6-11E3-9400-00155D325402
    Recovery Steps
    1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.
    2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class. For example, from an elevated command prompt, type the following command:
    wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="65E46942-B9D6-11E3-9400-00155D325402" call ResumeReplication
    For more information, see http://support.microsoft.com/kb/2663685.
    Jeff Ferris

  • How to change the page size of a replicated container?

    Hi, I would like to change the page size of dbxml 2.3.10.10 containers in a replicated environment.
    This is what I tried so far:
    1) dbxml_dump/dbxml_load (both utilities are not usable in a replicated environment)
    2) db_dump/db_load. db_dump works fine. db_load seems to work fine, since it successfully creates a database/container from a dump file. But it does not correctly initialise the internal dbxml structures, since the database/container cannot be opened, although getContainers works...
    Besides writing my own dump/load clients, is there any other option to change the page size of an existing container?
    Regards,
    Jacobus Geluk

    Hi George,
    dbxml_dump seems to work fine, but ends with this message:
    DATA=END
    dbxml_dump: Non-replication DB_ENV handle attempting to modify a replicated environment
    dbxml_dump: dump <container name>.dbxml: Error: Unexpected error opening Configuration DB
    dbxml_load has more or less the same error...
    I am currently writing a little Java program that simply dumps all XML documents of all containers into a directory structure...
    That would work as long as the file system can handle it...
    I suppose dbxml_dump/load could be made to work quite easily by just copying their sources and specifying the replication settings...
    All utilities that cause log activity and currently do not support replicated environments should support replicated environments that simply use the default replication transport mechanism. With some additional command-line options like --sites it would work...
    Regards,
    Jacobus

  • Existing replica MDF file is larger than with a new replica install

    Please, I need help here. I really appreciate your help.
    Here are my scenarios:
    We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines and they replicate to the master database. It has 3 subscriptions subscribed to
    the publications on the master db.
    1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool) the mdf file grew to 33 GB and the ldf to 41 GB.
    I went to SQL Server Management Studio, right-clicked and checked the properties of the local database. The overall size is around 84 GB with little free space available.
    2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool) the mdf file grew to 49 GB and the ldf to 41 GB.
    I went to SQL Server Management Studio, right-clicked and checked the properties of the local database. The overall size is around 90 GB with 16 GB free space available.
    3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it and did the initial synchronization using the replmerge tool.
    The mdf file grew to 49 GB and the ldf to 41 GB. I went to SQL Server Management Studio, right-clicked and checked the properties of the local database. The overall size is around 90 GB with 16 GB free space available.
    Why is it allocating the space differently? This is affecting our initial replica setup times. Any input will be greatly appreciated.
    Thanks,
    Asha.

    https://technet.microsoft.com/en-us/library/ms151791(v=sql.110).aspx

  • RAC Data Guard switchover taking more time than expected

    I have a Data Guard setup in a RAC environment; Data Guard is configured and working fine.
    Our goal is to do the switchover using DGMGRL within 5 minutes. We have followed the proper setup and MAA tuning documents and everything is working fine, but the switchover timing is 8 to 10 minutes, which varies depending on some parameters, so we are not meeting our goal of less than 5 minutes.
    The only observation we have made is as follows.
    After the "switchover to <db_name>" command in DGMGRL:
    1) it will shutdown abort the 2nd instance
    2) transfer all the archive logs (using LGWR in ASYNC mode) of instance 1
    3) now it looks for the archive logs of the 2nd instance; this step takes up to 4 minutes
    (we do not know why it takes that much time or how to tune it)
    4) now it converts the primary to a standby
    5) now it starts the old standby as the new primary
    All steps are tuned except step 3, which is where most of our time is going. Any idea or explanation
    why it takes such a long time to find the exact archive logs of the 2nd (aborted) instance to transfer to the standby site?
    Can anyone give an explanation or a solution to tune this?
    Regards
    Bhushan

    Hi Robert,
    I am on 10.2.0.4 and we have used "MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf", which is available on the Oracle site.
    Here are my configuration details:
    DGMGRL> connect sys@dv01aix
    Password:
    Connected.
    DGMGRL> show configuration;
    Configuration
    Name: dv00aix_dg
    Enabled: YES
    Protection Mode: MaxPerformance
    Fast-Start Failover: DISABLED
    Databases:
    dv00aix - Physical standby database
    dv01aix - Primary database
    Current status for "dv00aix_dg":
    SUCCESS
    DGMGRL> show database verbose dv00aix
    Database
    Name: dv00aix
    Role: PHYSICAL STANDBY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    dv00aix1 (apply instance)
    dv00aix2
    Properties:
    InitialConnectIdentifier = 'dv00aix'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '4'
    ReopenSecs = '300'
    NetTimeout = '60'
    LogShipping = 'ON'
    PreferredApplyInstance = 'dv00aix1'
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '900'
    LogArchiveMaxProcesses = '5'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = ''
    LogFileNameConvert = '+SPARE1/dv01aix/,+SPARE/dv00aix/'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    LocalListenerAddress(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "dv00aix":
    SUCCESS
    DGMGRL> show database verbose dv01aix
    Database
    Name: dv01aix
    Role: PRIMARY
    Enabled: YES
    Intended State: ONLINE
    Instance(s):
    dv01aix1
    dv01aix2
    Properties:
    InitialConnectIdentifier = 'dv01aix'
    ObserverConnectIdentifier = ''
    LogXptMode = 'ASYNC'
    Dependency = ''
    DelayMins = '0'
    Binding = 'OPTIONAL'
    MaxFailure = '0'
    MaxConnections = '4'
    ReopenSecs = '300'
    NetTimeout = '60'
    LogShipping = 'ON'
    PreferredApplyInstance = 'dv01aix1'
    ApplyInstanceTimeout = '0'
    ApplyParallel = 'AUTO'
    StandbyFileManagement = 'AUTO'
    ArchiveLagTarget = '0'
    LogArchiveMaxProcesses = '2'
    LogArchiveMinSucceedDest = '1'
    DbFileNameConvert = '+SPARE/dv00aix/, +SPARE1/dv01aix/'
    LogFileNameConvert = '+SPARE/dv00aix/,+SPARE1/dv01aix/'
    FastStartFailoverTarget = ''
    StatusReport = '(monitor)'
    InconsistentProperties = '(monitor)'
    InconsistentLogXptProps = '(monitor)'
    SendQEntries = '(monitor)'
    LogXptStatus = '(monitor)'
    RecvQEntries = '(monitor)'
    HostName(*)
    SidName(*)
    LocalListenerAddress(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    LatestLog(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value
    Current status for "dv01aix":
    SUCCESS
    DGMGRL>
    log_archive_dest_2 string service="(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv00aix_XPT)(INSTANCE_NAME=dv00aix1)(SERVER=dedicated)))", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=0 max_connections=4 reopen=300 db_unique_name="dv00aix" register net_timeout=60 valid_for=(online_logfile,primary_role)
    fal_client string (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv01aix_XPT)(INSTANCE_NAME=dv01aix1)(SERVER=dedicated)))
    fal_server string (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527))(ADDRESS=(PROTOCOL=TCP)(HOST=*****-vip0)(PORT=1527)))(CONNECT_DATA=(SERVICE_NAME=dv00aix_XPT)(SERVER=dedicated)))
    db_recovery_file_dest string +SPARE1
    db_recovery_file_dest_size big integer 100G
    recovery_parallelism integer 0
    fast_start_parallel_rollback string LOW
    parallel_adaptive_multi_user boolean TRUE
    parallel_automatic_tuning boolean FALSE
    parallel_execution_message_size integer 2152
    parallel_instance_group string
    parallel_max_servers integer 8
    parallel_min_percent integer 0
    parallel_min_servers integer 0
    parallel_server boolean TRUE
    parallel_server_instances integer 2
    parallel_threads_per_cpu integer 2
    recovery_parallelism integer 0

  • Recommended order for running the clean-up wizard in a WSUS upstream/replica hierarchy

    Hi :)
    Is there a documented recommended order anywhere for running the clean-up wizard in a WSUS upstream/replica hierarchy?
    Should I run it first on the upstream and then on the replica, or vice versa?
    I've found on this blog
    http://www.madanmohan.com/2010/10/sms-wsus-synchronization-failed.html that the clean-up should always be run from the bottom (replica) to the top (upstream).
    Also, is there any news for WSUS 4 :) ?
    Best regards

    YES, you'd better run the WSUS Server Clean-up Wizard from the bottom of the WSUS hierarchy to the top and NEVER from the top down.
    I totally disagree! The Server Cleanup Wizard should be run on the upstream server first, to ensure that
    updates that will be deleted are no longer synchronized to the downstream server. Furthermore, if you run the SCW on the downstream server first, and it deletes updates that have had status changes on the upstream server that need to be replicated,
    this may throw a synchronization error on the downstream server. The downstream server should NEVER be intentionally put in a configuration that makes it different from the upstream server, and running the SCW on a downstream server before running it on an
    upstream server would do exactly that.
    For more details on my thoughts about how to use the Server Cleanup Wizard in a replica environment, please view my webcast from Sep 2010,
    "Using the WSUS Server Cleanup Wizard - Expert Tips and Tricks".
    Lawrence Garvin, M.S., MCITP:EA, MCDBA, MCSA
    Microsoft MVP - Software Distribution (2005-2012)
    My MVP Profile: http://mvp.support.microsoft.com/profile/Lawrence.Garvin

  • Delete+attr+value operations not replicated

    Hi,
    I have a replicated environment and a really weird situation in it. Replication agreements work properly except for operations like:
    changetype: modify
    delete: profileoutputfrequency
    profileoutputfrequency: 1
    When I send to the server the same operation without the attribute value
    changetype: modify
    delete: profileoutputfrequency
    changes are replicated without any problem. If instead of using 'profileoutputfrequency' I try to delete 'description'
    changetype: modify
    delete: description
    description: 1
    it works.
    I saw in the consumer audit log that the operation is being replicated and it seems like the change is made successfully. Later, when I check the object I changed, the profileoutputfrequency attribute is still there.
    If I disable the replica and send modify operations directly against the consumer it works fine, so I can't say the consumer database is corrupted. I also tried regenerating the indexes for that attribute, re-initializing the replication agreement and the consumer, etc.
    As you can see, it's a really weird error. Any idea about what to do?
    Thanks in advance
    Jorge

    Hi again,
    I installed 5.1 SP2 and the error is still there. Even when deleting the attribute value with ldapmodify, the change is not propagated to the consumer. Actually, it is propagated (I can see it in the audit log), but it does not take effect, although the consumer access log says there is no error in the update.
    Any other idea?
    Jorge

  • LOG_INCOMPLETE: Transaction logging is incomplete, replica is invalid

    I'm running into OperationStatus.NOTFOUND DEL_LN_TX/8 errors when replaying the replica stream, resulting in a LOG_INCOMPLETE exception and a bad environment that will require removal and then a network restore. Cleaner interaction?

    This is a duplicate of: Replica environment corrupted - LOG_INCOMPLETE error reported.

  • Prevention of invalid startup in a Datamirror/Mimix etc. environment

    Hi guys, we are currently implementing our production servers in a datamirror replicated environment.
    Could any of you share the protective measures that you have put in place to prevent the startup of both sides of the replicated environment at the same time by accident? i.e. if primary is up and running, how do you prevent someone from inadvertently running startsap on the secondary environment?
    Thanks, Andy.

    We're using MIMIX and it can be configured to have an exclusive lock.  So if MIMIX is running, SAP can't normally be started on the secondary box.
