Replace server in 2-node RAC

Hello!
I have a question about replacing a server in a 2-node cluster (11gR2 Standard Edition + Linux).
If one node (node 1) fails with a hardware issue, can I take its local disks (which hold the operating system, OEL6, and the Oracle binaries) and put them into a new physical server whose hardware is identical to the failed one?
After this operation, do I have to add this node, or will this "new node" be recognized as the old node 1?
If this node will be recognized as the "old" node 1, is this operation recommended by Oracle?
Another question:
What would happen if the new server has different hardware (another processor, mainboard)?
Of course my question is about Oracle Clusterware, not OEL.

Hi,
You should use the delete node and add node procedures to replace a node in the cluster. Details can be found in the Oracle Clusterware Administration and Deployment Guide.
Generally, adding a node with different capacity is supported as long as the CPU architecture and network names remain the same as the existing nodes in the cluster.
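For reference, a rough outline of the 11gR2 command sequence involved (node names racnode1/racnode2, the VIP name and $GRID_HOME are placeholders, not taken from this thread; the guide covers the prerequisites, inventory updates and root scripts):

# From a surviving node, remove the failed node from the cluster (run as root)
crsctl delete node -n racnode1

# Verify the replacement server is ready to be added back
cluvfy stage -pre nodeadd -n racnode1

# Extend the Grid Infrastructure home to the replacement node (run as the grid owner),
# then run the root scripts on the new node when prompted
$GRID_HOME/oui/bin/addNode.sh -silent \
  "CLUSTER_NEW_NODES={racnode1}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode1-vip}"

The same addNode.sh step is then repeated for the RDBMS home before the instance is added back.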
Regards,
Sharma

Similar Messages

  • 2x 10g Application Server on 2-node RAC (Linux) - dual solution

    I am looking to have a chat about setting up a 10g Apps Server talking to RAC.
    It's a proof of concept for us right now - but likely to go further.
    Essentially 2 stages:
    1) 2 Apps Servers talking to RAC (non-clustered - single point of failure at the Webcache load balancer as well as at the Identity Management layer)
    2) Full cluster (either active-active or active-passive) using Oracle Clusterware along the full App stack and IDM talking to RAC.
    I want to discuss options/best practices, etc.

    Hello All,
    My target is to achieve failover for an Oracle Forms based application. This is what I currently have in my setup:
    1) 2 nodes with win2003 r2 (32 - bit)
    2) Oracle RAC DB 10.2.0.5.0
    3) Oracle Application Server IAS 10.1.2.3.0 on each node
    I have included TNS entries for TAF under the Oracle Infrastructure home of IAS and also under the Oracle Middle Tier home of IAS.
    After completing all the setup, when I test failover for SQL clients like TOAD against the RAC DB it is absolutely fine. Unfortunately, when I test my Forms application for failover by switching off the DB instance the Forms client is connected to, it simply errors out with "UNHANDLED EXCEPTION" and the session closes abruptly. Are there any specific settings I have to make to ensure the Forms client fails over to the next running node rather than getting disconnected with loss of transactions?
    I would also appreciate input on failing over Application Server 10g to the next node.
    Thank you.
    Ali

  • Setup 2-Node RAC 11.2.0.1.0 on Windows Server 2008

    I have been setting up RAC environments in VirtualBox and VMWare in my local machine and in our test server. I have done the following setup:
    1. RAC 10g on RHEL 5.4 (VMWare / VBox)
    2. RAC 11g on RHEL 5.4 (VBox)
    3. RAC 11g on Windows Server 2008 (VBox)
    Now, our management wants me to set up a 2-node RAC in the real world. Cost is not an issue here as this will be financed by a big private group.
    I am excited to do the project as I am really enthusiastic about database clustering. Of course, there is a little nervousness since this is
    my first time doing it in the real world (even the best RAC expert started from his first deployment :) ).
    I am going to build RAC 11.2.0.1.0 on Windows Server 2008.
    I would like to seek advice on:
    - the best practices; what are the things that I have to consider?
    - any straightforward real-world deployment guide or technical papers that can serve as my reference
    - any issues that I might encounter
    - any help and feedback
    I know I can be successful in this first-time project if I seek advice from the experts.
    I hope you can help me and I will be glad to read and review your contributions.
    Thanks a lot.....

    Hi,
    A few months ago I had a project to set up RAC on Windows 2008 R2. For six months now there have been no problems and it's working fine, but I have to say that the installation wasn't easy and took me a lot of time. Another thing is the troubleshooting; I feel completely helpless when (if) something screws up. It takes between 10 and 15 minutes to have the database running in case of a reboot of the servers.
    So here are a few things to consider:
    - Disable write cache on shared disks.
    - Disable User Access Control!
    - Disable firewall (really important)!
    - Use the diskpart command to create an extended and a logical partition on all shared disks (see the sketch after this list).
    - There is also a nasty bug we hit; I've blogged about it:
    http://sve.to/2011/09/29/exhaust-of-windows-2008-heap-memory-with-oracle-database-11-2-0-2/
    - I had terrible problems with user equivalence. Verify privileges for copying files in the cluster:
    net use \\nodeX\c$
    - Once the installation is completed, apply the latest Bundle Patch - currently this is BP6 (Patch 13965211):
    DB 11.2.0.3 Patch 6 includes all bug fixes from 11.2.0.3 Patch 1 to Patch 5 and also includes CPU2012. It must be applied on top of 11.2.0.3.
    Useful MOS notes:
    RAC and Oracle Clusterware Best Practices and Starter Kit (Windows) [ID 811271.1]
    Windows: CLUVFY Fails with TCP Check PRVF-7617 Due to Case of Node Names [ID 1286394.1]
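    A minimal diskpart sketch for the partitioning point above (disk numbers are placeholders; repeat per shared disk, and leave the partitions unformatted and without drive letters, since they are typically stamped for ASM with asmtool afterwards):
    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> create partition extended
    DISKPART> create partition logical
    DISKPART> exit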
    Finally, if you have a choice then go with Linux; it's robust and easy to install and maintain. You have better control over the system and user processes, and it's more flexible and easier to troubleshoot.
    Regards,
    Sve

  • Multiple copies of same database on a 2 node RAC server - How to merge ?

    I currently have multiple copies of the same database running on a 2-node RAC system. I am looking for a way to combine them into 1 large database while keeping the data separate.
    The databases are copies of production for testing, development, and a yearly "historical" database.
    All the databases are created from production, and generally have the same schemas, tables, procedures, etc.; however, they may be different versions and need to be.
    Is there a way to use one large database and logically split all the different versions of the same objects into their own space in one database? The structure cannot change, as the database is for a 3rd party's Forms application that relies on the objects not changing names, etc.
    Ideally I am looking for a solution that will allow the forms application to connect to "test" and "historical" copies of our production database separately in the same database container.
    Thanks for any direction.

  • Restore procedures of 2 node RAC

    I have a 2-node RAC running 11gR2 (11.2.0.2) on Linux RHEL 5.x in my test server using Oracle VM. This setup has 5 diskgroups in ASM: ocrvote1, ocrvote2, ocrvote3, data, and fra. Both data and fra use external redundancy and ocrvote1-3 normal redundancy. As I'm getting ready to test the recovery procedures of the whole setup, I wanted to check here to see if I have covered the necessary bits and am not missing anything obvious.
    I will be taking a full backup of the database and an md_backup of the diskgroups prior to testing these procedures. Below is the list of steps I came up with for getting the system restored in the shortest time frame (a rough md_backup/md_restore sketch follows the list). If you think I can cut the time down even further, I would appreciate your input.
    -- Install the OS via jumpstart on all nodes.
    -- Partition the same set of disks and give them the same names as before.
    -- Stamp the disks with ASMLib and present them to all nodes.
    -- Install the GI and RDBMS homes.
    -- Bring up ASM with the OCRVOTE diskgroup on all nodes. OCR and voting disks will be created as part of the ASM setup.
    -- Restore the data and fra diskgroups from backup using md_restore.
    -- Restore the latest backup from tape or disk into the FRA.
    -- Restore and recover the database via RMAN.
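    A rough sketch of how the md_backup/md_restore and RMAN steps above might look (paths, diskgroup names and the DBID are illustrative; exact asmcmd options differ slightly between releases, so check the 11.2 asmcmd reference):
    # before the test: back up the diskgroup metadata (run as the grid owner)
    asmcmd md_backup /tmp/asm_md.bkp -G data,fra
    # during the restore: recreate the data and fra diskgroups from the metadata backup
    asmcmd md_restore /tmp/asm_md.bkp --full -G data,fra
    # then restore and recover the database (controlfile first, since it lived in the lost diskgroup)
    rman target /
    RMAN> set dbid <your_dbid>;
    RMAN> startup nomount;
    RMAN> restore controlfile from autobackup;
    RMAN> alter database mount;
    RMAN> restore database;
    RMAN> recover database;
    RMAN> alter database open resetlogs;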
    Thanks
    Steve

    Very good point, Sebastian. I guess I need to replace the OCR created by the new GI cluster install with the OCR from backup, right? Or are you saying I can restore the OCR from backup while building the new GI cluster? If so, can you point me to a link or documentation on doing so?
    I was also thinking about using clones of the GI and RDBMS homes instead of doing a fresh install as part of the restore procedures. Is that an option?
    If I have all the storage partitions intact from the previous install and have to restore the system without any data loss, is there anything different I should be doing here?
    Steve
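    On the OCR question above: rather than replacing it by hand, the OCR can usually be restored from one of Clusterware's automatic backups. A minimal 11.2 sketch (the backup path is a placeholder for what ocrconfig -showbackup reports; everything runs as root):
    ocrconfig -showbackup                      # list the automatic OCR backups
    crsctl stop crs                            # on every node
    crsctl start crs -excl -nocrs              # on one node only (11.2.0.2 syntax), so ASM is up for the restore
    ocrconfig -restore /u01/app/11.2.0/grid/cdata/mycluster/backup00.ocr
    crsctl stop crs -f                         # stop the exclusive-mode stack
    crsctl start crs                           # on every node
    cluvfy comp ocr -n all                     # verify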

  • Multiple databases/instances on 4-node RAC Cluster including Physical Stand

    OS: Windows 2003 Server R2 X64
    DB: 10.2.0.4
    Virtualization: NONE
    Node Configuration: x64 architecture - 4-Socket Quad-Core (16 CPUs)
    Node Memory: 128GB RAM
    We are planning the following on the above-mentioned 4-node RAC cluster:
    Node 1: DB1 with instanceDB11 (Active-Active: Load-balancing & Failover)
    Node 2: DB1 with instanceDB12 (Active-Active: Load-balancing & Failover)
    Node 3: DB1 with instanceDB13 (Active-Passive: Failover only) + DB2 with instanceDB21 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB31 (Active-Active: Load-balancing & Failover) + DB4 with instance41 (Active-Active: Load-balancing & Failover)
    Node 4: DB1 with instanceDB14 (Active-Passive: Failover only) + DB2 with instanceDB22 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB32 (Active-Active: Load-balancing & Failover) + DB4 with instance42 (Active-Active: Load-balancing & Failover)
    Note: DB1 will be the physical primary PROD OLTP database and will be open in READ-WRITE mode 24x7x365.
    Note: DB2 will be a Physical Standby of DB1 and will be open in Read-Only mode for reporting purposes during the day-time, except for 3 hours at night when it will apply the logs.
    Note: DB3 will be a Physical Standby of a remote database DB4 (not part of this cluster) and will be mounted in Managed Recovery mode for automatic failover/switchover purposes.
    Note: DB4 will be the physical primary Data Warehouse DB.
    Note: Going to 11g is NOT an option.
    Note: Data Guard broker will be used across the board.
    Please answer/advise on the following:
    1. Is the above configuration supported and why so? If not, what are the alternatives?
    2. Is the above configuration recommended and why so? If not, what are the recommended alternatives?

    Hi,
    As far as I understand, there's nothing wrong with the configuration, except that you need to consider the points below while implementing the final design.
    1. Number of CPUs on each server
    2. Memory on each server
    3. If you have a RAC physical standby, then the apply process (MRP0) will run on only one instance.
    4. Since you are configuring the physical standby on the 3rd and 4th nodes of DB1's 4-node cluster, where the DB13 and DB14 instances are used only for failover, a disaster or power failure affecting the entire data center would take out both primary and standby (assuming your primary and physical standby reside in the same data center), so it may not be a highly available architecture. If you are going to use extended RAC for this configuration then it makes sense, with Node 1 and Node 2 residing in Datacenter A and Nodes 3 and 4 in Datacenter B.
    Thanks,
    Keyur

  • How to create 11.2.0.2 physical standby database from 2 node RAC (11.2.0.2)

    Hi,
    Can anyone please help me with how to manually create an 11.2.0.2 standalone physical standby database from a 2-node RAC (11.2.0.2) database which is running on RHEL5 with ASM?
    DB : 11.2.0.2
    OS : RHEL5
    RMAN duplicate is causing network problems and we decided to go for manual creation instead.
    Thanks in Advance..

    Hi;
    Can any one please help me How to manually create 11.2.0.2 standalone physical standby database from 2 node RAC (11.2.0.2) database which is running in RHEL5 and ASM plugged in.
    DB : 11.2.0.2
    OS : RHEL5
    I had a similar issue; what I did:
    1. Used the source ORACLE_HOME on the standby server
    2. Created a new ASM instance and used the same naming
    3. Took an RMAN full backup on the source and moved it to the target
    4. Edited the init.ora file to remove the RAC settings and restored the DB (also edited the listener file)
    Regards
    Helios
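    To make step 4 a bit more concrete, a minimal sketch of the kind of pfile edits involved when turning a RAC backup into a single-instance standby (values are assumptions based on a typical 11.2 RAC pfile, not taken from the thread):
    *.db_name='RACDB'                  # must match the primary
    *.db_unique_name='STAND'           # hypothetical unique name for the standby
    *.cluster_database=FALSE           # single-instance standby
    # remove the instance-specific RAC entries, for example:
    #   RACDB1.thread=1, RACDB2.thread=2
    #   RACDB1.instance_number=1, RACDB2.instance_number=2
    #   RACDB1.undo_tablespace='UNDOTBS1', RACDB2.undo_tablespace='UNDOTBS2'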

  • 11gR2- webutil upload file to AS on two nodes RAC?

    Hello experts, please help with the following issue.
    We are using 11gR2 Forms on a two-node RAC, with webutil configured on both nodes. We upload/download files to an application server folder (UP_FILES) from our forms. Node 2 is actually a replica of node 1 (forms, reports, UP_FILES). The problem we are facing is that the form which uploads a file to the AS only uploads it to the node it is running from, so if it is running from node1 it uploads files to the UP_FILES folder on that node (as per the entry in webutil.cfg). We want the UP_FILES folders to stay in sync on both nodes, so that a form running from either node1 or node2 effectively uploads the file to both nodes at the same time.
    How can this be accomplished?
    --------webutil.cfg entry
    transfer.appsrv.read.3=D:\UP_FILES
    transfer.appsrv.write.3=D:\UP_FILES
    -----FORM UPLOAD CODE
          IF :CONTROL.FILE_LOC IS NOT NULL THEN
            acyr3 := :CONTROL.TXTVOUCHERNO||'-'||acyr2;
            FILE_RESULT := WEBUTIL_FILE_TRANSFER.CLIENT_TO_AS_WITH_PROGRESS(
                             CLIENTFILE       => :CONTROL.FILE_LOC,
                             SERVERFILE       => 'D:\UP_FILES\'||acyr3||'.PDF',
                             PROGRESSTITLE    => 'UPLOAD TO DATABASE IN PROGRESS',
                             PROGRESSSUBTITLE => 'PLEASE WAIT');
          END IF;
    --FORM DOWNLOAD CODE
          FILE_RESULT := WEBUTIL_FILE_TRANSFER.AS_TO_CLIENT_WITH_PROGRESS(
                           CLIENTFILE       => 'D:\UP_FILES\'||acyr3||'.PDF',
                           SERVERFILE       => 'D:\UP_FILES\'||acyr3||'.PDF',
                           PROGRESSTITLE    => 'DOWNLOAD FROM DATABASE IN PROGRESS',
                           PROGRESSSUBTITLE => 'PLEASE WAIT');
          CLIENT_HOST('rundll32.exe url.dll,FileProtocolHandler D:\UP_FILES\'||acyr3||'.PDF');
          IF FILE_RESULT THEN
            message('File downloaded successfully from the Application Server');
          END IF;

    Well, you're uploading the file to one node, so this is no surprise. What you can do is store your file on a shared folder (which might be a bit tricky on Windows) or synchronize a folder between your nodes. Unfortunately Forms can't access ASM directly (assuming you are using ASM), so you can't store your files directly in ASM where they would be accessible from both nodes.
    cheers
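    One way to implement the folder-sync suggestion on Windows is a scheduled robocopy mirror between the nodes (a sketch; the share name and options are assumptions):
    rem run on node1: mirror the upload folder to node2's UP_FILES share
    robocopy D:\UP_FILES \\node2\UP_FILES /MIR /R:2 /W:5
    Note that /MIR deletes destination files that no longer exist at the source, so for true two-way synchronization something like DFS Replication is usually a better fit than two opposing mirrors.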

  • How to add a second database along with existing two node RAC environment

    Hi,
    I was wondering if anyone can help me with this.
    My Environment:
    1. Two node RAC Cluster database (11.2.0.2) with ASM running perfectly (Oracle Sid = test-1)
    2. I have installed a second single instance db on node 2 (Oracle Sid = test-2) with NTFS file system for datafiles
    3. The database is up and running, but I am not able to connect to it from any client.
    4. I am getting ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    5. Database (test-2) is registered with the grid LISTENER
    6. TNSPING from the client machine responds OK
    Am I missing something here, happy to provide more info if requested.
    Thanks,
    PS
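    For what it's worth, ORA-12514 means the listener was reached but did not recognize the requested service name. Given the listener status posted below (service test-2 registered on port 1520, not the default 1521), a client entry along these lines would be expected to work; the values are taken from the posted output, so adjust if your environment differs:
    TEST2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 10.30.0.202)(PORT = 1520))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = test-2)
        )
      )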

    C:\Users\root.test_prod>lsnrctl
    LSNRCTL for 64-bit Windows: Version 11.2.0.2.0 - Production on 23-FEB-2012 11:58:05
    Copyright (c) 1991, 2010, Oracle. All rights reserved.
    Welcome to LSNRCTL, type "help" for information.
    LSNRCTL> status
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 64-bit Windows: Version 11.2.0.2.0 - Production
    Start Date 18-FEB-2012 19:51:47
    Uptime 4 days 16 hr. 6 min. 24 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File C:\app\11.2.0\grid\network\admin\listener.ora
    Listener Log File C:\app\11.2.0\grid\log\diag\tnslsnr\IRIS11G-DB-2\listener\alert\log.xml
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\LISTENERipc)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.30.0.202)(PORT=1520)))
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.30.0.215)(PORT=1520)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+asm2", status READY, has 1 handler(s) for this service...
    Service "test-11gRAC.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Service "test-11gRXDB.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Service "irisapps.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Service "test-2" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    Service "test-2XDB" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    The command completed successfully
    LSNRCTL> service
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
    Instance "+asm2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1322 refused:0 state:ready
    LOCAL SERVER
    Service "test-11gRAC.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1829 refused:0 state:ready
    LOCAL SERVER
    Service "test-11gRXDB.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: IRIS11G-DB-2, pid: 4496>
    (ADDRESS=(PROTOCOL=tcp)(HOST=IRIS11G-DB-2.test_prod.internal)(PORT=57695))
    Service "irisapps.test_prod.internal" has 1 instance(s).
    Instance "test-11gr2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:1829 refused:0 state:ready
    LOCAL SERVER
    Service "test-2" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "DEDICATED" established:0 refused:0 state:ready
    LOCAL SERVER
    Service "test-2XDB" has 1 instance(s).
    Instance "test-2", status READY, has 1 handler(s) for this service...
    Handler(s):
    "D000" established:0 refused:0 current:0 max:1022 state:ready
    DISPATCHER <machine: IRIS11G-DB-2, pid: 9340>
    (ADDRESS=(PROTOCOL=tcp)(HOST=IRIS11G-DB-2.test_prod.internal)(PORT=58653))
    The command completed successfully
    LSNRCTL>

  • Single instance standby for 2-node RAC

    Hi,
    Oracle Version:11.2.0.1
    Operating system:Linux
    Here I am planning to create a single-instance standby for my 2-node RAC database. I am creating the single-instance standby database on node 1 of my 2-node RAC DB.
    1.) Do I need to configure a separate listener for my single-instance standby in $ORACLE_HOME/network/admin as the oracle user, or does the change need to be made under the grid user login?
    2.) Below is the error I get when duplicating my primary 2-node RAC to the single-instance DB. It is shutting down my auxiliary instance.
    [oracle@rac1 ~]$ rman target / auxiliary sys/racdba123@stand
    Recovery Manager: Release 11.2.0.1.0 - Production on Sun Aug 28 13:32:29 2011
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    connected to target database: RACDB (DBID=755897741)
    connected to auxiliary database: RACDB (not mounted)
    RMAN> duplicate database racdba to stand
    2> ;
    Starting Duplicate Db at 28-AUG-11
    using target database control file instead of recovery catalog
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: SID=6 device type=DISK
    contents of Memory Script:
       sql clone "create spfile from memory";
    executing Memory Script
    sql statement: create spfile from memory
    contents of Memory Script:
       shutdown clone immediate;
       startup clone nomount;
    executing Memory Script
    Oracle instance shut down
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 08/28/2011 13:33:55
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-04006: error from auxiliary database: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    Also find my listener services below.
    [oracle@rac1 ~]$ lsnrctl status
    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 29-AUG-2011 10:56:24
    Copyright (c) 1991, 2009, Oracle.  All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    STATUS of the LISTENER
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 11.2.0.1.0 - Production
    Start Date                18-AUG-2011 10:35:07
    Uptime                    11 days 0 hr. 21 min. 17 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/11.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/oracle/diag/tnslsnr/rac1/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.8.123)(PORT=1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.8.127)(PORT=1521)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM1", status READY, has 1 handler(s) for this service...
    Service "RACDB" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
    Service "RACDBXDB" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
    Service "stand" has 2 instance(s).
      Instance "stand", status UNKNOWN, has 1 handler(s) for this service...
      Instance "stand", status BLOCKED, has 1 handler(s) for this service...
    Service "testdb" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
    Service "testdb1" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
    The command completed successfully
    [oracle@rac1 ~]$ lsnrctl services
    LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 29-AUG-2011 10:56:35
    Copyright (c) 1991, 2009, Oracle.  All rights reserved.
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM1", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:0 refused:0 state:ready
             LOCAL SERVER
    Service "RACDB" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:3 refused:0 state:ready
             LOCAL SERVER
    Service "RACDBXDB" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
        Handler(s):
          "D000" established:0 refused:0 current:0 max:1022 state:ready
             DISPATCHER <machine: rac1.qfund.net, pid: 3975>
             (ADDRESS=(PROTOCOL=tcp)(HOST=rac1.qfund.net)(PORT=43731))
    Service "stand" has 2 instance(s).
      Instance "stand", status UNKNOWN, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:0 refused:0
             LOCAL SERVER
      Instance "stand", status BLOCKED, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:669 refused:0 state:ready
             LOCAL SERVER
    Service "testdb" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:3 refused:0 state:ready
             LOCAL SERVER
    Service "testdb1" has 1 instance(s).
      Instance "RACDB1", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:3 refused:0 state:ready
             LOCAL SERVER
    The command completed successfully
    [oracle@rac1 ~]$
    Tnsnames.ora file content:
    RACDB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = racdb-scan.qfund.net)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = RACDB)
        )
      )
    #QFUNDRAC =
    stand =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = racdb-scan.qfund.net)(PORT = 1521))
        )
        (CONNECT_DATA =
          (SERVICE_NAME = stand)
          (UR = A)
        )
      )
    Please help me how to solve this problem.
    Thanks & Regards,
    Poorna Prasad.S
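    A common cause of this RMAN-04006/ORA-12514 is that DUPLICATE restarts the auxiliary instance, and an instance sitting in NOMOUNT is not (or not yet) dynamically registered with the listener. A static registration for the standby in the grid listener.ora avoids that; a minimal sketch (the ORACLE_HOME path is an assumption):
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (GLOBAL_DBNAME = stand)
          (SID_NAME = stand)
          (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
        )
      )
    Then reload the listener (lsnrctl reload) and retry the duplicate.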

    Hi,
    Please find the output from v$dataguard_status from primary and standby
    Primary
    SQL> select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARCH: LGWR is scheduled to archive destination LOG_ARCHIVE_DEST_2 after log swit
    ch
    ARCH: Beginning to archive thread 1 sequence 214 (4604093-4604095)
    Error 12514 received logging on to the standby
    ARCH: Error 12514 Creating archive log file to 'stand'
    ARCH: Completed archiving thread 1 sequence 214 (4604093-4604095)
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    MESSAGE
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    MESSAGE
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    MESSAGE
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARC1: Becoming the 'no FAL' ARCH
    ARC1: Becoming the 'no SRL' ARCH
    ARC2: Becoming the heartbeat ARCH
    ARC7: Beginning to archive thread 1 sequence 215 (4604095-4604191)
    ARC7: Completed archiving thread 1 sequence 215 (4604095-4604191)
    ARC5: Beginning to archive thread 1 sequence 216 (4604191-4604471)
    ARC5: Completed archiving thread 1 sequence 216 (4604191-4604471)
    ARCt: Archival started
    MESSAGE
    ARC3: Beginning to archive thread 1 sequence 217 (4604471-4605358)
    ARC3: Completed archiving thread 1 sequence 217 (4604471-4605358)
    LNS: Standby redo logfile selected for thread 1 sequence 217 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 217
    LNS: Completed archiving log 1 thread 1 sequence 217
    LNS: Standby redo logfile selected for thread 1 sequence 218 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 218
    MESSAGE
    LNS: Completed archiving log 2 thread 1 sequence 218
    ARC4: Beginning to archive thread 1 sequence 218 (4605358-4625984)
    ARC4: Completed archiving thread 1 sequence 218 (4605358-4625984)
    LNS: Standby redo logfile selected for thread 1 sequence 219 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 219
    LNS: Completed archiving log 1 thread 1 sequence 219
    ARC5: Beginning to archive thread 1 sequence 219 (4625984-4641358)
    ARC5: Completed archiving thread 1 sequence 219 (4625984-4641358)
    LNS: Standby redo logfile selected for thread 1 sequence 220 for destination LOG
    MESSAGE
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 220
    LNS: Completed archiving log 2 thread 1 sequence 220
    ARC6: Beginning to archive thread 1 sequence 220 (4641358-4644757)
    ARC6: Completed archiving thread 1 sequence 220 (4641358-4644757)
    LNS: Standby redo logfile selected for thread 1 sequence 221 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 221
    LNS: Completed archiving log 1 thread 1 sequence 221
    MESSAGE
    ARC7: Beginning to archive thread 1 sequence 221 (4644757-4648306)
    ARC7: Completed archiving thread 1 sequence 221 (4644757-4648306)
    LNS: Standby redo logfile selected for thread 1 sequence 222 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 222
    LNS: Completed archiving log 2 thread 1 sequence 222
    ARC8: Beginning to archive thread 1 sequence 222 (4648306-4655287)
    ARC8: Completed archiving thread 1 sequence 222 (4648306-4655287)
    LNS: Standby redo logfile selected for thread 1 sequence 223 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 1 thread 1 sequence 223
    LNS: Completed archiving log 1 thread 1 sequence 223
    ARC9: Beginning to archive thread 1 sequence 223 (4655287-4655307)
    ARC9: Completed archiving thread 1 sequence 223 (4655287-4655307)
    LNS: Standby redo logfile selected for thread 1 sequence 224 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 224
    LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    MESSAGE
    Error 3135 for archive log file 2 to 'stand'
    LNS: Failed to archive log 2 thread 1 sequence 224 (3135)
    ARC3: Beginning to archive thread 1 sequence 224 (4655307-4660812)
    ARC3: Completed archiving thread 1 sequence 224 (4655307-4660812)
    LNS: Standby redo logfile selected for thread 1 sequence 224 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 224
    LNS: Completed archiving log 2 thread 1 sequence 224
    LNS: Standby redo logfile selected for thread 1 sequence 225 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 1 thread 1 sequence 225
    LNS: Completed archiving log 1 thread 1 sequence 225
    ARC4: Beginning to archive thread 1 sequence 225 (4660812-4660959)
    ARC4: Completed archiving thread 1 sequence 225 (4660812-4660959)
    LNS: Standby redo logfile selected for thread 1 sequence 226 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 226
    LNS: Completed archiving log 2 thread 1 sequence 226
    ARC5: Beginning to archive thread 1 sequence 226 (4660959-4664925)
    MESSAGE
    LNS: Standby redo logfile selected for thread 1 sequence 227 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 227
    ARC5: Completed archiving thread 1 sequence 226 (4660959-4664925)
    LNS: Completed archiving log 1 thread 1 sequence 227
    LGWR: Error 1089 closing archivelog file 'stand'
    ARC6: Beginning to archive thread 1 sequence 227 (4664925-4668448)
    ARC6: Completed archiving thread 1 sequence 227 (4664925-4668448)
    ARC5: Beginning to archive thread 1 sequence 228 (4668448-4670392)
    ARC5: Completed archiving thread 1 sequence 228 (4668448-4670392)
    MESSAGE
    LNS: Standby redo logfile selected for thread 1 sequence 228 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 228
    LNS: Completed archiving log 2 thread 1 sequence 228
    ARC4: Standby redo logfile selected for thread 1 sequence 227 for destination LO
    G_ARCHIVE_DEST_2
    LNS: Standby redo logfile selected for thread 1 sequence 229 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 1 thread 1 sequence 229
    LNS: Completed archiving log 1 thread 1 sequence 229
    ARC3: Beginning to archive thread 1 sequence 229 (4670392-4670659)
    ARC3: Completed archiving thread 1 sequence 229 (4670392-4670659)
    LNS: Standby redo logfile selected for thread 1 sequence 230 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 230
    LNS: Completed archiving log 2 thread 1 sequence 230
    ARC4: Beginning to archive thread 1 sequence 230 (4670659-4670679)
    ARC4: Completed archiving thread 1 sequence 230 (4670659-4670679)
    MESSAGE
    LNS: Standby redo logfile selected for thread 1 sequence 231 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 231
    LNS: Completed archiving log 1 thread 1 sequence 231
    ARC5: Beginning to archive thread 1 sequence 231 (4670679-4690371)
    ARC5: Completed archiving thread 1 sequence 231 (4670679-4690371)
    LNS: Standby redo logfile selected for thread 1 sequence 232 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 232
    MESSAGE
    LNS: Completed archiving log 2 thread 1 sequence 232
    ARC6: Beginning to archive thread 1 sequence 232 (4690371-4712566)
    ARC6: Completed archiving thread 1 sequence 232 (4690371-4712566)
    LNS: Standby redo logfile selected for thread 1 sequence 233 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 233
    LNS: Completed archiving log 1 thread 1 sequence 233
    ARC7: Beginning to archive thread 1 sequence 233 (4712566-4731626)
    LNS: Standby redo logfile selected for thread 1 sequence 234 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 2 thread 1 sequence 234
    ARC7: Completed archiving thread 1 sequence 233 (4712566-4731626)
    LNS: Completed archiving log 2 thread 1 sequence 234
    ARC8: Beginning to archive thread 1 sequence 234 (4731626-4753780)
    LNS: Standby redo logfile selected for thread 1 sequence 235 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 235
    ARC8: Completed archiving thread 1 sequence 234 (4731626-4753780)
    LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    MESSAGE
    LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    Error 3135 for archive log file 1 to 'stand'
    LNS: Failed to archive log 1 thread 1 sequence 235 (3135)
    ARC9: Beginning to archive thread 1 sequence 235 (4753780-4765626)
    ARC9: Completed archiving thread 1 sequence 235 (4753780-4765626)
    LNS: Standby redo logfile selected for thread 1 sequence 235 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 235
    LNS: Completed archiving log 1 thread 1 sequence 235
    LNS: Standby redo logfile selected for thread 1 sequence 236 for destination LOG
    MESSAGE
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 236
    LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    Error 3135 for archive log file 2 to 'stand'
    LNS: Failed to archive log 2 thread 1 sequence 236 (3135)
    ARCa: Beginning to archive thread 1 sequence 236 (4765626-4768914)
    ARCa: Completed archiving thread 1 sequence 236 (4765626-4768914)
    LNS: Standby redo logfile selected for thread 1 sequence 236 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 2 thread 1 sequence 236
    LNS: Completed archiving log 2 thread 1 sequence 236
    LNS: Standby redo logfile selected for thread 1 sequence 237 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 237
    LNS: Completed archiving log 1 thread 1 sequence 237
    ARCb: Beginning to archive thread 1 sequence 237 (4768914-4770603)
    ARCb: Completed archiving thread 1 sequence 237 (4768914-4770603)
    LNS: Standby redo logfile selected for thread 1 sequence 238 for destination LOG
    MESSAGE
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 238
    LNS: Completed archiving log 2 thread 1 sequence 238
    ARCc: Beginning to archive thread 1 sequence 238 (4770603-4770651)
    ARCc: Completed archiving thread 1 sequence 238 (4770603-4770651)
    LNS: Standby redo logfile selected for thread 1 sequence 239 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 239
    LNS: Completed archiving log 1 thread 1 sequence 239
    MESSAGE
    ARCd: Beginning to archive thread 1 sequence 239 (4770651-4773918)
    ARCd: Completed archiving thread 1 sequence 239 (4770651-4773918)
    LNS: Standby redo logfile selected for thread 1 sequence 240 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 240
    LNS: Completed archiving log 2 thread 1 sequence 240
    ARCe: Beginning to archive thread 1 sequence 240 (4773918-4773976)
    ARCe: Completed archiving thread 1 sequence 240 (4773918-4773976)
    LNS: Standby redo logfile selected for thread 1 sequence 241 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 1 thread 1 sequence 241
    LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    Error 3135 for archive log file 1 to 'stand'
    LNS: Failed to archive log 1 thread 1 sequence 241 (3135)
    ARC3: Beginning to archive thread 1 sequence 241 (4773976-4774673)
    ARC3: Completed archiving thread 1 sequence 241 (4773976-4774673)
    LNS: Standby redo logfile selected for thread 1 sequence 241 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 1 thread 1 sequence 241
    LNS: Completed archiving log 1 thread 1 sequence 241
    LNS: Standby redo logfile selected for thread 1 sequence 242 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 242
    LNS: Completed archiving log 2 thread 1 sequence 242
    ARC4: Beginning to archive thread 1 sequence 242 (4774673-4776045)
    ARC4: Completed archiving thread 1 sequence 242 (4774673-4776045)
    LNS: Standby redo logfile selected for thread 1 sequence 243 for destination LOG
    _ARCHIVE_DEST_2
    MESSAGE
    LNS: Beginning to archive log 1 thread 1 sequence 243
    LNS: Completed archiving log 1 thread 1 sequence 243
    ARC5: Beginning to archive thread 1 sequence 243 (4776045-4776508)
    ARC5: Completed archiving thread 1 sequence 243 (4776045-4776508)
    LNS: Standby redo logfile selected for thread 1 sequence 244 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 244
    LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    MESSAGE
    Error 3135 for archive log file 2 to 'stand'
    LNS: Failed to archive log 2 thread 1 sequence 244 (3135)
    ARC6: Beginning to archive thread 1 sequence 244 (4776508-4778741)
    ARC6: Completed archiving thread 1 sequence 244 (4776508-4778741)
    ARC7: Beginning to archive thread 1 sequence 245 (4778741-4778781)
    ARC7: Completed archiving thread 1 sequence 245 (4778741-4778781)
    ARC8: Beginning to archive thread 1 sequence 246 (4778781-4778787)
    ARC8: Completed archiving thread 1 sequence 246 (4778781-4778787)
    ARC9: Standby redo logfile selected for thread 1 sequence 244 for destination LO
    G_ARCHIVE_DEST_2
    MESSAGE
    ARC3: Beginning to archive thread 1 sequence 247 (4778787-4778934)
    LNS: Standby redo logfile selected for thread 1 sequence 247 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 247
    ARC3: Completed archiving thread 1 sequence 247 (4778787-4778934)
    LNS: Completed archiving log 1 thread 1 sequence 247
    LNS: Standby redo logfile selected for thread 1 sequence 248 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 248
    MESSAGE
    ARC4: Beginning to archive thread 1 sequence 248 (4778934-4781018)
    LNS: Completed archiving log 2 thread 1 sequence 248
    ARC4: Completed archiving thread 1 sequence 248 (4778934-4781018)
    LNS: Standby redo logfile selected for thread 1 sequence 249 for destination LOG
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 1 thread 1 sequence 249
    LNS: Completed archiving log 1 thread 1 sequence 249
    ARC5: Beginning to archive thread 1 sequence 249 (4781018-4781033)
    ARC5: Completed archiving thread 1 sequence 249 (4781018-4781033)
    LNS: Standby redo logfile selected for thread 1 sequence 250 for destination LOG
    MESSAGE
    _ARCHIVE_DEST_2
    LNS: Beginning to archive log 2 thread 1 sequence 250
    233 rows selected.
    SQL>
    Standby
    SQL> select message from v$dataguard_status;
    MESSAGE
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    MESSAGE
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    MESSAGE
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARC1: Becoming the 'no FAL' ARCH
    ARC2: Becoming the heartbeat ARCH
    Error 1017 received logging on to the standby
    FAL[client, ARC2]: Error 16191 connecting to RACDB for fetching gap sequence
    MESSAGE
    ARCt: Archival started
    Attempt to start background Managed Standby Recovery process
    MRP0: Background Managed Standby Recovery process started
    Managed Standby Recovery starting Real Time Apply
    Media Recovery Log /u02/stand/archive/1_119_758280976.arc
    Media Recovery Waiting for thread 2 sequence 183
    RFS[1]: Assigned to RFS process 30110
    RFS[1]: Identified database type as 'physical standby': Client is ARCH pid 25980
    RFS[2]: Assigned to RFS process 30118
    RFS[2]: Identified database type as 'physical standby': Client is ARCH pid 26008
    RFS[3]: Assigned to RFS process 30124
    MESSAGE
    RFS[3]: Identified database type as 'physical standby': Client is ARCH pid 26029
    RFS[4]: Assigned to RFS process 30130
    RFS[4]: Identified database type as 'physical standby': Client is ARCH pid 26021
    ARC4: Beginning to archive thread 1 sequence 244 (4776508-4778741)
    ARC4: Completed archiving thread 1 sequence 244 (0-0)
    RFS[5]: Assigned to RFS process 30144
    RFS[5]: Identified database type as 'physical standby': Client is LGWR ASYNC pid
    26128
    Primary database is in MAXIMUM PERFORMANCE mode
    ARC5: Beginning to archive thread 1 sequence 247 (4778787-4778934)
    MESSAGE
    ARC5: Completed archiving thread 1 sequence 247 (0-0)
    ARC6: Beginning to archive thread 1 sequence 248 (4778934-4781018)
    ARC6: Completed archiving thread 1 sequence 248 (0-0)
    ARC7: Beginning to archive thread 1 sequence 249 (4781018-4781033)
    ARC7: Completed archiving thread 1 sequence 249 (0-0)
    58 rows selected.
    SQL>
    Also find the output from the primary alert log file.
    Tue Aug 30 10:45:41 2011
    LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3135)
    LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
    Errors in file /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/RACDB1_nsa2_26128.trc:
    ORA-03135: connection lost contact
    Error 3135 for archive log file 2 to 'stand'
    Errors in file /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/RACDB1_nsa2_26128.trc:
    ORA-03135: connection lost contact
    LNS: Failed to archive log 2 thread 1 sequence 244 (3135)
    Errors in file /u01/app/oracle/diag/rdbms/racdb/RACDB1/trace/RACDB1_nsa2_26128.trc:
    ORA-03135: connection lost contact
    Tue Aug 30 10:50:25 2011
    Thread 1 advanced to log sequence 245 (LGWR switch)
      Current log# 1 seq# 245 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
      Current log# 1 seq# 245 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
    Tue Aug 30 10:50:25 2011
    Archived Log entry 612 added for thread 1 sequence 244 ID 0x2d0e0689 dest 1:
    Thread 1 cannot allocate new log, sequence 246
    Checkpoint not complete
      Current log# 1 seq# 245 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
      Current log# 1 seq# 245 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
    Thread 1 advanced to log sequence 246 (LGWR switch)
      Current log# 2 seq# 246 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
      Current log# 2 seq# 246 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
    Tue Aug 30 10:50:27 2011
    Archived Log entry 613 added for thread 1 sequence 245 ID 0x2d0e0689 dest 1:
    Thread 1 cannot allocate new log, sequence 247
    Checkpoint not complete
      Current log# 2 seq# 246 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
      Current log# 2 seq# 246 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
    Thread 1 advanced to log sequence 247 (LGWR switch)
      Current log# 1 seq# 247 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
      Current log# 1 seq# 247 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
    Tue Aug 30 10:50:30 2011
    Archived Log entry 614 added for thread 1 sequence 246 ID 0x2d0e0689 dest 1:
    Tue Aug 30 10:51:37 2011
    ARC9: Standby redo logfile selected for thread 1 sequence 244 for destination LOG_ARCHIVE_DEST_2
    Tue Aug 30 10:51:39 2011
    Thread 1 advanced to log sequence 248 (LGWR switch)
      Current log# 2 seq# 248 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
      Current log# 2 seq# 248 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
    Tue Aug 30 10:51:39 2011
    Archived Log entry 620 added for thread 1 sequence 247 ID 0x2d0e0689 dest 1:
    Tue Aug 30 10:51:39 2011
    LNS: Standby redo logfile selected for thread 1 sequence 247 for destination LOG_ARCHIVE_DEST_2
    LNS: Standby redo logfile selected for thread 1 sequence 248 for destination LOG_ARCHIVE_DEST_2
    Tue Aug 30 11:08:27 2011
    Thread 1 advanced to log sequence 249 (LGWR switch)
      Current log# 1 seq# 249 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
      Current log# 1 seq# 249 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
    Tue Aug 30 11:08:27 2011
    Archived Log entry 622 added for thread 1 sequence 248 ID 0x2d0e0689 dest 1:
    Tue Aug 30 11:08:27 2011
    LNS: Standby redo logfile selected for thread 1 sequence 249 for destination LOG_ARCHIVE_DEST_2
    Thread 1 cannot allocate new log, sequence 250
    Checkpoint not complete
      Current log# 1 seq# 249 mem# 0: +ASM_DATA1/racdb/onlinelog/group_1.268.758280977
      Current log# 1 seq# 249 mem# 1: +ASM_DATA2/racdb/onlinelog/group_1.265.758280979
    Thread 1 advanced to log sequence 250 (LGWR switch)
      Current log# 2 seq# 250 mem# 0: +ASM_DATA1/racdb/onlinelog/group_2.269.758280979
      Current log# 2 seq# 250 mem# 1: +ASM_DATA2/racdb/onlinelog/group_2.266.758280981
    Tue Aug 30 11:08:31 2011
    Archived Log entry 624 added for thread 1 sequence 249 ID 0x2d0e0689 dest 1:
    LNS: Standby redo logfile selected for thread 1 sequence 250 for destination LOG_ARCHIVE_DEST_2
    Thanks & Regards,
    Poorna Prasad.S

  • OEM Issue: OEM agent stops after some time on a specific node on a 3-node RAC system.

    We have a 3-node RAC system running 11.2.0.3.
    The OEM agent keeps failing on 1 particular node and needs to be brought up every now and then. What could be the probable cause of this? Also, I have not seen anything suspicious in the agent logs.
    Oracle Enterprise Manager Cloud Control 12c Release 2
    Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
    Agent Version     : 12.1.0.2.0
    OMS Version       : 12.1.0.2.0
    Protocol Version  : 12.1.0.1.0
    Last successful upload                       : 2015-03-10 07:28:10
    Last attempted upload                        : 2015-03-10 07:28:10
    Total Megabytes of XML files uploaded so far : 0.08
    Number of XML files pending upload           : 0
    Size of XML files pending upload(MB)         : 0
    Available disk space on upload filesystem    : 78.16%
    Collection Status                            : Collections enabled
    Heartbeat Status                             : Ok
    Last attempted heartbeat to OMS              : 2015-03-10 07:28:25
    Last successful heartbeat to OMS             : 2015-03-10 07:28:25
    Next scheduled heartbeat to OMS              : 2015-03-10 07:29:25
    Agent is Running and Ready
    Sometimes while checking the status we get the following:
    Oracle Enterprise Manager Cloud Control 12c Release 2
    Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
    Status agent Failure:unable to connect to http server at https://<servername>:<port>/emd/lifecycle/main/ [peer not authenticated]
    Agent is Not Running
    Any help?

  • DBMS_SCHEDULER job is not running on a 2-node RAC when the 1st node fails

    Hi,
    I want to create a DBMS_SCHEDULER job in a 2-node RAC; the job should always run on node1, and if node1 is down then it should run on node2. This is Oracle 10gR2 (10.2.0.3 on Windows). In order to do that, I did the following:
    -- First Step
    Using DBCA - Service Management - I created a service (BATCH_SERVICE) with node1 as preferred and node2 as available. This created the following entry in tnsnames.ora on both nodes.
    BATCH_SERVICE =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = BATCH_SERVICE)
          (FAILOVER_MODE =
            (TYPE = SELECT)
            (METHOD = BASIC)
            (RETRIES = 180)
            (DELAY = 5)
          )
        )
      )
    --- Step 2
    -- Created BATCH job classes.
    BEGIN
    DBMS_SCHEDULER.create_job_class(
    job_class_name => 'BATCH_JOB_CLASS',
    service => 'BATCH_SERVICE');
    END;
    -- Step 3 -- created a job using job_class as BATCH_JOB_CLASS
    begin
    dbms_scheduler.create_job(
    job_name => 'oltp_job_test'
    ,job_type => 'STORED_PROCEDURE'
    ,job_action => 'schema1.P1'
    ,start_date => systimestamp at time zone 'US/Central'
    ,repeat_interval => 'FREQ=DAILY;BYHOUR=11;BYMINUTE=30;'
    ,job_class => 'BATCH_JOB_CLASS'
    ,enabled => TRUE
    ,comments => 'New Job.');
    end;
    Now when I monitor this job it runs on node1. Then I started testing failover: I manually shut down the 1st instance. As per my understanding, the job should then run on the 2nd node, but the job is not being picked up.
    When I run the following command
    srvctl status service -d db -s BATCH_SERVICE
    service BATCH_SERVICE is running on instance node2.
    Any help is really appreciated.

    That does not show whether the job is running or broken.
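    To check that, something along these lines can be run on the surviving instance (job name taken from the post; both views exist in 10.2):
    SELECT job_name, state, enabled
      FROM dba_scheduler_jobs
     WHERE job_name = 'OLTP_JOB_TEST';
    SELECT job_name, status, actual_start_date, additional_info
      FROM dba_scheduler_job_run_details
     WHERE job_name = 'OLTP_JOB_TEST'
     ORDER BY actual_start_date DESC;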

  • One node RAC pause/hang/block on other node shutdown

    Hi,
    We have a Java application running on Linux servers connecting to a 10.2.0.1 RAC cluster, also Linux. When the application starts it opens up a pool of connections to the database, and these are used throughout the lifetime of the application. One server connects to one RAC node.
    AppA - DBA
    AppB - DBB
    When we shut down one node, the application connecting to that node stops, which is what we would expect in this configuration.
    What is strange is that the other application blocks for 63 seconds and then continues. So it is like the database is blocking, or the database connections are blocking.
    We are not using TAF, FAN, FCN, LB, VIPs or any special features, just simple lightweight JDBC from one server to one database. In fact I do not think we are unwittingly using any of these features; we have them switched off.
    john

    user1788323 wrote:
    What is strange is that the other application blocks for 63 seconds and then continues. So it is like the database is blocking, or the database connections are blocking.
    How have you determined/diagnosed the 63s blocking? (More details in this regard may shed some light on the problem.)
    Assuming that the block is server side, then two basic reasons come to mind.
    Networking issue - the CRS on the surviving node has to perform certain functions, like switching the VIP of the node that left the cluster to a surviving cluster node. The listener may need to re-register services. A local firewall may need to be dynamically reconfigured for supporting the new failed-over VIP. Etc.
    Thus these could result in some kind of delay or issue in the network layer that you are seeing from the client side.
    Infrastructure issue. If the actual client request via JDBC reaches the server process, and it is slow in responding, then that is not a network issue - instead some underlying service or s/w layer that the server process needs to use to perform the client request is busy for those 63s.
    This could be related to the Interconnect, the shared I/O storage layer or something along those lines. For example, how does the Interconnect and/or SAN switch react when a server node is powered down or rebooted?
    There's not really sufficient information to make anything but guesses. You will need to isolate the problem with further testing.
    I have seen similar problems with 10.1.0.3 CRS and RAC when a node is evicted from the cluster. In this case the "hung" period was in excess of 15 minutes and only affected new connections (the listener was unable to hand off to dedicated servers or dispatchers). Existing connections worked fine and were unaware of any problems. But part of the issue in this case was a poor (outdated) driver layer - and it was also the last time we used proprietary binary drivers (kernel modules) from 3rd party vendors, which result in a tainted (and very fixed and rigid) Linux kernel. Today we're sticking with an OpenSource driver layer only for Linux.

  • All connections are connecting to 2nd node only in a 2 Node RAC Cluster

    Hello,
    I have a 10.2.0.3 database on a two-node RAC cluster with only one service configured. This service is set to be preferred on both nodes.
    However, all the connections are landing on Node2 only. Any idea where to look?
    $> srvctl config service -d PSDB
    psdbsrv1 PREF: psdb1 psdb2 AVAIL:
    Thanks,
    MM

    The application is using the following connection string:
    jdbc:oracle:thin:@(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = PQ2-PS-db-01-vip)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = PQ2-PS-db-02-vip)(PORT = 1521)) (LOAD_BALANCE = yes) (CONNECT_DATA =(SERVER = DEDICATED)(SERVICE_NAME = PSDBSRV1)(FAILOVER_MODE =(TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))
    --MM
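    A couple of things worth checking for this symptom: with only client-side LOAD_BALANCE the choice of address is random per connection, and an even spread across instances also depends on server-side load balancing (remote_listener) and on the service really being up on both instances. A quick check (service name taken from the post):
    -- on each instance
    SHOW PARAMETER local_listener
    SHOW PARAMETER remote_listener    -- should reference the listener(s) on the other node for server-side balancing
    -- is the service actually up on both instances?
    SELECT inst_id, name FROM gv$active_services WHERE lower(name) = 'psdbsrv1';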

  • Multiple servers in a two-node RAC

    Hello friends, I am a newbie to RAC. I have 7 servers in my company: 4 are for Oracle databases used for different purposes, with the same instance name on each of them, and 3 are application servers used for different purposes, also with the same instance name. I want to use a two-node RAC for all of them. Now my questions are:
    1) Can I use all 7 servers (4 database and 3 application) together in a two-node RAC?
    2) In RAC, can I have different instances with the same name?
    Thanks in advance.

    saugat chatterjee wrote:
    Hello friends, I am a newbie to RAC. I have 7 servers in my company: 4 are for Oracle databases used for different purposes, with the same instance name on each of them, and 3 are application servers used for different purposes, also with the same instance name. I want to use a two-node RAC for all of them. Now my questions are:
    RAC is a single database running across multiple servers (a DB instance on each). That is how Oracle scales.
    Oracle does not scale the converse way - running multiple databases on a single server.
    1) Can I use all 7 servers (4 database and 3 application) together in a two-node RAC?
    That depends entirely on whether the 2 servers have the capacity to deal with the processing load currently handled by the database servers.
    Also note that in Oracle the terms "database" and "schema" differ from how they are used by some other RDBMS products.
    An Oracle database is a physical entity. An Oracle schema is (amongst other things) a logical database. So a single physical Oracle database can contain 1000's of logical databases - and these logical databases can be physically separated from one another (via tablespaces), can have different rights, can have different resource profiles, etc.
    Each RAC node is part of a shared everything cluster - meaning that the entire physical database is available on each and every RAC node.
    On RAC, you can do what is called application partitioning. You can for example configure HR clients to make use of the HR logical database (schema) on RAC node 1. RAC node 3 is for example used for the Data Mart application and node 2 handles the customer billing clients.
    In other words, you partition your applications across RAC nodes.
    2) In RAC, can I have different instances with the same name?
    The instance name is a physical name and should very seldom be explicitly used. Your physical RAC database's SID may be PROD. Your 4 RAC instances, one on each server, will have SIDs PROD1 to PROD4.
    But you should instead create database services and have clients connect (via the RAC's SCAN/Single Client Access Name) to a specific service.
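    As a hedged illustration of that, a service that prefers one instance and fails over to another can be created like this (the database, instance and service names here are made up):
    srvctl add service -d PROD -s HR_SVC -r PROD1 -a PROD2
    srvctl start service -d PROD -s HR_SVC
    srvctl status service -d PROD -s HR_SVC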
    Bottom line to keep in mind - RAC is Oracle's answer to scalability. And RAC is multiple servers and a single database. Doing the converse and running multiple databases on a single server (RAC or no RAC) very seldom makes any technical sense. It almost always will reduce both performance and scalability.
