Adding nodes to a cluster in 10g R2 (10.2.0.3)

I apologize if this is a repeat, but my browser crashed before I could see whether my post went through. I am asking a hypothetical question about adding nodes to the cluster: I am trying to get a feel for how much risk is involved in the operation and whether there is any chance we could corrupt the current configuration.
I was reading the article from Murali Vallath and noticed that he made it a point to say that you should make a full cold backup before you perform step 6...
Step 6: Add New Instance(s)
DBCA has all the required options to add additional instances to the cluster.
Requirements:
Make a full cold backup of the database before commencing the upgrade process.
Is there risk of corrupting the database during this step?
We are running 10.2.0.3 on Itanium Linux (RHEL 4) in a two-node cluster. We are using OCFS2 for the OCR and voting devices, ASM with ASMLib for shared storage, and EMC PowerPath on the hosts.
Any tips or heads up would be greatly appreciated.
Thanks.
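For context, a minimal sketch of the kind of full cold backup the article calls for before step 6, assuming an RMAN disk backup and a database named ORCL (the database name and backup path are placeholders, not from this thread):

# Stop all RAC instances cleanly, then mount one instance for a consistent (cold) backup
srvctl stop database -d ORCL -o immediate
rman target / <<'EOF'
STARTUP MOUNT;
BACKUP DATABASE FORMAT '/backup/orcl_cold_%U';
BACKUP CURRENT CONTROLFILE FORMAT '/backup/orcl_ctl_%U';
SHUTDOWN IMMEDIATE;
EOF
# Bring the database back up across the cluster before running DBCA to add the new instance
srvctl start database -d ORCL

The add-instance step does not rewrite existing datafiles, but it does touch the spfile, the OCR registrations and the redo/undo structures for the new thread, which is why having that backup (and a recent OCR backup) on hand is the usual safety net.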

Duplicate post :- adding nodes to cluster in 10g r2 10.1.0.3

Similar Messages

  • Can't start EM after adding node to cluster

Hi, I don't know if I chose the right forum. I have a problem with OEM after adding a node to the cluster. It gives me an error:
    [oracle@nodedc3 bin]$ emctl start dbconsole
Can't do setuid (cannot exec sperl)
I think it can be a permissions issue like it was in my second thread. Thanks

Hi, I have this package installed:
    [root@nodedc3 ~]# yum install perl-suidperl
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    * base: centos.mirror.linuxwerk.com
    * extras: ftp.plusline.de
    * updates: mirror.optimate-server.de
    Setting up Install Process
    Package 4:perl-suidperl-5.8.8-40.el5_9.x86_64 already installed and latest version
Nothing to do
OEM on the other nodes works perfectly. Thanks
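One quick, hedged check for that error is whether the setuid perl helper that emctl relies on exists and still carries its setuid bit, comparing against a node where EM works (the exact sperl version string depends on your perl build):

# The helper is normally /usr/bin/sperl<version> / /usr/bin/suidperl, owned by root with the s-bit set
ls -l /usr/bin/sperl* /usr/bin/suidperl 2>/dev/null
# If the s-bit is missing (e.g. after a prelink or security-hardening pass), restore it;
# the version below is an example - use the one ls actually shows on the working node
chmod 4711 /usr/bin/sperl5.8.8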

  • Adding node back into cluster after removal...

    Hi,
I removed a cluster node using "scconf -r -h <node>" (I carried out all the other usual removal steps before getting this command to work).
Because this is a pair+1 cluster and the node I was trying to remove was physically attached to the quorum device (SCSI), I had to create a dummy node before the removal command above would work.
I reinstalled Solaris, the SC3.1u4 framework, patches etc., and then tried to run scinstall again on the node (I first reintroduced the node to the cluster using scconf -a -T node=<node>).
However, during the scinstall I got the following problem:
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    scrconf: RPC: Unknown host
    scinstall:  Failed communications with "bogusnode"
    scinstall: scinstall did NOT complete successfully!
    Press Enter to continue:
I was not sure what to do at this point, but since the other cluster nodes could now see my 'new' node again, I removed the dummy node, rebooted the new node and said a little prayer...
    Now, my node will not boot as part of the cluster:
    Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: CMM: Node bogusnode (nodeid = 4) with votecount = 0 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205346037.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #18 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    NOTICE: CMM: Node (nodeid = 4) with votecount = 0 removed.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #19 completed.
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    cladm: CLCLUSTER_ENABLE: No such device
    UNRECOVERABLE ERROR: Sun Cluster boot: Could not initialize cluster framework
    Please reboot in non cluster mode(boot -x) and Repair
    syncing file systems... done
    WARNING: CMM: Node being shut down.
    Program terminated
    {1} ok
Any ideas how I can recover this situation without having to reinstall the node again?
(I have a flash archive with the OS, SC3.1u4 framework etc., so it's not the end of the world, but...)
    Thanks a mil if you can help here!
    - headwrecked

Hi - got sorted with this problem...
Basically I just removed (scinstall -r) the SC3.1u4 software from the node which was not booting, and then re-installed the software (this time the dummy node had been removed, so it did not try to contact that node and the scinstall completed without any errors).
I think the only problem with the procedure I used to remove and re-add the node was that I forgot to remove the dummy node before re-adding the actual cluster node again...
If anyone can confirm this to be the case then great - if not... well, it's working now so this thread can be closed.
    root@n20-1-sup # /usr/cluster/bin/scinstall -r
    Verifying that no unexpected global mounts remain in /etc/vfstab ... done
    Verifying that no device services still reference this node ... done
    Archiving the following to /var/cluster/uninstall/uninstall.1036/archive:
    /etc/cluster ...
    /etc/path_to_inst ...
    /etc/vfstab ...
    /etc/nsswitch.conf ...
    Updating vfstab ... done
    The /etc/vfstab file was updated successfully.
    The original entry for /global/.devices/node@1 has been commented out.
    And, a new entry has been added for /globaldevices.
    Mounting /dev/dsk/c3t0d0s6 on /globaldevices ... done
    Attempting to contact the cluster ...
    Trying "n20-2-sup" ... okay
    Trying "n20-3-sup" ... okay
    Attempting to unconfigure n20-1-sup from the cluster ... failed
    Please consider the following warnings:
    scrconf: Failed to remove node (n20-1-sup).
    scrconf: All two-node clusters must have at least one shared quorum device.
    Additional housekeeping may be required to unconfigure
    n20-1-sup from the active cluster.
    Removing the "cluster" switch from "hosts" in /etc/nsswitch.conf ... done
    Removing the "cluster" switch from "netmasks" in /etc/nsswitch.conf ... done
    ** Removing Sun Cluster framework packages **
    Removing SUNWkscspmu.done
    Removing SUNWkscspm..done
    Removing SUNWksc.....done
    Removing SUNWjscspmu.done
    Removing SUNWjscspm..done
    Removing SUNWjscman..done
    Removing SUNWjsc.....done
    Removing SUNWhscspmu.done
    Removing SUNWhscspm..done
    Removing SUNWhsc.....done
    Removing SUNWfscspmu.done
    Removing SUNWfscspm..done
    Removing SUNWfsc.....done
    Removing SUNWescspmu.done
    Removing SUNWescspm..done
    Removing SUNWesc.....done
    Removing SUNWdscspmu.done
    Removing SUNWdscspm..done
    Removing SUNWdsc.....done
    Removing SUNWcscspmu.done
    Removing SUNWcscspm..done
    Removing SUNWcsc.....done
    Removing SUNWscrsm...done
    Removing SUNWscspmr..done
    Removing SUNWscspmu..done
    Removing SUNWscspm...done
    Removing SUNWscva....done
    Removing SUNWscmasau.done
    Removing SUNWscmasar.done
    Removing SUNWmdmu....done
    Removing SUNWmdmr....done
    Removing SUNWscvm....done
    Removing SUNWscsam...done
    Removing SUNWscsal...done
    Removing SUNWscman...done
    Removing SUNWscgds...done
    Removing SUNWscdev...done
    Removing SUNWscnmu...done
    Removing SUNWscnmr...done
    Removing SUNWscscku..done
    Removing SUNWscsckr..done
    Removing SUNWscu.....done
    Removing SUNWscr.....done
    Removing the following:
    /etc/cluster ...
    /dev/did ...
    /devices/pseudo/did@0:* ...
    The /etc/inet/ntp.conf file has not been updated.
    You may want to remove it or update it after uninstall has completed.
    The /var/cluster directory has not been removed.
    Among other things, this directory contains
    uninstall logs and the uninstall archive.
    You may remove this directory once you are satisfied
    that the logs and archive are no longer needed.
    Log file - /var/cluster/uninstall/uninstall.1036/log
    root@n20-1-sup #
    Ran the scinstall again:
    >>> Confirmation <<<
    Your responses indicate the following options to scinstall:
    scinstall -ik \
    -C N20_Cluster \
    -N n20-2-sup \
    -M patchdir=/var/cluster/patches \
    -A trtype=dlpi,name=qfe1 -A trtype=dlpi,name=qfe5 \
    -m endpoint=:qfe1,endpoint=switch1 \
    -m endpoint=:qfe5,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?
    Checking device to use for global devices file system ... done
    Installing patches ... failed
    scinstall: Problems detected during extraction or installation of patches.
    Adding node "n20-1-sup" to the cluster configuration ... skipped
    Skipped node "n20-1-sup" - already configured
    Adding adapter "qfe1" to the cluster configuration ... skipped
    Skipped adapter "qfe1" - already configured
    Adding adapter "qfe5" to the cluster configuration ... skipped
    Skipped adapter "qfe5" - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Copying the config from "n20-2-sup" ... done
    Copying the postconfig file from "n20-2-sup" if it exists ... done
    Copying the Common Agent Container keys from "n20-2-sup" ... done
    Setting the node ID for "n20-1-sup" ... done (id=1)
    Verifying the major number for the "did" driver with "n20-2-sup" ... done
    Checking for global devices global file system ... done
    Updating vfstab ... done
    Verifying that NTP is configured ... done
    Initializing NTP configuration ... done
    Updating nsswitch.conf ...
    done
    Adding clusternode entries to /etc/inet/hosts ... done
    Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
    IP Multipathing already configured in "/etc/hostname.qfe2".
    Verifying that power management is NOT configured ... done
    Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
    Ensure network routing is disabled ... done
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    Log file - /var/cluster/logs/install/scinstall.log.938
    Rebooting ...
    Mar 13 13:59:13 n20-1-sup reboot: rebooted by root
    Terminated
    root@n20-1-sup # syncing file systems... done
    rebooting...
    R
    LOM event: +103d+20h44m26s host reset
    screen not found.
    keyboard not found.
    Keyboard not present. Using lom-console for input and output.
    Sun Netra T4 (2 X UltraSPARC-III+) , No Keyboard
    Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.10.1, 4096 MB memory installed, Serial #52960491.
    Ethernet address 0:3:ba:28:1c:eb, Host ID: 83281ceb.
    Initializing 15MB Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    Configuring devices.
    devfsadm: minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    Loading smf(5) service descriptions: 24/24
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205416931.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #23 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    NOTICE: CMM: Votecount changed from 0 to 1 for node n20-1-sup.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #24 completed.
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe1
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe5
    did subpath /dev/rdsk/c1t3d0s2 created for instance 2.
    did subpath /dev/rdsk/c2t3d0s2 created for instance 12.
    did subpath /dev/rdsk/c1t3d1s2 created for instance 3.
    did subpath /dev/rdsk/c1t3d2s2 created for instance 6.
    did subpath /dev/rdsk/c1t3d3s2 created for instance 7.
    did subpath /dev/rdsk/c1t3d4s2 created for instance 8.
    did subpath /dev/rdsk/c1t3d5s2 created for instance 9.
    did subpath /dev/rdsk/c1t3d6s2 created for instance 10.
    did subpath /dev/rdsk/c1t3d7s2 created for instance 11.
    did subpath /dev/rdsk/c2t3d1s2 created for instance 13.
    did subpath /dev/rdsk/c2t3d2s2 created for instance 14.
    did subpath /dev/rdsk/c2t3d3s2 created for instance 15.
    did subpath /dev/rdsk/c2t3d4s2 created for instance 16.
    did subpath /dev/rdsk/c2t3d5s2 created for instance 17.
    did subpath /dev/rdsk/c2t3d6s2 created for instance 18.
    did subpath /dev/rdsk/c2t3d7s2 created for instance 19.
    did instance 20 created.
    did subpath n20-1-sup:/dev/rdsk/c0t6d0 created for instance 20.
    did instance 21 created.
    did subpath n20-1-sup:/dev/rdsk/c3t0d0 created for instance 21.
    did instance 22 created.
    did subpath n20-1-sup:/dev/rdsk/c3t1d0 created for instance 22.
    Configuring DID devices
    t_optmgmt: System error: Cannot assign requested address
    obtaining access to all attached disks
    n20-1-sup console login:
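For reference, a minimal sketch of the ordering that appears to have mattered here: drop the placeholder node from the cluster configuration on a surviving member before re-running scinstall on the node being re-added (names taken from this thread; adapt to your cluster):

# On a surviving member (n20-2-sup or n20-3-sup), remove the dummy node first
scconf -r -h bogusnode
# Confirm it is gone from the membership list
scstat -n
# Only then run the interactive re-add on the node being brought back (n20-1-sup)
scinstall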

  • Root.sh failed on second node while installing CRS 10g on centos 5.5

    root.sh failed on second node while installing CRS 10g
    Hi all,
I am able to install Oracle 10g RAC Clusterware on the first node of the cluster. However, when I run the root.sh script as the root
user on the second node of the cluster, it fails with the following error message:
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster
    configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    Failure at final check of Oracle CRS stack.
    10
Running cluvfy stage -post hwos -n all -verbose shows:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking shared storage accessibility...
    Disk Sharing Nodes (2 in count)
    /dev/sda db2 db1
Running cluvfy stage -pre crsinst -n all -verbose shows:
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.
    Checking system requirements for 'crs'...
    No checks registered for this product.
Running cluvfy stage -post crsinst -n all -verbose shows:
    Result: Node reachability check passed from node "DB2".
    Result: User equivalence check passed for user "oracle".
    Node Name CRS daemon CSS daemon EVM daemon
    db2 no no no
    db1 yes yes yes
    Check: Health of CRS
    Node Name CRS OK?
    db1 unknown
    Result: CRS health check failed.
Checking crsd.log shows:
    clsc_connect: (0x143ca610) no listener at (ADDRESS=(PROTOCOL=ipc)(KEY=OCSSD_LL_db2_crs))
    clsssInitNative: connect failed, rc 9
    Any help would be greatly appreciated.
Edited by: 868121 on 2011-6-24, 12:31 AM

    Hello, it took a little searching, but I found this in a note in the GRID installation guide for Linux/UNIX:
    Public IP addresses and virtual IP addresses must be in the same subnet.
    In your case, you are using two different subnets for the VIPs.
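As a concrete illustration of that note, each node's public address and its VIP should sit in the same subnet, for example in /etc/hosts (all addresses below are made-up examples, not taken from this thread):

# public IPs and VIPs share 192.168.1.0/24; the private interconnect is a separate network
192.168.1.101   db1          # public
192.168.1.103   db1-vip      # VIP - same subnet as the public address
192.168.1.102   db2          # public
192.168.1.104   db2-vip      # VIP - same subnet as the public address
10.0.0.1        db1-priv     # private interconnect
10.0.0.2        db2-priv     # private interconnect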

  • I need to add a new node to RAC Oracle 10g R2

I need to add a new node to a RAC Oracle 10g R2 cluster.
What is the best approach: cloning or step by step?
OS: Solaris 64-bit
    Message was edited by:
    ACS

    Hi All,
I get the following error even though I have followed the instructions for Solaris R2. The instructions are enclosed. Please advise! Thanks.
    /app/cluvfy/runcluvfy.sh stage -post hwos -n nod1 -verbose
    Result: User equivalence check failed for user "oracle".
    ERROR:
    User equivalence unavailable on all the nodes.
    Verification cannot proceed.
    Post-check for hardware and operating system setup was unsuccessful on all the nodes.
    =========================
    1. Log in as the oracle user.
2. If necessary, create the .ssh directory in the oracle user's home directory and
set the correct permissions on it:
    $ mkdir ~/.ssh
    $ chmod 700 ~/.ssh
    3. Enter the following commands to generate an RSA key for version 2 of the SSH
    protocol:
    $ /usr/bin/ssh-keygen -t rsa
    At the prompts:
- Accept the default location for the key file.
- Enter and confirm a pass phrase that is different from the oracle user's password.
This command writes the public key to the ~/.ssh/id_rsa.pub file and the
private key to the ~/.ssh/id_rsa file. Never distribute the private key to anyone.
    4. Enter the following commands to generate a DSA key for version 2 of the SSH
    protocol:
    $ /usr/bin/ssh-keygen -t dsa
    At the prompts:
- Accept the default location for the key file.
- Enter and confirm a pass phrase that is different from the oracle user's password.
    This command writes the public key to the ~/.ssh/id_dsa.pub file and the
    private key to the ~/.ssh/id_dsa file. Never distribute the private key to
    anyone.
    Add keys to an authorized key file: Complete the following steps:
    1. On the local node, determine if you have an authorized key file
    (~/.ssh/authorized_keys). If the authorized key file already exists, then
    proceed to step 2. Otherwise, enter the following commands:
    $ touch ~/.ssh/authorized_keys
    $ cd ~/.ssh
    $ ls
You should see the id_dsa.pub and id_rsa.pub keys that you have created.
2. Using SSH, copy the contents of the ~/.ssh/id_rsa.pub and
~/.ssh/id_dsa.pub files to the file ~/.ssh/authorized_keys, and provide
    the Oracle user password as prompted. This process is illustrated in the following
    syntax example with a two-node cluster, with nodes node1 and node2, where the
    Oracle user path is /home/oracle:
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node1's password:
[oracle@node1 .ssh]$ ssh node1 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
[oracle@node1 .ssh]$ ssh node2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
oracle@node2's password:
[oracle@node1 .ssh]$ ssh node2 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
oracle@node2's password:
Note: Repeat this process for each node in the cluster.
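Once the keys are in place, a quick sanity check from the first node is that each remote command runs without a password prompt, and then to re-run the cluvfy stage from the top of this thread (a sketch; node names are placeholders and the runcluvfy.sh path is the one quoted above):

# Each of these should print the date with no password prompt (accept host keys once if asked)
for host in node1 node2; do
  ssh $host date
done
# Then re-verify user equivalence and the hardware/OS setup
/app/cluvfy/runcluvfy.sh stage -post hwos -n node1,node2 -verbose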

  • 2 Node Failover Cluster - ISCSI Disks as 1 volume?

    Hi,
Not sure if I'm in the correct forum; if not, I apologize. I need some advice.
I have created a 2-node failover cluster with 2 HP blades. I also currently have 2 NAS servers (HP X1600 24TB servers running 2008 Storage Server). The ultimate goal is to combine all of the storage space from the NASs into one volume addressable by the failover cluster (as well as disk space from any additional NASs added in the future).
Right now, I can add the iSCSI disk space from the NAS targets as different volumes under Cluster Shared Volumes. Because of the 16TB limit in the iSCSI target, I essentially have two iSCSI disks on each NAS: one of 16TB and the other of 4TB (the NAS drives are configured for RAID 5, so there's a 4TB loss). So, I have four iSCSI disks in the cluster, each as its own volume.
    Any thoughts on making the 4 drives addressable as one volume? 
    Regards,
    -Eric

    We're running Server 2012 Data Center on the cluster nodes.
I was thinking the same about the 3rd-party software to do what I'd like it to do. The data is mostly security camera video from our security system. Since it's not really critical data, I'm just looking for a way to maximize
the available hard drive space and make it addressable as one volume or network share...
    -Eric
    You can build Storage Spaces (simple, not clustered as it would waste 50% of your capacity, MSFT can do mirror and parity with R2 for clustered only) from iSCSI LUs. Dog slow and unsupported but you'll have linear spanned space. See:
    Rough Guide To Setting Up A Scale-Out File Server
    http://www.aidanfinn.com/?p=13176
    Creating Virtual SoFS with shared VHDX
    http://www.aidanfinn.com/?p=15145
You don't need SoFS (obviously), but in this article Aidan creates Storage Spaces from iSCSI LUNs.
    Good luck!
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Not able to start DB after adding node

    Hi all,
I have successfully added node RAC3 to an 11gR2 RAC on the RHEL5 platform.
While starting the DB I am getting the error below:
    [root@rac1 bin]# ./srvctl status database -d dev
    Instance dev1 is not running on node rac1
    Instance dev2 is not running on node rac2
    Instance dev3 is not running on node rac3
    [root@rac1 bin]# ./srvctl start database -d dev
    PRCR-1079 : Failed to start resource ora.dev.db
    ORA-29760: instance_number parameter not specified
    CRS-2674: Start of 'ora.dev.db' on 'rac3' failed
    CRS-2632: There are no more servers to try to place resource 'ora.dev.db' on that would satisfy its placement policy
    [root@rac1 bin]# ./srvctl status database -d dev
    Instance dev1 is running on node rac1
    Instance dev2 is running on node rac2
    Instance dev3 is not running on node rac3
    [root@rac1 bin]#
    [root@rac1 bin]# ./srvctl start database -d dev
    PRCR-1079 : Failed to start resource ora.dev.db
    ORA-29760: instance_number parameter not specified
    CRS-2674: Start of 'ora.dev.db' on 'rac3' failed
CRS-2632: There are no more servers to try to place resource 'ora.dev.db' on that would satisfy its placement policy
Please suggest...

I am getting the below:
    [grid@rac3 ~]$ srvctl start database -d dev
    PRCR-1079 : Failed to start resource ora.dev.db
    ORA-01078: failure in processing system parameters
    CRS-2674: Start of 'ora.dev.db' on 'rac2' failed
    ORA-01078: failure in processing system parameters
    CRS-2674: Start of 'ora.dev.db' on 'rac3' failed
    ORA-01078: failure in processing system parameters
    CRS-2674: Start of 'ora.dev.db' on 'rac1' failed
    CRS-2632: There are no more servers to try to place resource 'ora.dev.db' on that would satisfy its placement policy
    [grid@rac3 ~]$ srvctl status database -d dev
    Instance dev1 is not running on node rac1
    Instance dev2 is not running on node rac2
    Instance dev3 is not running on node rac3
[grid@rac3 ~]$
Parameter file:
    [oracle@rac3 dbs]$ cat init_dev2.ora
    dev3.__db_cache_size=272629760
    dev2.__db_cache_size=272629760
    dev1.__db_cache_size=281018368
    dev3.__java_pool_size=4194304
    dev2.__java_pool_size=4194304
    dev1.__java_pool_size=4194304
    dev3.__large_pool_size=4194304
    dev2.__large_pool_size=4194304
    dev1.__large_pool_size=4194304
    dev1.__oracle_base='/raczone/11.2.0'#ORACLE_BASE set from environment
    dev2.__oracle_base='/raczone/11.2.0'#ORACLE_BASE set from environment
    dev3.__oracle_base='/raczone/11.2.0'#ORACLE_BASE set from environment
    dev3.__pga_aggregate_target=343932928
    dev2.__pga_aggregate_target=343932928
    dev1.__pga_aggregate_target=343932928
    dev3.__sga_target=507510784
    dev2.__sga_target=507510784
    dev1.__sga_target=507510784
    dev3.__shared_io_pool_size=0
    dev2.__shared_io_pool_size=0
    dev1.__shared_io_pool_size=0
    dev3.__shared_pool_size=218103808
    dev2.__shared_pool_size=218103808
    dev1.__shared_pool_size=209715200
    dev3.__streams_pool_size=0
    dev2.__streams_pool_size=0
    dev1.__streams_pool_size=0
    *.audit_file_dest='/raczone/11.2.0/admin/dev/adump'
    *.audit_trail='db'
    *.cluster_database=true
    *.compatible='11.2.0.0.0'
    *.control_files='+DATA1/dev/controlfile/current.260.786380999','+FRA/dev/controlfile/current.256.786380999'
    *.db_block_size=8192
    *.db_create_file_dest='+DATA1'
    *.db_domain=''
    *.db_name='dev'
    *.db_recovery_file_dest='+FRA'
    *.db_recovery_file_dest_size=4039114752
    *.diagnostic_dest='/raczone/11.2.0'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=devXDB)'
    dev1.instance_number=1
    dev2.instance_number=2
    dev3.instance_number=3
    *.log_archive_format='%t_%s_%r.dbf'
    *.memory_target=848297984
    *.open_cursors=300
    *.processes=150
    *.remote_listener='scan-cluster.raczone.com:1521'
    *.remote_login_passwordfile='exclusive'
    dev3.thread=3
    dev1.thread=1
    dev2.thread=1
    dev3.undo_tablespace='UNDOTBS3'
    dev2.undo_tablespace='UNDOTBS2'
    dev1.undo_tablespace='UNDOTBS1'
    [oracle@rac3 dbs]$
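One hedged reading of those errors is that the new rac3 node is not fully wired up: ORA-29760/ORA-01078 usually mean the instance being started cannot resolve its instance_number, i.e. its local init file or Clusterware registration is missing or wrong. A sketch of what that registration typically looks like (names taken from this thread; the spfile path is a guess, check your own):

# Tell Clusterware where instance dev3 runs (skip if srvctl config database -d dev already lists it)
srvctl add instance -d dev -i dev3 -n rac3
# On rac3, $ORACLE_HOME/dbs/initdev3.ora should contain only a pointer to the shared spfile, e.g.
#   SPFILE='+DATA1/dev/spfiledev.ora'    # hypothetical path - verify in ASM
# Then start just that instance and re-check
srvctl start instance -d dev -i dev3
srvctl status database -d dev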

  • How many maximum nodes possible in RAC 10g

    Hi,
    This is for my knowledge only.
Any idea how many RAC nodes are possible in 10g at maximum?
    Thanks and regards,
    Chitrasen

    Hi Chitrasen :
    From MetaLink:
    "How many nodes are supported in a RAC Database?:
    With 10g Release 2, we support 100 nodes in a cluster using Oracle Clusterware, and 100 instances in a RAC database. Currently DBCA has a bug where it will not go beyond 63 instances. There is also a documentation bug for the max-instances parameter. With 10g Release 1 the Maximum is 63. In 9i it is platform specific due to the different clusterware support by vendors. See the platform specific FAQ for 9i."
So, for 10g R2, it's 100 nodes (with DBCA currently stopping at 63 instances)... lots more than most sites will ever use, I suspect!
    HTH.

  • SOA with Oracle 2 node RAC cluster

    Hi All,
Just a simple doubt. I have successfully installed and configured SOA Suite 11.1.1.3 and BAM in one WLS 10.3.3 domain on a Linux box, and I can access all the applications like the BAM console, BPEL console, etc. I can also see all my data sources deployed under Data Sources, pointing at a single-node database.
1. Now I have to re-configure this whole SOA Suite with RAC (a 2-node database cluster). What changes or configuration are needed to implement SOA Suite with a RAC database?
2. Do I need to create a "Multi Data Source" to configure RAC with SOA Suite?
    Thanks
    Sam

    DB wrote:
    This is regarding Oracle RAC..so if there is a specific category..please let me know..
    I have installed OEL linux 5.6 as guest OS (using virtualbox) in two laptops.
    I want to install 2 node oracle 10gR2 RAC with the OEL linux as OS and each laptop as one node.
    Read docs and understood that there must be shared storage for oracle clusterware and oracle ASM for oracle RAC to work.
    Please let me know the steps to create shared storage for oracle clusterware and oracle ASM (considering virtualbox OEL) and to configure public,private and virtual IPs.
I already have a document for creating a 2-node Oracle RAC using VirtualBox with both nodes on the same laptop, so please don't suggest that doc.
Thanks,
DB
Maybe my step-by-step RAC installation guide can help you somehow?
    http://kamranagayev.wordpress.com/2011/04/05/step-by-step-installing-oracle-10g-rac-on-vmware/

  • Install mulutple RAC databases on 2-node RAC cluster

I am installing 5 RAC databases on a 2-node RAC cluster. I have set up SCAN using 3 IP addresses.
Do I have to use the SCAN listener for all databases?
When installing the 3rd database, I get an ORA-12537: TNS connection closed error.
    ENV: 11gR2 2-node RH5.x
    Thanks!

I have setup SCAN using 3 IP addresses. Do I have to use SCAN listener for all databases?
These three SCAN IPs will work for all databases running under this cluster setup.
You may also use the VIPs to make connections, as in 10g.
I get ORA-12537: TNS connection closed error.
This appears to be a connectivity/configuration issue; please see the MOS doc that contains details on this:
    How to Troubleshoot Connectivity Issue with 11gR2 SCAN Name [ID 975457.1]
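Before chasing the ORA-12537 itself, it is usually worth confirming what the SCAN and its listeners look like and that connections resolve through the SCAN name (a sketch; the service name and SCAN host are placeholders):

# Show the SCAN VIPs and SCAN listeners registered with the cluster
srvctl config scan
srvctl config scan_listener
srvctl status scan_listener
# Check registrations on one of the SCAN listeners (LISTENER_SCAN1 is the usual default name)
lsnrctl status LISTENER_SCAN1
# Test a connection through the SCAN using EZCONNECT
sqlplus system@//myscan.example.com:1521/MYSERVICE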

  • How to Delete the node from cluster when the machine crashed?

In a three-node 11gR2 RAC, how do you delete a node from the cluster when the machine has crashed?
There is no way to repair the machine, and we have to add a new one.
What steps should I follow?

    hi
If you want to delete the RAC1 node:
Check the current node list: $ ./olsnodes
    1) delete the instance using dbca from any active nodes
    crs_stat -t
    srvctl stop asm -n rac1
    2) delete listener
3) Delete the Oracle home from the node (as the oracle user):
$ORACLE_HOME/bin/runInstaller -updateNodeList ORACLE_HOME=<db_home> "CLUSTER_NODES={RAC1}"
4) Delete the ASM home:
$ORACLE_HOME/bin/runInstaller -updateNodeList ORACLE_HOME=<asm_home> "CLUSTER_NODES={RAC1}"
5) Update the cluster node list for the DB home:
$ORACLE_HOME/bin/runInstaller -updateNodeList ORACLE_HOME=<db_home> "CLUSTER_NODES={active nodes, e.g. rac2,rac3}"
6) Update the ASM home:
$ORACLE_HOME/bin/runInstaller -updateNodeList ORACLE_HOME=<asm_home> "CLUSTER_NODES={active nodes, e.g. rac2,rac3}"
    cd $ORA_CRS_HOME
    cd crs/opmn/conf
    check for
    $cat ons.config
    remoteport=6200
    cd crs_home/bin
    $./racgons remove_config rac1:6200
Then go to the CRS home and run:
$ORA_CRS_HOME/crs/install/rootdelete.sh
$ORA_CRS_HOME/crs/install/rootdeletenode.sh
Finally, check with ./olsnodes
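A short verification sketch once those steps are done, run from one of the surviving nodes (crs_stat is the 10g-era tool; on 11gR2 you would use crsctl stat res -t instead):

# rac1 should no longer appear in the node list or own any resources
olsnodes -n
crs_stat -t
crsctl check crs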

  • Multiple databases/instances on 4-node RAC Cluster including Physical Stand

    OS: Windows 2003 Server R2 X64
    DB: 10.2.0.4
    Virtualization: NONE
    Node Configuration: x64 architecture - 4-Socket Quad-Core (16 CPUs)
    Node Memory: 128GB RAM
    We are planning the following on the above-mentioned 4-node RAC cluster:
    Node 1: DB1 with instanceDB11 (Active-Active: Load-balancing & Failover)
    Node 2: DB1 with instanceDB12 (Active-Active: Load-balancing & Failover)
    Node 3: DB1 with instanceDB13 (Active-Passive: Failover only) + DB2 with instanceDB21 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB31 (Active-Active: Load-balancing & Failover) + DB4 with instance41 (Active-Active: Load-balancing & Failover)
    Node 4: DB1 with instanceDB14 (Active-Passive: Failover only) + DB2 with instanceDB22 (Active-Active: Load-balancing & Failover) + DB3 with instanceDB32 (Active-Active: Load-balancing & Failover) + DB4 with instance42 (Active-Active: Load-balancing & Failover)
    Note: DB1 will be the physical primary PROD OLTP database and will be open in READ-WRITE mode 24x7x365.
    Note: DB2 will be a Physical Standby of DB1 and will be open in Read-Only mode for reporting purposes during the day-time, except for 3 hours at night when it will apply the logs.
    Note: DB3 will be a Physical Standby of a remote database DB4 (not part of this cluster) and will be mounted in Managed Recovery mode for automatic failover/switchover purposes.
    Note: DB4 will be the physical primary Data Warehouse DB.
    Note: Going to 11g is NOT an option.
    Note: Data Guard broker will be used across the board.
    Please answer/advise of the following:
    1. Is the above configuration supported and why so? If not, what are the alternatives?
    2. Is the above configuration recommended and why so? If not, what are the recommended alternatives?

    Hi,
As far as I understand, there's nothing wrong with the configuration, but you need to consider the points below while implementing the final design.
1. Number of CPUs on each server
2. Memory on each server
3. If you have a RAC physical standby, the apply process (MRP0) will run on only one instance.
4. You are configuring the physical standby instances on the 3rd and 4th nodes of DB1's 4-node cluster, where DB13 and DB14 are used only for failover. If there is a disaster or power failure in the entire data center, you lose both primary and standby (assuming the primary and physical standby reside in the same data center), so it may not be a highly available architecture. If you are going to use extended RAC for this configuration, then it makes sense, with Node 1 and Node 2 in Datacenter A and Nodes 3 and 4 in Datacenter B.
    Thanks,
    Keyur
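On point 3, a hedged sketch of how managed recovery is typically started on exactly one standby instance (DB2's designated apply instance in this design), if you drive it manually rather than through the broker:

# On the single standby instance chosen to run MRP0
sqlplus / as sysdba <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
# Stop apply before opening that instance read-only for daytime reporting
sqlplus / as sysdba <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
EOF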

  • How to remove a node from 4 node sun cluster 3.1

    Dear All,
We have four nodes in a cluster.
Could anyone please guide me on how to remove a single node from a 4-node cluster?
What are the procedures and steps I have to follow?
    Thanks in advance.
    Veera.

    Google is pretty good at finding the right pages in our docs quickly. I tried >how to remove a node Solaris Cluster< and it came up with
    http://docs.sun.com/app/docs/doc/819-2971/gcfso?a=view
    Tim
    ---

  • How can I view the newly added node in a JTree

Hi! I have a problem with my project. I set up a JTree. At first it has a node that contains a FARInfo object. When I click this node, another program in the package pops up a form for the user to fill in and submit, and then it adds a new node under the other node. The new node contains a FilledInfo object. But I cannot view this newly added node. The source code related to the two different objects is as follows:
tree.addTreeSelectionListener(new TreeSelectionListener() {
    public void valueChanged(TreeSelectionEvent e4) {
        // getLastSelectedPathComponent() can return null when the selection is cleared
        DefaultMutableTreeNode node =
                (DefaultMutableTreeNode) tree.getLastSelectedPathComponent();
        if (node == null) {
            return;
        }
        Object nodeInfo = node.getUserObject();
        if (node.isLeaf()) {
            if (nodeInfo instanceof FARInfo) {
                FARInfo category = (FARInfo) nodeInfo;
                displayURL(category.categoryURL);
                displayForm(category.farFormName);
                if (DEBUG) {
                    System.out.print(category.categoryURL + ":\n");
                }
            } else if (nodeInfo instanceof FilledInfo) {
                FilledInfo category2 = (FilledInfo) nodeInfo;
                displayFilledForm(category2.num);
            }
        } else {
            return;
        }
    }
});
My question is: how do I deal with nodes containing two different objects, FARInfo and FilledInfo? FilledInfo is created by the other program in the package. Thanks for your help!

I used insertNodeInto() to insert a new node into the tree, and it is displayed. But when I click on the newly added node, the TreeSelectionListener does not respond to the click. Below is my original addTreeSelectionListener(). You see, if you click a FARInfo node, it opens a form, and after the user fills it in and submits it, a new node is added to the JTree. If you click a FilledInfo node, the user should see the content of that new node. But now it seems that the second click does not work. Thanks for your help.
tree.addTreeSelectionListener(new TreeSelectionListener() {
    public void valueChanged(TreeSelectionEvent e4) {
        // getLastSelectedPathComponent() can return null when the selection is cleared
        DefaultMutableTreeNode node =
                (DefaultMutableTreeNode) tree.getLastSelectedPathComponent();
        if (node == null) {
            return;
        }
        Object nodeInfo = node.getUserObject();
        if (node.isLeaf()) {
            if (nodeInfo instanceof FARInfo) {
                FARInfo category = (FARInfo) nodeInfo;
                displayURL(category.categoryURL);
                displayForm(category.farFormName);
                if (DEBUG) {
                    System.out.print(category.categoryURL + ":\n");
                }
            } else if (nodeInfo instanceof FilledInfo) {
                FilledInfo category2 = (FilledInfo) nodeInfo;
                displayFilledForm(category2.num);
            }
        } else {
            return;
        }
    }
});

  • Automatic restart of services on a 1 node rac cluster with Clusterware

How do we enable a service to automatically start up when the DB starts up?
    Thanks,
    Dave

srvctl enable service -d DB
Thanks for your reply, M. Nauman. I researched that command and found we do have it enabled, and that it only works if the database instance was previously taken down. Since the database does not go down on an archiver-hung error (we are using the FRA with an alternate location), this never kicks in and brings the service back up. What we are looking for is something that will trigger when the archive log destination errors and switches from the FRA (Flash Recovery Area) to our alternate disk location, or more precisely, when it goes back to a VALID status (on the FRA, after we've run an archive log backup to clear it).
I found out from our two senior DBAs that our other 2-node RAC environment does not suffer from this problem, only the newly created 1-node RAC cluster environment. The problem is we don't know what the difference is (a parameter on the DB or the cluster, or what) and how to set it.
    Anyone know?
    Thanks,
    Gib
    Message was edited by:
    Gib2008
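For what it's worth, the basic service plumbing discussed above looks like this (a sketch; the DB and service names are placeholders):

# Confirm the service exists and is enabled for automatic management by Clusterware
srvctl config service -d DB -s APPSVC
srvctl enable service -d DB -s APPSVC
# Bring it back by hand once the archiver problem clears, and check where it is running
srvctl start service -d DB -s APPSVC
srvctl status service -d DB -s APPSVC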
