ASM and node shutdown

Hi,
Due to maintenance of our IT infrastructure, we need to perform a clean shutdown of one node of our 2-node RAC.
The maintenance will last about one week.
What is the correct procedure to shut down the node, its ASM instance, and the disk groups on this node?
After startup, will the disk groups rebalance automatically?
thanks

You can refer to the link below as well:
Oracle DBA and RAC DBA Expert: How to STOP and START processes in Oracle RAC and Log Directory Structure
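The linked post covers the commands; as a rough sketch (names here are placeholders, so adjust database, instance, and node names to your environment; on 11.2 and later a single "crsctl stop crs" as root stops the whole stack), the per-node sequence is:

# as the oracle owner on the node being taken down
srvctl stop instance -d MYDB -i MYDB2    # stop the local database instance
srvctl stop asm -n rac2                  # stop this node's ASM instance (pre-11.2)
srvctl stop nodeapps -n rac2             # stop VIP, listener, ONS, GSD
# then as root
crsctl stop crs                          # stop the clusterware stack on this node

On restart the disk groups simply remount. A rebalance runs only when disks are added, dropped, or resized, so a clean shutdown and startup of one node does not trigger one.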
Regards,
http://www.oracleracexpert.com

Similar Messages

  • Node and domain shutdown after Windows session logout

    Hello
    I made this script to launch all services and start the iFS server at computer startup:
    @echo off
    REM +++++++ Infra_START +++++++++
    set ORACLE_SID=iasdb
    set ORACLE_HOME=C:\Infrastructure
    set PATH=%ORACLE_HOME%\bin;%PATH%
    net start OracleInfrastructureTNSListener
    net start OracleServiceIASDB
    oidmon start
    ping 127.0.0.1 -n 10 -w 1000 > nul
    oidctl connect=iasdb server=oidldapd instance=1 start
    ping 127.0.0.1 -n 10 -w 1000 > nul
    REM +++++++ IFS_START +++++++
    set ORACLE_SID=database
    set ORACLE_HOME=C:\Files
    set PATH=%ORACLE_HOME%\bin;%PATH%
    call %ORACLE_HOME%\dcm\bin\dcmctl start -ct ohs -v
    call %ORACLE_HOME%\dcm\bin\dcmctl start -co OC4J_iFS_files -v
    call %ORACLE_HOME%\bin\webcachectl start
    call %ORACLE_HOME%\ifs\files\bin\ifsctl start
    REM +++++++ Infra_START +++++++++
    set ORACLE_SID=iasdb
    set ORACLE_HOME=C:\Infrastructure
    set PATH=%ORACLE_HOME%\bin;%PATH%
    net start OracleInfrastructureEMWebsite
    It works correctly: while a user is logged on to the server there is no problem, but when the user logs out, the node and domain shut down.
    I'm using iFS (Oracle Collaboration Suite 9.0.3) on Windows 2000 Pro.
    Can you help me solve my problem?
    (Sorry for my English.)

    See this thread:
    Re: Please read if considering iFS for production.
    It's a limitation of Java on Windows, but there are some workarounds.
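    One workaround often mentioned for Java services dying at logoff (an assumption on my part; the linked thread may suggest others) is the JVM's -Xrs flag, which tells the runtime to reduce its use of OS signals so it no longer shuts down on the console logoff event. Wherever the JVM options of the affected processes are configured, the key part looks like:

    REM hypothetical invocation: what matters is the -Xrs flag
    java -Xrs -jar oc4j.jar

    Running the services under a proper Windows service wrapper, rather than inside a logon session, avoids the problem entirely.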

  • The Script root.sh problem - ora.asm and ASM and Clusterware Stack failed

    Folks,
    Hello. I am installing Oracle 11gR2 RAC using two VMs (rac1 and rac2) running Oracle Linux 5.6 under VMware Player, following http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
    I am installing the Grid Infrastructure. Step 9 of 10 is to execute the script /u01/app/grid/root.sh on both VMs, rac1 and rac2.
    After running root.sh on rac1 successfully, I run root.sh on rac2 and get the error below:
    [root@rac2 grid]# ./root.sh
    Running Oracle 11g root.sh script...
    The following environment variables are set as:
    ORACLE_OWNER= ora11g
    ORACLE_HOME= /u01/app/grid
    Enter the full pathname of the local bin directory: [usr/local/bin]: /usr/local/bin
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root.sh script.
    Now product-specific root actions will be performed.
    2012-03-05 16:32:52: Parsing the host name
    2012-03-05 16:32:52: Checking for super user privileges
    2012-03-05 16:32:52: User has super user privileges
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    Adding daemon to inittab
    CRS-4123: Oracle High Availability Services has been started.
    ohasd is starting
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
    CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
    CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
    CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
    CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
    CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
    Start action for octssd aborted
    CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
    CRS-2672: Attempting to start 'ora.asm' on 'rac2'
    CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
    CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
    CRS-2664: Resource 'ora.ctssd' is already running on 'rac2'
    CRS-4000: Command Start failed, or completed with errors.
    Command return code of 1 (256) from command: /u01/app/grid/bin/crsctl start resource ora.asm -init
    Start of resource "ora.asm -init" failed
    Failed to start ASM
    Failed to start Oracle Clusterware stack
    [root@rac2 grid]#
    As we can see at the end of the output above:
    1) Start of resource "ora.asm -init" failed
    2) Failed to start ASM
    3) Failed to start Oracle Clusterware stack
    The runInstaller was run on the first VM, rac1. My question is:
    Does anyone understand how to solve this root.sh problem on rac2 (the three failures above: ora.asm, ASM, and the Clusterware stack)?
    Thanks.

    Please check that no firewall is active on the private network.
    See this thread: root.sh fails on second node
    MOS notes:
    11gR2 Grid: root.sh Fails to Start the Clusterware on the Second Node Due to Firewall on Private Network [ID 981357.1]
    Grid Infrastructure 11.2.0.2 Installation or Upgrade may fail due to Multicasting Requirement [ID 1212703.1] (most probably this issue)
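    As a quick sketch of how to rule the firewall out on Oracle Linux 5.x (assuming iptables is the firewall in use and that disabling it is acceptable in your environment):

    # run as root on both nodes
    service iptables status      # list the active rules, if any
    service iptables stop        # stop the firewall for the current boot
    chkconfig iptables off       # keep it disabled across reboots

    If root.sh then succeeds on rac2, re-enable the firewall with rules that leave the private interconnect open.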

  • How to force write-behind store on cache node shutdown?

    Hi,
    I built a small pilot project based on Coherence and am now testing it for failover. I found replication issues with a distributed cache in the following scenario:
    - start cache node 1 (JVM instance 1);
    - connect Extend client to it and get 1 object from cache (only 1 object in the cache - loaded by CacheStore from DB);
    - change the object and put it back (I use EntryProcessor for this);
    - start cache node 2 (JVM instance 2);
    - stop cache instance 1 (write-behind store wasn't invoked yet: write-delay = 2m);
    - load/change the same object on node 2; all changes done on node 1 are lost.
    My expectation was that the cache would replicate its data between nodes when a new member joins the cache cluster. The backup count is 1 by default, right?
    What should I do to prevent this behavior? Is it possible to force the write-behind store to flush on a cache node shutdown event?
    Thanks, Denis.
    My cache-config, just in case:
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>AccountCache</cache-name>
          <scheme-name>account-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <distributed-scheme>
          <scheme-name>account-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <serializer>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
            <init-params>
              <init-param>
                <param-type>String</param-type>
                <param-value>account-pof-config.xml</param-value>
              </init-param>
            </init-params>
          </serializer>
          <backing-map-scheme>
            <read-write-backing-map-scheme>
              <scheme-name>AccountDatabaseScheme</scheme-name>
              <internal-cache-scheme>
                <local-scheme>
                  <!--scheme-ref>default-eviction</scheme-ref-->
                  <eviction-policy>LRU</eviction-policy>
                  <high-units>0</high-units>
                  <expiry-delay>30m</expiry-delay>
                </local-scheme>
              </internal-cache-scheme>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>com.roox.bss.cache.store.AccountCacheStore</class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>dburl_</param-value>
                    </init-param>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>user</param-value>
                    </init-param>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>password</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
              <write-delay>2m</write-delay>
              <write-batch-factor>.5</write-batch-factor>
            </read-write-backing-map-scheme>
          </backing-map-scheme>
        </distributed-scheme>
        <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <thread-count>10</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address>localhost</address>
                <port>9098</port>
                <reuse-address>true</reuse-address>
                <reusable>true</reusable>
              </local-address>
            </tcp-acceptor>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>account-pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
          </acceptor-config>
          <autostart>true</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>

    Solved with autostart=true.

  • About ASM and SAN...

    Hello Guys,
    I have to implement a 3-node RAC 10gR2 on the CentOS 4 operating system. I have studied many documents about RAC installation and configuration. I have learned how to set up the network requirements with private, public, and virtual IPs and all the other pieces, and how to install Clusterware and the database with cluster functionality enabled.
    BUT the storage options are still not clear to me. We have purchased a SAN and we are planning to implement ASM for the storage. Now I want to know:
    How many disks and disk partitions will a 3-node structure require on the SAN?
    How will ASM access the SAN; or, put differently, how will the OS access this shared storage?
    The voting disk and OCR cannot be stored on shared storage and need to be stored on raw devices... what can these raw devices be? How can they be accessed by all nodes?
    These three questions are bothering me a lot. If they become clear to me, the whole storage concept will be clear and I can implement RAC.
    Please help me by answering the above 3 questions. I will be very grateful to you.
    Regards,
    Imran

    How many disks and disk partitions will a 3-node structure require on the SAN?
    There's no real answer to that! With Oracle generally, RAC or no RAC, the answer to how many disks you should have is "as many as possible". Partitioning is really up to you, too, depending on what you find easiest to manage. If you have a single SAN array, for example, comprised of 15 disks that you choose to partition into three or four logical volumes so that you can call one 'data', one 'redo', one 'OS', and one 'other' - that's entirely up to you, since Oracle couldn't care less how you partition, what you call them or how many of them there are. Moreover, everything on every partition is being striped across those 15 disks anyway, so who cares?
    I think, however, you might be thinking of the RAC-specific issues of the voting disk and the Oracle Cluster Registry. If you were using a cluster file system, they could be just two files on the file system, about 120M in size between them. Since you are going to use ASM and these two elements can't be stored inside an ASM array, you'll have to create two raw partitions for this purpose. The rest you then chop up for ASM's use.
    It is NOT true, incidentally, that "the voting disk and OCR cannot be stored on shared storage". By definition, the voting disk and OCR must be on shared storage! Indeed, raw partitions, ASM arrays and cluster file systems are ALL shared storage technologies. It just so happens that those two files can't use ASM... but raw or cfs are fine.
    A raw partition is not, of course, intrinsically 'shared storage'... but if it's a raw partition on your SAN, to which all three of your nodes are physically attached, then it is shareable. It's shareable simply because three nodes can see it. And because there's no file system there with exclusive and blocking file locks, what one node does to a raw partition doesn't stop another node accessing it simultaneously (which is the definition of shared storage, of course).
    How will ASM access SAN? By you partitioning the SAN into a number of logical volumes, each one of which will be kept raw, and you then declaring each such volume as a candidate disk. You'll wrap all candidate disks up into an ASM disk group... and then Oracle will write to that disk group and hence through to the underlying logical volumes. Which comes back to the original question: how many logical volumes should you create out of, say, a 15 disk LUN on a SAN?
    Depends, as I said, on a lot of things, but for example RAID5 runs best when there are either 5 or 9 disks in the array (or did when last I looked at an EMC Clariion SAN!). So if your underlying RAID technology was going to be RAID5, you might well create 3 5-disk logical volumes on the one LUN. To let ASM use all 15 disks, you'd then create a 3-disk diskgroup (where 1 ASM disk = 1 SAN logical volume). On the other hand, you might want to keep some disks back for future storage, in which case a 1-disk ASM diskgroup representing a single 9-disk logical volume might be the go, with the remaining 6 disks on the LUN available for future expansion.
    It's a complicated topic, unfortunately. You're dealing with physical storage which is already abstracted into logical volumes and then abstracted even further by wrapping those logical volumes up into ASM disk groups. You balance performance, expandability, management convenience, your SAN vendor's optimisation tricks and so on... and hopefully come out with something that works for you!
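    To make the disk group step concrete, here is a minimal sketch (disk paths and the disk group name are hypothetical, and external redundancy is assumed because the SAN is doing the RAID) of declaring raw volumes as an ASM disk group from the ASM instance:

    # as the oracle owner, against the ASM instance
    export ORACLE_SID=+ASM1
    sqlplus / as sysdba <<'EOF'
    -- each DISK below is one raw logical volume carved out of the SAN LUN
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/raw/raw3', '/dev/raw/raw4', '/dev/raw/raw5';
    EOF

    The database then references the disk group ('+DATA') rather than the underlying volumes.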

  • Migrating Non ASM, Non RMAN to New Server with ASM and RMAN - Possible?

    We currently have a database ( Oracle 10g R1 ) on a Sun Solaris server that is NOT using ASM or RMAN. The database is about 300GB. We are getting a new server and we want to install Oracle 10g R2 with ASM and RMAN and migrate the database.
    I have seen the documentation on migrating a non-ASM database to ASM, but the methods all use RMAN. Is it possible to migrate to an ASM database without using RMAN? Would Data Pump export/import work if I created a new database on the new server with all the same tablespaces? Or do I have to bite the bullet, install RMAN on the old server, and do the backup?
    Thanks.

    Not having used RMAN so far doesn't mean you can't use it to perform a single backup; RMAN is contained in every Oracle RDBMS installation, version 10g or higher.
    This is only a sample of how to do it:
    RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '<file_system_path>/%U.DBF';
    # first we configure the default disk channel
    RMAN> RUN {
      ALLOCATE CHANNEL DEFAULTCHANNEL TYPE DISK;
      SHUTDOWN IMMEDIATE;
      STARTUP MOUNT;
      BACKUP DATABASE;
      SHUTDOWN;
    }
    Then once you have it, you can do what you want.
    It should also be possible to manually restore the database from the original datafiles but it's better to follow the solution involving RMAN.
    Bye Alessandro
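    On the restore side, a common 10gR2 approach (a sketch only; it moves just the datafiles, so the control files, redo logs, and spfile need their own steps, and '+DATA' is a hypothetical disk group name) is to copy the database into ASM and switch to the copies:

    rman target / <<'EOF'
    STARTUP MOUNT;
    # write image copies of every datafile into the ASM disk group
    BACKUP AS COPY DATABASE FORMAT '+DATA';
    # repoint the control file at the copies inside ASM
    SWITCH DATABASE TO COPY;
    ALTER DATABASE OPEN;
    EOF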

  • How can I remove an ASM and OCR installation on AIX?

    Hi,
    I tried to install a single instance using ASM on AIX,
    but the installation did not complete successfully.
    Now I want to remove the ASM and OCR installation, and then
    I plan to make a clean new installation.
    How can I remove ASM and OCR?
    And how can I check that my removal is fully complete?

    1) ASM Instance Clean-Up Procedures
    Stop all of the databases that use the ASM instance that is running from the Oracle home that is on the node that you are deleting.
    On the node that you are deleting, if this is the Oracle home from which the ASM instance runs, then remove the ASM configuration by completing the following steps. Run the command srvctl stop asm -n node_name for all of the nodes on which this Oracle home exists. Run the command srvctl remove asm -n node for all nodes on which this Oracle home exists. If there are databases on this node that use ASM, then use DBCA Disk Group Management to create an ASM instance on one of the existing Oracle homes on the node, and restart the databases if you stopped them.
    If you are using a cluster file system for your ASM Oracle home, then ensure that your local node has the $ORACLE_BASE and $ORACLE_HOME environment variables set correctly. Run the following commands from a node other than the node that you are deleting, where node_number is the node number of the node that you are deleting:
    rm -r $ORACLE_BASE/admin/+ASMnode_number
    rm -f $ORACLE_HOME/dbs/*ASMnode_number
    If you are not using a cluster file system for your ASM Oracle home, then run the rm or delete commands mentioned in the previous step on each node on which the Oracle home exists.
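    Pulled together, the clean-up commands from step 1 look like this (a sketch; node names and the +ASM instance number are placeholders, and the rm targets are the same ones listed above):

    # repeat for every node on which this Oracle home exists
    srvctl stop asm -n node1
    srvctl remove asm -n node1
    # then remove the ASM instance files (instance number from: olsnodes -n)
    rm -r $ORACLE_BASE/admin/+ASM1
    rm -f $ORACLE_HOME/dbs/*ASM1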
    2) Deleting an Oracle Clusterware Home Using OUI in Silent Mode
    !!! Oracle recommends that you back up your voting disk and OCR files after you complete the node deletion process.
    If you ran the Oracle Interface Configuration Tool (OIFCFG) with the -global flag during the installation, then skip this step. Otherwise, from a node that is going to remain in your cluster, from the CRS_home/bin directory, run the following command where node2 is the name of the node that you are deleting:
    ./oifcfg delif -node node2
    Obtain the remote port number, which you will use in the next step, using the following command from the CRS_home/opmn/conf directory:
    cat ons.config
    From CRS_home/bin on a node that is going to remain in the cluster, run the Oracle Notification Service Utility (RACGONS) as in the following example where remote_port is the ONS remote port number that you obtained in the previous step and node2 is the name of the node that you are deleting:
    ./racgons remove_config node2:remote_port
    On the node to be deleted, run rootdelete.sh as the root user from the CRS_home/install directory. If you are deleting more than one node, then perform this step on all of the other nodes that you are deleting.
    From any node that you are not deleting, run the following command from the CRS_home/install directory as the root user where node2,node2-number represents the node and the node number that you want to delete:
    ./rootdeletenode.sh node2,node2-number
    If necessary, identify the node number using the following command on the node that you are deleting:
    CRS_home/bin/olsnodes -n
    Perform this step only if you are using a non-shared Oracle home. On the node or nodes to be deleted, run the following command from the CRS_home/oui/bin directory, where node_to_be_deleted is the name of the node that you are deleting:
    ./runInstaller -updateNodeList ORACLE_HOME=CRS_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -local
    Deinstall the Oracle Clusterware home from the node that you are deleting using OUI, by running the following command from the Oracle_home/oui/bin directory, where CRS_home is the name defined for the Oracle Clusterware home:
    ./runInstaller -deinstall -silent "REMOVE_HOMES={CRS_home}"
    Perform step 9 from the previous section about using OUI interactively under the heading "Deleting an Oracle Clusterware Home Using OUI in Interactive Mode".

  • RAC with ASM and without ASM

    Hi all,
    we are planning to install an active/active RAC 11g cluster, and we are using SAN storage with RAID 10.
    I know ASM is a nice feature, but it seems to need more maintenance down the road; that is what I see
    from the manuals and training, for patching and so on, because it is maintained as an instance.
    Why do I need ASM, since I have a SAN and can control mirroring, etc.?
    I need a solid answer here: why should I use this feature when it can already be covered by another facility like the SAN?
    Best Regards,

    What I have found in the RAC world is that there is maintenance no matter which way you go. A cluster file system will require upgrades, patches, etc. Raw volumes will require extra effort in allocation and the like, as well as increasing the number of files in the database. ASM requires an additional instance on each node to maintain, which is quite simple, and rolling patches in ASM are slowly becoming a reality. I have found that the management of raw volumes is more trouble than the maintenance of the ASM instances, and the added benefits of ASM outweigh the maintenance for sure. I found that cluster file system maintenance is pretty well a wash.
    As for ASM being widely used: the most recent RAC clusters I have built (the last 3) have all been ASM, 1 on HP-UX and 2 on Linux (Red Hat and Oracle Enterprise Linux), and the future clusters coming up that I know of are all going to be ASM as well. While it may be true that a lot of existing RAC environments have not yet gone to ASM, almost all new RAC environments are; it is certainly taking hold. If you look at the effort on a large database to move to ASM from raw volumes or a cluster file system, it can appear to be a lot of work, and that is true; but in the long run my experience with ASM has been positive, so I would not hesitate to recommend that new RAC clusters be built with ASM and that existing clusters have a migration plan in place. With some cluster file systems like Veritas, GPFS, etc., there is additional cost involved, where ASM has none, so moving existing clusters can save $$. Raw volume management may not fall on the DBA, but someone has to manage all those volumes at the SAN level, and that is additional management; it just may not sit with the DBA.
    Just my additional 2 cents worth.
    Hope this helps.

  • ASM and Cold backup

    Is there any way to cold-backup a database that is using ASM?
    My development server has a database 500GB in size. To move it to the production server I used to do cold backups, but I am unable to do that with ASM.
    I was planning to use the export utility, but I read a few articles saying the export files should not be more than a few GB.
    Has anyone got any ideas on how to move a large database that is using ASM from dev to prod?
    Thanks

    RMAN is the only option for backing up databases stored on ASM, and you can use RMAN to take the cold backup of your database. First shut down the database cleanly, then mount it and issue "backup database" at the RMAN prompt to take the cold backup. Read the Oracle documentation for more information on RMAN setup.
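    A minimal sketch of that sequence (assuming a disk channel is already configured and you connect as sysdba on the server):

    rman target / <<'EOF'
    # clean shutdown, then mount, so the backup is consistent (cold)
    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    BACKUP DATABASE;
    ALTER DATABASE OPEN;
    EOF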
    Daljit Singh

  • ASM and DB on different machine

    Hi,
    I have a question. We are given two VMs, on which ASM and the Oracle DB are installed. We are asked to start both machines in sequence: after starting Machine-1 we need to start Machine-2, and only then can the DB (orcl) on Machine-2 be opened.
    If we miss the sequence, the database does not come up and gives errors like:
    SQL> startup;
    ORA-01078: failure in processing system parameters
    ORA-01565: error in identifying file '+DATA/orcl/spfileorcl.ora'
    ORA-17503: ksfdopn:2 Failed to open file +DATA/orcl/spfileorcl.ora
    ORA-29701: unable to connect to Cluster Synchronization Service
    The initorcl.ora on Machine-2 has the following entry:
    SPFILE='+DATA/orcl/spfileorcl.ora'
    Can someone please suggest how these two machines would be configured, and what entries on Machine-2 point to the ASM instance on Machine-1?
    Please advise.
    Thanks!

    bLaK wrote:
    Thanks for the reply, HTH!
    I am new to this and learning ASM. What I wanted to know is how, and with what settings, Machine-2 is made to refer to the ASM instance on Machine-1.
    Where can I see that?
    Regards!
    What are your database and ASM versions?
    Are you using Grid?
    My first post's explanation applies to any ASM and RDBMS instance on a server.
    Describe in more detail how these two servers are configured.
    You can check with:
    $ crsctl check cssd
    Or, to put it another way: how can we tell the CSS service where the ASM instance is located? That is handled by the CSSD process.
    read this manual http://www.oracle.com/technetwork/database/asm-10gr2-bestpractices.pdf
    Since CSS provides cluster management and node monitor management, it inherently monitors ASM and
    its shared storage components (disks and diskgroups). Upon startup, ASM will register itself and all
    diskgroups it has mounted, with CSS. This allows CSS across all RAC nodes to keep diskgroup metadata
    in-sync. Any new diskgroups that are created are also dynamically registered and broadcasted to other
    nodes in the cluster.
    As with the database, internode communication is used to synchronize activities in ASM instances. CSS is
    used to heartbeat the health of the ASM instances. ASM internode messages are initiated by structural
    changes that require synchronization; e.g. adding a disk. Thus, ASM uses the same integrated lock
    management infrastructure that is used by the database for efficient synchronization.
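    To see this relationship from the ASM side, you can ask the ASM instance which database instances are connected to it (a sketch; run it against the ASM instance, not the database, and adjust the SID):

    export ORACLE_SID=+ASM
    sqlplus / as sysdba <<'EOF'
    -- one row per connected database instance and mounted disk group
    SELECT instance_name, db_name, status FROM v$asm_client;
    EOF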

  • ASM and Databases Instances (best practices)

    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too much, and that splitting them up, at least by creating a new ASM instance in another RAC environment, would be better.
    Any comment, advises ?
    Bests Regards
    Den

    user12067184 wrote:
    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too much, and that splitting them up, at least by creating a new ASM instance in another RAC environment, would be better.
    Hi Den ,
    It is not advisable to have 25 databases in a single RAC. Instead of separate databases, you can also think of creating different schemas in the same database.
    For ASM best practice please follow :
    ASM Technical Best Practices [ID 265633.1]
    Regards
    Rajesh

  • Placing RAC DB redo log members on SAN disks (ASM) and local SCSI disks

    Hi
    Kindly advise whether I should expect a performance problem if I place the redo log members with one on ASM and the second on a local server SCSI disk.
    Thanks

    As long as the local disk is not in contention due to other files being present, and is not simply slow, you should be fine. But make sure the local disk is set up in such a way that it is visible to the other node as well, because that would be required in the case of recovery.
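    For reference, multiplexing a second member per group onto a local path would look like this (paths and group numbers are hypothetical; note the visibility caveat above):

    sqlplus / as sysdba <<'EOF'
    -- add a second member on the local disk to each existing group
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo01b.log' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo02b.log' TO GROUP 2;
    EOF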
    HTH
    Aman....

  • Resource group and node

    Hi all.
    How do I tie a resource group to a node?
    That is, when I shut down node0, all resources switch to node1.
    What I want:
    when node0 boots, the resource group should switch back.
    How should I do this?
    Thanks

    On node1:
    nfl-node1# clrg show | grep Nodelist
    Nodelist: nfl-node2 nfl-node1
    Nodelist: nfl-node2 nfl-node1
    Nodelist: nfl-node2 nfl-node1
    nfl-node1#
    On node2:
    nfl-node2# clrg show | grep Nodelist
    Nodelist: nfl-node2 nfl-node1
    Nodelist: nfl-node2 nfl-node1
    Nodelist: nfl-node2 nfl-node1
    nfl-node2#
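    Note that in the output above nfl-node2 is listed first, so it is the preferred node for these groups. If the goal is automatic failback to the preferred node, the resource group's Nodelist order and Failback property control that (a sketch; the group name is a placeholder, and nfl-node1 is assumed to be the node you want preferred):

    # list nfl-node1 first so it is preferred, and fail back when it rejoins
    clrg set -p Nodelist=nfl-node1,nfl-node2 -p Failback=true my-rg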

  • Node shutdown or crash

    When a primary node shutdown or crash happens, sometimes the node does not shut down properly because some cluster processes like "/etc/rc0.d/K05initrgm stop" keep running. Meanwhile,
    the application fails over to the secondary node, but it goes into the intermediate state "Pending Online".
    And both are hung. Below is the status:
    cldevicegroup status:
    === Cluster Device Groups ===
    --- Device Group Status ---
    Device Group Name    Primary    Secondary    Status
    devset               -          -            Offline
    What might cause the above problem, with some cluster processes never completing?

    You are not saying which specific Sun Cluster version you are using. Since you refer to /etc/rc0.d/K05initrgm I assume it is Sun Cluster 3.1. The question is, which update and patch level?
    You should make sure to have the most recent patches for this release applied. There are known problems around this script hanging, but they have been fixed for quite some time in the corresponding patches.
    If you have all recent patches applied, I suggest opening a service call to analyze the problem further.
    Greets
    Thorsten

  • ASM and Patching

    Say I'm running RAC and ASM across two nodes. I install ASM in its own home and run the listener out of that home, and the database and RAC are in their own home. What is my patching order, or will that be dictated by the specific patch? Which home do I patch first, or does it matter?
    I read that it is advised to run ASM in a separate home if multiple databases are going to be using ASM, and I'm wondering how patching is managed at that point, since there are now two Oracle homes to patch instead of one.
    Thanks.

    I hope there are three Oracle homes given that you also have CRS_HOME.
    You patch in the order directed by the written instructions that come with the patch.
    Given that you didn't include a specific patch number, much less a version number, no other help is possible.
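    Before planning the order, it can help to inventory what is installed in each home (a sketch; the home paths are hypothetical, so substitute your own CRS, ASM, and database homes):

    # list installed patches per home with OPatch; repeat for each home
    export ORACLE_HOME=/u01/app/oracle/product/10.2.0/crs
    $ORACLE_HOME/OPatch/opatch lsinventory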
