Failed to install Sun Cluster 3.1 8/05 on Solaris 10 (x64)

I installed Sun Cluster 3.1 u4 on two nodes (node1 and node2) using "scinstall". (The two nodes have identical systems.) After the cluster was installed, I rebooted node2. Before rebooting, I had done the following:
# echo "etc/cluster/nodeid" >> /boot/solaris/filelist.ramdisk
# echo "etc/cluster/ccr/did_instances" >> /boot/solaris/filelist.ramdisk
# bootadm update-archive
updating /platform/i86pc/boot_archive...this may take a minute
# init 6
But node2 crashed and failed to reboot.
I could only see the characters "GRUB" on the screen. The system does not even present the GRUB menu.
The system I used:
bash-3.00# prtdiag
System Configuration: Sun Microsystems Sun Fire X4200 Server
BIOS Configuration: American Megatrends Inc. 080010 08/10/2005
bash-3.00# uname -a
SunOS arcsunx42km0838 5.10 Generic_118855-19 i86pc i386 i86pc
Message was edited by:
skyqa

The only thing I can find that is vaguely similar is where users have installed revision -19 of patch 118855 but not updated the other patches on the system. I would try booting into failsafe mode, if you can get to it, and then update all the patches.
If you can't, I guess you are going to have to either boot from DVD and repair the system, or just rebuild it.
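For reference, the failsafe recovery suggested above looks roughly like this; the root slice and patch directory are example values, so adjust them to your system:

```shell
# From the GRUB failsafe entry (or after booting from DVD), mount the
# damaged root filesystem and repair it offline.
mount /dev/dsk/c1t0d0s0 /a          # example root slice
bootadm update-archive -R /a        # rebuild the boot archive for /a
patchadd -R /a -M /var/tmp/patches 118855-19   # example patch spool; repeat for the rest
umount /a
init 6
```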
Tim
---

Similar Messages

  • Can I install Sun Cluster on LDOM guest domain. Is Oracle RAC a supported c

    Hello,
    Can I install Sun Cluster on LDOM guest domains? Is Oracle RAC on LDOM guest domains of 2 physical servers a supported configuration from Oracle?
    Many thanks in advance
    Ushas Symon

    Hello,
    The motive behind using LDom guest domains as RAC nodes is to have better control of resource allocation, since I will have more than one guest domain, each performing a different function. The customer wants Oracle RAC alone (without Sun Cluster).
    I will have two T5120s and one 2540 shared storage array.
    My plan of configuration is to have:
    a control & I/O domain with 8 VCPUs and 6 GB of memory
    one LDom guest domain on each physical machine with 8 VCPUs, 8 GB of memory, and shared network and disks, participating as RAC nodes (I don't know yet whether I will use Solaris Cluster or not)
    one guest domain on each physical machine with 12 VCPUs, 14 GB of memory, and shared network and disks, participating as BEA WebLogic cluster nodes (not on Solaris Cluster)
    one guest domain on each physical machine with 4 VCPUs, 4 GB of memory, and shared network and disks, participating as an Apache web cluster (on Solaris Cluster)
    Now, my question is: is it a supported configuration to have guest domains as Oracle RAC participants for 11gR2 (either with or without Solaris Cluster)?
    If I need to configure the RAC nodes on Solaris Cluster, is it possible to have two independent clusters on LDoms: one two-node cluster for RAC and another two-node cluster for Apache web?
    Kindly advise
    Many thanks in advance
    Ushas Symon

  • Is Veritas- or Sun Cluster needed for RAC in a Solaris Environment

    Is a Veritas or Sun Cluster needed for RAC in a Solaris environment?
    Does anyone know when OCFS will be available for Solaris?

    You don't need Veritas Cluster File System, but until OCFS comes out for Solaris you need to think about backups. If you haven't got a backup solution that can integrate with RMAN through an SBT device, then backups become trickier.
    If you use ASM, then you can take a backup to a "cluster filesystem" that both nodes can see (although ASM is raw partitions, think of it as a cluster filesystem). BUT you then need to get these backups to tape somehow; unless you've got NetBackup et al. that support RMAN and can back up direct to tape, you're rather stuck.
    Too many people don't think about this. You could create an NFS mount and back up to it from the nodes.
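    The NFS fallback mentioned above can be sketched like this; the host name, export path and format string are examples, and the mount must be reachable from every node:

    ```shell
    # Mount a shared NFS backup area (Solaris mount syntax), then let
    # RMAN write backup pieces there.
    mount -F nfs backuphost:/export/rmanbk /rmanbk
    echo "backup database format '/rmanbk/%U';" | rman target /
    ```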

  • Installing sun cluster on system based on SUNWcrnet

    Hi,
    I'm planning to test-drive the Sun Cluster software. All our systems run the SUNWcrnet install cluster plus additionally required packages. I have read that Sun Cluster 3.1 needs at least SUNWCuser. Did anybody succeed in running the Sun Cluster software on a system based on SUNWcrnet? If yes, which additional packages are needed?
    Thanks,
    mark

    I wouldn't bother. You would need to install far too many packages. SUNWcrnet by default does not even give you the showrev command. I would rather install everything and then disable what I don't need.

  • Sun Cluster 3.0 update 1 on Solaris 8 - panics!

    I am building a test system in our lab on an admittedly unsupported hardware configuration, but the failure wasn't expected to be so dramatic. The setup is as follows:
    2x E250 (single processor, 512 MB RAM)
    dual-connected to a fully populated D1000 with 18 GB HDDs
    Solaris 8 6/00 with all the latest recommended patches
    Sun Cluster 3.0 update 1 installed with the latest patches.
    On the first reboot (of either node), the kernel panics with the following:
    panic[cpu0]/thread=3000132c320: segkp_fault: accessing redzone
    This happens straight after the system sets up the network, happens that way every time, and is easily reproducible. My question is: has anyone successfully used SC 3.0 update 1 on Solaris 8 6/00? Any information would be most appreciated.
    -chris.

    We have the same problem with two Sun E420s and a D1000 storage array.
    The problem is related to settings in /etc/system added by the cluster installation:
    set rpcmod:svc_default_stksize=0x4000
    set ge:ge_intr_mode=0x833
    The second line tries to tune a Gigabit Ethernet interface that does not exist.
    We commented out the two lines and everything works fine.
    I'd be interested to know what you think of Sun Cluster 3.0 and about your experiences.
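    A minimal sketch of the fix described above, demonstrated on a scratch copy ('*' is the comment character in /etc/system; on the real node, make the same edit to /etc/system itself, from single-user or failsafe mode if the machine panics at boot):

    ```shell
    # Recreate the two lines scinstall added, then comment them out.
    printf 'set rpcmod:svc_default_stksize=0x4000\nset ge:ge_intr_mode=0x833\n' > /tmp/system.work
    # '*' marks a comment in /etc/system; prefix both lines with '* '.
    sed -e 's/^set rpcmod:svc_default_stksize/* &/' \
        -e 's/^set ge:ge_intr_mode/* &/' /tmp/system.work > /tmp/system.fixed
    cat /tmp/system.fixed
    ```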
    email : [email protected]
    Stefano

  • Jumpstart install of Sun Cluster 3.1

    Hi,
    I'm trying to use JumpStart to install a two-node cluster. Solaris 9 installs, but I get problems when the cluster software begins to install. I get these error messages:
    Performing setup for Sun Cluster autoinstall ... nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: retrying: /a/autoscinstalldir
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    nfs mount: scadm: : RPC: Rpcbind failure - RPC: Success
    It does not install any further. Could somebody give me some suggestions?
    thank you, pcurran

    Hi All,
    I have decided to first install Sun Cluster 3.1 (trial version) manually:
    1) Installed sun_web_console. The installation completed successfully but printed this statement at the end:
    "Server not started, No management application registered"
    2) Went ahead to run "scinstall" but got this error:
    ** Installing SunCluster 3.1 framework **
    SUNWscr.....failed
    scinstall: Installation of "SUNWscr" failed
    Below is the log file:
    ** Installing SunCluster 3.1 framework **
    SUNWscr
    pkgadd -S -d /cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages -n -a /var/cluster/run/scinstall/scinstall.admin.11374 SUNWscr
    failed
    pkgadd: ERROR: cppath(): unable to stat </cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages/SUNWscr/reloc/etc/cluster/ccr/rgm_rt_SUNW.LogicalHostname:2>
    pkgadd: ERROR: cppath(): unable to stat </cdrom/hotburn_oct24_05/suncluster3.1/Solaris_sparc/Product/sun_cluster/Solaris_9/Packages/SUNWscr/reloc/etc/cluster/ccr/rgm_rt_SUNW.SharedAddress:2>
    ERROR: attribute verification of </etc/cluster/ccr/rgm_rt_SUNW.LogicalHostname:2> failed
    ERROR: attribute verification of </etc/cluster/ccr/rgm_rt_SUNW.SharedAddress:2> failed
    pathname does not exist
    Reboot client to install driver.
    Installation of <SUNWscr> partially failed.
    scinstall: Installation of "SUNWscr" failed
    scinstall: scinstall did NOT complete successfully!
    Could someone please give some direction?
    thank you, pcurran

  • How to install a SUN cluster in a non-global zone?

    How do I install Sun Cluster in a non-global zone? If it is the same as installing Sun Cluster in the global zone, then how do I make the CD-ROM in the global zone accessible from the non-global zone?
    Please point me to some docs or urls if there are any. Thanks in advance!

    You don't really install the cluster software in zones. You set up a zone cluster once you have configured the global-zone cluster (in other words, once the physical systems + OS are clustered).
    http://blogs.sun.com/SC/entry/zone_clusters
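    For what it's worth, a zone cluster is configured entirely from the global zone with the clzonecluster command (Solaris Cluster 3.2 1/09 and later). A hypothetical command file, where the zone name, zonepath, node names and address are all placeholder values:

    ```shell
    # zc1.txt - feed to: clzonecluster configure -f zc1.txt
    create
    set zonepath=/zones/zc1
    add node
    set physical-host=phys-node1
    set hostname=zc1-node1
    add net
    set address=192.168.10.51
    set physical=e1000g0
    end
    end
    commit
    ```

    After clzonecluster install zc1 and clzonecluster boot zc1, the non-global zones are created and managed for you from the global zone.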

  • Configure iws on Sun cluster???

    I have installed Sun Cluster 3.1. On top of it I need to install iWS (Sun ONE Web Server). Does anyone have a document pertaining to this?
    I tried docs.sun.com, but the documents there sound like Greek or Latin to me.
    Cheers

    Just to get you started:
    3) Create the failover RG to hold the shared address.
    #scrgadm -a -g sa-rg (a unique, arbitrary RG name) -h prod-node1,prod-node2 (a comma-separated list of nodes that can host this RG, in the order you want it to fail over)
    again - #scrgadm -a -g sa-rg -h prod-node1,prod-node2
    4) Add the network resource to the failover RG.
    # scrgadm -a -S (telling the cluster this is a shared address for a scalable service; if it were a failover logical host you would use -L) -g sa-rg (the group we created in step #3) -l web-server (-l gives the hostname of the logical host. This name (web-server) needs to be specified in the /etc/hosts file on each node of the cluster. Even if a node is not going to host the RG, it has to know the LH (logical host) hostname!)
    again - #scrgadm -a -S -g sa-rg -l web-server
    5) Create the scalable resource group that will run on all nodes.
    #scrgadm -a -g web-rg -y Maximum_primaries=2 -y Desired_primaries=2 -y RG_dependencies=sa-rg
    -y sets a property. Most resources use standard properties, others "can" use extension properties, and still others "must" have extension properties defined. Maximum_primaries says how many nodes you want an instance to run on at the most. Desired_primaries is how many instances you want running at the same time. For an eight-node cluster running other data services you might say Maximum_primaries=8 Desired_primaries=6, which means an instance could run on any node in the cluster, but to make sure nodes are available for your other resources you only run 6 instances at any given time, leaving the other two nodes to run your other data services.
    You could say Max=8 Desired=8; it's a matter of choice.
    6) Create a storage resource to be used by the app. This tells the app where to go to find the software it needs to run.
    -a=add, -g=in the group, -j=resource name (needs to be unique and is arbitrary), -t=resource type (installed in pkg format earlier, and registered), -x=resource-type extension property (a -y property can be an RG property or an RT property; -x is only for an RT extension property). /global/web is defined in the /etc/vfstab file with the mount-options field specifying global,logging (at least global, maybe logging). (Note you do not specify the DG, just mounts from storage supplied by the DG, because multiple RGs may use storage from the same DG.)
    #scrgadm -a -g web-rg -j web-stor -t SUNW.HAStoragePlus (HAStoragePlus provides support for global devices and file systems) -x Affinityon=false -x FileSystemMountPoints=/global/web
    7) Create the app resource in the scalable RG.
    -a=add, -j=new resource, -g=in the group web-rg (created in step #5), using the type -t SUNW.apache (remember, the pkg installed earlier was SUNWscapc; SUNW.apache is the registered resource type we can reuse for possibly multiple resource groups. Each -j resource name must be unique and only used once, but each -t resource type, although having a unique name among RTs, can be used over and over again in different resources of different RGs.) Bin_dir is self-explanatory: where to go to get the binaries. Network_Resources_Used=web-server (created in step #4; again, this is the logical host's name in the /etc/hosts file, the name the clients are going to use to get to the resource.) Resource_dependencies=web-stor (created in step #6) says that apache-res depends on web-stor, so if web-stor is not online, don't bother trying to start Apache, because the binaries won't be there. They are supplied by the storage being online and /global/web being mounted.
    #scrgadm -a -j apache-res -g web-rg -t SUNW.apache -x Bin_dir=/usr/apache/bin -y Scalable=True -y Network_Resources_Used=web-server -y Resource_dependencies=web-stor
    8) Switch the failover RG online to activate it.
    #scswitch -z -g sa-rg
    9) Switch the scalable RG online to activate it.
    #scswitch -z -g web-rg
    10) Make sure everything got started.
    #scstat -g
    11) Connect to the newly started, cluster-managed service (e.g. point a browser at the logical hostname web-server).

  • Configuration of LUN's to Sun Cluster

    Hi,
    I have a 2-node Sun Cluster (3.2) running on 2x E2900, Solaris 10...
    Basically, there are 3 installed databases running in the development environment. I need to cluster all 3 in the global zone, do some failovers, and then engage Sun PS to come on site and configure the production cluster environment...
    Usually I have already configured metasets or ZFS and then the DBA installs the DB while everything is nice and neat; my question, however, is what is the best way to cluster the LUNs when they already have data which I cannot (or would prefer not to) lose.
    I believe that creating LUNs in a metaset will destroy the data, and obviously ZFS pools will also destroy any data... hopefully this is a simple question from an SC novice :)
    Thanks...

    Thanks Tim, that answers the question... one more though :)
    I was advised to install a single-node cluster and then add the 2nd node to the config later. I've done this, but when I try to do the add it seems I have a problem with the cluster interconnects and receive the messages:
    Adding cable to the cluster configuration ... failed
    scrconf: Failed to add cluster transport cable - does not exist
    scinstall: Failed to update cluster configuration ("-m endpoint=<server>:ce3,endpoint=switch1")
    The heartbeats are ce3 and ce7, which I know are working OK. I've tried everything from the 1st node, but when I enter:
    # scstat -W
    nothing is shown, although when I do a scconf -p I can see the node transport adapters OK... So how do I give the 2nd node access to the cluster interconnects? I've tried clsetup, adding the interconnects via option 4, and I remember configuring them during installation...
    Again, any input would be gratefully received...
    Thanks...
    Steve..
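    For what it's worth, the failing cable can also be registered by hand from the established node once the new node's adapter is known to the cluster; a sketch using the names from the error message (SC 3.1/3.2 scconf syntax; node and switch names are examples):

    ```shell
    # Register the new node's transport adapter, then the cable to the switch.
    scconf -a -A trtype=dlpi,name=ce3,node=node2
    scconf -a -m endpoint=node2:ce3,endpoint=switch1
    scstat -W     # the transport paths should now appear
    ```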

  • Sun Cluster, vx mode - "mode: enabled: cluster inactive"

    Hi,
    I have installed Sun Cluster 3.2 on Solaris 9 (Solaris 9 9/05). I want to make it an active-active setup with shared Veritas DGs. The setup also has VxVM 5 (Veritas-5.0_MP1_RP4.4) with rolling patch 4, and Solaris has all the latest patches applied via "updatemanager". The shared storage comes from a DMX800.
    In order to get VxVM into cluster mode I have installed licenses for CVM and VCS, and also the ORCLudlm (3.3.4.8) package.
    The Sun Cluster install has all the necessary framework packages. But VxVM refuses to be in cluster mode:
    #vxdctl -c mode
    mode: enabled: cluster inactive
    The issue is that the udlm daemon "dlmmon" isn't starting.
    I also see the errors below:
    cacao: Error: Fail to start cacao agent. (instance default)
    Error: Fail to start cacao agent. (instance default)
    AND the messages file on nodeA shows the error below:
    [ID 988885 daemon.error] libpnm error: can't connect to PNMd on nodeB
    I am at my wits' end on how to resolve this issue :(
    Any help is appreciated.
    Regards,
    Ashish

    Well, it could be the problem I ran into... I went round and round for ages trying to figure out what was wrong before I realised my mistake.
    Assuming you have VxVM/CVM licensed properly, check that ORCLudlm is installed on all nodes. Then create your rac-framework-rg and ensure you have a rac-framework-rs, a rac-udlm-rs AND a rac-cvm-rs resource. Unless you have all of these and they can be enabled and brought online, you'll have exactly the problem you are seeing.
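    The resource setup described above might look like this in SC 3.2 scrgadm syntax; the group and resource names are just the conventional examples, and the resource types must be registered first:

    ```shell
    # Register the RAC framework resource types, then build the framework RG.
    scrgadm -a -t SUNW.rac_framework
    scrgadm -a -t SUNW.rac_udlm
    scrgadm -a -t SUNW.rac_cvm
    scrgadm -a -g rac-framework-rg -y RG_mode=Scalable -y Maximum_primaries=2 -y Desired_primaries=2
    scrgadm -a -j rac-framework-rs -g rac-framework-rg -t SUNW.rac_framework
    scrgadm -a -j rac-udlm-rs -g rac-framework-rg -t SUNW.rac_udlm -y Resource_dependencies=rac-framework-rs
    scrgadm -a -j rac-cvm-rs -g rac-framework-rg -t SUNW.rac_cvm -y Resource_dependencies=rac-framework-rs
    scswitch -Z -g rac-framework-rg    # bring the group online on all nodes
    ```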
    Hope that helps,
    Tim
    Edited by: Tim.Read on Feb 19, 2008 4:08 AM
    Ooops missed the rac-udlm-rs ... Doh!

  • Cannot import a disk group after sun cluster 3.1 installation

    I installed Sun Cluster 3.1u3 on nodes with Veritas VxVM running and disk groups in use. After cluster configuration and reboot, we can no longer import our disk groups. VxVM displays the message: Disk group dg1: import failed: No valid disk found containing disk group.
    Did anyone run into the same problem?
    A dump of the private region for every single disk in the VM returns the following error:
    # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/did/rdsk/d22s2
    VxVM vxprivutil ERROR V-5-1-1735 scan operation failed:
    Format error in disk private region
    Any help or suggestion would be greatly appreciated
    Thx
    Max

    If I understand correctly, you had VxVM configured before you installed Sun Cluster - correct? And now that Sun Cluster is installed, you can no longer import your disk groups.
    The first thing you need to know is that you must register the disk groups with Sun Cluster - this happens automatically with Solaris Volume Manager but is a manual process with VxVM. Note that you will also have to update the configuration after any changes to a disk group, e.g. permission changes, volume creation, etc.
    You can use the scsetup menu to achieve this, though it can also be done from the command line with an scconf command.
    Having said that, I'm still confused by the error. See if the above solves the problem first.
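    The command-line form of that registration can be sketched as follows; dg1 and the node names are examples, and the disk group must not be imported outside the cluster when you register it:

    ```shell
    # Register an existing VxVM disk group as a Sun Cluster device group,
    # then resynchronise after any later volume or permission changes.
    scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2
    scconf -c -D name=dg1,sync
    ```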
    Regards,
    Tim
    ---

  • Cluster Transport Adapter Error - Sun Cluster

    I am installing Sun Cluster 3.0 and it gives me an error saying:
    failed to add cluster transport adapter - unknown adapter of transport type, trtype=dlpi...
    My network card is a SysKonnect - the interface is skge0.
    What is wrong? ...Thanks

    Hi,
    I have a similar problem.
    I get the same error with Sun Cluster 3.0; the card is a Phobos quad-port.
    Did you find a solution, or did you have to shell out a few hundred bucks for Sun cards?

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure the system to share a file system created on a SAN disk (SAN LUN) between two Sun Solaris 10 servers.
    The files in the shared file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly, I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
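    For the global file system route, the mount is an ordinary UFS entry in /etc/vfstab on every node with the global mount option set; a sketch with example device and mount-point names:

    ```shell
    # Example /etc/vfstab line (identical on both nodes); d4 is a made-up
    # global device, /global/share a made-up mount point.
    /dev/global/dsk/d4s0  /dev/global/rdsk/d4s0  /global/share  ufs  2  yes  global,logging
    ```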
    Regards,
    Tim
    ---

  • Sun Cluster.. Why?

    What are the advantages of installing RAC 10.2.0.3 on Sun Cluster? Are there any benefits?

    From Oracle 10g onward, there is no burning requirement for Sun Cluster (or any third-party cluster) as long as you are using Oracle technologies throughout your Oracle RAC stack. You can use Oracle RAC with ASM for shared storage, and that does not require any third-party cluster. Bear in mind that you may need to install Sun Cluster in the following scenarios:
    1) If there is an application running within the cluster alongside the Oracle RAC database that you want to configure for HA, and Sun Cluster provides the (easy-to-use) cluster resources to manage and monitor the application. This can be achieved with Oracle Clusterware, but you would have to write your own cluster resource for that.
    2) If you want to install a cluster file system such as QFS, then you will need to install Sun Cluster. If the cluster is only running the Oracle RAC database, then you can rely on Oracle technologies such as ASM or raw devices without installing Sun Cluster.
    3) Any certification conflicts.
    Any corrections are welcome..
    -Harish Kumar Kalra

  • Didadm: unable to determine hostname.  error on Sun cluster 4.0 - Solaris11

    Trying to install Sun Cluster 4.0 on Sun Solaris 11 (x86-64).
    iSCSI shared quorum disks are available in /dev/rdsk/. I ran:
    devfsadm
    cldevice populate
    But I don't see DID devices getting populated in /dev/did.
    Also, when scdidadm -L is issued, I get the following error. Has anyone seen the same error?
    - didadm: unable to determine hostname.
    I found that in Cluster 3.2 there was Bug 6380956: didadm should exit with an error message if it cannot determine the hostname.
    The Sun Cluster command didadm, didadm -l in particular, requires the hostname to function correctly. It uses the standard C library function gethostname() to achieve this.
    Early in the cluster boot, prior to the service svc:/system/identity:node coming online, gethostname() returns an empty string. This breaks didadm.
    Can anyone point me in the right direction to get past this issue with the shared quorum disk DIDs?

    Let's step back a bit. First, what hardware are you installing on? Is it a supported platform, or is it some guest VM? (That might contribute to the problems.)
    Next, after you installed Solaris 11, did the system boot cleanly with all the services coming up (svcs -x)? If it did boot cleanly, what did 'uname -n' return? Do commands like 'getent hosts <your_hostname>' work? If there are problems here, Solaris Cluster won't be able to get round them.
    If the Solaris install was clean, what were the results of the above hostname commands after OSC was installed? Do the hostnames still resolve? If not, you need to look at why that is happening first.
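    The checks suggested above boil down to a few commands on the new node (run once the system is up; svc:/system/identity:node is the service named in the bug report quoted in the question):

    ```shell
    svcs -x                              # any services in maintenance?
    svcs svc:/system/identity:node       # must be online before gethostname() works
    uname -n                             # the kernel's idea of the node name
    getent hosts "$(uname -n)"           # does that name resolve?
    ```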
    Regards,
    Tim
    ---
