Configuration of LUNs in Sun Cluster

Hi,
I have a 2-node Sun Cluster (3.2) running on 2xE2900, Solaris 10...
Basically, there are 3 installed databases running in the development environment, and I need to cluster all 3 in the global zone, do some failovers, and then engage Sun PS to come on site and configure the production cluster environment...
Usually I have the metasets or ZFS configured already and the DBA then installs the DB while everything is nice and neat. My question, however, is: what is the best way to cluster LUNs when they already hold data which I cannot (or would prefer not to) lose?
I believe that creating the LUNs in a metaset will destroy the data, and obviously ZFS pools will also destroy any data... hopefully this is a simple question from an SC novice :)
Thanks...

Thanks Tim, that answers the question... one more, though :)
I was advised to install a single-node cluster and then add the 2nd node to the config later. I've done this, but when I try to do the add it seems I have a problem with the cluster interconnects and receive these messages:
Adding cable to the cluster configuration ... failed
scrconf: Failed to add cluster transport cable - does not exist
scinstall: Failed to update cluster configuration ("-m endpoint=<server>:ce3,endpoint=switch1")
The heartbeats are ce3 and ce7, which I know are working OK. I've tried everything from the 1st node, but when I enter:
# scstat -W
Nothing is shown, although when I do a scconf -p I can see the node transport adapters OK... so how do I give the 2nd node access to the cluster interconnects? I've tried clsetup, adding the interconnects via option 4, and I remember configuring them during installation...
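For what it's worth, the missing cable endpoints can usually be added by hand with scconf; a sketch, assuming the joining node is node2 and its adapters ce3/ce7 are cabled to switch1/switch2 (substitute your own endpoint names):
# scconf -a -m endpoint=node2:ce3,endpoint=switch1
# scconf -a -m endpoint=node2:ce7,endpoint=switch2
# scstat -W    (the transport paths should then show up)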
Again any input would be greatly received...
Thanks...
Steve..

Similar Messages

  • Configuring a HA service on Sun Cluster 2.2

    I am a Product Manager working with customers using Oracle software on Sun Cluster 2.2. My question is: how can I configure a service to bind to a logical/virtual address, so as to make it available at the same address after failover? Are there cluster-specific steps that I need to take in order to achieve this?

    In the OPS environment there is no use of the HA-Oracle agent; instances of the same database run on the different nodes. The failover is from the client side, because all nodes access the same shared-disk database. The tnsnames.ora file is modified so that if a transaction fails the client will try the other nodes. The OPS environment also uses Oracle's UNIX Distributed Lock Manager (UDLM), so there are some overhead issues.
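    A sketch of the client-side failover entry described above; the alias, host names, and service name are made up:
    OPS =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
          (FAILOVER = ON)
        )
        (CONNECT_DATA = (SERVICE_NAME = ops))
      )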
    Let me know if this is the info you needed,
    Heath
    [email protected]

  • Migrating a Sun Cluster Running Oracle to New Hardware

    Has anyone attempted this? Essentially we are moving a Sun Cluster from one location to hardware at another location while maintaining the same node names. From what I can tell, I need to (on an install LAN):
    1) Load the OS
    2) Configure the IPs
    3) Install Sun Cluster
    4) Install Oracle Parallel Server/RAC
    5) Restore the data on a per node basis
    6) Restore the shared data
    7) Adjust, tweak, and run
    Are there any pitfalls, or suggestions on the approach? The shop is relatively new to clustering, much less Oracle clustering, and the original cluster was installed by admins long gone.

    I would say that Apple should be able to update your 36-month maintenance agreement with an OS X Server 10.4 serial number.
    As far as I know, the structure of 10.3 and 10.4 serial numbers is different (that wasn't the case between 10.2 and 10.3), so I'm short of a technical answer here.
    Maybe you could try:
    /System/Library/ServerSetup/serversetup -setServerSerialNumber xxxx-xxx-xxx-x-xxx-xxx-xxx-xxx-xxx-xxx-x
    in a Terminal window on the server. It's theoretically the same as using Server Admin, but maybe this could help.

  • Sun Cluster 2.2 configuration

    Hi all,
    is it possible to configure Sun Cluster 2.2 to use "sqlplus" instead of "svrmgrl" to start an Oracle DB (9i)?
    thanks

    Oracle 9i is not supported on SC 2.2.
    Just replacing 'svrmgrl' with 'sqlplus' will not be sufficient to make HA-Oracle work with Oracle 9i. You will most likely encounter problems in the fault monitor; it will just fail.
    The Oracle 8i fault monitor is likely to be incompatible with a 9i server.
    If you really want to use Oracle 9i on SC 2.2, you will be better off writing a custom agent or moving to SC 3.x.

  • Sun Cluster: scalable cluster configuration

    Hi all,
    is it possible to configure a scalable, load-sharing cluster for SAP with an Oracle 10g database using only Sun Cluster 3.1 software, without Oracle RAC?
    If yes, how does Sun Cluster manage the two database instances on different nodes?
    thanks and regards
    suj.

    Now, you would need a real SAP expert.
    If you want to have more than 1 DB instance (to reduce failover times) you need a parallel DB, which in the case of Oracle means Oracle RAC. If you have a single-instance Oracle DB, it would be restarted in case of a failover.
    I think the only thing in SAP that is scalable is the app server. (I remember that there is also support for a replicated enqueue server, but that is a different issue.) But if you had SAP (is this the CI only?) on one node and the DB on the other, then after a node failure you would very probably only have to restart the one service that failed over, not both.
    I recommend getting a more detailed answer from an SAP expert.
    Hartmut

  • Sun Cluster 3.0 MQ Series 5.2 configuration

    Hi All,
    we have to review a WebSphere MQ 5.2 installation/configuration on 2 Solaris 8 machines clustered with Sun Cluster 3.0. The present configuration has a global filesystem /var/mqm with one queue manager.
    According to the Sun Cluster 3.1 data service for WebSphere MQ (5.3 ndr), there are 2 ways of laying out the filesystems:
    FFS: with local qmgrs (data and log) at each cluster node
    GFS: with global filesystem qmgrs (data and log).
    Are there any special considerations about the shmem and ipc directories in <qmgr>/data?
    Does this scenario also apply to 3.0/5.2?
    Does the FFS configuration allow persistent messages to fail over at takeover?
    Are there any data services/docs available for MQ on 3.0?
    Thanks in advance.

    To deploy multiple qmgrs requires /var/mqm to be mounted as a GFS; the reason for this is to overcome IPC key clashes. The recommended file system layout is as follows, where -> represents a symlink, assuming two qmgrs, qmgr1 & qmgr2.
    Using FFS (recommended - /local/mqm etc. are mounted as FFS via /etc/vfstab):
    /var/mqm -> /global/mqm
    /global/mqm/qmgrs/qmgr1 -> /local/mqm/qmgr/qmgr1
    /global/mqm/qmgrs/qmgr2 -> /local/mqm/qmgr/qmgr2
    /global/mqm/log/qmgr1 -> /local/mqm/log/qmgr1
    /global/mqm/log/qmgr2 -> /local/mqm/log/qmgr2
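    In shell terms, the FFS layout above amounts to symlinks like these (a sketch for qmgr1 only, using the paths from the listing):
    # ln -s /global/mqm /var/mqm
    # ln -s /local/mqm/qmgr/qmgr1 /global/mqm/qmgrs/qmgr1
    # ln -s /local/mqm/log/qmgr1 /global/mqm/log/qmgr1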
    Using GFS (mainly early SC3.0, as HAStoragePlus wasn't available until later on):
    All mounted as GFS via /etc/vfstab:
    /var/mqm -> /global/mqm
    /global/mqm/qmgrs/qmgr1
    /global/mqm/qmgrs/qmgr2
    /global/mqm/log/qmgr1
    /global/mqm/log/qmgr2
    Finally, FFS (Failover File System) is recommended because, at present, whenever GFS is used for the qmgr & log files, WebSphere MQ is unable to determine that the qmgr may have been started on another node. E.g., assuming GFS with MQ started on node A, it is possible (but don't do it) to start MQ on node B as well.
    The Sun Cluster agent provides some protection against this. Instead, it's recommended to deploy FFS as above.
    The agent for WebSphere MQ for SC 3.1 is available and supported on SC 3.0 update 3 as well as SC 3.1. There is also a patch available for the WebSphere MQ agent which deals with IPC cleanup, for single or multiple qmgrs.
    Docs are available at http://docs.sun.com/db/prod/7192#hic - just select Sun Cluster Data Service for WebSphere MQ.
    Finally, the above scenario also applies to SC3.0/5.2 as well as SC3.1/5.3, and either GFS or FFS allows persistent messages to be available after a failover.
    Regards
    Neil

  • JBoss configuration on Sun Cluster 3.1

    Hi.
    I am using the Generic Data Service (GDS) to manage a JBoss instance under Sun Cluster. The command is as follows.
    scrgadm -a -j jboss_resource -g cluster_failover_rg -t SUNW.gds \
    -y Scalable=false -y Start_timeout=900 \
    -y Stop_timeout=420 -x Probe_timeout=300 \
    -y Port_list="8080/tcp" \
    -y Resource_dependencies=oracle_server_resource \
    -x Start_command='/bin/su mform -c "/usr/msm40/scripts/startup/jboss.sh start"' \
    -x Stop_command='/bin/su mform -c "/usr/msm40/scripts/startup/jboss.sh stop"' \
    -x Child_mon_level=0 -x Failover_enabled=true -x Stop_signal=9
    My JBoss script takes about 8 to 10 minutes to start completely, as it is designed to start about 10 child processes. Hence I set the timeout to 15 minutes.
    But while starting the resource I found following messages on the console.
    Oct 6 12:45:29 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:29 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:31 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:31 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:33 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:33 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:35 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:35 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_svc_start]: Failed to connect to the host <msm> and port <8080>.
    Here msm is the logical hostname I have selected, and port 8080 is used by the JBoss instance.
    After throwing these error messages, the cluster software fails the resource over to the other node and changes the status to offline after several attempts.
    I tried starting the instance manually and it worked fine.
    Please let me know if I am missing something.
    Thanks in advance for the help.

    Found the solution: added a delay at the end of the start script. This is probably because JBoss takes some time to bind the ports on the logical hostname.
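    A sketch of that kind of delay, polling the port rather than sleeping blindly; the script path is the one from the resource definition above, while the wait loop itself is hypothetical:
    #!/bin/sh
    # start JBoss, then wait until something is listening on 8080
    # before returning control to the GDS start method
    /usr/msm40/scripts/startup/jboss.sh start
    while ! netstat -an | grep '\.8080 ' | grep LISTEN >/dev/null; do
        sleep 5
    done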

  • Configure iWS on Sun Cluster???

    I have installed Sun Cluster 3.1. On top of it I need to install iWS (Sun ONE Web Server). Does anyone have a document pertaining to it?
    I tried docs.sun.com, but the documents there sound like Greek or Latin to me.
    Cheers

    Just to get you started:
    3) create the failover RG to hold the shared address.
    #scrgadm -a -g sa-rg (a unique, arbitrary RG name) -h prod-node1,prod-node2 (a comma-separated list of nodes that can host this RG, in the order you want it to fail over)
    again - #scrgadm -a -g sa-rg -h prod-node1,prod-node2
    4) add the network resource to the failover RG.
    # scrgadm -a -S (telling the cluster this is going to be a shared address for a scalable service; if it were a failover logical hostname you would use -L) -g sa-rg (the group we created in step #3) -l web-server (-l gives the hostname of the logical host. This name (web-server) needs to be specified in the /etc/hosts file on each node of the cluster. Even if a node is not going to host the RG, it has to know about the LH (logical host) hostname!)
    again - #scrgadm -a -S -g sa-rg -l web-server
    5) create the scalable resource group that will run on all nodes.
    #scrgadm -a -g web-rg -y Maximum_primaries=2 -y Desired_primaries=2 -y RG_dependencies=sa-rg
    -y sets a standard property. Most resources use standard properties; others "can" use extension properties, and still others "must" have extension properties defined. Maximum_primaries says how many nodes you want the instance to run on at most. Desired_primaries is how many instances you want running at the same time. For an eight-node cluster running other DSs you might say Maximum_primaries=8 Desired_primaries=6, which means an instance could run on any node in the cluster, but you want to make sure there are nodes available for your other resources, so you only run 6 instances at any given time, leaving the other two nodes to run your other DSs.
    You could say Max=8 Desired=8; it's a matter of choice.
    6) create a storage resource to be used by the app. This tells the app where to go to find the software it needs to run or process.
    -a=add, -g=in the group, -j=resource name (needs to be unique and is arbitrary), -t=resource type (installed in pkg format earlier, and registered), -x=resource type extension property (-y is used for standard RG or RT properties; -x only for extension properties). /global/web is defined in the /etc/vfstab file with the mount options field specifying global,logging (at least global, maybe logging). (Note you do not specify the DG, just mounts from storage supplied by the DG, because multiple RGs may use storage from the same DG.)
    #scrgadm -a -g web-rg -j web-stor -t SUNW.HAStoragePlus (HAStoragePlus provides support for global devices and file systems) -x AffinityOn=false -x FilesystemMountPoints=/global/web
    7) create the app resource in the scalable RG.
    -a=add, -j=the new resource, -g (in the group) web-rg (created in step #5), using the type -t SUNW.apache (registered in step #2; remember the pkg installed was SUNWscapc). Each -j (resource name) must be unique and only used once, but each -t (resource type), although having a name unique among RTs, can be used over and over again in resources of different RGs. Bin_dir is self-explanatory (where to go to get the binaries). Network_resources_used=web-server (created in step #4) is the logical host's name from /etc/hosts, the name the clients are going to use to get to the resource. Resource_dependencies=web-stor (created in step #6) says that apache-res depends on web-stor, so if web-stor is not online, don't bother trying to start apache, because the binaries won't be there; they are supplied by the storage being online and /global/web being mounted.
    #scrgadm -a -j apache-res -g web-rg -t SUNW.apache -x Bin_dir=/usr/apache/bin -y Scalable=True -y Network_Resources_Used=web-server -y Resource_dependencies=web-stor
    8) switch the failover group to activate it.
    #scswitch -Z -g sa-rg
    9) switch the scalable RG to activate it.
    #scswitch -Z -g web-rg
    10) make sure everything got started.
    #scstat -g
    11) connect to the newly cluster-started service.
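    Pulled out of the prose, the whole sequence from steps 3 to 10 looks like this (same example names as above; -Z brings the groups online):
    #scrgadm -a -g sa-rg -h prod-node1,prod-node2
    #scrgadm -a -S -g sa-rg -l web-server
    #scrgadm -a -g web-rg -y Maximum_primaries=2 -y Desired_primaries=2 -y RG_dependencies=sa-rg
    #scrgadm -a -g web-rg -j web-stor -t SUNW.HAStoragePlus -x AffinityOn=false -x FilesystemMountPoints=/global/web
    #scrgadm -a -j apache-res -g web-rg -t SUNW.apache -x Bin_dir=/usr/apache/bin -y Scalable=True -y Network_resources_used=web-server -y Resource_dependencies=web-stor
    #scswitch -Z -g sa-rg
    #scswitch -Z -g web-rg
    #scstat -g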

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure the system to share a remote file system that is created on a SAN disk (SAN LUN) between two Sun Solaris 10 servers.
    The files in the remote file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 would be really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
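    A sketch of the global file system route, assuming an SVM metadevice d100 in a diskset called webds and a mount point /global/data (all names hypothetical); the same /etc/vfstab line goes on both nodes:
    /dev/md/webds/dsk/d100  /dev/md/webds/rdsk/d100  /global/data  ufs  2  yes  global,logging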
    Regards,
    Tim
    ---

  • Multiple Oracle databases in Sun cluster 3.2 (without RAC setup)

    There are 2 Sun SPARC (Sun Fire T2000) servers with the Solaris 10 (05/09) OS and Sun Cluster 3.2 software. We need two different Oracle databases & instances (Oracle 10g R2 without RAC) for an application. The first database is for Production; it needs to be configured on the first node & shared storage disk and needs high availability. This database should run from the second node if the first node fails. The second database is for Quality/Test, and it is preferred that it run on the second node for better load distribution. This DB doesn't require any failover.
    The shared storage is a Sun SE 3510 FC, and multiple LUNs can be created for the different databases.
    Is it possible to configure two different resource groups (one for Quality and the other for Production), make the first node primary for the Production RG and the second node primary for the Quality RG, and thus distribute the load across the 2 servers? If possible, what special configuration is required on the Solaris OS and Cluster side?
    I'd appreciate some configuration procedures/documents for this multi-master cluster setup.

    You can configure two resource groups, such as:
    # clrg create -n node-a,node-b prod-rg
    # clrg create -n node-b qa-rg
    and you configure the required resources (disk groups / file systems, logical host, Oracle listener, Oracle server) as described within
    http://docs.sun.com/app/docs/doc/819-2980?l=en&a=expand
    Note that this is not really called a "multi-master" configuration - that term has a specific meaning for a resource group; see http://docs.sun.com/app/docs/doc/820-4682/babefcja?l=en&a=view for details.
    With Solaris Cluster, all nodes that are part of a cluster are considered active and can host resource groups. You can have any number of resource groups running, where one subset runs on one node and another subset on other nodes. The nodelist property of the resource group defines where it can run; the first node in the list is the preferred primary.
    You can even define resource group dependencies or affinities between the resource groups. For example, you could define a negative affinity between qa-rg and prod-rg, such that if prod-rg needs to fail over to node-b (because, e.g., node-a died), it would offline qa-rg. Details for that kind of possibility are described at http://docs.sun.com/app/docs/doc/820-4682/ch14_resources_admin-35?l=en&a=view.
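    A sketch of that negative affinity, using the group names above (the "--" prefix declares a strong negative affinity, so qa-rg is offlined on any node that prod-rg occupies):
    # clrg set -p RG_affinities=--prod-rg qa-rg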
    Regards
    Thorsten

  • Do Oracle Clusterware and Oracle RAC require Sun Cluster?

    Hi,
    I have to set up Oracle RAC on Solaris 10 SPARC, so is it necessary to install Sun Cluster 3.2 and the QFS file system on Solaris?
    I have 2 Sun SPARC servers with Solaris 10 installed and a shared LUN setup (SAN disk, RAID 5 partitions).
    I have to have a 2-node setup for RAC load balancing.
    Regards
    Prakash

    Hi Prakash,
    a very interesting point:
    > As per the Oracle Clusterware documents, cluster manager support is only for Windows and Linux.
    > In the case of Solaris SPARC, will the cluster manager get configured???
    The term "Cluster Manager" refers to a cluster manager that Oracle used in 9i times, and that one was indeed only available on Linux / Windows.
    Therefore, let me ask you something: which version of Oracle RAC do you plan to use?
    Because for 9i RAC, you would need Sun or Veritas Cluster on Solaris. The answers given here that Sun Cluster is not required assume 10g RAC or higher.
    Now, you might see other dependencies which can be resolved by Sun Cluster. I cannot comment on those.
    For the raw setup: having raw disks (not raw logical volumes) will be fine without Veritas and ASM on top.
    Hope that helps. Thanks,
    Markus

  • Veritas required for Oracle RAC on Sun Cluster v3?

    Hi,
    We are planning a 2-node Oracle 9i RAC cluster on Sun Cluster 3.
    Can you please explain these 2 questions?
    1)
    If we have a hardware disk array RAID controller with LUNs etc., then why do we need Veritas Volume Manager (VxVM) if all the LUNs are configured at the hardware level?
    2)
    Do we need to have VxFS? All our Oracle database files will be on raw partitions.
    Thanks,
    Steve

    > We are planning a 2 node Oracle 9i RAC cluster on Sun Cluster 3.
    Good. This is a popular configuration.
    > Can you please explain these 2 questions?
    > 1) If we have a hardware disk array RAID controller with LUNs etc, then why do we need to have Veritas Volume Manager (VxVM) if all the LUNs are configured at a hardware level?
    VxVM is not required to run RAC. VxVM has an option (separately licensable) which is specifically designed for OPS/RAC. But if you have a highly reliable, multi-pathed, hardware RAID platform, you are not required to have VxVM.
    > 2) Do we need to have VxFS? All our Oracle database files will be on raw partitions.
    No.
    IMHO, simplify is a good philosophy. Adding more software and layers into a highly available design will tend to reduce the availability. So, if you are going for maximum availability, you will want to avoid over-complicating the design. KISS.
    In the case of RAC, or Oracle in general, many people do use
    raw and Oracle has the ability to manage data in raw devices
    pretty well. Oracle 10g further improves along these lines.
    A tenet in the design of highly available systems is to keep
    the data management as close to the application as possible.
    Oracle, and especially 10g, are following this tenet. The only
    danger here is that they could try to get too clever, and end up
    following policies which are suboptimal as the underlying
    technologies change. But even in this case, the policy is
    coming from the application rather than the supporting platform.
    -- richard

  • Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch

    Hi,
    Currently the customer has a 3-node cluster connected to the SE3510 via the Sun StorEdge[TM] Network Fibre Channel Switch (SAN_Box Manager), running Sun Cluster 3.x with disksets. The customer wants to decommission the system but still wants to access the 3510 data on the NEW system.
    Initially, I removed one of the HBA cards from one of the cluster nodes and inserted it into the NEW system; it is able to detect the 2 LUNs from the SE3510 but not able to mount the file system. After some checking, I decided to follow the steps from SunSolve Info ID 85842, as shown below:
    1. Turn off all resource groups
    2. Turn off all device groups
    3. Disable all configured resources
    4. Remove all resources
    5. Remove all resource groups
    6. metaset -s <setname> -C purge
    7. Boot to non-cluster mode: boot -sx
    8. Remove all the reservations from the shared disks
    9. Shut down all the systems
    Now I am not able to see the two LUNs from the NEW system with the format command. cfgadm -al shows:
    Ap_Id    Type       Receptacle  Occupant    Condition
    c4       fc-fabric  connected   configured  unknown
    1. Is it possible to get the data back and mount it accordingly?
    2. Does any configuration need to be done on the SE3510 or the SAN_Manager?

    First, you will probably need to change the LUN masking on the SE3510, and probably the zoning on the switches, to make the LUNs available to another system. You'll have to check the manual for this as I don't have these commands committed to memory!
    Once you can see the LUNs on the new machine, you will need to re-create the metaset using the commands that you used to create it on the Sun Cluster. As long as the partitioning hasn't changed from the default, you should get your data back intact. I assume you have a backup in case things go wrong?!
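    A sketch of re-creating the set on the (non-clustered) new host, assuming a set named dataset, a host named newhost, and the two LUNs seen as c4t0d0 and c4t1d0 (all names hypothetical):
    # metaset -s dataset -a -h newhost
    # metaset -s dataset -a c4t0d0 c4t1d0
    If the slices still line up, the metadevices inside the set can then be rebuilt with the same metainit commands used originally, and the file systems mounted from them.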
    Tim
    ---

  • Sun Cluster 3.1 with SAN 6320 - Any Known Issues?

    Hello,
    We are moving to new Sun hardware with following configurations.
    Solaris 8, Sun Cluster 3.1, and Oracle 8.0.6.3 on two V1280s connected to a Sun StorEdge SAN 6320. The SAN is also connected to 5 other machines, including one Windows 2000 box.
    The following are the limitations we came across during the testing phase.
    1. The maximum number of LUNs you can have on a 6320 coexisting with Sun Cluster is 16. (You cannot have more than 16 LUNs configured on a 6320!)
    2. The maximum number of cluster nodes that you can have with a 6320 is FOUR.
    Refer:
    http://docs-pdf.sun.com/816-3381/816-3381.pdf
    Bug ID: 4840853
    Is anybody else out there who has already moved, or is moving, to such a configuration and wants to give some tips and suggestions? Please let me know.
    Thanks
    Sair

    An update on the same...
    we are having issues with the SAN 6320.
    The SAN hangs when we use 7 nodes with Sun Cluster 3.1 simultaneously accessing the volumes. No volume is being accessed from more than a single node.
    Will update later...

  • VCS to Sun Cluster migration

    I am planning to migrate a 2-node cluster from VCS to Sun Cluster. How much downtime does this involve? Is there any documentation that I can reference?

    Hi all,
    In the following I have outlined the principal steps of migrating a cluster in place. This will be one of the subtopics of an upcoming blog post about VCS to SC migration.
    Pavel, you should definitely revisit SC 3.2, and explicitly the BUI. We had various VCS admins on different projects who told us the gap has become so small that VCS is not worth the additional cost.
    Bear in mind that migrating in place is the most complex scenario; doing it on a completely separate platform is a much simpler process. But let's proceed with the assumptions and process:
    Let us assume a two-node cluster where you want to migrate from VCS with VxVM to Solaris Cluster with Solaris Volume Manager. I assume as well that your data is mirrored. The steps below are an outline of the migration process; for the necessary cluster administration commands you need to consult the appropriate documentation.
    1. Reduce the VCS cluster to a one-node cluster and disconnect the interconnect. The interconnect has to be disconnected to allow a Solaris Cluster installation on the other node; Solaris Cluster checks the interconnect for unwanted traffic.
    2. Split the storage in two halves, and disallow access from the VCS cluster to the future Solaris Cluster half. This can be achieved, for example, by modifying the switch zoning or the LUN masking. At this point your application is still running, but you have no high availability and no data redundancy any more.
    3. Install a single-node Solaris Cluster on the second host; it is advisable to start with a fresh Solaris install.
    4. Configure the full Solaris Cluster topology with a temporary copy of your data. The data has to be installed by backup/restore, because you are changing the volume manager as well. It is important here that you use different IP addresses for the logical hosts to avoid duplicate addresses. Now the new single-node Solaris Cluster is ready to take the actual data.
    5. When you are ready for an application downtime, transfer the current data from the Veritas cluster to the Solaris Cluster again, and shut down the remaining VCS single-node cluster.
    6. Change the IP addresses of the logical hosts in the Solaris Cluster to their final values and enable all relevant resources. From now on your application will be running on the new Solaris Cluster.
    7. Re-establish the interconnect, destroy the VCS cluster, and install the Solaris Cluster packages on the old VCS node, but do not configure the node yet.
    8. Allow data access to the storage for both nodes with the appropriate methods.
    9. Add the second node to the Solaris Cluster, including the Solaris Cluster device groups; this step will take another short application downtime.
    10. Mirror your data (a sketch follows below). From this point you have full redundancy and full high availability again.
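    A sketch of step 10 with Solaris Volume Manager, assuming a diskset oraset whose mirror d10 currently has a single submirror d11, and a newly visible DID device d9 from the old VCS half of the storage (all names hypothetical):
    # metaset -s oraset -a /dev/did/rdsk/d9
    # metainit -s oraset d12 1 1 /dev/did/rdsk/d9s0
    # metattach -s oraset d10 d12
    # metastat -s oraset    (watch the resync complete)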
    Cheers
    Detlef
