CRS over Sun Cluster

hello
Sorry, I am a newbie in this area.
What are the advantages of installing Oracle CRS and then the database on top of Sun Cluster?
Is it needed, since we already have Sun Cluster installed?
Also, could it work if we only install Oracle Database 10gR2 on the Sun Cluster?
What are the advantages and disadvantages if we never use Sun Cluster
and just install Oracle CRS and then the Oracle database?
Sorry for the long question.
I really appreciate your help.

what are the advantages of installing oracle CRS then the database over sun cluster? is it needed? since we already installed SUN cluster....
1. CRS is required irrespective of whether or not you have any vendor clusterware (such as Sun Cluster).
2. Check this white paper from Sun Microsystems, which attempts to highlight the advantages of using Oracle RAC with Sun Cluster:
http://www.sun.com/blueprints/0105/819-1466.pdf
also could it work if only we install oracle database 10gR2 on the sun cluster?
As indicated above, no. You would still need CRS.
what are the advantages and disadvantages if we never use SUN cluster?
See if the above-mentioned document helps you with this question.
just install oracle CRS then install the oracle database
You could very well do this.
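As a quick sanity check once CRS is in place, something like the following should confirm the stack is up (a sketch; $ORA_CRS_HOME stands for wherever you installed the clusterware):
# list the nodes registered with CRS and check the stack health (10gR2)
$ORA_CRS_HOME/bin/olsnodes -n
$ORA_CRS_HOME/bin/crsctl check crs
$ORA_CRS_HOME/bin/crs_stat -t     # tabular status of CRS-managed resources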
HTH
Thanks
-Chandra Pabba

Similar Messages

  • Encountered ora-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from all out there
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team had encountered
    ora-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC Server resources.
    We tried the attached workaround by Oracle. The workaround works the first time, but it no longer works once the server is rebooted.
    Kindly help me check whether anyone has encountered the same problem and been able to resolve it. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check if the following MOS note helps.
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • RAC 10g on Sun Cluster 3.1 U3 and Interconnect

    Hello,
    I have the following Interconnects on my Sun Cluster:
    ce5: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
         inet 1.1.1.1 netmask ffffff80 broadcast 1.1.1.127
         ether 0:3:ba:95:fa:23
    ce5: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 6
         ether 0:3:ba:95:fa:23
         inet6 fe80::203:baff:fe95:fa23/10
    ce0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 7
         inet 1.1.0.129 netmask ffffff80 broadcast 1.1.0.255
         ether 0:3:ba:95:f9:97
    ce0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 7
         ether 0:3:ba:95:f9:97
         inet6 fe80::203:baff:fe95:f997/10
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 8
         inet 1.1.193.1 netmask ffffff00 broadcast 1.1.193.255
         ether 0:0:0:0:0:1
    During the RAC installation, the installer asks which interface I will use for the RAC interconnect, and I do not know whether it matters which interface I choose, because with a single interface I would in any case have a SPOF.
    Can anybody help?
    Thank you very much

    Sorry for the late reply, but the interface to pick is clprivnet0. It load-balances over the available private interconnects under the covers and so does not represent a single point of failure.
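    If you want to double-check what the installer recorded afterwards, oifcfg should show it (a sketch; the output lines are illustrative):
    # list the network interfaces registered with Oracle Clusterware
    $ORA_CRS_HOME/bin/oifcfg getif
    # expected output along the lines of:
    #   clprivnet0  1.1.193.0  global  cluster_interconnect
    #   ce1         10.0.0.0   global  public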
    Tim
    ---

  • IPFC (ip over fc) cluster interconnect

    Hello!
    Is it possible to create a cluster interconnect with the IPFC (IP over FC) driver (for example, as a reserve channel)?
    What problems might arise?

    Hi,
    technically Sun Cluster works fine with only a single interconnect, but this used to be unsupported. The mandatory requirement to have 2 dedicated interconnects was lifted a couple of months ago, although it is still a best practice and a recommendation to use 2 independent interconnects.
    The possible consequences of only having one NIC port have been mentioned in the previous post.
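    To see the state of the configured transport paths you can run, for example:
    # Sun Cluster 3.1
    scstat -W                 # show cluster transport paths and their status
    # Sun Cluster 3.2
    clinterconnect status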
    Regards
    Hartmut

  • Bizarre disk reservation problem with Sun Cluster 3.2 - Solaris 10 X4600

    We have a 4-node X4600 Sun Cluster with shared AMS500 storage. There are over 30 LUNs presented to the cluster.
    When either of the two higher nodes (i.e. node ID 2 and node ID 3) is booted, its keys are not added to 4 out of the 30 LUNs. These 4 LUNs show up as drive type unknown in format. The only thing I've noticed these LUNs have in common is that they are bigger than 1 TB.
    To resolve this I simply scrub the keys and run sgdevs; then they show up as normal in format and all nodes' keys are present on the LUNs.
    Has anybody come across this behaviour?
    Commands used to resolve problem
    1. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
    2. scrub keys #/usr/cluster/lib/sc/scsi -c scrub -d devicename
    3. #sgdevs
    4. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
    all nodes' keys are now present on the LUN

    Hi,
    according to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
    So at the end it all boils down to
    - cost: Solaris multipathing is free, as it is bundled
    - support: Sun can offer better support for the Sun software
    You can try to browse this forum to see what others have experienced with Powerpath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers.
    Hartmut

  • Sun Cluster failed to switch over

    Hi,
    I have configured a two-node Sun Cluster, and it was working fine all these days.
    Since yesterday, I have been unable to fail over the cluster to the second node;
    instead, resources are stopped and started again on the first node.
    When I use the command "scswitch -z -g oracle_failover_rg -h MFIN-SOL02" on the first node, I get these messages on the console:
    Sep 28 17:53:16 MFIN-SOL01 ip: [ID 678092 kern.notice] TCP_IOC_ABORT_CONN: local = 010.010.007.120:0, remote = 000.000.000.00
    0:0, start = -2, end = 6
    Sep 28 17:53:16 MFIN-SOL01 ip: [ID 302654 kern.notice] TCP_IOC_ABORT_CONN: aborted 0 connection
    Please suggest how to solve this problem.

    Those messages aren't important here. I think that might be related to the fault monitor being stopped.
    As I said in the previous post, you need to diagnose this bit by bit. Try the procedure manually, i.e. stop Oracle on node 1, manually switch over the disks and storage to node 2, mount the file system, bring up the logical address, and start the database, roughly as sketched below.
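    A rough sketch of that manual sequence, assuming an SVM diskset; the 10.10.7.120 address echoes the log above, but the diskset, mount point and interface names are made up for illustration:
    # on node 1 (MFIN-SOL01): stop the database and release the storage
    echo "shutdown immediate" | su - oracle -c "sqlplus / as sysdba"
    umount /oradata
    metaset -s oracle_ds -r                   # release the diskset
    # on node 2 (MFIN-SOL02): take over
    metaset -s oracle_ds -t                   # take the diskset
    mount /dev/md/oracle_ds/dsk/d100 /oradata
    ifconfig ce0 addif 10.10.7.120 netmask 255.255.255.0 up   # plumb the logical host IP
    echo "startup" | su - oracle -c "sqlplus / as sysdba"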
    I expect there is something wrong with your configuration, e.g. incorrect listener configuration.
    There is also a way of increasing the debug level for the Oracle agent. This is documented in the manuals IIRC.
    Regards,
    Tim
    ---

  • Sun Cluster 3.3 Mirror 2 SAN storages (storagetek) with SVM

    Hello all,
    I would like to know if you have any best practice for mirroring two storage systems with SVM on Sun Cluster without corrupting/losing data on the storage.
    I have currently enabled multipathing on the FC (stmsboot), and after that configured the cluster and created the SVM mirror with the DID devices.
    I have some points I would like to check for potential problems:
    a) 4 quorum votes. As I have two (2) nodes and 2 storage arrays whose availability I need to track, I have 4 votes, so the cluster needs 3 votes in order to start. Is there any solution to this, like cldevice combine?
    b) The mirror is at the SVM level, so when a failover happens the metasets move to the other node. Is there any chance that the mirror starts from the second SAN instead of the first and causes some kind of corruption? Is there some way to better protect the storage?
    c) The StorageTek has an option for snapshots; is there a good way of using this feature or not?
    d) Is there any problem with failing over global file systems (the global mount option)? The only thing that may write to this file system is the application itself, which belongs to the same resource group, so on failover it will stop all processes accessing this file system and it should be OK to unmount it.
    Best regards to all of you,
    PiT

    Thank you very much for your answers Tim, they are really very helpful; I only have some comments on them to be fully answered.
    a) It's all answered for me. I think I will add the vote from only one storage array, and if that array goes down I will tell the customer to check the quorum status and add the second array as QD. The quorum server is not a bad idea, but if the network is down for some reason I think bad things will happen, so I don't want to rely on that.
    b) I think you are clear enough.
    c) I think you are clear enough! (Just as I thought this would happen for the snapshots....)
    d) Finally, if this file system is on a metadevice that is started from the first node, and the second node is proxying to the first node for the metaset disks, is there any chance of the file system/metaset group being locked so that it cannot be taken?
    Thanks in advance,
    Pit
    (I will also look at the document you mention; a lot of thanks.)

  • Jboss configuration on Sun Cluster 3.1

    Hi.
    I am using the generic Data Service to manage a JBoss instance under Sun Cluster. The command is as follows.
    scrgadm -a -j jboss_resource -g cluster_failover_rg -t SUNW.gds \
    -y Scalable=false -y Start_timeout=900 \
    -y Stop_timeout=420 -x Probe_timeout=300 \
    -y Port_list="8080/tcp" \
    -y Resource_dependencies=oracle_server_resource \
    -x Start_command='/bin/su mform -c "/usr/msm40/scripts/startup/jboss.sh start"' \
    -x Stop_command='/bin/su mform -c "/usr/msm40/scripts/startup/jboss.sh stop"' \
    -x Child_mon_level=0 -x Failover_enabled=true -x Stop_signal=9
    My JBoss script takes about 8 to 10 minutes to start completely, as it is designed to start about 10 child processes. Hence I set the timeout to 15 minutes.
    But while starting the resource I found the following messages on the console.
    Oct 6 12:45:29 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:29 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:31 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:31 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:33 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:33 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to the host <msm> and port <8080>.
    Oct 6 12:45:35 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to host msm and port 8080: Connection refused.
    Oct 6 12:45:35 MFIN-SOL01 SC[SUNW.gds:5,cluster_failover_rg,jboss_resource,gds_
    svc_start]: Failed to connect to the host <msm> and port <8080>.
    Here msm is the logical hostname I have selected, and port 8080 is used by the JBoss instance.
    After throwing these error messages, the cluster software fails over to the other node and changes the status to offline after several attempts.
    I tried starting the instance manually and it worked fine.
    Please let me know if I am missing something.
    Thanks in advance for the help.

    Found the solution: I added a delay at the end of the start script. This is likely needed because JBoss takes some time to bind the ports and the hostname.
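    Rather than a fixed delay, the tail of the start script can poll until the port is actually bound (a sketch; the port and retry counts are illustrative):
    # wait up to ~15 minutes for JBoss to bind port 8080 before reporting success
    i=0
    while [ $i -lt 90 ]; do
        netstat -an | grep "\.8080" | grep -i listen >/dev/null 2>&1 && exit 0
        sleep 10
        i=`expr $i + 1`
    done
    exit 1    # port never came up; let GDS treat the start as failed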

  • Wrong hostname setting after Sun Cluster failover

    Hi Gurus,
    our PI system has been set up to fail over in a Sun Cluster with a virtual hostname s280m (primary host s280, secondary host s281).
    The Basis team set up the system profiles to use the virtual hostname, and I did all the steps in SAP Note 1052984 "Process Integration 7.1 High Availability" (my PI is 7.11).
    Now I believe I have substituted "s280m" in every spot where "s280" previously existed, but when I start the system on the DR box (s281), the Java stack throws errors when starting. Both the SCS01 and DVEBMGS00 work directories contain a file called dev_sldregs with the following error:
    Mon Apr 04 11:55:22 2011 Parsing XML document.
    Mon Apr 04 11:55:22 2011 Supplier Name: BCControlInstance
    Mon Apr 04 11:55:22 2011 Supplier Version: 1.0
    Mon Apr 04 11:55:22 2011 Supplier Vendor:
    Mon Apr 04 11:55:22 2011 CIM Model Version: 1.5.29
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 HTTP trace: false
    Mon Apr 04 11:55:22 2011 Data trace: false
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 *** Start SLD Registration ***
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 HTTP open timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP send timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP response timeout = 420 sec
    Mon Apr 04 11:55:22 2011 Used URL: http://s280:50000/sld/ds
    Mon Apr 04 11:55:22 2011 HTTP open status: false - NI RC=0
    Mon Apr 04 11:55:22 2011 Failed to open HTTP connection!
    Mon Apr 04 11:55:22 2011 ****************************
    Mon Apr 04 11:55:22 2011 *** End SLD Registration ***
    Mon Apr 04 11:55:22 2011 ****************************
    Notice it is using the wrong hostname (s280 instead of s280m). Where did I forget to change the hostname? Any ideas?
    thanks in advance,
    Peter

    Please note that the PI system is agnostic about the failover mechanism used.
    When you configure the parameters according to the mentioned note, this means that if one of the nodes is down, the load will be sent to another system under the same Web Dispatcher/load balancer.
    When using the Solaris failover solution, it covers the whole environment, including the Web Dispatcher, the database and all nodes.
    Therefore, please check the configuration against the page below, which talks specifically about the Solaris failover solution for SAP usage:
    http://wikis.sun.com/display/SunCluster/InstallingandConfiguringSunClusterHAfor+SAP

  • Sun Cluster probe value

    Hi,
    I have a little question about probe exit values when creating a probe script.
    Exit code 100 (automatic failover) means that the probe has failed and the resource should restart during the retry count within the retry_interval.
    Exit code 0 means that everything is OK.
    What about the other values (1, 2, ..., 99)? Are there other values?
    Thanks.

    Pat,
    For GDS there is also exit 201, which will perform an immediate failover.
    Your "exit 100 ---> immediate failover" is not completely true. An exit 100 from the probe informs GDS that the application has failed and requires immediate attention. That attention is determined by other resource properties, i.e. Retry_count and Retry_interval. So, assuming Retry_count=2, GDS will attempt a resource restart and only consider a failover to another node once Retry_count is exceeded within Retry_interval.
    The SUNW.gds man page provides further information, i.e.
    The exit status of the probe command is used to determine the severity of the failure of the application. This exit status, called probe status, is an integer between 0 (for success) and 100 (for complete failure). The probe status can also be 201, which causes the application to fail over unless Failover_enabled is set to False.
    One point to also consider is that Sun Cluster sums the failure history, where 100 indicates a complete failure. This implies that your probe could exit 50, and if the next time the probe runs it also exits 50, you'll have a failure history summing to 100, which triggers the reaction for a complete failure, e.g.
    25 + 25 + 25 + 25 = 100 would trigger a complete failure
    50 + 50 = 100 would trigger a complete failure
    Please note that if you use exit values such as 25 or 50, the failure history must sum to 100 within the moving Retry_interval window. So if Retry_interval were set to 300, you would have a 5-minute moving window in which to accumulate 100 in order to get GDS to react to a complete failure. This implies that if your probe exits 50 and then 301 seconds later exits 50 again, GDS won't react, because the exits do not sum to 100 within Retry_interval.
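    To make that concrete, a minimal probe skeleton (app_check is a hypothetical command standing in for your real health test):
    #!/bin/ksh
    # GDS probe sketch: the exit value feeds Sun Cluster's failure-history sum.
    app_check
    case $? in
      0) exit 0 ;;      # healthy
      1) exit 50 ;;     # degraded: two of these within Retry_interval sum to 100
      *) exit 100 ;;    # complete failure: restart, then failover per Retry_count
    esac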
    Hope this makes sense.
    Regards
    Neil

  • Sun Cluster 3.2, Zones, HA-Oracle, & FSS

    I have a customer who wants to deploy a cluster utilizing Solaris 10 Zones, creating the resource groups with the node list nodeA:zoneA, nodeB:zoneA so that the Oracle resource group is contained in the respective zones.
    Should the zones be created after the Sun Cluster software has been installed?
    When installing Oracle, should the binaries and such reside in the zone or in the global zone?
    Should FSS be configured after the resources have been configured?
    Thanks in advance,
    Ryan

    The Oracle binaries are not big at all, and there is not much I/O happening on that file system, so you can easily create a UFS file system for each zone and mount it via lofs into the zone, as sketched below. Or you can create a zpool for the binaries. My personal take would be to include them in the root path of the zones, and you are set.
    You must install the binaries in all zones your Oracle database can fail over to. To reduce the maintenance work in the case of upgrades, I would limit the binary installation to the zones in the node list of your Oracle resource group. If you install the binaries on all nodes/zones of the cluster, you have more work when it comes to an upgrade.
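    A sketch of the lofs variant (the zone and path names are made up):
    # loop-back mount a global-zone file system holding the Oracle binaries into zoneA
    zonecfg -z zoneA
    zonecfg:zoneA> add fs
    zonecfg:zoneA:fs> set dir=/u01/app/oracle
    zonecfg:zoneA:fs> set special=/export/zones/zoneA/oracle
    zonecfg:zoneA:fs> set type=lofs
    zonecfg:zoneA:fs> end
    zonecfg:zoneA> commit
    zonecfg:zoneA> exit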
    Kind Regards
    Detlef

  • Sun Cluster 3.2 without shared storage (Sun StorageTek Availability Suite)

    Hi all.
    I have a two-node Sun Cluster.
    I have configured and installed AVS on these nodes (AVS remote mirror replication).
    AVS is working fine, but I don't understand how to integrate it into the cluster.
    What I did:
    Created a remote mirror with AVS.
    v210-node1# sndradm -P
    /dev/rdsk/c1t1d0s1      ->      v210-node0:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node1# 
    v210-node0# sndradm -P
    /dev/rdsk/c1t1d0s1      <-      v210-node1:/dev/rdsk/c1t1d0s1
    autosync: on, max q writes: 4096, max q fbas: 16384, async threads: 2, mode: sync, group: AVS_TEST_GRP, state: replicating
    v210-node0#
    Created a resource group in Sun Cluster:
    v210-node0# clrg status avs_test_rg
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    avs_test_rg      v210-node0      No             Offline
                     v210-node1      No             Online
    v210-node0#
    Created a SUNW.HAStoragePlus resource with the AVS device:
    v210-node0# cat /etc/vfstab  | grep avs
    /dev/global/dsk/d11s1 /dev/global/rdsk/d11s1 /zones/avs_test ufs 2 no logging
    v210-node0#
    v210-node0# clrs show avs_test_hastorageplus_rs
    === Resources ===
    Resource:                                       avs_test_hastorageplus_rs
      Type:                                            SUNW.HAStoragePlus:6
      Type_version:                                    6
      Group:                                           avs_test_rg
      R_description:
      Resource_project_name:                           default
      Enabled{v210-node0}:                             True
      Enabled{v210-node1}:                             True
      Monitored{v210-node0}:                           True
      Monitored{v210-node1}:                           True
    v210-node0#
    By default everything works fine.
    But if I need to switch the RG to the other node, I have a problem.
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Offline   Offline
                                v210-node1   Online    Online
    v210-node0# 
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    clrg:  (C748634) Resource group avs_test_rg failed to start on chosen node and might fail over to other node(s)
    v210-node0#
    If I change the state to logging, everything works.
    v210-node0# sndradm -C local -l
    Put Remote Mirror into logging mode? (Y/N) [N]: Y
    v210-node0# clrg switch -n v210-node0 avs_test_rg
    v210-node0# clrs status avs_test_hastorageplus_rs
    === Cluster Resources ===
    Resource Name               Node Name    State     Status Message
    avs_test_hastorageplus_rs   v210-node0   Online    Online
                                v210-node1   Offline   Offline
    v210-node0#
    How can I do this without creating an SC agent for it?
    Anatoly S. Zimin

    Normally you use AVS to replicate data from one Solaris Cluster to another. Can you just clarify whether you are replicating to another cluster or trying to do it between a single cluster's nodes? If it is the latter, then this is not something that Sun officially supports (IIRC); rather, it is something that has been developed in the open source community. As such, it will not be documented in the main Sun SC documentation set. Furthermore, support and/or questions for it should be directed to the author of the module.
    Regards,
    Tim
    ---

  • Sun Cluster 3.2 - Global File Systems

    Sun Cluster has a Global File System (GFS) that supports read-only access throughout the cluster; however, only one node has write access.
    In Linux, a GFS file system can be mounted by multiple nodes for simultaneous read/write access. Shouldn't this be the same for Solaris as well?
    From the documentation that I have read,
    "The global file system works on the same principle as the global device feature. That is, only one node at a time is the primary and actually communicates with the underlying file system. All other nodes use normal file semantics but actually communicate with the primary node over the same cluster transport. The primary node for the file system is always the same as the primary node for the device on which it is built"
    The GFS is also known as Cluster File System or Proxy File system.
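    For reference, a cluster file system is a global device mounted with the global option in /etc/vfstab on all nodes, e.g. (the device and mount point here are illustrative):
    /dev/global/dsk/d11s1  /dev/global/rdsk/d11s1  /global/app  ufs  2  yes  global,logging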
    Our client believes that they can have their application "scaled", with all nodes in the cluster able to write to the globally mounted file system. My belief was that the only way this can occur is after the application has failed over, when the "write" would occur from the "primary" node that is mastering the application at that time. Any input or clarification will be greatly appreciated. Thanks in advance.
    Ryan

    Thank you very much, this helped :)
    And how seamless is the remounting of the block device LUN if one server dies?
    Should some clustered services (FS clients such as app servers) be restarted
    when the master node changes due to failover? Or is it truly seamless,
    with just a bit of latency added for the duration of mounting the block device on another
    node, and no fatal interruptions sent to the clients?
    And is it true that this solution is gratis, i.e. may legally be used for free
    unless the customer wants support from Sun (authorized partners)? ;)
    //Jim
    Edited by: JimKlimov on Aug 19, 2009 4:16 PM

  • TimesTen database in Sun Cluster environment

    Hi,
    Currently we have our application together with the TimesTen database installed at the customer on two different nodes (running on Sun Solaris 10). The second node acts as a backup to provide failover functionality, although right now only manual failover is supported.
    We are now looking into a hot-standby / high-availability solution using the Sun Cluster software. As understood from the documentation, applications can be 'plugged in' to Sun Cluster using Agents that monitor the application. Sun Cluster Agents are already available for certain applications such as:
    # MySQL
    # Oracle 9i, 10g (HA and RAC)
    # Oracle 9iAS Application Server
    # PostgreSQL
    (See http://www.sun.com/software/solaris/cluster/faq.jsp#q_19)
    Our question is whether Sun Cluster Agents are already (freely) available for TimesTen. If so, where can we find them? If not, should we write a specific Agent for TimesTen ourselves or handle database problems from the application?
    Does someone have any experience using TimesTen in a Sun Cluster environment?
    Thanks in advance!

    Yes, we use 2-way replication, but we don't use cache connect. The replication is created like this on both servers:
    create replication MYDB.REPSCHEME
    element SERVER01_DS datastore
    master MYDB on "SERVER01_REP"
    transmit nondurable
    subscriber MYDB on "SERVER02_REP"
    element SERVER02_DS datastore
    master MYDB on "SERVER02_REP"
    transmit nondurable
    subscriber MYDB on "SERVER01_REP"
    store MYDB on "SERVER01_REP"
    port 16004
    failthreshold 500
    store MYDB on "SERVER02_REP"
    port 16004
    failthreshold 500
    The application runs on SERVER01 and is standby on SERVER02. If an invalid state is detected in the application, the application on SERVER01 is stopped and the application on SERVER02 is started.
    In addition to this, we want to fail over if the database on SERVER01 is in an invalid state. What should we have the clustering agent monitor to detect an invalid state in TT?
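    One simple check a probe can make is to attempt a connection with ttIsql and treat a connect failure as an invalid state (a sketch; MYDB is the DSN from the scheme above, and the exit codes follow the GDS convention discussed earlier in this thread):
    #!/bin/ksh
    # probe sketch: ttIsql exits non-zero if it cannot connect to the data store
    ttIsql -e "quit;" MYDB >/dev/null 2>&1 || exit 100
    exit 0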

  • RAW disks for Oracle 10gR2 RAC, no Sun Cluster

    Yes, you read it correctly: no Sun Cluster. Then why am I on the forum, right? Well, we have one Sun Cluster and another setup that is RAC-only, for testing. Between Oracle and Sun, neither accepts any fault for problems with their perfectly honed products. Currently I have multipathed fibre HBAs to a StorEdge 3510, and I've tried to get Oracle to use a raw LUN for the OCR and voting disks. It doesn't see the disk. I've made sure the devices are stamped oracle:dba, and tried oracle:oinstall. When presenting /dev/rdsk/c7t<long number>d0s6 for the OCR, I get a "can not find disk path" error. Does Oracle raw mean SVM raw? Should I create metadevices?
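    For what it's worth, in 10gR2 the installer generally refuses raw slices whose ownership and permissions it cannot use; a sketch of the usual pre-install prep (device names are the placeholders from above, and the exact ownership rules should be verified against the installation guide):
    # before running the installer, hand the raw slices to the oracle user
    # (root.sh later tightens ownership of the OCR device)
    chown oracle:oinstall /dev/rdsk/c7t<long number>d0s5   # OCR (slice illustrative)
    chown oracle:oinstall /dev/rdsk/c7t<long number>d0s6   # voting disk
    chmod 660 /dev/rdsk/c7t<long number>d0s5 /dev/rdsk/c7t<long number>d0s6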

    "Between Oracle and Sun, neither accept any fault for problems with their perfectly honed products"...more specific:
    Not that the word "fault" is characterization of any liability, but a technical characterization of acting like a responsible stakeholder when you sell your product to a corporation. I've been working on the same project for a year, as an engineer. Not withstanding a huge expanse of management issues over the project, when technical gray areas have been reached, whereas our team has tried to get information to solve the issue. The area has become a big bouncing hot potato. Specifically, when Oracle has a problem reading a storage device, according to Oracle, that is a Sun issue. According to Sun, they didn't certify the software on that piece of equipment, so go talk to Oracle. In the sun cluster arena, if starting the database creates a node eviction from the cluster, good luck getting any specific team to say, that's our problem. Sun will say that Oracle writes crappy cluster verify scripts, and Oracle will say that Sun has not properly certified the device for use with their product. Man, I've seen it. The first time I said O.K. how do we avoid this in the future, the second time I said how did I let this happen again, and after more issues, money spent, hours lost, and customers, pissed --do the math.   I've even went as far as say, find me a plug and play production model for this specific environment, but good luck getting two companies to sign the specs for it...neither wants to stamp their name on the product due to the liability.  Yes your right, I should beat the account team, but as an engineer, man that's not my area, and I have other problems that I was hired to deal with.  I could go on.  What really is a slap in face is no one wants to work on these projects, if given the choice with doing a Windows deployment, because they can pop out mind bending amounts of builds why we plop along figuring out why clusterware doesn't like slice 6 of a /device/scsi_vhci/ .  Try finding good documentation on that.  ~You can deploy faster, but you can't pay more!                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   
