Shared Tuxedo 8.0 Binaries on a SUN Cluster 3.0

I know perfectly well that in every installation document BEA strongly advises against sharing executables across remote file systems (NFS etc.). Still, I need to ask whether any of you have experience with a Solaris 8 / Sun Cluster 3.0 environment where two or more nodes share disks through the same Sun Cluster 3.0 setup. The basic idea is to have the Tuxedo 8.0 binaries installed only once, and then to separate all the "dynamic" files (TUXCONFIG, TLOG devices, etc.) into their own respective directories (/node1, /node2, etc.), while they still remain on the clustered disks.
Thank you for a quick response.
Best of regards
Raoul
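For what it is worth, the layout Raoul describes might look like the sketch below. It only creates the directory skeleton in a scratch directory so it is safe to run; on the real cluster BASE would be a mount on the clustered disks (the path /global/tuxedo and the per-node subdirectory names are hypothetical examples, not from any BEA document, and BEA's warning about shared binaries still applies):

```shell
# Layout sketch, run against a scratch directory so it is safe to execute.
# On the real cluster BASE would be a mount on the clustered disks,
# e.g. /global/tuxedo (hypothetical path).
BASE=$(mktemp -d)/tuxedo
mkdir -p "$BASE/tux8"                  # Tuxedo 8.0 binaries, installed once
for node in node1 node2; do
    # per-node "dynamic" files: TUXCONFIG, TLOG devices, ULOGs ...
    mkdir -p "$BASE/$node/config" "$BASE/$node/tlog" "$BASE/$node/ulog"
done
find "$BASE" -type d
```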

We had the same problem with two Sun E420s and a D1000 storage array.
The problem is related to settings in the file /etc/system added by the cluster installation:
set rpcmod:svc_default_stksize=0x4000
set ge:ge_intr_mode=0x833
The second line tries to configure a Gigabit Ethernet interface that does not exist.
We commented out the two lines and everything works fine.
I'm interested to know what you think about Sun Cluster 3.0 and what your experience has been.
email : [email protected]
Stefano
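In case it helps anyone else: commenting a line out in /etc/system means prefixing it with `*`, the comment character for that file. A safe way to rehearse the edit on a copy first (a sketch only; the real file is /etc/system and the change takes effect only after a reboot):

```shell
# Try the edit on a copy first; the real file is /etc/system and the
# change only takes effect after a reboot.
copy=$(mktemp)
cat > "$copy" <<'EOF'
set rpcmod:svc_default_stksize=0x4000
set ge:ge_intr_mode=0x833
EOF
# '*' at the start of a line is the comment character in /etc/system
sed -e 's/^set ge:ge_intr_mode/* &/' \
    -e 's/^set rpcmod:svc_default_stksize/* &/' "$copy" > "$copy.new"
cat "$copy.new"
```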

Similar Messages

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure two Sun Solaris 10 servers to share a remote file system created on a SAN disk (SAN LUN).
    The files in the remote file system should be readable and writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 would be really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
    Regards,
    Tim
    ---
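For anyone trying Tim's first suggestion: a global file system on the shared LUN is declared in /etc/vfstab on every node with the `global` mount option. A sketch (the SVM device names and mount point are hypothetical examples, not from the thread):

```
# /etc/vfstab entry, identical on each cluster node (device paths are examples)
/dev/md/webds/dsk/d100  /dev/md/webds/rdsk/d100  /global/data  ufs  2  yes  global,logging
```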

  • Sun Cluster + meta set shared disks -

    Guys, I am looking for some instructions that I believe most Sun administrators would know.
    I am trying to create some cluster resource groups and resources, but before that I am creating the file systems that are going to be used by the two nodes in the Sun Cluster 3.2. We use SVM.
    I have some drives that I plan to use for this specific cluster resource group, which is yet to be created.
    I know I have to create a metaset, since that is how the other resource groups in my environment are already set up, so I will go with the same concept.
    # metaset -s TESTNAME
    Set name = TESTNAME, Set number = 5
    Host Owner
    server1
    server2
    Mediator Host(s) Aliases
    server1
    server2
    # metaset -s TESTNAME -a /dev/did/dsk/d15
    metaset: server1: TESTNAME: drive d15 is not common with host server2
    # scdidadm -L | grep d6
    6 server1:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
    6 server2:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
    # scdidadm -L | grep d15
    15 server1:/dev/rdsk/c10t6005076307FFC4520000000000004121d0 /dev/did/rdsk/d15
    Do you see what I am trying to say? If I want to add d6 to the metaset it will go through fine, but not d15, since it shows up against only one node, as you can see from the scdidadm output above.
    Please let me know how I can share drive d15 with the other node, the same as d6. Thanks much for your help.
    -Param
    Edited by: paramkrish on Feb 18, 2010 11:01 PM

    Hi, thanks for your reply. You got me wrong. I am not asking you to be liable for the changes you recommend, since I know that is not reasonable when asking for help. I am aware this is not a support site but a forum to exchange information that people are already aware of.
    We have a support contract, but that is only for the Sun hardware, and those support folks are somewhat OK when it comes to Solaris and setup, but not really experts. I will certainly seek their help when needed; that is my last option. Since I thought this problem might be something trivial, I quickly posted a question in this forum.
    We do have a test environment, but it does not have two nodes, just one node with zone clusters. Hence I do not see this problem in the test environment, and I think "cldev populate" would also be of no use to me there, since we do not have two nodes.
    I will check the logs as you suggested and will get back if I find something. If you have any other thoughts, feel free to let me know (don't bother about the risks, since I know I can take care of that).
    -Param
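In case it helps others who land here: the usual way to get a DID instance built on the second node is roughly the following (a sketch only; these commands need a live cluster, and they assume the LUN really is mapped and zoned to server2 on the SAN side, which is worth verifying first, since "drive is not common with host" typically means server2 simply cannot see the disk):

```
# On server2, after confirming the storage maps the LUN to it:
devfsadm                  # build the /dev/(r)dsk entries for the new path
scdidadm -r               # reconfigure DID instances on this node
scdidadm -L | grep d15    # d15 should now list both server1 and server2
# On Sun Cluster 3.2 the equivalent is: cldevice refresh / cldevice populate
```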

  • Configure iWS on Sun Cluster?

    I have installed Sun Cluster 3.1. On top of it I need to install iWS (Sun ONE Web Server). Does anyone have a document pertaining to it?
    I tried docs.sun.com, but the documents there sound like Greek or Latin to me.
    Cheers

    Just to get you started:
    3) create the failover RG to hold the shared address.
    #scrgadm -a -g sa-rg (unique arbitrary RG name) -h prod-node1,prod-node2 (comma-separated list of nodes that can host this RG, in the order you want it to fail over)
    again - #scrgadm -a -g sa-rg -h prod-node1,prod-node2
    4) add the network resource to the failover RG.
    # scrgadm -a -S (telling the cluster this is going to be a scalable resource; if it were failover you would use -L) -g sa-rg (the group we created in step #3) -l web-server (-l is for the hostname of the logical host. This name (web-server) needs to be specified in the /etc/hosts file on each node of the cluster. Even if a node is not going to host the RG, it has to know about the LH (logical host) hostname!)
    again - #scrgadm -a -S -g sa-rg -l web-server
    5) create the scalable resource group that will run on all nodes.
    #scrgadm -a -g web-rg -y Maximum_primaries=2 -y Desired_primaries=2 -y RG_dependencies=sa-rg
    -y sets a standard property (-x sets an extension property). Most resources use standard properties; others "can" use extension properties; still others "must" have extension properties defined. Maximum_primaries says how many nodes you want the instance to run on at most. Desired_primaries is how many instances you want to run at the same time. For an eight-node cluster running other data services you might say Maximum_primaries=8 Desired_primaries=6, which means an instance could run on any node in the cluster, but you want to make sure there are nodes available for your other resources, so you only run 6 instances at any given time, leaving the other two nodes to run your other data services.
    You could say Max=8 Desired=8 it's a matter of choice.
    6) create a storage resource to be used by the app. This tells the app where to go to find the software it needs to run or process.
    -a=add, -g=in the group, -j=resource name (needs to be unique and is arbitrary), -t=resource type (installed in pkg format earlier, and registered), -x=extension property (-y is used for standard properties; -x only for resource-type extension properties). /global/web is defined in the /etc/vfstab file with the mount options field specifying global,logging (at least global, maybe logging). (Note you do not specify the DG, just mounts from storage supplied by the DG, because multiple RGs may use storage from the same DG.)
    #scrgadm -a -g web-rg -j web-stor -t SUNW.HAStoragePlus (HAStoragePlus provides support only for global devices and file systems) -x Affinityon=false -x FileSystemMountPoints=/global/web
    7) create the app resource in the scalable RG.
    -a=add, -j=new resource, -g (in the group) web-rg (created in step #5), using the type -t SUNW.apache (defined in step #2; remember the pkg installed was SUNWscapc; SUNW.apache is a name we are using so Apache can serve possibly multiple resource groups). Each -j (resource name) must be unique and used only once, but each -t (resource type), although having a unique name among other RTs, can be used over and over again in different resources of different RGs. Bin_dir is self-explanatory: where to go to get the binaries. Network_Resources_Used=web-server (created in step #4) is the logical host's hostname from /etc/hosts, the name the clients are going to use to get to the resource. Resource_dependencies=web-stor (created in step #6) says that apache-res depends on web-stor, so if web-stor is not online, don't bother trying to start Apache, because the binaries won't be there; they are supplied by the storage being online and /global/web being mounted.
    #scrgadm -a -j apache-res -g web-rg -t SUNW.apache -x Bin_dir=/usr/apache/bin -y Scalable=True -y Network_Resources_Used=web-server -y Resource_dependencies=web-stor
    8) switch the failover group to activate it.
    #scswitch -Z -g sa-rg
    9) switch the scalable RG to activate it.
    #scswitch -Z -g web-rg
    10) make sure everything got started.
    #scstat -g
    11) connect to the newly cluster-started service.

  • VMware ESX + Sun Cluster - is it supported?

    Do you know if Sun Cluster installed on top of VMware ESX can be supported by Sun/Oracle?
    Do you know what requirements newly installed cluster has to meet in order to satisfy Sun's support contract?
    Regards,
    Marek Barczyk
    Edited by: Marek_Barczyk on Jun 29, 2010 7:13 AM

    The binaries (executables) in an Oracle home are linked (link-edited) against the OS libraries on each server where the software is installed.
    Unless the OS is IDENTICAL on each of the IDENTICAL (hardware) servers that would share the Oracle home, you could be in trouble.
    The only supported configuration (that I know of) where the Oracle binaries are shared between servers is 9i RAC. On 10g RAC the binaries are installed on each server.
    Otherwise I'd say it's NOT recommended; besides, you don't save anything (except a couple of gigs of disk space).
    :p

  • Encountered ora-29701 during Sun Cluster for Oracle RAC 9.2.0.7 startup (UR

    Hi all,
    Need some help from all out there
    In our Sun Cluster 3.1 Data Service for Oracle RAC 9.2.0.7 (Solaris 9) configuration, my team encountered
    ora-29701 *Unable to connect to Cluster Manager*
    during the startup of the Oracle RAC database instances on the Oracle RAC server resources.
    We tried the attached workaround from Oracle. It works well the first time, but it no longer works once the server is rebooted.
    Kindly help me check whether anyone has encountered the same problem and been able to resolve it. Thanks.
    Bug No. 4262155
    Filed 25-MAR-2005 Updated 11-APR-2005
    Product Oracle Server - Enterprise Edition Product Version 9.2.0.6.0
    Platform Linux x86
    Platform Version 2.4.21-9.0.1
    Database Version 9.2.0.6.0
    Affects Platforms Port-Specific
    Severity Severe Loss of Service
    Status Not a Bug. To Filer
    Base Bug N/A
    Fixed in Product Version No Data
    Problem statement:
    ORA-29701 DURING DATABASE CREATION AFTER APPLYING 9.2.0.6 PATCHSET
    *** 03/25/05 07:32 am ***
    TAR:
    PROBLEM:
    Customer applied 9.2.0.6 patchset over 9.2.0.4 patchset.
    While creating the database, customer receives following error:
         ORA-29701: unable to connect to Cluster Manager
    However, if customer goes from 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the problem does not occur.
    DIAGNOSTIC ANALYSIS:
    It seems that the problem is with libskgxn9.so shared library.
    For 9.2.0.4 -> 9.2.0.5 -> 9.2.0.6, the install log shows the following:
    installActions2005-03-22_03-44-42PM.log:,
    [libskgxn9.so->%ORACLE_HOME%/lib/libskgxn9.so 7933 plats=1=>[46]langs=1=> en,fr,ar,bn,pt_BR,bg,fr_CA,ca,hr,cs,da,nl,ar_EG,en_GB,et,fi,de,el,iw,hu,is,in, it,ja,ko,es,lv,lt,ms,es_MX,no,pl,pt,ro,ru,zh_CN,sk,sl,es_ES,sv,th,zh_TW, tr,uk,vi]]
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]]
    For 9.2.0.4 -> 9.2.0.6, install log shows:
    installActions2005-03-22_04-13-03PM.log:, [libcmdll.so ->%ORACLE_HOME%/lib/libskgxn9.so 64274 plats=1=>[46] langs=-554696704=>[en]] does not exist.
    This means that while patching from 9.2.0.4 -> 9.2.0.5, Installer copies the libcmdll.so library into libskgxn9.so, while patching from 9.2.0.4 -> 9.2.0.6 does not.
    ORACM is located in /app/oracle/ORACM which is different than ORACLE_HOME in customer's environment.
    WORKAROUND:
    Customer is using the following workaround:
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk rac_on ioracle ipc_udp
    RELATED BUGS:
    Bug 4169291

    Check if following MOS note helps.
    Series of ORA-7445 Errors After Applying 9.2.0.7.0 Patchset to 9.2.0.6.0 Database (Doc ID 373375.1)

  • Connected to an idle instance in sun cluster nodes.

    I have two Sun Cluster nodes sharing common storage.
    Two schemas:
    test1 for node A
    test2 for node B
    My requirement is as follows:
    Log in to node B.
    export ORACLE_SID=test1
    sqlplus / as sysdba
    But I get
    "connected to an idle instance"
    Is there any way to connect to the node A schema from node B?

    I found the answer:
    sqlplus <sysdbauser>/<password>@test1 as sysdba
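That works because the @test1 connect string makes a network (TNS) connection to the instance running on node A instead of a local bequeath connection, so the ORACLE_SID exported on node B no longer matters. It needs a matching alias in tnsnames.ora on node B, roughly like this (the hostname and port are hypothetical examples, not from the thread):

```
# tnsnames.ora entry on node B (host/port are examples only)
TEST1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = test1))
  )
```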

  • Sun Cluster.. Why?

    What are the advantages of installing RAC 10.2.0.3 on a Sun Cluster.? Are there any benefits?

    From Oracle 10g onward, there is no burning requirement for Sun Cluster (or any third-party cluster) as long as you are using Oracle technologies throughout for your Oracle RAC database. You can use Oracle RAC with ASM for shared storage, and that does not require any third-party cluster. Bear in mind that you may need to install Sun Cluster in the following scenarios:
    1) If there is an application running within the cluster, alongside the Oracle RAC database, that you want to configure for HA, and Sun Cluster provides easy-to-use cluster resources to manage and monitor the application. This can be achieved with Oracle Clusterware too, but you will have to write your own cluster resource for that.
    2) If you want to use a cluster file system such as QFS, then you will need to install Sun Cluster. If the cluster is only running the Oracle RAC database, then you can rely on Oracle technologies such as ASM or raw devices without installing Sun Cluster.
    3) Any certification conflicts.
    Any correction is welcome.
    -Harish Kumar Kalra

  • Bizarre disk reservation problem with Sun Cluster 3.2 - Solaris 10 X4600

    We have a 4-node X4600 Sun Cluster with shared AMS500 storage. There are over 30 LUNs presented to the cluster.
    When either of the two higher nodes (i.e. node id 2 and node id 3) is booted, its keys are not added to 4 out of the 30 LUNs. These 4 LUNs show up with drive type unknown in format. I've noticed that the only thing these LUNs have in common is that they are bigger than 1 TB.
    To resolve this I simply scrub the keys and run sgdevs; then they show up as normal in format and all nodes' keys are present on the LUNs.
    Has anybody come across this behaviour?
    Commands used to resolve problem
    1. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
    2. scrub keys #/usr/cluster/lib/sc/scsi -c scrub -d devicename
    3. #sgdevs
    4. check keys #/usr/cluster/lib/sc/scsi -c inkeys -d devicename
    all nodes' keys are now present on the LUN


  • Recommendations for Multipathing software in Sun Cluster 3.2 + Solaris 10

    Hi all, I'm in the process of building a 2-node cluster with the following specs:
    2 x X4600
    Solaris 10 x86
    Sun Cluster 3.2
    Shared storage provided by a EMC CX380 SAN
    My question is this: what multipathing software should I use? The in-built Solaris 10 multipathing software or EMC's powerpath?
    Thanks in advance,
    Stewart

    Hi,
    according to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
    So at the end it all boils down to
    - cost: Solaris multipathing is free, as it is bundled
    - support: Sun can offer better support for the Sun software
    You can try to browse this forum to see what others have experienced with PowerPath. From a pure "use as much integrated software as possible" standpoint, I would go with the Solaris drivers.
    Hartmut

  • Information about Sun Cluster 3.1 5Q4 and Storage Foundation 4.1

    Hi,
    I have two Sun Fire V440s with Solaris 9, latest release 9/05, with the latest cluster patches, Qlogic HBA fibre cards, and seven shared disks on an EMC Clariion CX500. I have installed and configured Sun Cluster 3.1 and Veritas Storage Foundation 4.1 MP1. My problem is that when I run the format command on each node, I see the disks in a different order, and Veritas SF 4.1 is also picking up the disks in a different order.
    1. Is Storage Foundation 4.1 compatible with Sun Cluster 3.1 2005Q4?
    2. Do you have a how-to or other procedure for Storage Foundation 4.1 with Sun Cluster 3.1?
    I'm very confused by Veritas Storage Foundation.
    Thanks!
    J-F Aubin

    This combination does not work today, but it will be available later.
    Since Sun and Veritas are two separate companies, it takes more
    time than expected to synchronize releases. Products supported by
    Sun for Sun Cluster installation undergo extensive testing, which also
    takes time.
    -- richard

  • "didadm: unable to determine hostname" error on Sun Cluster 4.0 - Solaris 11

    Trying to install Sun Cluster 4.0 on Sun Solaris 11 (x86-64).
    The iSCSI shared quorum disks are available in /dev/rdsk/. I ran
    devfsadm
    cldevice populate
    but I don't see DID devices getting populated in /dev/did.
    Also, when scdidadm -L is issued I get the following error. Has anyone seen the same error?
    - didadm: unable to determine hostname.
    I found that in cluster 3.2 there was Bug 6380956: didadm should exit with an error message if it cannot determine the hostname.
    The sun cluster command didadm, didadm -l in particular, requires the hostname to function correctly. It uses the standard C library function gethostname to achieve this.
    Early in the cluster boot, prior to the service svc:/system/identity:node coming online, gethostname() returns an empty string. This breaks didadm.
    Can anyone point me in the right direction to get past this issue with the shared quorum disk DID devices?

    Let's step back a bit. First, what hardware are you installing on? Is it a supported platform or is it some guest VM? (That might contribute to the problems).
    Next, after you installed Solaris 11, did the system boot cleanly and all the services come up? (svcs -x). If it did boot cleanly, what did 'uname -n' return? Do commands like 'getent hosts <your_hostname>' work? If there are problems here, Solaris Cluster won't be able to get round them.
    If the Solaris install was clean, what were the results of the above host name commands after OSC was installed? Do the hostnames still resolve? If not, you need to look at why that is happening first.
    Regards,
    Tim
    ---
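Tim's sanity checks can be scripted. This is only a sketch of the pre-flight checks he suggests, not a cluster command, and it assumes a POSIX shell with getent available:

```shell
#!/bin/sh
# Pre-flight checks (a sketch of the suggestions above, not a cluster command).
h=`uname -n`
if [ -z "$h" ]; then
    echo "uname -n returned an empty string" >&2
    exit 1
fi
echo "nodename: $h"
# getent consults /etc/hosts and the configured name services
if getent hosts "$h" >/dev/null 2>&1; then
    echo "$h resolves"
else
    echo "WARNING: $h does not resolve; fix /etc/hosts before debugging scdidadm"
fi
```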

  • Sun Cluster 3.2, Zones, HA-Oracle, & FSS

    I have a customer who wants to deploy a cluster utilizing Solaris 10 zones, creating the resource groups with the following: nodeA:zoneA, nodeB:zoneA, so the Oracle resource group will be contained in the respective zone.
    Should the zones be created after the Sun Cluster software has been installed?
    When installing Oracle, should the binaries and such reside in the zone or in the global zone?
    When configuring FSS, should this be done after the resources have been configured?
    Thanks in advance,
    Ryan

    The Oracle binaries are not big at all, and there is not much I/O happening on this file system; you can easily create a UFS file system for each zone and mount it via lofs into the zone. Or you can create a zpool for the binaries. My personal take would be to include them in the root path of the zones, and you are set.
    You must install the binaries in all zones your Oracle database can fail over to. To reduce the maintenance work in the case of upgrades, I would limit the binary installation to the zones in the node list of your Oracle resource group. If you install the binaries on all nodes/zones of the cluster, you have more work when it comes to an upgrade.
    Kind Regards
    Detlef
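Detlef's lofs suggestion would look roughly like this in zonecfg (a sketch; the zone name and both paths are hypothetical examples, not from the thread):

```
# zonecfg sketch for a zone named zoneA (names and paths are examples only)
zonecfg -z zoneA
zonecfg:zoneA> add fs
zonecfg:zoneA:fs> set dir=/u01/app/oracle      # mount point inside the zone
zonecfg:zoneA:fs> set special=/oracle/zoneA    # UFS file system in the global zone
zonecfg:zoneA:fs> set type=lofs
zonecfg:zoneA:fs> end
zonecfg:zoneA> commit
```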

  • Sun cluster with netapps filers

    Is anyone running Sun Cluster 3 with NetApp filers? We would primarily run Oracle databases, but would run some app servers as well.
    There do appear to be some issues regarding the fact that NFS file locking is not the same as on a standard file system.
    Also, I can see how the logical host would be set up to mount an NFS file system instead of, say, mounting a file system that uses a volume manager.

    Today, only Oracle RAC is supported under Sun Cluster 3.1
    using NetApp filers as shared storage. RAC has a builtin
    distributed lock manager to arbitrate access to the shared
    files. If you want to use your own application, then it needs
    a lock manager or you can wait for a later Sun Cluster release.
    -- richard

  • SAP Netweaver 7.0 Web Dispatcher HA Setup with Sun Cluster 3.2

    Hi,
    How do I make the SAP Web Dispatcher highly available? It is not mentioned in the guide 'Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS'.
    Since I do not want to install the central instance within the cluster, should I install two standalone Web Dispatchers on the two nodes and then make them HA? Or maybe just install one on the shared storage with CFS?
    And specifically, what kind of resource type should I use for it? SUNW.sapwebas?
    Thanks in advance,
    Stephen

    Hi all.
    I am having a similar issue with Sun Cluster 3.2 and SAP 7.0.
    Scenario:
    Central instance (not in cluster): started on one node
    Dialog instance (not in cluster): started on the other node
    When I create the resource for SUNW.sap_as like
    clrs create -g sap-rg -t SUNW.sap_as ... etc.
    in /var/adm/messages I get lots of WAITING FOR DISPATCHER TO COME UP...
    Then after the timeout it gives up.
    Any clue? What is it trying to connect to or waiting for? I have noticed that it is something before the startup script...
    TIA
