Sun Cluster + meta set shared disks -

Guys, I am looking for some instructions that most Sun administrators should be familiar with.
I am trying to create some cluster resource groups and resources, but first I am creating the file systems that will be used by two nodes in the Sun Cluster 3.2. We use SVM.
I have some drives that I plan to use for this specific cluster resource group that is yet to be created.
I know I have to create a metaset, since that is how the other resource groups in my environment are already set up, so I will follow the same approach.
# metaset -s TESTNAME
Set name = TESTNAME, Set number = 5
Host Owner
server1
server2
Mediator Host(s) Aliases
server1
server2
# metaset -s TESTNAME -a /dev/did/dsk/d15
metaset: server1: TESTNAME: drive d15 is not common with host server2
# scdidadm -L | grep d6
6 server1:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
6 server2:/dev/rdsk/c10t6005076307FFC4520000000000004133d0 /dev/did/rdsk/d6
# scdidadm -L | grep d15
15 server1:/dev/rdsk/c10t6005076307FFC4520000000000004121d0 /dev/did/rdsk/d15
Do you see what I am trying to say? If I add d6 to the metaset it will go through fine, but not d15, since d15 shows up against only one node in the scdidadm output above.
Please let me know how I can share drive d15 with the other node, the same as d6. Thanks much for your help.
-Param
Edited by: paramkrish on Feb 18, 2010 11:01 PM

Hi, thanks for your reply. You got me wrong. I am not asking you to be liable for the changes you recommend; I know that is not reasonable when asking for help. I am aware this is not a support site but a forum to exchange information that people already know.
We have a support contract, but that is only for the Sun hardware, and those support folks are somewhat OK with Solaris and setup but not real experts. I will certainly seek their help when needed; that is my last option. Since I thought this problem might be something trivial, I quickly posted a question in this forum.
We do have a test environment, but it does not have two nodes, just one node with zone clusters. Hence I do not see this problem there, and I think "cldev populate" would also be of no use to me in the test environment, since we do not have two nodes.
I will check the logs as you suggested and will get back if I find something. If you have any other thoughts, feel free to let me know (do not worry about the risks; I know I can take care of that).
-Param
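For the archive, the usual fix when scdidadm -L shows a drive against only one node is to sort out LUN masking/zoning on the array side so both hosts actually see the disk, then rebuild the device and DID namespaces on each node. A sketch, not verified on this cluster (d15 is the DID from the output above; run the commands as root on each node that should gain visibility):

```shell
# First confirm on the array/switch that server2 is masked/zoned to the LUN.
devfsadm -C                 # rebuild the /dev and /devices links on this node
scgdevs                     # update the global-devices namespace
scdidadm -r                 # rediscover DID devices (3.2 equivalent: cldevice populate)
scdidadm -L | grep d15      # d15 should now be listed against both hosts
```

Once both nodes show the same DID path, the `metaset -s TESTNAME -a /dev/did/dsk/d15` command should no longer fail with "drive is not common with host server2".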

Similar Messages

  • Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch

    Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch
    Hi,
    Currently the customer has a 3-node cluster connected to the SE3510 via the Sun StorEdge[TM] Network Fibre Channel Switch (SAN_Box Manager), running Sun Cluster 3.x with disksets. The customer wants to decommission the system but still access the SE3510 data on the NEW system.
    Initially, I removed one of the HBA cards from one of the cluster nodes and inserted it into the NEW system, which is able to detect the 2 LUNs from the SE3510 but not able to mount the file system. After some checking, I decided to follow the steps from SunSolve Info ID 85842, as shown below:
    1. Turn off all resource groups
    2. Turn off all device groups
    3. Disable all configured resources
    4. Remove all resources
    5. Remove all resource groups
    6. metaset -s <setname> -C purge
    7. Boot into non-cluster mode: boot -sx
    8. Remove all the reservations from the shared disks
    9. Shut down all the systems
    Now I am not able to see the two LUNs from the NEW system with the format command. cfgadm -al shows:
    Ap_Id Type Receptacle Occupant Condition
    c4 fc-fabric connected configured unknown
    1. Is it possible to get the data back and mount it accordingly?
    2. Does any configuration need to be done on the SE3510 or the SAN Manager?

    First, you will probably need to change the LUN masking on the SE3510 and probably the zoning on the switches to make the LUN available to another system. You'll have to check the manual for this as I don't have these commands committed to memory!
    Once you can see the LUNs on the new machine, you will need to re-create the metaset using the commands that you used to create it on the Sun Cluster. As long as the partitioning hasn't changed from the default, you should get your data back intact. I assume you have a backup if things go wrong?!
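    A sketch of what Tim describes, once the new host can see the LUNs. The set, host, and disk names here are placeholders (the real c#t#d# names come from format on the new host), and this assumes the drives still carry their original diskset partitioning; verify your backup before touching anything:

```shell
metaset -s datads -a -h newhost                    # create the diskset owned by the new host
metaset -s datads -a /dev/rdsk/c2t40d0 /dev/rdsk/c2t41d0   # re-add the shared drives
# Re-create the metadevices exactly as they were originally built, e.g.:
metainit -s datads d100 1 1 /dev/rdsk/c2t40d0s0
metastat -s datads                                 # check the metadevice state
mount -o ro /dev/md/datads/dsk/d100 /mnt           # try a read-only mount first
```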
    Tim
    ---

  • Recommendations for Multipathing software in Sun Cluster 3.2 + Solaris 10

    Hi all, I'm in the process of building a 2-node cluster with the following specs:
    2 x X4600
    Solaris 10 x86
    Sun Cluster 3.2
    Shared storage provided by an EMC CX380 SAN
    My question is this: what multipathing software should I use? The in-built Solaris 10 multipathing software or EMC's powerpath?
    Thanks in advance,
    Stewart

    Hi,
    according to http://www.sun.com/software/cluster/osp/emc_clarion_interop.xml you can use both.
    So at the end it all boils down to
    - cost: Solaris multipathing is free, as it is bundled
    - support: Sun can offer better support for the Sun software
    You can try to browse this forum to see what others have experienced with Powerpath. From a pure "use as much integrated software as possible" I would go with the Solaris drivers.
    Hartmut

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure two Sun Solaris 10 servers to share a remote file system that is created on a SAN disk (SAN LUN).
    The files in the remote file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly, I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
    Regards,
    Tim
    ---

  • Sun cluster quorum disk

    Hi,
    I just want to know how to assign a quorum disk under Sun Cluster. Can I use a LUN that is shared between both nodes as the quorum disk, and do I need to bring the disk under VxVM control first? I'd appreciate any response/advice.
    Thanks.

    No, you don't need to bring the disk under VxVM control.
    First run scdidadm -L from either node. This will give you a list of shared disk devices. Find one that is shared between the nodes and note its DID, e.g. d21:
    scconf -a -q globaldev=d21
    Once you have added a quorum disk you can set install mode to off:
    scconf -c -q installmodeoff
    I would also recommend reading this:
    http://docs.sun.com/app/docs/doc/816-3384/6m9lu6fig?q=sun+cluster+add+quorum+disk&a=view
    Then reset your quorum count:
    scconf -c -q reset
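    For what it's worth, on Sun Cluster 3.2 the same steps can be done with the newer object-oriented commands. A sketch (d21 is an example DID taken from scdidadm -L; check your own output first):

```shell
clquorum add d21                        # add the shared DID device as a quorum device
cluster set -p installmode=disabled     # take the cluster out of install mode
clquorum reset                          # reset the quorum vote counts
clquorum status                         # verify votes are as expected
```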

  • Shared Tuxedo 8.0 Binaries on a SUN Cluster 3.0

    I know perfectly well that in every installation document BEA strongly advises against sharing executables across remote file systems (NFS etc.). Still, I need to ask whether any of you have experience with a SUN 8 / Sun Cluster 3.0 environment where 2 or more nodes share disks in the same Sun Cluster. The basic idea is to have the Tux8 binaries installed only once, and then separate all the "dynamic" files (tmconfig, tlogdevices etc.) into their own respective directories (/node1, /node2 etc.), while they still remain on the clustered disks.
    Thank you for a quick response.
    Best regards,
    Raoul

    We have the same problem with 2 Sun E420s and a D1000 storage array.
    The problem is related to settings in the file /etc/system added by the cluster installation:
    set rpcmod:svc_default_stksize=0x4000
    set ge:ge_intr_mode=0x833
    The second line tries to configure a Gigabit Ethernet interface that does not exist.
    We commented out the two lines and everything works fine.
    I'm interested to know what you think about Sun Cluster 3.0 and what your experience has been.
    email : [email protected]
    Stefano
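    A minimal sketch of that workaround. In /etc/system, lines beginning with '*' are comments, so prefixing the two settings disables them; back up the file first, since a bad /etc/system can make the box unbootable:

```shell
# Keep a copy of the original before editing.
cp /etc/system /etc/system.orig
# Comment out the two lines added by the cluster install ('&' re-inserts
# the matched line after the '* ' comment marker).
sed -e 's/^set rpcmod:svc_default_stksize=0x4000$/* &/' \
    -e 's/^set ge:ge_intr_mode=0x833$/* &/' /etc/system.orig > /etc/system
```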

  • How the cluster works when shared storage disk is offline to the primary ??

    Hi All,
    I have configured the cluster as below:
    Number of nodes: 2
    Quorum devices: one quorum server, shared disks
    Resource group with HAStorage, logical hostname, Apache
    My cluster works fine when either node loses connectivity or crashes, but when I deny the primary node (the one on which HA storage is mounted) access to the shared disks, the cluster does not fail over the whole RG to the other node.
    I tried adding the HAStorage disks to the quorum devices, but it didn't help.
    In any case, I cannot do any I/O on the HAStorage file system on that node.
    NOTE: This is the same even in a zone cluster.
    Please guide me; below is the output of the # cluster status command:
    === Cluster Nodes ===
    --- Node Status ---
    Node Name Status
    sol10-1 Online
    sol10-2 Online
    === Cluster Transport Paths ===
    Endpoint1 Endpoint2 Status
    sol10-1:vfe0 sol10-2:vfe0 Path online
    --- Quorum Votes by Node (current status) ---
    Node Name Present Possible Status
    sol10-1 1 1 Online
    sol10-2 1 1 Online
    --- Quorum Votes by Device (current status) ---
    Device Name Present Possible Status
    d6 0 1 Offline
    server1 1 1 Online
    d7 1 1 Offline
    === Cluster Resource Groups ===
    Group Name Node Name Suspended State
    global sol10-1 No Online
    sol10-2 No Offline
    === Cluster Resources ===
    Resource Name Node Name State Status Message
    global-data sol10-1 Online Online
    sol10-2 Offline Offline
    global-apache sol10-1 Online Online - LogicalHostname online.
    sol10-2 Offline Offline
    === Cluster DID Devices ===
    Device Instance Node Status
    /dev/did/rdsk/d6 sol10-1 Fail
    sol10-2 Ok
    /dev/did/rdsk/d7 sol10-1 Fail
    sol10-2 Ok
    Thanks in advance
    Sid

    Not sure what you mean by "deny access", but it could be that reboot on path failure is disabled. This should
    enable it:
    # clnode set -p reboot_on_path_failure=enabled +
    HTH,
    jono
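    A quick way to verify the change took effect (a sketch; assumes the 3.2 clnode command is available):

```shell
clnode show -p reboot_on_path_failure           # check the current setting per node
clnode set -p reboot_on_path_failure=enabled +  # enable it on all nodes
clnode show -p reboot_on_path_failure           # confirm it now reports enabled
```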

  • QFS Meta data resource on sun cluster failed

    Hi,
    I'm trying to configure QFS in a cluster environment, and while configuring the metadata resource I ran into errors. I tried different types of QFS and none of them worked.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/sharedqfs
    n1u332 - shqfs: Invalid priority (0) for server n1u332FS shqfs: validate_node() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/haqfs
    n1u332 - Mount point /global/haqfs does not have the 'shared' option set.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    [root @ n1u331]
    ~ # scrgadm -a -j mds -g qfs-mds-rg -t SUNW.qfs:5 -x QFSFileSystem=/global/hasharedqfs
    n1u332 - has: No /dsk/ string (nodev) in device.Inappropriate path in FS has device component: nodev.FS has: validate_qfsdevs() failed.
    (C189917) VALIDATE on resource mds, resource group qfs-mds-rg, exited with non-zero exit status.
    (C720144) Validation of resource mds in resource group qfs-mds-rg on node n1u332 failed.
    any QFS expert here?

    Hi,
    Yes, we have 5.2; here is the wiki link: [http://wikis.sun.com/display/SAMQFSDocs52/Home|http://wikis.sun.com/display/SAMQFSDocs52/Home]
    I added the file system through the web console, and it is mounted and working fine.
    After creating the file system I tried to put it under Sun Cluster's management, but that asked for a metadata resource, and when creating the metadata resource I got the errors mentioned above.
    I need to use the QFS file system in a non-RAC environment, just mounting and using the file system. I could mount it on two machines in both shared mode and highly available mode; in both cases, writes on the second node are 3 times slower than on the node that hosts the metadata server, while read speed is the same. Could you please let me know whether it is the same in your environment? If so, what do you think is the reason? I see both sides writing to the storage directly, so why is it so slow on one node?
    regards,

  • Bizarre disk reservation problem with Sun Cluster 3.2 - Solaris 10 X4600

    We have a 4-node X4600 Sun Cluster with shared AMS500 storage. There are over 30 LUNs presented to the cluster.
    When either of the two higher nodes (i.e. node ID 2 and node ID 3) is booted, its keys are not added to 4 of the 30 LUNs. These 4 LUNs show up as drive type unknown in format. The only thing I've noticed these LUNs have in common is that they are bigger than 1 TB.
    To resolve this I simply scrub the keys and run sgdevs; then they show up as normal in format and all nodes' keys are present on the LUNs.
    Has anybody come across this behaviour?
    Commands used to resolve the problem:
    1. Check keys: # /usr/cluster/lib/sc/scsi -c inkeys -d devicename
    2. Scrub keys: # /usr/cluster/lib/sc/scsi -c scrub -d devicename
    3. # sgdevs
    4. Check keys again: # /usr/cluster/lib/sc/scsi -c inkeys -d devicename
    All nodes' keys are now present on the LUN.


  • File Server Failover Cluster without shared disks

    I have two servers that I wish to cluster as my Hyper-V hosts, and also two file servers, each with 10 4TB SATA disks. Everything I have read about implementing high availability at the storage level involves clustering the file servers (e.g. SOFS), which requires external shared storage that the servers in the cluster can access directly. I do not have external storage and do not have budget for it.
    Is it possible to implement some form of HA with Windows Server 2012 R2 file servers without shared storage? For example, is it possible to cluster the servers and have data on one server mirrored in real time to the other, such that if one server goes down, the other takes over serving storage requests using the mirrored data?
    I intend to use the storage to host VMs for a Hyper-V failover cluster and a SQL Server cluster; they will access the shares on the file servers through SMB.
    Each file server also has a 144GB SSD; how can I use it to improve performance?

    There are two ways for you to go:
    1) Build a cluster without shared storage using Microsoft's upcoming version of Windows (yes, they finally have that feature, and tons of other cool stuff). We've recently built both a Scale-Out File Server serving a Hyper-V cluster and a standard general-purpose File Server cluster with this version. I'll blog the edited content next week (you can drop me a message to get drafts right now), or you can use Dave's blog; he was the first one I know of who built it and posted about it, see:
    Windows Server Technical Preview (Storage Replica)
    http://clusteringformeremortals.com
    The feature you should be interested in is Storage Replica. The official guide is here:
    Storage Replica Guide
    http://blogs.technet.com/b/filecab/archive/2014/10/07/storage-replica-guide-released-for-windows-server-technical-preview.aspx
    Just be aware: the feature is new and the build is a preview (not even a beta), so failover does not happen transparently (even with the CA feature of SMB 3.0 enabled). However, I think tuning timeouts and improving I/O performance will fix that. SOFS failover is transparent even right away.
    2) If you cannot wait 9-12 months from now (hoping MSFT does not delay the release), or you are not happy with the very basic functionality MSFT has put there (active-passive design, no RAM cache, requirement for separate storage, system/boot and dedicated log disks where SSD is assumed), you can get more advanced functionality with third-party software. It will basically "mirror" part of your storage (it can even be a directly-accessed file on your only system/boot disk) between hypervisor or plain Windows hosts, creating a fault-tolerant, distributed SAN volume with optimal SMB3/NFS shares.
    For more details see:
    StarWind Virtual SAN
    http://www.starwindsoftware.com/starwind-virtual-san/
    There are other vendors who do similar things, but since you want a file server (no virtualization?), most of the Linux/FreeBSD/Solaris-based, VM-running options are out, and you need to look for native Windows implementations. Consider SteelEye DataKeeper (that's Dave, who blogged about the Storage Replica file server) and DataCore.
    Good luck :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • SQL 2008 R2 cluster installation failure - Failed to find shared disks

    Hi,
    The validation tests in the SQL 2008 R2 cluster installation (running on Windows 2008 R2) fail with the following error. The cluster has one root mount point with multiple mount points:
    "The cluster on this computer does not have a shared disk available. To continue, at least one shared disk must be available."
    The "Detail.txt" log has a lot of "access is denied" errors; here is just a sample. Any ideas what might be causing this issue?
    2010-09-29 12:54:08 Slp: Initializing rule      : Cluster shared disk available check
    2010-09-29 12:54:08 Slp: Rule applied features  : ALL
    2010-09-29 12:54:08 Slp: Rule is will be executed  : True
    2010-09-29 12:54:08 Slp: Init rule target object: Microsoft.SqlServer.Configuration.Cluster.Rules.ClusterSharedDiskFacet
    2010-09-29 12:54:09 Slp: The disk resource 'QUORUM' cannot be used as a shared disk because it's a cluster quorum drive.
    2010-09-29 12:54:09 Slp: Mount point status for disk 'QUORUM' could not be determined.  Reason: 'The disk resource 'QUORUM' cannot be used because it is a cluster quorum drive.'
    2010-09-29 12:54:09 Slp: System Error: 5 trying to find mount points at path
    \\?\Volume{e1f5ca48-c798-11df-9401-0026b975df1a}\
    2010-09-29 12:54:09 Slp:     Access is denied.
    2010-09-29 12:54:09 Slp: Mount point status for disk 'SQL01_BAK01' could not be determined.  Reason: 'The search for mount points failed.  Error: Access is denied.'
    2010-09-29 12:54:10 Slp: System Error: 5 trying to find mount points at path
    \\?\Volume{e1f5ca4f-c798-11df-9401-0026b975df1a}\
    2010-09-29 12:54:10 Slp:     Access is denied.
    2010-09-29 12:54:10 Slp: Mount point status for disk 'SQL01_DAT01' could not be determined.  Reason: 'The search for mount points failed.  Error: Access is denied.'
    2010-09-29 12:54:10 Slp: System Error: 5 trying to find mount points at path
    \\?\Volume{e1f5ca56-c798-11df-9401-0026b975df1a}\
    2010-09-29 12:54:10 Slp:     Access is denied.
    Thanks,
    PK

    Hi,
    We were asked by the PSS engineer to grant the following privileges to the account used to install SQL Server - I am referring to the user domain account, as opposed to the SQL service account. These privileges had already been applied to the SQL service account prior to the SQL installation. Assigning them to the user account resolved the issue.
      Act as Part of the Operating System = SeTcbPrivilege
      Bypass Traverse Checking = SeChangeNotify
      Lock Pages In Memory = SeLockMemory
      Log on as a Batch Job = SeBatchLogonRight
      Log on as a Service = SeServiceLogonRight
      Replace a Process Level Token = SeAssignPrimaryTokenPrivilege
    Thanks for everyone's assistance.
    Cheers,
    PK

  • Wrong hostname setting after Sun Cluster failover

    Hi Gurus,
    our PI system has been set up to fail over in a Sun cluster with a virtual hostname s280m (primary host s280, secondary host s281).
    The Basis team set up the system profiles to use the virtual hostname, and I did all the steps in SAP Note 1052984 "Process Integration 7.1 High Availability" (my PI is 7.11).
    Now I believe I have substituted "s280m" in every spot where "s280" previously appeared, but when I start the system on the DR box (s281), the Java stack throws errors on startup. Both the SCS01 and DVEBMGS00 work directories contain a file called dev_sldregs with the following error:
    Mon Apr 04 11:55:22 2011 Parsing XML document.
    Mon Apr 04 11:55:22 2011 Supplier Name: BCControlInstance
    Mon Apr 04 11:55:22 2011 Supplier Version: 1.0
    Mon Apr 04 11:55:22 2011 Supplier Vendor:
    Mon Apr 04 11:55:22 2011 CIM Model Version: 1.5.29
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 HTTP trace: false
    Mon Apr 04 11:55:22 2011 Data trace: false
    Mon Apr 04 11:55:22 2011 Using destination file '/usr/sap/XP1/SYS/global/slddest.cfg'.
    Mon Apr 04 11:55:22 2011 Use binary key file '/usr/sap/XP1/SYS/global/slddest.cfg.key' for data decryption
    Mon Apr 04 11:55:22 2011 Use encryted destination file '/usr/sap/XP1/SYS/global/slddest.cfg' as data source
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 *** Start SLD Registration ***
    Mon Apr 04 11:55:22 2011 ******************************
    Mon Apr 04 11:55:22 2011 HTTP open timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP send timeout     = 420 sec
    Mon Apr 04 11:55:22 2011 HTTP response timeout = 420 sec
    Mon Apr 04 11:55:22 2011 Used URL: http://s280:50000/sld/ds
    Mon Apr 04 11:55:22 2011 HTTP open status: false - NI RC=0
    Mon Apr 04 11:55:22 2011 Failed to open HTTP connection!
    Mon Apr 04 11:55:22 2011 ****************************
    Mon Apr 04 11:55:22 2011 *** End SLD Registration ***
    Mon Apr 04 11:55:22 2011 ****************************
    Notice it is using the wrong hostname (s280 instead of s280m). Where did I forget to change the hostname? Any ideas?
    thanks in advance,
    Peter

    Please note that the PI system itself is agnostic about the failover mechanism used.
    When you configure the parameters according to the mentioned note, the load will be sent to the other system under the same Web Dispatcher/load balancer in case one of the nodes is down.
    When using the Solaris failover solution, it covers the whole environment, including the web dispatcher, database and all nodes.
    Therefore, please check the configuration against the page below, which talks specifically about the Solaris failover solution for SAP usage:
    http://wikis.sun.com/display/SunCluster/InstallingandConfiguringSunClusterHAfor+SAP

  • RAW disks for Oracle 10R2 RAC NO SUN CLUSTER

    Yes, you read it correctly... no Sun Cluster. Then why am I on the forum, right? Well, we have one Sun Cluster, and another setup that is RAC-only, for testing. Between Oracle and Sun, neither accepts any fault for problems with their perfectly honed products. Currently I have multipathed fibre HBAs to a StorEdge 3510, and I've tried to get Oracle to use a raw LUN for the OCR and voting disks. It doesn't see the disk. I've made sure the devices are owned oracle:dba, and tried oracle:oinstall. When presenting /dev/rdsk/c7t<long number>d0s6 for the OCR, I get a "can not find disk path". Does Oracle raw mean SVM raw? Should I create metadevices?

    "Between Oracle and Sun, neither accept any fault for problems with their perfectly honed products"...more specific:
    Not that the word "fault" is characterization of any liability, but a technical characterization of acting like a responsible stakeholder when you sell your product to a corporation. I've been working on the same project for a year, as an engineer. Not withstanding a huge expanse of management issues over the project, when technical gray areas have been reached, whereas our team has tried to get information to solve the issue. The area has become a big bouncing hot potato. Specifically, when Oracle has a problem reading a storage device, according to Oracle, that is a Sun issue. According to Sun, they didn't certify the software on that piece of equipment, so go talk to Oracle. In the sun cluster arena, if starting the database creates a node eviction from the cluster, good luck getting any specific team to say, that's our problem. Sun will say that Oracle writes crappy cluster verify scripts, and Oracle will say that Sun has not properly certified the device for use with their product. Man, I've seen it. The first time I said O.K. how do we avoid this in the future, the second time I said how did I let this happen again, and after more issues, money spent, hours lost, and customers, pissed --do the math.   I've even went as far as say, find me a plug and play production model for this specific environment, but good luck getting two companies to sign the specs for it...neither wants to stamp their name on the product due to the liability.  Yes your right, I should beat the account team, but as an engineer, man that's not my area, and I have other problems that I was hired to deal with.  I could go on.  What really is a slap in face is no one wants to work on these projects, if given the choice with doing a Windows deployment, because they can pop out mind bending amounts of builds why we plop along figuring out why clusterware doesn't like slice 6 of a /device/scsi_vhci/ .  Try finding good documentation on that.  ~You can deploy faster, but you can't pay more! 

  • Patch set on sun cluster

    Hi,
    I have to upgrade 9.2.0.6 to 9.2.0.8 on Sun Cluster. Can you tell me in what sequence I need to install the patch set? Are there any prerequisites I need to take care of in advance? If anybody can point me to the exact doc, that would be very helpful. I have seen one doc on Metalink, but it is not sufficient.
    Thanks,
    Mk

    Have you checked the 9.2.0.8 patch README? There are several references on how to patch clustered instances - if the information is insufficient or lacking, please open an SR with Support.
    http://updates.oracle.com/ARULink/Readme/process_form?aru=8690150
    HTH
    Srini

  • No shared disks visible in the Cluster Configuration Storage dialog

    When installing the Oracle 10g clusterware the "Cluster Configuration Storage" dialog shows no shared disks.
    We are using:
    Windows 2003 Server
    HP Eva 4400

    Hello,
    All disks in the cluster are visible from all nodes (2 of them).
    We tested it with unpartitioned and partitioned disks (primary and extended). No way to make them visible to the OUI.
    Automount is enabled in Windows, as required by Oracle.
    Besides, we are using Standard Edition; therefore we have to work with ASM.
    Any more information needed?
    Thanks in advance.
