Sun Cluster, 6140, and 'cross-connections'

This was brought up in the storage forum by somebody else, but the responses never answered the original question:
In the 6140 setup document at http://docs.sun.com/source/819-7497-11/chapter3.html#50589714_93886, figures 3-3 and 3-4 show two different ways to cable a host to a 6140 via a SAN switch.
It states that the setup in figure 3-4 is not supported in a Sun Cluster environment.
The problem is that, given the active/passive nature of the 6140, the setup shown in figure 3-4 is the obvious one to use, since it avoids having to force all of the LUNs over in the event of an HBA port or switch failure.
To make life more interesting, the 6140 setup doc does not make note of which version of Sun Cluster the restriction applies to, what the bug ID is, or any other information needed to determine whether the restriction is still valid.
So, does this restriction still exist? If so, for which version of Sun Cluster? Which version of Solaris?

Thanks for the clarification.
As an aside, it would be nice if, in the future, the documentation contained a bit more information than a bare note saying 'this is not supported'. A reference to a bug ID or info doc would go a long way toward helping folks determine whether a restriction is still valid.
--john

Similar Messages

  • Sun Cluster Tools and PBS/Torque support

    I hope this is the right forum...
    The Sun Cluster Tools 8.2 user guide says that the PBS/Torque DRM is supported out of the box for Open MPI, and that this can be confirmed by issuing a command similar to:
    ompi_info | grep plm
    but I only get rsh and SLURM:
    MCA plm: rsh (MCA v2.0, API v2.0, Component v1.3.3)
    MCA plm: slurm (MCA v2.0, API v2.0, Component v1.3.3)
    Is the documentation wrong? What do I need to do to get it going with Torque?

    Does your ompi_info output the following line? The tm plm indicates PBS/Torque support.
    MCA plm: tm (MCA v2.0, API v2.0, Component v1.3.3)
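    If the tm line is absent, the Open MPI build likely lacks Torque support and needs to be rebuilt against your Torque installation. A minimal sketch of a rebuild from Open MPI source (the paths are assumptions; point --with-tm at your Torque/PBS install prefix):
    ./configure --with-tm=/opt/torque --prefix=/opt/openmpi
    make all install
    ompi_info | grep plm    # should now also list the tm component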

  • IDS 5.0 and Sun-Cluster

    I want to make IDS 5.0 highly available using Sun Cluster. I have a few questions about it.
    1. Can I install IDS on a local disk and make it highly available? The Sun Cluster doc says it should be on a shared disk.
    2. I have an already-installed IDS that I want to make highly available using Sun Cluster. What steps should I follow to achieve this?

    I suppose that the answer is that it is not a simple task and it depends on the kind of cluster you want to deploy.
    I suggest that you carefully read the documentation of Sun Cluster and specifically the Directory Server specific parts.
    The way to do it is different with Sun Cluster 2 and Sun Cluster 3.0....
    Or you can request help from Sun Professional Services...
    Regards,
    Ludovic.

  • Does sun Cluster 3.0/3.1 HA Oracle agent support to use Oracle spfile?

    When defining the resource, the 'parameter_file' property is usually set to the Oracle pfile. Is it possible to use an Oracle spfile instead?
    It is said that if 'parameter_file' is left NULL, it defaults to the Oracle default. Suppose I leave it NULL and have an Oracle spfile created in the default location: will it use the spfile?

    You did not specify which Sun Cluster version and Oracle version you are running.
    Within SC 3.1 my understanding is that beginning with Oracle 9i it is possible to use the spfile.
    If you leave the "parameter_file" property empty (NULL) then the default behaviour for 9i should work:
    search under $ORACLE_HOME/dbs in the order:
    1. spfile${ORACLE_SID}.ora
    2. spfile.ora
    3. init${ORACLE_SID}.ora
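    A sketch of clearing the property so this default search applies (assuming SC 3.1's scrgadm syntax; the resource name is illustrative):
    # scrgadm -c -j oracle-server-rs -x Parameter_file=""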
    Greets
    Thorsten

  • Sun Cluster Core Conflict - on SUN Java install

    Hi
    We had a prototype cluster that we were playing with over two nodes.
    We decided to uninstall the cluster by putting each node into single-user mode and running scinstall -r.
    Afterwards we found that the Java Availability Suite was a little messed up, maybe because the kernel/registry had not been updated: it thought the cluster and agent software was uninstalled and would not let us re-install. All the executables in /etc/cluster/bin had been removed from the nodes.
    So, on both nodes we ran the uninstall program from /var/sadm/prod/... and selected the cluster and agents to uninstall.
    On the first node, this completely removed the Sun Cluster components and then allowed us to re-install the cluster software successfully.
    On the second node, for some reason, it has left behind the component "Sun Cluster Core", and will not allow us to remove it with the uninstall.
    When we try to re-install we get the following:
    "Conflict - incomplete version of Sun Cluster Core has been detected"
    It then points us to the Sun Cluster upgrade guide on sun.com.
    My question is - how do we 'clean up' this node and remove the sun cluster core so we can re-install the sun cluster software from scratch?
    I don't quite understand how this has been left behind....
    thanks in advance
    S1black.

    You can use prodreg directly to clean up when your de-install has gone bad.
    Use:
    # prodreg browse
    to list the products. You may need to recurse down into the individual items. Then use:
    # prodreg unregister ...
    to unregister and pkgrm to remove the packages manually.
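    For example (the unregister flags and package names here are assumptions; check prodreg(1M) and the browse output for the actual UUID, instance, and packages on your node):
    # prodreg browse
    # prodreg unregister -fr -u <uuid> -i <instance>
    # pkginfo | grep -i cluster
    # pkgrm SUNWscr SUNWscu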
    That has worked for me in the past. Not sure if it is the 'official' way though!
    Regards,
    Tim
    ---

  • DS6 in Sun cluster

    Hi
    I am migrating DS 5.2 to DS 6.1. I have DS 5.2 and Messaging Server 6.2 in a cluster environment. I am OK with the migration process, but how do I replace the existing LDAP resource in the cluster with the new one? My messaging resource is dependent on the LDAP (5.2) resource. I would like to add a new LDAP resource to the cluster and link the messaging resource to it while keeping my old LDAP resource offline. Can anybody guide me on what to do and how?
    Thanks in advance

    I believe you need to make a new Resource group for DS 6. Once it is up and running, it should be easy to change the MS dependency to the new RG.
    I am not familiar enough with Sun Cluster dependencies (and I don't have a cluster running now) to give you the exact details, but it seems to be just a matter of changing which resource (group) MS depends on.
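    A sketch of that change with SC 3.x scrgadm syntax (the resource names are illustrative):
    # scrgadm -c -j messaging-rs -y Resource_dependencies=ds6-rs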
    Regards,
    Ludovic.

  • Does SUN CLUSTER WARE support ASM?

    Does SUN CLUSTER WARE support ASM?
    Where can I find the answer ? Thanks.

    I am not an expert, but here it goes: Sun Cluster is used for clustering machines and processes, but NOT really for clustering disks; that has to be done through third-party software like Veritas. BUT why use Sun Cluster on the machines when you can cluster via Oracle CRS? CRS then uses ASM for the clustered disks. Bingo: you save money on Sun Cluster software and Veritas software.
    One day Oracle will rule the world!

  • Regarding Building of Sun Cluster

    I want to learn Sun Cluster in VMware, which is installed on top of 32-bit Windows XP, so I can practice for the Sun Cluster certification. Unfortunately I don't have SPARC hardware. Is it possible to install a two-node cluster in VMware? I have shared iSCSI storage (Openfiler). Could the admins please guide me on which Solaris version, Sun Cluster version, and other prerequisites are needed for a VMware Sun Cluster configuration?
    Hardware: x86 Intel Dual Core 2.5 GHz (not 64-bit emulated), 4 GB RAM

    VirtualBox allows you to run 64-bit Solaris even on a 32-bit operating system, as long as your hardware is 64-bit. (Yes, you need 64-bit HARDWARE, and the starter of this thread clearly stated that his hardware is NOT 64-bit!)
    See:
    http://download.virtualbox.org/virtualbox/2.2.4/UserManual.pdf
    Pages 17-18...
    BUT: you can officially use the OpenSolaris versions.
    Yes, they do include 32-bit versions!
    With OpenSolaris 2009.06 it is as easy as this:
    Just install OpenSolaris 2009.06 and add the packages from pkg.sun.com/opensolaris/ha-cluster
    See:
    http://docs.sun.com/app/docs/prod/open.ha.cluster?l=en&a=view
    and more specifically:
    http://docs.sun.com/app/docs/doc/820-7821/gikpv?l=en&a=view
    It's even easier than Solaris 10! There's no real technical difference between SC32u3 on Solaris 10 and OHAC for OpenSolaris, so I highly recommend using OHAC on OpenSolaris!
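    For reference, the package setup might look like this (the publisher name, certificate paths, and group package name are assumptions; pkg.sun.com required registered access with a key and certificate):
    # pkg set-publisher -k /path/to/key.pem -c /path/to/cert.pem -O https://pkg.sun.com/opensolaris/ha-cluster ha-cluster
    # pkg install ha-cluster-full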
    Matthias

  • Sun Cluster & 6130/6140 thru switch with cross-connections not supported?

    Hi:
    I noticed that the 6140 does not support cross-connecting the 2 controllers to 2 switches for higher availability when using Sun Cluster:
    http://docs.sun.com/source/819-7497-10/chapter3.html
    Does anyone know why this restriction is there?
    Thanks!

    Since there was no real answer to the question in this forum, I cross posted this issue to the cluster forum.
    See http://forum.java.sun.com/thread.jspa?threadID=5261282&tstart=0 for the full thread.
    Basically, the restriction against cross-connections is no longer valid and the documentation should be updated to remove the note.
    This is all a good thing, because I had my 6140s wired into my Sun Cluster environment via the 'cross-connections' method diagrammed in figure 3-4. :-)

  • Storagetek 6140 - chunk size? - veritas and sun cluster tuning?

    Hi, we've just got a 6140 and I did some raw write and read tests: very nice box!
    Current config: 16 FC disks (300 GB / 2 Gbit/s): 1x hot spare, 15x RAID 5 (512 KiB chunk)
    3 logical volumes: vol1: 1.7 TB, vol2: 1.7 TB, vol3: the rest (about 450 GB)
    on 2x T2000 CoolThreads servers (32 GiB memory each)
    It seems the max write performance (from my tests) is:
    512 KiB chunk / 1 MiB block size / 32 threads
    -> 230 MiB/s write transfer rate
    My tests:
    * chunk size: 16 KiB / 512 KiB
    * threads: 1/2/4/8/16/32
    * block size (KiB): 0.5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
    Did anyone out there run tests with other chunk sizes?
    How about tuning the Veritas file system and Sun Cluster?
    Veritas FS: I've read so far about write_pref_io and write_nstream...
    I guess setting them to write_pref_io=1048576, write_nstream=32 would be best in this scenario, right?
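    (For reference: those VxFS tunables can be set per mounted file system with vxtunefs; the mount point below is illustrative, and values set this way do not persist across remounts unless also added to /etc/vx/tunefstab.)
    # vxtunefs -o write_pref_io=1048576 -o write_nstream=32 /mnt/vol1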

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas

  • Sun Cluster 3.x connecting to SE3510 via Network Fibre Switch

    Hi,
    Currently the customer has a 3-node cluster connected to the SE3510 via the Sun StorEdge[TM] Network Fibre Channel Switch (SAN_Box Manager) and running Sun Cluster 3.x with disksets. The customer wants to decommission the system but still access the 3510 data on the NEW system.
    Initially, I removed one of the HBA cards from one of the cluster nodes and inserted it into the NEW system; it is able to detect the 2 LUNs from the SE3510 but not able to mount the file system. After some checking, I decided to follow the steps from SunSolve Info ID 85842, as shown below:
    1.Turn off all resources groups
    2.Turn off all device groups
    3.Disable all configured resources
    4.remove all resources
    5.remove all resources groups
    6.metaset -s <setname> -C purge
    7.boot to non-cluster mode: boot -sx
    8.Remove all the reservations from the shared disks
    9.Shutdown all the system
    Now I am not able to see the two LUNs from the NEW system with the format command. cfgadm -al shows:
    Ap_Id   Type        Receptacle   Occupant     Condition
    C4      fc-fabric   connected    configured   Unknown
    1. Is it possible to get the data back and mount it accordingly?
    2. Does any configuration need to be done on the SE3510 or the SAN_Manager?

    First, you will probably need to change the LUN masking on the SE3510 and probably the zoning on the switches to make the LUN available to another system. You'll have to check the manual for this as I don't have these commands committed to memory!
    Once you can see the LUNs on the new machine, you will need to re-create the metaset using the commands that you used to create it on the Sun Cluster. As long as the partitioning hasn't changed from the default, you should get your data back intact. I assume you have a backup if things go wrong?!
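    A sketch of the re-creation step (the set name and drive names are illustrative; use the same drives that were in the original diskset):
    # metaset -s datads -a -h newhost
    # metaset -s datads -a c2t0d0 c2t1d0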
    Tim
    ---

  • JMQ cluster and unstable connections

    Hello all.
    I have a few architectural questions about building an OpenMQ message-passing infrastructure between multiple offices which do not always have on-line internet connections. We also need to distribute the MQ mesh configuration info.
    From the scale of my questions it seems that I or our developers don't fully understand MQ, because I think that many of our problems and/or solution ideas (below) should already be handled within the MQ middleware, and not by us from outside it.
    The potential client currently has a (relatively ugly) working solution which they wanted to revise for simplification, if possible, but this matter is not urgent and answers are welcome at any timeframe :)
    I'd welcome any insights, ideas and pointers as to why our described approach may be plain wrong :)
    To sum this post up, here's my short questionnaire:
    1) What is a good/best way to distribute MQ mesh config when not all nodes are available simultaneously?
    2) What are the limits on the number of brokers and queues in one logical mesh?
    3) Should we aim for separate "internal" and "external" MQ networks, or can they be combined into one large net?
    4) Should we aim for partial solution external to OpenMQ (such as integration with SMTP for messaging, or SVN for config distribution), or can this quest be solved within OpenMQ functionality?
    5) Can a clustered broker be forced to fully start without an available master-broker connection?
    6) Are broker clusters inherently local-network, or is there some standard solution (pattern) for geographically dispersed MQ clusters?
    7) How to enforce pushing of the messages from one broker to another? Are any priority assignments available for certain brokers and "their" queues?
    Detailed rumblings follow below...
    We are thinking about implementing JMQ in a geographically dispersed project, where it will be used for asynchronous communications connecting application servers in different branch offices with a central office. The problematic part is that the central and especially the branch offices are not expected to be always online, hence the MQ: whenever a connection is available, queued messages (requests, responses, etc.) are to be pushed to the other side's MQ broker. And if all goes well with the project, there may eventually be hundreds of such branch offices, more than one central office for failover, and a mesh of MQ interconnection agreements.
    The basic idea is simple: an end-user of the app server in a branch generates a request; this request is passed via message queue to another branch or to a central office; another app server processes it to generate a response; and the answer is queued back to the requesting app server. Some time after the initial request, the end-user would see on his web page that the request's status has been updated with a response value. A branch office's app server and MQ broker may be an appliance server distributed as a relatively unmaintained "black box".
    During the POC we configured several JMQ broker instances in this manner and it worked. From what I gather from our developers, each branch office's request and response queues are separate destinations in the system; requests (from a certain branch) may be subscribed to by any node, and responses (to a certain branch) may be submitted by any node. This may be restricted by passwords and/or certificate-based SSL tunnel channels, for example (suggestions welcome, though).
    However, we also wanted to simplify spreading the configuration of the MQ nodes' network by designating "master brokers" (as per JMQ docs) which keep track of the config and each other broker downloads the cluster config from its master. Perhaps it was wrong on our side, and a better idea is available to avoid manual reconfiguration of each MQ broker whenever another broker or a queue destination is added?
    The problem here is that an "MQ cluster" seems to be a local-network-oriented concept. When we have a master broker in a central office and the interconnection is not up, branch offices loop indefinitely waiting for a connection to the master and reject client connections (the published JMS port remains 0, with appropriate comments in the log files). In this case the branch office cannot function until its JMQ broker connects to a central office, updates the MQ config, and permits client connections to itself.
    Also, we are not certain (and it seems to be a popular question on Google, too) how to enforce that a queued message is pushed to the other side, to the broker "nearest" the target app server. Can this be done within the OpenMQ config, or does this require an MQ client application to read and move such messages somehow? For example, when a branch office's "request" queue has a message and a connection to the central office comes online, this request data should end up in the central office's broker. Apparently, a message which physically remains in the branch office broker when the interconnection goes offline is of little use to the central appserver...
    I was thinking along the lines of different-priority brokers for certain destinations, so that messages would automatically flow from farther brokers to nearer ones, like water flows from higher ground to lower ground in an aqueduct. It would then be possible to easily implement transparent routing between branch offices (available at non-intersecting times) via the central office (always up).
    How many brokers and destinations can be interconnected at all (practically, or theoretically/hardcoded)?
    Possibly, there are other means to do some or all of this?
    Ideas we've discussed internally include:
    * Multiple networks of MQ brokers:
    Have an "internal" broker (cluster) in each branch office which talks to the app server, and a separate "external" broker which is clustered with the central office's "master broker". Some branch office application transfers messages between two brokers local to its branch. Thus the local appserver works okay, and remote queuing works whenever network is available.
    Possibly, the central office should also have separate internal and external broker setups?
    * Multi-tiered net of MQ brokers:
    Perhaps there can be "clusters of clusters", with "external" tier-1 brokers being directly master brokers for local "internal" tier-2 clusters? Otherwise this is the idea of "multiple networks of MQ brokers" above, without an extra app to relay messages between the brokers local to a branch.
    * Multi-protocol implementation of MQ+SMTP(+POP3/IMAP)
    Many of our questions are solvable by SMTP. That is, we can send messages to a mailbox residing on a specific server (local in each office), local appserver clients retrieve them by POP3 from the local mailbox server, and then submit responses over SMTP. This is approximately how the client solves this task now.
    We don't really want to reinvent the wheel, but maybe this approach can also be applied to JMQ (async traffic not over the MQ protocol, but over SMTP, like SOAP-over-SMTP vs. SOAP-over-HTTP web services)?
    * HTTP/RCS-based config file:
    The OpenMQ config allows the detailed configuration file to live on a local filesystem or on a web server. It is possible to fetch the config file from a central office whenever the connection is up (wget, svn/cvs/etc.) and restart the branch broker.
    Why is this approach good or bad? Advocates welcome :)
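    For concreteness, the broker properties involved, as I understand them (host names are illustrative):
    imq.cluster.url=http://central.example.com/mq/cluster.properties
    and, inside that centrally served file:
    imq.cluster.brokerlist=mq://branch1.example.com:7676,mq://central.example.com:7676
    imq.cluster.masterbroker=mq://central.example.com:7676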
    Thanks for reading up to the end,
    and thanks in advance for any replies,
    //Jim Klimov


  • Looking for a script to connect bodytext to footnote and cross-references

    Hi,
    Need to do the following:
    I was provided with separate (tagged-text) files: body text, (foot)notes, and cross-references. Now I am looking for a solution (a script?) that can do the following:
    restore the connection between the body text, the footnotes, and the cross-references after placing these in an InDesign document.
    The cross-references should appear (in anchored text frames) in the margin and the notes in a separate frame at the bottom of the page. If possible in the same way native InDesign footnotes appear, but in a separate frame, to allow some more formatting than native footnotes would accept.
    I have tested the Side Heads plugin from InTools, and this works well with InDesign files that contain footnotes. But it doesn't work with separately placed footnotes from a tagged-text file, as they are not knotted together.
    I can't do any scripting myself.
    I would appreciate any help, and I am looking forward to your response...
    Regards,
    kkingg

    The first thing to do is:
    regexp "match regexp = ([0-9]+)" $_cli_result match count
    if $count eq 0
    exit 0
    end
    The second is a bit more challenging.  I think this will work:
    cli command "show call active voice br"
    foreach line $_cli_result "\n"
    regexp "^([0-9a-zA-Z]+) : " $line match callid
    if $_regexp_result eq 1
      continue
    end
    regexp "^dur 1d" $line
    if $_regexp_result eq 1
      cli command "show call active voice br | section $callid"
      syslog msg "$_cli_result"
    end
    end

  • Connected to an idle instance in sun cluster nodes.

    I have two Sun Cluster nodes sharing common storage.
    Two schemas:
    test1 for node A
    test2 for node B
    My requirement is as follows:
    Log in to node B.
    export ORACLE_SID=test1
    sqlplus / as sysdba
    But I am getting:
    "connected to an idle instance"
    Is there any way to connect to the node A schema from node B?

    I found the answer:
    sqlplus <sysdbauser>/<password>@test1 as sysdba
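    That makes sense: with ORACLE_SID set on node B, "sqlplus / as sysdba" attaches to local shared memory, where no test1 instance is running, hence "connected to an idle instance"; connecting via the @test1 net service name goes through the listener to the instance on node A. For @test1 to resolve, you need an entry such as this in tnsnames.ora (the host and port are illustrative):
    TEST1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = nodeA)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = test1))
      )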

  • RAC 10g on Sun Cluster 3.1 U3 and Interconnect

    Hello,
    I have the following Interconnects on my Sun Cluster:
    ce5: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
         inet 1.1.1.1 netmask ffffff80 broadcast 1.1.1.127
         ether 0:3:ba:95:fa:23
    ce5: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 6
         ether 0:3:ba:95:fa:23
         inet6 fe80::203:baff:fe95:fa23/10
    ce0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 7
         inet 1.1.0.129 netmask ffffff80 broadcast 1.1.0.255
         ether 0:3:ba:95:f9:97
    ce0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 7
         ether 0:3:ba:95:f9:97
         inet6 fe80::203:baff:fe95:f997/10
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 8
         inet 1.1.193.1 netmask ffffff00 broadcast 1.1.193.255
         ether 0:0:0:0:0:1
    During the RAC installation, the routine asks which interface I will use for the RAC interconnect, and I do not know whether it matters which interface I choose, because it seems that in any case I have an SPOF.
    Can anybody help?
    Thank you very much

    Sorry for the late reply, but the interface to pick is clprivnet0. It load-balances over the available private interconnects under the covers and so does not represent a single point of failure.
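    After the install, one way to check which interconnect Oracle is actually using (assuming 10g, where this view exists):
    SQL> select inst_id, name, ip_address from gv$cluster_interconnects;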
    Tim
    ---
