2-node clusters using SPARCstation 2s

Is it possible to make a two-node cluster out of SPARCstation 2s, with Sun Cluster 3.0 or maybe an earlier version of the software?
Thanks,
Brandon

I don't know how well Sun Cluster 3.x will work for you on SPARCstation 2s, since Solaris 8 is the first release to drop support for the sun4c platform.
You could run SC 2.x, but that is very near end of support, if not already unsupported. The other issue is that on the pizza-box platform you don't have much room for redundancy (extra NICs, SCSI or Fibre Channel adapters, etc.).
You can run without all the various redundancies, but you should have some minimums just to ensure system health. I'm assuming this is just a test environment...

Similar Messages

  • Connecting to multiple clusters using WAR file

    I am trying to connect to multiple clusters by creating separate WAR (web archive) files deployed on Apache Tomcat.
    My intention is to have each WAR act as a node in a separate cluster. I started with the 'jmx-console' example available on the Tangosol website.
    So my understanding is that even though all the WAR files are loaded in the same JVM (Tomcat), since each WAR has its own classloader, we can have multiple Tangosol nodes in the same JVM.
    So I have two basic questions:
    *) If I use System.setProperty("tangosol.coherence.override", myOverrideFile-x) in the classloader of each WAR, can I have the nodes connect to different clusters? Isn't System.setProperty() setting the property on the JVM? If so, how can multiple classloaders running in the same JVM connect to different clusters?
    *) Where should I place the config/override XML files under the Tomcat directory structure for them to be accessible to the classloader?

    Hi sumax,
    Thanks for your response Robert. Maybe I did not understand your solution fully; I thought your idea mostly talks about how to configure the override files, and not how to configure the individual nodes to use different override files.
    That part is taken care of by the fact that each war file loads the tangosol-coherence-override.xml from its own WEB-INF/classes directory.
    Classloading from a web app in Tomcat resolves classpath classes/resources in the following order:
    1. java.* and javax.* and com.sun.* packages are loaded from the system classloader. This, I believe, cannot be circumvented on a Sun JVM with a classloader extending a JDK classloader class.
    2. Reloadable JSP classloader (loads servlet classes generated from JSP pages). This is webapp specific, each webapp sees its own generated servlet classes.
    3. Reloadable Webapp classloader (loads from WEB-INF/classes and WEB-INF/lib of the war). This is webapp specific, each webapp sees its own WEB-INF/classes and WEB-INF/lib.
    4. Common classloader (loads from ${TOMCAT_HOME}/common/classes and ${TOMCAT_HOME}/common/lib). This is used by all webapps and by the Tomcat container itself. You should not put anything here except libraries for resource drivers (e.g. JDBC drivers) or anything which needs to be published in the JNDI server in a non-application-specific way.
    5. Shared classloader (loads from ${TOMCAT_HOME}/shared/classes and ${TOMCAT_HOME}/shared/lib). This is shared between all webapps but not by the Tomcat container. You can put libraries here with which you want to communicate between webapps; any singletons in classes in this classloader will be shared between webapps. It is also a place for configuration files which are shared, or which have the application name in their filenames (so that they are not accidentally used by another application).
    The question still lingering in my mind is: how do I tell the classloader which override file to load.
    A tangosol-coherence-override.xml with different content should be put in the WEB-INF/classes directory of each war file. Each of those files will be loaded only by its containing webapp. This filename is defined in the deployment-mode-specific override files in coherence.jar or tangosol.jar (I don't remember which, off the top of my head), which all chain to this same tangosol-coherence-override.xml file.
    That override file can contain all your overrides, if you do not want to change those between restarts, or it can chain with its xml-override attribute (as shown in the example) to another file with the application name in its filename.
    This chaining allows each webapp to refer to a differently named override file, so all those override files can be put into shared/classes folder. The uniquely named override file in shared/classes allows you to change the override configuration between restarts.
    As for cluster address and port, you can specify those in any of the override files.
    The full chaining path is the following (if you used the tangosol-coherence-override.xml example in my first reply):
    tangosol-coherence.xml -> tangosol-coherence-override-prod.xml -> tangosol-coherence-override.xml (you provide this file in your webapp's WEB-INF/classes) -> tangosol-coherence-override-warfilename.xml (you provide this file in ${TOMCAT_HOME}/shared/classes)
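    If you want to verify which copy of the base override file a particular webapp actually resolves, a quick check can be run from any class inside that war; this is only a minimal sketch using the standard JDK classloader API, with the tangosol-coherence-override.xml resource name assumed from the explanation above:
    // Ask this webapp's context classloader which override resource it sees.
    // Under Tomcat this should resolve to the copy in this war's WEB-INF/classes.
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    java.net.URL override = cl.getResource("tangosol-coherence-override.xml");
    System.out.println("tangosol-coherence-override.xml resolved from: " + override);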
    I hope this makes it clear.
    Best regards,
    Robert
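    One point worth making explicit for the first question in this thread: System.setProperty() does set the property for the entire JVM, not per classloader, so every war in the same Tomcat process sees the same value; that is exactly why the per-war override file in WEB-INF/classes is used instead of the tangosol.coherence.override system property. A minimal illustration (plain JDK; the file path is only a hypothetical example):
    // A system property set from one webapp is visible to code loaded by any classloader in the JVM...
    System.setProperty("tangosol.coherence.override", "/tmp/override-app1.xml");
    // ...so a second webapp reading the same property gets the last-written, JVM-wide value.
    System.out.println(System.getProperty("tangosol.coherence.override"));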

  • How to assign ALV for a parent node and a child node that uses a supply method?

    HI Dear friends,
    I need to display header details (VBAK) and item details (VBAP). I have created two nodes: HEADER_NODE, and inside it ITEM_NODE, for which I use the supply function 'GET_ITEMS'. It works when I create two separate tables and bind them, but when I come to work with ALV I am totally confused. I have created two ViewContainer UI elements; when I try to map HEADER_NODE it maps properly, but for ITEM_NODE it says the mapping is already defined and returns the status message 'Action Cancelled'. As a result, both view containers show only the HEADER_NODE data.
    How do I achieve ALV for a parent and a child node that uses a supply function?
    Thank you

    Delete Mapping is not enabled, which means no mapping has been done yet.
    I just tried what you are saying and the application works: I am able to map the header table and the item table, and I could map the tables again any number of times. I didn't get any such message, so sorry, I couldn't recreate the scenario; there might be something wrong in the context.
    I just did it like this.
    Please also move this to the Web Dynpro discussion; hope that would be helpful.

  • 11i load balancing web nodes without using a hardware HTTP load balancer

    I am looking at note 217368.1 (Advanced Configurations and Topologies for Enterprise Deployments of E-Business Suite 11i) and some other notes on load balancing, but some aspects are not clear.
    The aim is to implement load balancing of traffic to the web nodes without using hardware (BigIP, Cisco, etc.) for HTTP-layer load balancing.
    Which is preferable: DNS or the Apache JServ load balancer?
    I need details such as failover capabilities, node death detection, functionality testing, and ways to monitor the Apache JServ load balancer.
    Any help in this regard is welcome.
    thx
    arun

    Oracle recommends using load-balancing hardware rather than DNS. If you want the features you mention above, you will need a hardware load balancer.
    http://blogs.oracle.com/stevenChan/2006/06/indepth_loadbalancing_ebusines.html
    http://blogs.oracle.com/stevenChan/2009/01/using_cisco_ace_series_hardware_load-balancers_ebs12.html
    HTH
    Srini

  • Soft-restart of a Java node using a command-line utility

    Hello,
    Could anyone advise whether there is a way to soft-restart a Java node using a command-line utility (if one exists)?
    I would like to script it to run on Unix.
    Kind regards,
    Murad.

    Thank you for all your replies.
    Does jcmon issue a soft restart?
    We have a problem with Veritas Cluster. When a failover occurs, the Java nodes appear to be online when we check from SMICM, but in fact they lose the connection to the central instance. We have to issue a soft restart for each Java node to re-establish the connection. It is a known bug and can only be fixed by using the replicated enqueue server, which is only available in a later SP that we cannot apply right now. What I want to do is create a script to automate the soft restart, to be run just after a failover.
    Thanks,
    Murad

  • Start node manager using WLST

    I am trying to start the node manager using WLST with the following command:
    wls:/offline> nmConnect('weblogic','weblogic123','localhost','5556','FirstDomain','C:\Oracle\Middleware\user_projects\domains\FirstDomain','plain')
    but I get the exception below:
    Traceback (innermost last):
    File "<console>", line 1, in ?
    File "<iostream>", line 123, in nmConnect
    File "<iostream>", line 618, in raiseWLSTException
    WLSTException: Error occured while performing nmConnect : Cannot connect to Node Manager. : Connection refused: connect. Could not connect to NodeManager. Check that it is running at localhost:5,556.
    Use dumpStack() to view the full stacktrace
    wls:/offline>
    I am using WebLogic 11g.
    Can anybody let me know how to fix this issue?

    You can use something like this:
    beahome = '/home/oracle/soasuite';
    pathseparator = '/';
    adminusername = 'weblogic';
    adminpassword = 'magic11g';
    domainname = 'base_domain';
    domainlocation = beahome + pathseparator + 'user_projects' + pathseparator + 'domains' + pathseparator + domainname;
    nodemanagerhomelocation = beahome + pathseparator + 'wlserver_10.3' + pathseparator + 'common' + pathseparator + 'nodemanager';
    print 'START NODE MANAGER';
    startNodeManager(verbose='true', NodeManagerHome=nodemanagerhomelocation, ListenPort='5556', ListenAddress='localhost');
    print 'CONNECT TO NODE MANAGER';
    nmConnect(adminusername, adminpassword, 'localhost', '5556', domainname, domainlocation, 'ssl');
    print 'START ADMIN SERVER';
    nmStart('AdminServer');
    nmServerStatus('AdminServer');
    More information can be found here: http://middlewaremagic.com/weblogic/?p=6040, in particular the "Starting the SOA environment" section.

  • 3 WRV200 routers to create a 3-node WAN using VPN connections

    I have looked through some of the other posts to see if this question had been asked before, and I didn't see anything.
    I have 3 WRV200 that I want to install in 3 cities.
    I want each router to have its own Internet connection from the local ISP.
    I then want each router to connect to the other 2 routers and create a 3 node WAN using VPN connections.
    This is what I think I need. I am hoping someone will correct me.
    WRV200-CA
    192.168.1.0 - CA Local LAN
    192.168.1.1 Default Gateway
    255.255.255.0 Subnet Mask
    192.168.1.10 Static Assigned for Printer
    192.168.1.11 Static Assigned for Printer
    192.168.1.12 Static Assigned for Printer
    192.168.1.13 Static Assigned for Printer
    192.168.1.101 - 120 DHCP addresses for workstations
    WRV200-NYC
    192.168.2.0 - NYC Local LAN
    192.168.2.1 Default Gateway
    255.255.255.0 Subnet Mask
    192.168.2.10 Static Assigned for Printer
    192.168.2.11 Static Assigned for Printer
    192.168.2.12 Static Assigned for Printer
    192.168.2.13 Static Assigned for Printer
    192.168.2.101 - 120 DHCP addresses for workstations
    WRV200-LI
    192.168.3.0 - LI Local LAN
    192.168.3.1 Default Gateway
    255.255.255.0 Subnet Mask
    192.168.3.10 Static Assigned for Printer
    192.168.3.11 Static Assigned for Printer
    192.168.3.12 Static Assigned for Printer
    192.168.3.13 Static Assigned for Printer
    192.168.3.101 - 120 DHCP addresses for workstations
    I know how to get the public IP address that is assigned to the broadband modem by each of the ISPs.
    Do I have to connect to each of the other public IP addresses to create this 3-location WAN?
    I don't think that is the best way, since the IP addresses might change as they are assigned by the ISPs via DHCP.
    Should I create a 192.168.4.0 network with a 255.255.255.248 subnet mask and give each router its own address within the .4 network? I'm not sure where to do this if it's different from the local LAN IP addresses listed above.
    Do I have to have 2 cable modems at each location in order to create a point-to-point connection with the other 2 routers?
    It seems like I should be able to send 2 separate VPN tunnels over the same cable modem in order to connect with the other 2 routers.
    If 192.168.x.x is non-routable, how is a PC at 192.168.1.101 going to route through the local cable modem, connect to the cable modem that is in NYC, and then print to the printer located at 192.168.2.11?
    Ultimately I want to:
    1. print to any printer at any of the 3 locations.
    2. Remote Desktop into any workstation at any of the 3 locations.
    3. Connect to the Internet via a public WiFi hotspot and use my laptop that would have some type of software that would allow me to connect to any of the 3 LANs.
    Thank you in advance.

    Appendix D of the RVS4000 admin guide has an example of configuring a site-to-site VPN tunnel between 2 routers that have dynamic WAN IP addresses. For your scenario, you can configure a site-to-site tunnel between each pair of WRV200 routers.
    http://www.cisco.com/en/US/docs/routers/csbr/rvs4000/administration/guide/RVS4000_AG_OL-22605.pdf

  • Automating Essbase clustering using Hyperion Provider Services

    Hi
    I am testing Essbase clustering using Hyperion Provider Services for high availability of cubes. I have created an Essbase cluster and added cubes to it using EAS. Since the cubes get updated intraday, I want to add/remove cubes from the cluster. I could do this manually in EAS, but I would prefer to automate the process. I was told that I could use either the Java APIs (preferred) or XMLA to automate adding and removing cubes to/from the Essbase cluster. Unfortunately, I cannot find any documentation that mentions the names of the Java APIs I should call to accomplish this. Could anybody help me please?
    Regards
    Chandra

    Hi,
    Assuming you are on v11, have a look in \Hyperion\products\Essbase\aps\samples\japi; there is a Java example of creating clusters: CreateCluster.java
    The Java API docs are available at \Hyperion\products\Essbase\aps\docs\japistart.htm
    If you are on an earlier version, the directory structure will be a little different.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Need to Set Node Attribute using XPath

    Hi,
    I have an XMLType column, and I need to set/update/remove a root node attribute using an XPath query.
    Regards,
    Rajesh

    Here you go:
    Node nameNode = (Node) XPathFactory.newInstance().newXPath()
            .evaluate("/root/name", doc, XPathConstants.NODE);
    nameNode.setTextContent("bob");
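    The snippet above changes element text rather than an attribute; for the root node attribute asked about, a similar sketch (standard JAXP/DOM, assuming the usual javax.xml.xpath and org.w3c.dom imports, with "status" as a made-up attribute name) would be:
    // Select the document's root element via XPath, then set or remove an attribute on it.
    Element root = (Element) XPathFactory.newInstance().newXPath()
            .evaluate("/*", doc, XPathConstants.NODE);
    root.setAttribute("status", "active");   // add or update the attribute
    root.removeAttribute("status");          // or remove it again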

  • Call library function node with array of clusters using array data pointer

    Hello all.
    I am writing a LabVIEW wrapper for an existing DLL function.
    The function has, as one of its parameters, an array of structs.  The struct is very simple, containing two integers.  I am using the call library function node to access it.
    In Labview I created an array of clusters, where the cluster has two 32-bit integers as its members.  So far, so good.
    Now I have to pass this in to the Call Library Function Node.  Here I am running into trouble.
    I have used The topic in LAVA and The topic in the knowledge base as my primary sources of information, though I have read a bunch of forum topics on the subject too.
    I do understand that I could write a new function which takes as a parameter a struct with the size as the first member and an array as the second, and I might just do this and have it call the regular function, but I was hoping to do it more simply.
    According to the C file which LabVIEW generates for me from the CLFN when I choose "Adapt to Type" and "Array Data Pointer", the prototype it is expecting is:
    int32_t myFunc(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, void data[], int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex);
    And the prototype of the function in my DLL is
    int borland_dll myFunc(DWORD handle, usint channel,
    int FIFOnumber, struct mStruct *data, int numWords, int *actualLoaded, int *actualStartIndex);
    This looks like a match to me, but it doesn't work (I get garbage in data).  From the topic in LAVA referenced above, I understood that it would work.  It does not.
    If I cast data to the pointer-to-pointer I get when I generate c code by wiring my struct to a CIN and generating, then I seem to get what I expect. But this seems to work when I choose "pointers to handles" too, and I would expect array data pointer to give a different result.
    Is there any way to get this to work directly, or will I have to create a wrapper?  (I am currently using LabVIEW 2011, but we have customers using 2009 and 2012, if not other versions as well).
    Thank you.
    Batya

    OK, here is more detailed information.
    I have attached the VI.
    This is the code from the "C" file created by right-clicking the CLN and choosing to create a "C" file.
    When the parameter in the CLN is set to "array data pointer":
    /* Call Library source file */
    #include "extcode.h"
    int32_t Load_Transmit_FIFO_RTx(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, void data[], int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex);
    int32_t Load_Transmit_FIFO_RTx(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, void data[], int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex)
    /* Insert code here */
     When the parameter is "pointers to handles":
    /* Call Library source file */
    #include "extcode.h"
    /* lv_prolog.h and lv_epilog.h set up the correct alignment for LabVIEW data. */
    #include "lv_prolog.h"
    /* Typedefs */
    typedef struct {
    int32_t control;
    int32_t data;
    } TD2;
    typedef struct {
    int32_t dimSize;
    TD2 data[1];
    } TD1;
    typedef TD1 **TD1Hdl;
    #include "lv_epilog.h"
    int32_t Load_Transmit_FIFO_RTx(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, TD1Hdl *data, int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex);
    int32_t Load_Transmit_FIFO_RTx(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, TD1Hdl *data, int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex)
    /* Insert code here */
     When the parameter is set to "handles by value":
    /* Call Library source file */
    #include "extcode.h"
    /* lv_prolog.h and lv_epilog.h set up the correct alignment for LabVIEW data. */
    #include "lv_prolog.h"
    /* Typedefs */
    typedef struct {
    int32_t control;
    int32_t data;
    } TD2;
    typedef struct {
    int32_t dimSize;
    TD2 data[1];
    } TD1;
    typedef TD1 **TD1Hdl;
    #include "lv_epilog.h"
    int32_t Load_Transmit_FIFO_RTx(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, TD1Hdl *data, int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex);
    int32_t Load_Transmit_FIFO_RTx(uint32_t handle, uint16_t channel,
    int32_t FIFOnumber, TD1Hdl *data, int32_t numWords, int32_t *actualLoaded,
    int32_t *actualStartIndex)
    /* Insert code here */
    As to the DLL function, it is a bit more complicated than I explained above, in the current case.  My VI calls the function by this name in one DLL, and that DLL loads a DLL and calls a function (with the same name) in the second DLL, which does the work. (Thanks Rolfk, for helping me with that one some time back!)
    Here is the code in the first ("dispatcher") DLL:
    int borland_dll Load_Transmit_FIFO_RTx(DWORD handle, usint channel, int FIFOnumber, struct FIFO_DATA_CONTROL *data, int numWords, int *actualLoaded, int *actualStartIndex)
    t_DispatchTable *pDispatchTable = (t_DispatchTable *) handle;
    int retStat = 0;
    retStat = mCheckDispatchTable(pDispatchTable);
    if (retStat < 0)
    return retStat;
    if (pDispatchTable->pLoad_Transmit_FIFO_RTx == NULL)
    return edispatchercantfindfunction;
    return pDispatchTable->pLoad_Transmit_FIFO_RTx(pDispatchTable->handlertx, channel, FIFOnumber, data, numWords, actualLoaded, actualStartIndex);
    borland_dll is just "__declspec(dllexport)"
    The current code in the DLL that does the work is:
    // TEMP
    typedef struct {
    int control;
    int data;
    } TD2;
    typedef struct {
    int dimSize;
    TD2 data[1];
    } TD1;
    typedef TD1 **TD1Hdl;
    // END TEMP
    int borland_dll Load_Transmit_FIFO_RTx(int handlertx, usint channel, int FIFOnumber, struct FIFO_DATA_CONTROL *data, int numWords, int *actualLoaded, int *actualStartIndex){
    struct TRANSMIT_FIFO *ptxFIFO; //pointer to transmit FIFO structure
    usint *pFIFOlist; //pointer to array of FIFO pointers to FIFO structures
    int FIFOentry, numLoaded;
    usint *lclData;
    usint nextEntryToTransmit;
    // TEMP
    FILE *pFile;
    int i;
    TD1** ppTD = (TD1**) data;
    TD1 *pTD = *ppTD;
    pFile = fopen("LoadFIFOLog.txt", "w");
    fprintf(pFile, "Starting Load FIFO with %d data words, data pointer 0x%x, with the following data&colon; \n", numWords, data);
    for (i = 0; i < numWords; i++) {
    fprintf(pFile, "%d: control--0x%x, data--0x%x \n", i, data[i].control, data[i].data);
    fflush(pFile);
    fprintf(pFile, "OK, using CIN generated structures: dimSize %d, with the following data&colon; \n", pTD->dimSize);
    for (i = 0; i < numWords; i++) {
    fprintf(pFile, "%d: control--0x%x, data--0x%x \n", i, pTD->data[i].control, pTD->data[i].data);
    fflush(pFile);
    // END TEMP
    if ((handlertx) <0 || (handlertx >= NUMCARDS)) return ebadhandle;
    if (cardrtx[handlertx].allocated != 1) return ebadhandle;
    pFIFOlist = (usint *) (cardrtx[handlertx].segaddr + cardrtx[handlertx].glob->dpchn[channel].tr_stk_ptr​);
    pFIFOlist += FIFOnumber;
    ptxFIFO = (struct TRANSMIT_FIFO *)(cardrtx[handlertx].segaddr + *pFIFOlist);
    //use local copy of ptxFIFO->nextEntryToTransmit to simplify algorithm
    nextEntryToTransmit = ptxFIFO->nextEntryToTransmit;
    //on entering this routine nextEntryToLoad is set to the entry following the last entry loaded
    //this is what we need to load now unless it's at the end of the FIFO in which case we need to wrap around
    if ( ptxFIFO->nextEntryToLoad >= ptxFIFO->numEntries)
    *actualStartIndex = 0;
    else
    *actualStartIndex = ptxFIFO->nextEntryToLoad;
    //if nextEntryToLoad points to the last entry in the FIFO and nextEntryToTransmit points to the first, the FIFO is full
    //also if nextEntryToLoad == nextEntryToTransmit the FIFO is full and we exit without loading anything
    if (( (( ptxFIFO->nextEntryToLoad >= ptxFIFO->numEntries) && (nextEntryToTransmit == 0)) ||
    ( ptxFIFO->nextEntryToLoad == nextEntryToTransmit)) && (ptxFIFO->nextEntryToLoad != INITIAL_ENTRY)){
    *actualLoaded = 0; //FIFO is full already, we can't add anything
    return 0; //this is not a failure, we just have nothing to do, this is indicated in actualLoaded
    numLoaded = 0;
    lclData = (usint *)data; //must use 16 bit writes to the module
    //conditions are dealt with inside the for loop rather than in the for statement itself
    for (FIFOentry = *actualStartIndex; ; FIFOentry++) {
    //if we reached the end of the FIFO
    //if the module is about to transmit the first element of the FIFO, the FIFO is full and we're done
    //OR if the module is about to transmit the element we're about to fill in, we're done - the
    //exception is if this is the first element we're filling in which means the FIFO is empty
    if ((( FIFOentry >= ptxFIFO->numEntries) && (nextEntryToTransmit == 0)) ||
    ((FIFOentry == nextEntryToTransmit) && (FIFOentry != *actualStartIndex) )){
    *actualLoaded = numLoaded;
    //set nextEntryToLoad to the end of the FIFO, we'll set it to the beginning next time
    //this allows us to distinguish between full and empty: nextEntryToLoad == nextEntryToTransmit means empty
    ptxFIFO->nextEntryToLoad = FIFOentry;
    return 0;
    //we reached the end but can continue loading from the top of the FIFO
    if ( FIFOentry >= ptxFIFO->numEntries)
    FIFOentry = 0;
    //load the control word
    ptxFIFO->FifoData[FIFOentry * 3] = *lclData++;
    //skip the high of the control word, the module only has a 16 bit field for control
    lclData++;
    //now put in the data
    ptxFIFO->FifoData[(FIFOentry * 3) + 2] = *lclData++;
    ptxFIFO->FifoData[(FIFOentry * 3) + 1] = *lclData++;
    numLoaded++;
    //we're done because we loaded everything the user asked for
    if (numLoaded >= numWords) {
    *actualLoaded = numLoaded;
    ptxFIFO->nextEntryToLoad = FIFOentry+1;
    return 0;
    //if we reached here, we're done because the FIFO is full
    *actualLoaded = numLoaded;
    ptxFIFO->nextEntryToLoad = FIFOentry;
    fclose (pFile);
    return 0;
    As you can see, I added a temporary diagnostic with the structures that were created in the "Handles by value" case, and printed out the data. I see what is expected, whichever of the options I pick in the CLN!
    I understood (from the information in the two links I mentioned in my original post, and from the name of the option itself) that "array data pointer" should pass the array of data itself, without the dimSize field.  But that does not seem to be what is happening.
    Batya
    Attachments:
    ExcM4k Load Transmit FIFO.vi 15 KB

  • How do I identify the actual /dev nodes in use by ASM?

    I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fiber path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to do in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
    #/etc/init.d/oracleasm querydisk -p ASMVOL_01
    Disk "ASMVOL_01" is a valid ASM disk
    /dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm"
    /dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
    But I don't think it can be using both. How do I see which one it's actually using?
    *They said:
    ORACLEASM_SCANORDER="emcpower*"
    ORACLEASM_SCANEXCLUDE="sd"
    But I think that should be "sd*".

    Powerpath supports multiple I/O paths. Most HBA (Fibre Channel PCI cards) have dual ports.
    This means 2 fibres running from the server into the FC switch(es). And more than one I/O path for that server to read/write a SAN LUN.
    That SAN LUN will be seen multiple times by the server, so it will create multiple SCSI devices (in the /dev/ directory) - one device for each I/O path.
    There are 2 basic reasons why you should not use these scsi devices directly.
    They can and do change device names after each reboot. The sequence the SAN LUNs are named in depends on how the I/O fabric layer enumerates the LUNs when the kernel runs a device discovery on each I/O path. So LUN1 can be devices sdg and sdk on one boot, and the same LUN1 can be sde and sdx on another boot. There is thus no naming consistency, which makes it very difficult to use the device names directly.
    The second reason is redundancy. If you use device sdg and that I/O path fails, your s/w using that device fails. Despite a second path being available to the LUN on the SAN via device sdk.
    Powerpath (and Open Source Multipath) addresses this issue. The unique scsi ID of each device is determined. The devices with the same ID are I/O paths to the same LUN. A single logical device (an emcpower* device) is created - and it serves as the interface to the LUN, supporting all the I/O paths to the LUN (providing load balancing, and redundancy should an I/O path fail).
    The powermt command (if I recall correctly) will show you how this logical device is configured and what scsi devices are used as I/O paths to the EMC LUN.
    Personally, we have not used PowerPath for a number of years now. We instead use the open-source Multipath solution. This was built for very large Linux clusters (thousands of nodes and petabytes of SAN storage) and is now a standard driver in the enterprise distros of the Linux kernel. It works fine with EMC (we have used it with CLARiiON and Symmetrix SANs, and currently with VNX SANs).
    Multipath does not taint the kernel. Multipath allows for far easier kernel upgrades. Multipath supports a number of different I/O fabric layers transparently. Multipath is very easy to configure.

  • Has anybody succeeded in a 2-node RAC installation using VMware?

    Friends,
    I tried to install a 2-node 10gR2 RAC in VMware Workstation and also in VMware Server 2.
    I checked whether clustering is supported in VMware Workstation (of course it gives a warning that clustering is not supported).
    I got stuck connecting the first node to the host.
    OK, here goes my problem:
    1. I am doing everything on a Dell Precision M4600 (i7-2720QM).
    2. The host is Windows 7 Pro (172.16.1.3).
    3. I installed VMware Workstation 8 on Windows 7.
    4. I installed RHEL 4.8 as a guest OS with two network cards: one bridged (172.16.1.4) and the other host-only (10.10.10.31).
    5. After installation of the OS, I tried to ping the guest OS from the host and vice versa, but no use; it is not pinging.
    6. On both Windows 7 and the guest OS I set the gateway as 172.16.1.1.
    So this is the problem: how do I solve this network issue? Only once I solve this can I go on to the next step.
    For the past 10 hours I have kept searching the forums, IRC, etc., and still I haven't found a solution.
    I hope somebody in this forum has succeeded, and I hope somebody will help me solve this issue.
    I followed many links; all of them say to add two network cards, one for public and one for private (the virtual IP will be added in the hosts file).
    thanks

    Did you install the VMware Tools?
    After installing VMware Tools you can copy and paste files between the guest OS and the main Windows system.
    Check your /etc/hosts file in the guest operating system. You only need to be able to ping between the two guest OS nodes; there is no need to worry about the primary Windows 7 OS and the guest OS.
    Refer to: http://oracleinstance.blogspot.in/2010/03/oracle-10g-installation-in-linux-5.html
    http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php
    Hope this helps,
    Regards,
    Rajesh Kumar Govindarajan.
    https://oracleinstance.blogspot.com

  • Migrate a Single node clustered SQL SharePoint 2010 farm to an un-clustered single node SQL VM (VMware)

    Silly as it sounds, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
    We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
    Our current setup is:
    shareclus = cluster (1 node – sharedb (SharePoint database server))
    shareapp1 = application server (LB)
    shareapp2 = application server (LB)
    shareweb1 = WFE (LB)
    shareweb2 = WFE (LB)
    and we would like to go to:
    sharedb01vm = SharePoint database server
    shareapp1 = application server (LB)
    shareapp2 = application server (LB)
    shareweb1 = WFE (LB)
    shareweb2 = WFE (LB)
    So at the moment the database is referenced in Central Administration as shareclus. If I break the cluster, shareclus will not exist, so I don't think I will be able to use aliases (?) but I'm not sure.
    Can anyone help? Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly received.

    I haven't done this specifically with SharePoint, but I don't think it will be any different.
    Basically you build the new VM with the name sharedb01vm. When you do the cut-over, i.e. when you are moving all the databases, you rename the servers: the new VM is renamed shareclus and the old cluster can be named anything you like.
    At that point the SharePoint server should point to the new VM where you have already migrated the databases.
    Another option is to create an alias on the SharePoint server so that "shareclus" points to sharedb01vm.
    I have seen both of these in different environments, but I don't prefer the alias option as it creates confusion for people who don't know about it.
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com

  • Migrate a Single node clustered SharePoint 2010 farm to an un-clustered single node SQL VM (VMware)

    Silly as it sounds, yes, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
    So our current setup is a SQL cluster (1 node), 2 app servers (VMs) and 2 web servers (VMs).
    We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
    I've had a look around and seen the option to use SQL aliases, but I'm not sure that's the best option. I was thinking of rebuilding the DB server but was wondering if there are any other options.
    Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly received.

    Hi, yes that's correct, but my query really is about the SharePoint side of things and maybe using SQL aliases.
    My current setup is:
    shareclus = cluster (1 node – sharedb (SharePoint database server))
    shareapp1 = application server (LB)
    shareapp2 = application server (LB)
    shareweb1 = WFE (LB)
    shareweb2 = WFE (LB)
    and I would like to go to:
    sharedb01vm = SharePoint database server
    shareapp1 = application server (LB)
    shareapp2 = application server (LB)
    shareweb1 = WFE (LB)
    shareweb2 = WFE (LB)
    So at the moment the database is referenced in CA as shareclus. If I break the cluster, shareclus will not exist, so I don't think I will be able to use aliases (?) but I'm not sure.
    Can anyone help?

  • Connecting 3 machines as separate clusters using the synchronized-site example

    I am trying to configure three machines as separate clusters and want to have replication between them. I followed the synchronized-site example and configured two machines successfully.
    For connecting the third machine, I am adding the remote address of the third machine in the remote-invocation-scheme of the other two machines and also defining a proxy-scheme for the third machine.
    I observed that with three machines the cache is getting replicated only between two of the machines and not on the third machine. I am not able to identify the faulty node in the above-mentioned setup.
    Please forward any information on how to configure more than two nodes using the synchronized-site example.
    P.S. Attaching the error log.
    Error log:
    Map (india-cache): cache london-cache
    2009-07-23 18:46:43.350/593.493 Oracle Coherence GE 3.5/459 <D5> (thread=DistributedCache:LondonPartitionedCache, member=2): Service LondonPartitionedCache joined the cluster with senior service member 1
    2009-07-23 18:46:43.360/593.503 Oracle Coherence GE 3.5/459 <D5> (thread=DistributedCache:LondonPartitionedCache, member=2): Service LondonPartitionedCache: received ServiceConfigSync containing 258 entries
    <distributed-scheme>
      <!-- no backups, since this is a replica -->
      <scheme-name>london-partitioned</scheme-name>
      <service-name>LondonPartitionedCache</service-name>
      <thread-count>4</thread-count>
      <backup-count>0</backup-count>
      <backing-map-scheme>
        <class-scheme>
          <class-name>com.tangosol.examples.extend.ExtendBackingMap</class-name>
          <init-params>
            <init-param>
              <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
              <param-value>{manager-context}</param-value>
            </init-param>
            <init-param>
              <param-type>string</param-type>
              <param-value>RemoteCache</param-value>
            </init-param>
            <init-param>
              <param-type>string</param-type>
              <param-value>london-cache</param-value>
            </init-param>
          </init-params>
        </class-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    Map (london-cache): get done
    2009-07-23 18:46:56.980/607.123 Oracle Coherence GE 3.5/459 <Error> (thread=main, member=2):
    (Wrapped: Failed request execution for LondonPartitionedCache service on Member(Id=1, Timestamp=2009-07-23 18:36:39.152, Address=192.168.14.122:8088, MachineId=26234, Location=site:mcgrawhill.co.in,machine:TCS047891,process:4152, Role=CoherenceServer)) com.tangosol.net.RequestTimeoutException: request timed out after 10000 millis
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:293)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onGetRequest(DistributedCache.CDB:40)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$GetRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:69)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
    at java.lang.Thread.run(Thread.java:619)
    Caused by: com.tangosol.net.RequestTimeoutException: request timed out after 10000 millis
    at com.tangosol.coherence.component.net.extend.message.Request$Status.waitForResponse(Request.CDB:58)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:20)
    at com.tangosol.coherence.component.net.extend.Channel.request(Channel.CDB:1)
    at com.tangosol.coherence.component.net.extend.RemoteNamedCache$BinaryCache.get(RemoteNamedCache.CDB:11)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1522)
    at com.tangosol.coherence.component.net.extend.RemoteNamedCache.get(RemoteNamedCache.CDB:1)
    at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
    at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1522)
    at com.tangosol.examples.extend.ExtendBackingMap.get(ExtendBackingMap.java:320)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onGetRequest(DistributedCache.CDB:25)
    ... 6 more
    Map (london-cache): get tcs
    null
    Map (london-cache):

    Thanks a lot for your suggestion.
    I have implemented the Push Replication pattern to connect the three machines in a hub-and-spoke model. But I am facing another problem.
    I want to replicate a message (or cache data) arriving at any one of the listeners across all the listener nodes. My hub publisher is taking care of that, but how do I get the message from the listener site to the hub in the first place?
    I read somewhere about the active-active configuration of the Push Replication pattern, but am unable to implement it.
    Will it suit my purpose? Any suggestions would be helpful.
    Regards,
    Arunava
