Nodes/Clusters

When I try to start the default server in the Java EE 5 SDK I get the message:
Incorrect admin username and/or password
I was able to start the domain earlier, but then I went to the Admin Console and selected an option to allow clusters; it warned me that it would reset my system.
Apparently that reset the admin username and password, and now I don't know how to set them or where to enter them again.
I'm sure the answer is very simple; I'm just very new to the Java EE 5 SDK, so everything about it is unfamiliar.
Any suggestion on how to get the domain started again?
Thank you,

This likely isn't the forum for this question - maybe [this one|http://forums.sun.com/forum.jspa?forumID=754]?
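For what it's worth, if the domain's admin credentials really are unrecoverable, one blunt recovery path is to recreate the default domain with asadmin. A minimal sketch, assuming the SDK's bundled GlassFish defaults (domain name domain1, admin port 4848) and that nothing valuable is deployed in the domain, since this destroys its configuration:

    cd <SDK_install_dir>/bin
    ./asadmin delete-domain domain1
    ./asadmin create-domain --adminport 4848 domain1   # prompts for a new admin username and password
    ./asadmin start-domain domain1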

Similar Messages

  • Migrate a Single node clustered SQL SharePoint 2010 farm to an un-clustered single node SQL VM (VMware)

    Silly as it sounds, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
    We have decided to virtualise the Database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
Our current setup is:
    shareclus   = cluster (1 node – sharedb (SharePoint database server))
    shareapp1   = application server (LB)
    shareapp2   = application server (LB)
    shareweb1   = WFE (LB)
    shareweb2   = WFE (LB)
and would like to go to:
    sharedb01vm = SharePoint database server
    shareapp1   = application server (LB)
    shareapp2   = application server (LB)
    shareweb1   = WFE (LB)
    shareweb2   = WFE (LB)
So at the moment the database is referenced in Central Administration as shareclus. If I break the cluster, shareclus will no longer exist, so I don't think I will be able to use aliases(?), but I'm not sure.
Can anyone help? Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly appreciated.

I haven't done this specifically with SharePoint, but I don't think it will be any different.
Basically you build the new VM with the name sharedb01vm. Then, when you are doing the cut-over, i.e. when you are moving all the databases, you rename the servers: the new VM is renamed to shareclus, and the old cluster can be named anything you like.
At that point the SharePoint servers should point to the new VM, where you have already migrated the DBs.
Another option is to create an alias "shareclus" on the SharePoint servers pointing to sharedb01vm.
I have seen both of these in different environments, but I generally don't prefer the alias option, as it confuses people who don't know about it.
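If the alias route is chosen anyway, a minimal sketch of creating the client alias on each SharePoint server from an elevated command prompt (assuming the default TCP port 1433; cliconfg.exe or SQL Server Configuration Manager does the same through a GUI, and 32-bit processes read the Wow6432Node key):

    reg add "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" /v shareclus /t REG_SZ /d "DBMSSOCN,sharedb01vm,1433"
    reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo" /v shareclus /t REG_SZ /d "DBMSSOCN,sharedb01vm,1433"

(DBMSSOCN selects the TCP/IP network library.)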
Regards, Ashwin Menon My Blog - http://sqllearnings.com

  • Migrate a Single node clustered SharePoint 2010 farm to an un-clustered single node SQL VM (VMware)

    Silly as it sounds, yes, we have a SQL2008r2 Clustered SharePoint farm with only one node. We did intend to have 2 nodes but due to costs and other projects taking priority it was left as is.
So our current setup is an SQL cluster (1 node), 2 app servers (VMs) and 2 web servers (VMs).
We have decided to virtualise the database server (to be SQL2008r2 un-clustered) to take advantage of VMware H/A etc.
I've had a look around and seen the option to use SQL aliases, but I'm not sure that's the best option. I was thinking of rebuilding the DB server but was wondering if there are any other options.
Has anyone done this before? Any tips/advice to migrate or otherwise get the SQL DB virtualised would be greatly appreciated.

Hi, yes that's correct, but my query really is about the SharePoint side of things, and maybe using SQL aliases.
My current setup is:
shareclus   = cluster (1 node – sharedb (SharePoint database server))
shareapp1   = application server (LB)
shareapp2   = application server (LB)
shareweb1   = WFE (LB)
shareweb2   = WFE (LB)
and would like to go to:
sharedb01vm = SharePoint database server
shareapp1   = application server (LB)
shareapp2   = application server (LB)
shareweb1   = WFE (LB)
shareweb2   = WFE (LB)
So at the moment the database is referenced in Central Administration as shareclus. If I break the cluster, shareclus will no longer exist, so I don't think I will be able to use aliases(?), but I'm not sure.
    Can anyone help?

  • Printing issue in a 2-node clustered environment

    Hi,
We have 2-node Apps Tiers (PCP enabled) with print queues running on each node. When users submit print jobs using the same program, the print queue to the printer gets mixed up.
For example: user 1 prints 10 items to printer A, and user 2 prints the same set of 10 items but with different numbers. The final output on the printer goes out of sequence -- the 20 items are printed with user 1's and user 2's items interleaved.
The question is, in a PCP environment, is there a way to isolate the print job, i.e. to allow the print job to complete on that printer and block it for other users/print jobs?
    Environment: 11.5.10.1, 10.2.0.3, RedHat 4.x, 2-node PCP enabled apps tiers.
    Thanks,
    Subroto

    Hi,
The question is, in a PCP environment, is there a way to isolate the print job, i.e. to allow the print job to complete on that printer and block it for other users/print jobs?
I do not think this can be controlled from the application (unless you use incompatibilities -- see the link below for details).
    http://forums.oracle.com/forums/search.jspa?threadID=&q=Concurrent+AND+Incompatibility&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    You may also check with your printer vendor and see if you can control this at the Printer/OS level.
    Thanks,
    Hussein

  • 2 node clusters using SPARCstation 2's

    Is it possible to make a two node cluster out of SPARCstation 2's, with Sun Cluster 3.0 or maybe an earlier version of the software?
    Thanks,
    Brandon

I don't know how well Sun Cluster 3.x will work for you on SPARCstation 2s, as Solaris 8 is the first release to desupport the sun4c platform.
You could run SC 2.x, but that's very near to desupport if not already desupported. The other issue is that on the pizza-box platform you don't have a lot of room for redundancy (in NICs, SCSI or Fibre adapters, etc.).
You can run without all the various redundancies, but you should have some minimums just to ensure system health. I'm assuming this is just a test environment...

  • Grid installation: root.sh failed on the first node on Solaris cluster 4.1

    Hi all,
I'm trying to install Grid Infrastructure (11.2.0.3.0) on a 2-node cluster (Oracle Solaris Cluster 4.1).
When I ran root.sh on the first node, I got the following output:
    xha239080-root-5.11# root.sh
    Performing root user operation for Oracle 11g
    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /Grid/CRShome
    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    /usr/local/bin is read only. Continue without copy (y/n) or retry (r)? [y]:
    Warning: /usr/local/bin is read only. No files will be copied.
    Creating /var/opt/oracle/oratab file...
    Entries will be added to the /var/opt/oracle/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /Grid/CRShome/crs/install/crsconfig_params
    Creating trace directory
    User ignored Prerequisites during installation
    OLR initialization - successful
    root wallet
    root wallet cert
    root cert export
    peer wallet
    profile reader wallet
    pa wallet
    peer wallet keys
    pa wallet keys
    peer cert request
    pa cert request
    peer cert
    pa cert
    peer root cert TP
    profile reader root cert TP
    pa root cert TP
    peer pa cert TP
    pa peer cert TP
    profile reader pa cert TP
    profile reader peer cert TP
    peer user cert
    pa user cert
    Adding Clusterware entries to inittab
    CRS-2672: Attempting to start 'ora.mdnsd' on 'xha239080'
    CRS-2676: Start of 'ora.mdnsd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'xha239080'
    CRS-2676: Start of 'ora.gpnpd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xha239080'
    CRS-2672: Attempting to start 'ora.gipcd' on 'xha239080'
    CRS-2676: Start of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'xha239080'
    CRS-2672: Attempting to start 'ora.diskmon' on 'xha239080'
    CRS-2676: Start of 'ora.diskmon' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.cssd' on 'xha239080' succeeded
    ASM created and started successfully.
    Disk Group DATA created successfully.
    clscfg: -install mode specified
    Successfully accumulated necessary OCR keys.
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    CRS-4256: Updating the profile
    Successful addition of voting disk 9cdb938773bc4f16bf332edac499fd06.
    Successful addition of voting disk 842907db11f74f59bf65247138d6e8f5.
    Successful addition of voting disk 748852d2a5c84f72bfcd50d60f65654d.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ## STATE File Universal Id File Name Disk group
    1. ONLINE 9cdb938773bc4f16bf332edac499fd06 (/dev/did/rdsk/d10s6) [DATA]
    2. ONLINE 842907db11f74f59bf65247138d6e8f5 (/dev/did/rdsk/d8s6) [DATA]
    3. ONLINE 748852d2a5c84f72bfcd50d60f65654d (/dev/did/rdsk/d9s6) [DATA]
    Located 3 voting disk(s).
    Start of resource "ora.cssd" failed
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'xha239080'
    CRS-2672: Attempting to start 'ora.gipcd' on 'xha239080'
    CRS-2676: Start of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'xha239080'
    CRS-2672: Attempting to start 'ora.diskmon' on 'xha239080'
    CRS-2676: Start of 'ora.diskmon' on 'xha239080' succeeded
    CRS-2674: Start of 'ora.cssd' on 'xha239080' failed
    CRS-2679: Attempting to clean 'ora.cssd' on 'xha239080'
    CRS-2681: Clean of 'ora.cssd' on 'xha239080' succeeded
    CRS-2673: Attempting to stop 'ora.gipcd' on 'xha239080'
    CRS-2677: Stop of 'ora.gipcd' on 'xha239080' succeeded
    CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'xha239080'
    CRS-2677: Stop of 'ora.cssdmonitor' on 'xha239080' succeeded
    CRS-5804: Communication error with agent process
    CRS-4000: Command Start failed, or completed with errors.
    Failed to start Oracle Grid Infrastructure stack
    Failed to start Cluster Synchorinisation Service in clustered mode at /Grid/CRShome/crs/install/crsconfig_lib.pm line 1211.
    /Grid/CRShome/perl/bin/perl -I/Grid/CRShome/perl/lib -I/Grid/CRShome/crs/install /Grid/CRShome/crs/install/rootcrs.pl execution failed
    xha239080-root-5.11# history
Checking ocssd.log, I see something like the following:
    2013-09-16 18:46:24.238: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.3.0, in (clustered) mode with uniqueness value 1379371584
    2013-09-16 18:46:24.239: [    CSSD][1]clssscmain: Environment is production
    2013-09-16 18:46:24.239: [    CSSD][1]clssscmain: Core file size limit extended
    2013-09-16 18:46:24.248: [    CSSD][1]clssscmain: GIPCHA down 1
    2013-09-16 18:46:24.249: [    CSSD][1]clssscGetParameterOLR: OLR fetch for parameter logsize (8) failed with rc 21
    2013-09-16 18:46:24.250: [    CSSD][1]clssscExtendLimits: The current soft limit for file descriptors is 65536, hard limit is 65536
    2013-09-16 18:46:24.250: [    CSSD][1]clssscExtendLimits: The current soft limit for locked memory is 4294967293, hard limit is 4294967293
    2013-09-16 18:46:24.250: [    CSSD][1]clssscGetParameterOLR: OLR fetch for parameter priority (15) failed with rc 21
    2013-09-16 18:46:24.250: [    CSSD][1]clssscSetPrivEnv: Setting priority to 4
    2013-09-16 18:46:24.253: [    CSSD][1]clssscSetPrivEnv: unable to set priority to 4
    2013-09-16 18:46:24.253: [    CSSD][1]SLOS: cat=-2, opn=scls_mem_lockdown, dep=11, loc=mlockall
    unable to lock memory
    2013-09-16 18:46:24.253: [    CSSD][1](:CSSSC00011:)clssscExit: A fatal error occurred during initialization
Does anyone have any idea what's going on and how I can fix it?

    Hi,
Solaris has several issues with DISM, e.g.:
Solaris 10 and Solaris 11 Shared Memory Locking May Fail (Doc ID 1590151.1)
It sounds like Solaris Cluster has a similar bug. A "workaround" is to reboot the (cluster) zone, which "fixes" the mlock error. This bug was introduced with updates in September, at least in our environment (Solaris 11.1). Previously I did not have the issue, and now I have to restart the entire zone whenever I stop CRS.
With 11.2.0.3 the root.sh script can be rerun without prior cleanup, so you should be able to continue the installation at that point after the reboot. After root.sh completes, some configuration assistants need to be run to complete the installation. You need to execute these manually because the OUI session was wiped.
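As a rough outline of that recovery path (a hedged sketch, assuming the Grid home /Grid/CRShome from the log above; the deconfig step is only needed if the rerun complains about an existing configuration):

    # after rebooting the zone to clear the mlock error:
    perl /Grid/CRShome/crs/install/rootcrs.pl -deconfig -force   # optional cleanup
    /Grid/CRShome/root.sh                                        # rerun the root script
    # then run the remaining configuration assistants manually, e.g.:
    /Grid/CRShome/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/path/to/cfgrsp.properties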
    Kind Regards
    Thomas

  • Service accounts rights in Sql Server 2008 clustered installation.

I have to install SQL Server 2008 in a 2-node clustered environment on Windows Server 2008 R2. For that I have set up four less-privileged domain accounts for the DB engine, SQL Agent, Reporting Services, and Analysis Services. During the installation I plan to specify these domain accounts to run the above four services. I understand the SQL Server Agent account should have the following rights in the local computer security policy: a) Adjust memory quotas for a process, b) Act as part of the operating system, c) Bypass traverse checking, d) Log on as a batch job, and e) Log on as a service.
Will these rights get assigned automatically during installation, or should they be assigned manually on each node under its local security policy? Also, what are the rights for the other three service accounts, and do those get assigned automatically during installation?

You should get the domain accounts created before starting the cluster installation and specifically grant these rights to them.
Regarding the rights, the link below might be helpful:
http://blogs.msdn.com/b/askjay/archive/2011/02/28/required-rights-for-sql-server-service-account.aspx
When installing the cluster, make sure you use a domain account which has been added as a local administrator on both nodes.
It should have rights to create a Computer Name Object (CNO) in the domain where the cluster is being created.
The Windows CNO must have full rights on the SQL Server CNO. You should also get help from the AD team in granting these rights and understanding them.
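If you do end up assigning the rights manually on each node, a hedged sketch using ntrights from the Windows Server 2003 Resource Kit (the account name is hypothetical; the privilege constants correspond to the policy names discussed above):

    rem Adjust memory quotas for a process:
    ntrights +r SeIncreaseQuotaPrivilege -u CONTOSO\svc-sqlagent
    rem Act as part of the operating system:
    ntrights +r SeTcbPrivilege -u CONTOSO\svc-sqlagent
    rem Bypass traverse checking:
    ntrights +r SeChangeNotifyPrivilege -u CONTOSO\svc-sqlagent
    rem Log on as a batch job:
    ntrights +r SeBatchLogonRight -u CONTOSO\svc-sqlagent
    rem Log on as a service:
    ntrights +r SeServiceLogonRight -u CONTOSO\svc-sqlagent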

  • Load-balancing issues with iPlanet and multiple clusters

    We're in performance test of a large-scale clustered deployment based on WLS 5.1sp10.
    Due to scalability/functionality issues, some of which we've seen firsthand and
    some of which we've been informed of by associates as well as BEA representatives,
    we've chosen to implement multiple clusters with a maximum of three nodes each.
    These clusters will be fronted by a web server tier consisting of iPlanet servers
    using the proxy plugin.
    Due to hardware constraints (both in test and in production), however, we've configured
    the iPlanet servers to route across the multiple clusters. In our test environment,
    for instance, we've got a single iPlanet server routing across two 3-node clusters,
    and the configuration in obj.conf is as follows:
    <Object name="application" ppath="*/application">
    Service fn="wl-proxy" \
WebLogicCluster="clusterA_1:9990,clusterB_1:9991,clusterA_2:9990,clusterB_2:9991,clusterA_3:9990,clusterB_3:9991" \
    CookieName="ApplicationSession"
    </Object>
    Our issue is that the load-balancing doesn't appear to work across the clusters.
    We're seeing one cluster get about 90% of the load, while the other receives
    only 10%.
    So, the question (finally!) is: Is this configuration correct (i.e., will it
    work according to the logic of the proxy plugin), and is it appropriate for this
    situation? Are there other alternative approaches that anyone can recommend?
    Thanks in advance,
    cramer

I use WebLogic 6.1 SP2 on Windows 2000. I developed a web application and deployed it to a cluster. Through WebLogic's HttpClusterServlet proxy I found that one server in the cluster gets almost 95% of the requests while the other gets only 5%. Why???
I didn't set any special parameters, the weights of the two clustered servers are equal, and I use the round-robin algorithm.
Thanks!
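One caveat worth adding here (hedged, based on my reading of the plugin documentation): the WebLogicCluster parameter is meant to list servers from a single cluster, because the plugin refreshes its server list dynamically from the cluster it talks to, so mixing two clusters in one list can skew badly, as seen above. A sketch of giving each cluster its own object (the ppath split is hypothetical; serving one application from two independent clusters generally needs an external load balancer in front of the web tier):

    <Object name="applicationA" ppath="*/applicationA">
    Service fn="wl-proxy" \
    WebLogicCluster="clusterA_1:9990,clusterA_2:9990,clusterA_3:9990" \
    CookieName="ApplicationSession"
    </Object>
    <Object name="applicationB" ppath="*/applicationB">
    Service fn="wl-proxy" \
    WebLogicCluster="clusterB_1:9991,clusterB_2:9991,clusterB_3:9991" \
    CookieName="ApplicationSession"
    </Object>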

  • Windows 2012 Standard File Server Clustering SMB Share Error: Access is denied.

    Hi All,
My setup consists of 2 nodes clustered with the File Server role. I can successfully fail the role over to either node with no issues. But if I try to modify the permissions of any file share on my file server cluster, I get the following error: "Error occurred while updating an SMB share: Access is denied. Access is denied."
I played around with the permissions on these shares and noticed that when I add the "everyone" group with change permissions, I can modify the shares with no errors. If I remove the "everyone" group, I get the error. So it seems like some service or account needs permission on these shares to be able to modify them. I don't want to keep the "everyone" group on these shares. Can anyone please shed some light on what group, user, or service account needs permissions on these shares in order for me to modify these SMB shares without getting "Access is denied"? Thanks

    Hi,
It seems your account doesn't have sufficient rights to modify this clustered folder's permissions.
    More information:
    Create a Shared Folder in a Clustered File Server
    http://technet.microsoft.com/en-us/library/cc732302.aspx
    Hope this helps.
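If the GUI keeps failing, it may also be worth trying the SmbShare PowerShell module from an elevated session on the owning node; a hedged sketch (share, scope, and account names are hypothetical; ScopeName is the clustered file server role's client access point):

    # grant change access on a clustered share
    Grant-SmbShareAccess -Name "TeamShare" -ScopeName "FILESERVER1" -AccountName "CONTOSO\FileAdmins" -AccessRight Change -Force
    # inspect the resulting share ACL
    Get-SmbShareAccess -Name "TeamShare" -ScopeName "FILESERVER1"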

  • Adding node back into cluster after removal...

    Hi,
I removed a cluster node using "scconf -r -h <node>" (carried out all the other usual removal steps before getting this command to work).
Because this is a pair+1 cluster and the node I was trying to remove was physically attached to the quorum device (SCSI), I had to create a dummy node before the removal command above would work.
I reinstalled Solaris, the SC3.1u4 framework, patches etc. and then tried to run scinstall again on the node (after first reintroducing the node to the cluster using scconf -a -T node=<node>).
However, during the scinstall I got the following problem:
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    scrconf: RPC: Unknown host
    scinstall:  Failed communications with "bogusnode"
    scinstall: scinstall did NOT complete successfully!
    Press Enter to continue:
I was not sure what to do at this point, but since the other cluster nodes could now see my 'new' node again, I removed the dummy node, rebooted the new node, and said a little prayer...
    Now, my node will not boot as part of the cluster:
    Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: CMM: Node bogusnode (nodeid = 4) with votecount = 0 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205346037.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #18 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    NOTICE: CMM: Node (nodeid = 4) with votecount = 0 removed.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #19 completed.
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    WARNING: clcomm: per node IP config clprivnet0:-1 (349): 172.16.193.1 failed with 19
    cladm: CLCLUSTER_ENABLE: No such device
    UNRECOVERABLE ERROR: Sun Cluster boot: Could not initialize cluster framework
    Please reboot in non cluster mode(boot -x) and Repair
    syncing file systems... done
    WARNING: CMM: Node being shut down.
    Program terminated
    {1} ok
Any ideas how I can recover this situation without having to reinstall the node again?
(I have a flash archive with the OS, SC3.1u4 framework etc., so it's not the end of the world, but...)
    Thanks a mil if you can help here!
    - headwrecked

Hi - got this problem sorted...
Basically I just removed (scinstall -r) the SC3.1u4 software from the node which was not booting, and then re-installed the software (this time the dummy node had been removed, so it did not try to contact that node and the scinstall completed without any errors).
I think the only problem with the procedure I used to remove and re-add the node was that I forgot to remove the dummy node before re-adding the actual cluster node; see the sketch below.
If anyone can confirm this to be the case then great - if not... well, it's working now, so this thread can be closed.
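For anyone hitting the same thing, the order that appears to matter, as a hedged sketch based on this thread (run the scconf commands from a node that is still an active cluster member):

    # remove the placeholder node from the cluster configuration first
    scconf -r -h node=bogusnode
    # then re-authorize the rebuilt node to join
    scconf -a -T node=n20-1-sup
    # and only then run scinstall on the rebuilt node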
    root@n20-1-sup # /usr/cluster/bin/scinstall -r
    Verifying that no unexpected global mounts remain in /etc/vfstab ... done
    Verifying that no device services still reference this node ... done
    Archiving the following to /var/cluster/uninstall/uninstall.1036/archive:
    /etc/cluster ...
    /etc/path_to_inst ...
    /etc/vfstab ...
    /etc/nsswitch.conf ...
    Updating vfstab ... done
    The /etc/vfstab file was updated successfully.
    The original entry for /global/.devices/node@1 has been commented out.
    And, a new entry has been added for /globaldevices.
    Mounting /dev/dsk/c3t0d0s6 on /globaldevices ... done
    Attempting to contact the cluster ...
    Trying "n20-2-sup" ... okay
    Trying "n20-3-sup" ... okay
    Attempting to unconfigure n20-1-sup from the cluster ... failed
    Please consider the following warnings:
    scrconf: Failed to remove node (n20-1-sup).
    scrconf: All two-node clusters must have at least one shared quorum device.
    Additional housekeeping may be required to unconfigure
    n20-1-sup from the active cluster.
    Removing the "cluster" switch from "hosts" in /etc/nsswitch.conf ... done
    Removing the "cluster" switch from "netmasks" in /etc/nsswitch.conf ... done
    ** Removing Sun Cluster framework packages **
    Removing SUNWkscspmu.done
    Removing SUNWkscspm..done
    Removing SUNWksc.....done
    Removing SUNWjscspmu.done
    Removing SUNWjscspm..done
    Removing SUNWjscman..done
    Removing SUNWjsc.....done
    Removing SUNWhscspmu.done
    Removing SUNWhscspm..done
    Removing SUNWhsc.....done
    Removing SUNWfscspmu.done
    Removing SUNWfscspm..done
    Removing SUNWfsc.....done
    Removing SUNWescspmu.done
    Removing SUNWescspm..done
    Removing SUNWesc.....done
    Removing SUNWdscspmu.done
    Removing SUNWdscspm..done
    Removing SUNWdsc.....done
    Removing SUNWcscspmu.done
    Removing SUNWcscspm..done
    Removing SUNWcsc.....done
    Removing SUNWscrsm...done
    Removing SUNWscspmr..done
    Removing SUNWscspmu..done
    Removing SUNWscspm...done
    Removing SUNWscva....done
    Removing SUNWscmasau.done
    Removing SUNWscmasar.done
    Removing SUNWmdmu....done
    Removing SUNWmdmr....done
    Removing SUNWscvm....done
    Removing SUNWscsam...done
    Removing SUNWscsal...done
    Removing SUNWscman...done
    Removing SUNWscgds...done
    Removing SUNWscdev...done
    Removing SUNWscnmu...done
    Removing SUNWscnmr...done
    Removing SUNWscscku..done
    Removing SUNWscsckr..done
    Removing SUNWscu.....done
    Removing SUNWscr.....done
    Removing the following:
    /etc/cluster ...
    /dev/did ...
    /devices/pseudo/did@0:* ...
    The /etc/inet/ntp.conf file has not been updated.
    You may want to remove it or update it after uninstall has completed.
    The /var/cluster directory has not been removed.
    Among other things, this directory contains
    uninstall logs and the uninstall archive.
    You may remove this directory once you are satisfied
    that the logs and archive are no longer needed.
    Log file - /var/cluster/uninstall/uninstall.1036/log
    root@n20-1-sup #
    Ran the scinstall again:
    >>> Confirmation <<<
    Your responses indicate the following options to scinstall:
    scinstall -ik \
    -C N20_Cluster \
    -N n20-2-sup \
    -M patchdir=/var/cluster/patches \
    -A trtype=dlpi,name=qfe1 -A trtype=dlpi,name=qfe5 \
    -m endpoint=:qfe1,endpoint=switch1 \
    -m endpoint=:qfe5,endpoint=switch2
    Are these the options you want to use (yes/no) [yes]?
    Do you want to continue with the install (yes/no) [yes]?
    Checking device to use for global devices file system ... done
    Installing patches ... failed
    scinstall: Problems detected during extraction or installation of patches.
    Adding node "n20-1-sup" to the cluster configuration ... skipped
    Skipped node "n20-1-sup" - already configured
    Adding adapter "qfe1" to the cluster configuration ... skipped
    Skipped adapter "qfe1" - already configured
    Adding adapter "qfe5" to the cluster configuration ... skipped
    Skipped adapter "qfe5" - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Adding cable to the cluster configuration ... skipped
    Skipped cable - already configured
    Copying the config from "n20-2-sup" ... done
    Copying the postconfig file from "n20-2-sup" if it exists ... done
    Copying the Common Agent Container keys from "n20-2-sup" ... done
    Setting the node ID for "n20-1-sup" ... done (id=1)
    Verifying the major number for the "did" driver with "n20-2-sup" ... done
    Checking for global devices global file system ... done
    Updating vfstab ... done
    Verifying that NTP is configured ... done
    Initializing NTP configuration ... done
    Updating nsswitch.conf ...
    done
    Adding clusternode entries to /etc/inet/hosts ... done
    Configuring IP Multipathing groups in "/etc/hostname.<adapter>" files
    IP Multipathing already configured in "/etc/hostname.qfe2".
    Verifying that power management is NOT configured ... done
    Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done
    Ensure network routing is disabled ... done
    Updating file ("ntp.conf.cluster") on node n20-2-sup ... done
    Updating file ("hosts") on node n20-2-sup ... done
    Updating file ("ntp.conf.cluster") on node n20-3-sup ... done
    Updating file ("hosts") on node n20-3-sup ... done
    Log file - /var/cluster/logs/install/scinstall.log.938
    Rebooting ...
    Mar 13 13:59:13 n20-1-sup reboot: rebooted by root
    Terminated
    root@n20-1-sup # syncing file systems... done
    rebooting...
    R
    LOM event: +103d+20h44m26s host reset
    screen not found.
    keyboard not found.
    Keyboard not present. Using lom-console for input and output.
    Sun Netra T4 (2 X UltraSPARC-III+) , No Keyboard
    Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
    OpenBoot 4.10.1, 4096 MB memory installed, Serial #52960491.
    Ethernet address 0:3:ba:28:1c:eb, Host ID: 83281ceb.
    Initializing 15MB Rebooting with command: boot
    Boot device: /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@w21000004cfa3e691,0:a File and args:
    SunOS Release 5.10 Version Generic_127111-06 64-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    Hostname: n20-1-sup
    Configuring devices.
    devfsadm: minor_init failed for module /usr/lib/devfsadm/linkmod/SUNW_scmd_link.so
    Loading smf(5) service descriptions: 24/24
    /usr/cluster/bin/scdidadm: Could not load DID instance list.
    Cannot open /etc/cluster/ccr/did_instances.
    Booting as part of a cluster
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) with votecount = 0 added.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) with votecount = 2 added.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) with votecount = 1 added.
    NOTICE: clcomm: Adapter qfe5 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being constructed
    NOTICE: clcomm: Adapter qfe1 constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being constructed
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being constructed
    NOTICE: CMM: Node n20-1-sup: attempting to join cluster.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 being initiated
    NOTICE: CMM: Node n20-2-sup (nodeid: 2, incarnation #: 1205318308) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-2-sup:qfe1 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 being initiated
    NOTICE: CMM: Node n20-3-sup (nodeid: 3, incarnation #: 1205265086) has become reachable.
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-3-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe5 - n20-2-sup:qfe5 online
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 being initiated
    NOTICE: clcomm: Path n20-1-sup:qfe1 - n20-3-sup:qfe1 online
    NOTICE: CMM: Cluster has reached quorum.
    NOTICE: CMM: Node n20-1-sup (nodeid = 1) is up; new incarnation number = 1205416931.
    NOTICE: CMM: Node n20-2-sup (nodeid = 2) is up; new incarnation number = 1205318308.
    NOTICE: CMM: Node n20-3-sup (nodeid = 3) is up; new incarnation number = 1205265086.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #23 completed.
    NOTICE: CMM: Node n20-1-sup: joined cluster.
    ip: joining multicasts failed (18) on clprivnet0 - will use link layer broadcasts for multicast
    NOTICE: CMM: Votecount changed from 0 to 1 for node n20-1-sup.
    NOTICE: CMM: Cluster members: n20-1-sup n20-2-sup n20-3-sup.
    NOTICE: CMM: node reconfiguration #24 completed.
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe1
    Mar 13 14:02:23 in.ndpd[351]: solicit_event: giving up on qfe5
    did subpath /dev/rdsk/c1t3d0s2 created for instance 2.
    did subpath /dev/rdsk/c2t3d0s2 created for instance 12.
    did subpath /dev/rdsk/c1t3d1s2 created for instance 3.
    did subpath /dev/rdsk/c1t3d2s2 created for instance 6.
    did subpath /dev/rdsk/c1t3d3s2 created for instance 7.
    did subpath /dev/rdsk/c1t3d4s2 created for instance 8.
    did subpath /dev/rdsk/c1t3d5s2 created for instance 9.
    did subpath /dev/rdsk/c1t3d6s2 created for instance 10.
    did subpath /dev/rdsk/c1t3d7s2 created for instance 11.
    did subpath /dev/rdsk/c2t3d1s2 created for instance 13.
    did subpath /dev/rdsk/c2t3d2s2 created for instance 14.
    did subpath /dev/rdsk/c2t3d3s2 created for instance 15.
    did subpath /dev/rdsk/c2t3d4s2 created for instance 16.
    did subpath /dev/rdsk/c2t3d5s2 created for instance 17.
    did subpath /dev/rdsk/c2t3d6s2 created for instance 18.
    did subpath /dev/rdsk/c2t3d7s2 created for instance 19.
    did instance 20 created.
    did subpath n20-1-sup:/dev/rdsk/c0t6d0 created for instance 20.
    did instance 21 created.
    did subpath n20-1-sup:/dev/rdsk/c3t0d0 created for instance 21.
    did instance 22 created.
    did subpath n20-1-sup:/dev/rdsk/c3t1d0 created for instance 22.
    Configuring DID devices
    t_optmgmt: System error: Cannot assign requested address
    obtaining access to all attached disks
    n20-1-sup console login:

  • Upgrade of SOA  2 nodes 10.1.3.3 and patch failure

During the upgrade of a two-node SOA 10.1.3.3 cluster, the first node completed successfully; however, when continuing on the 2nd node, during the configuration assistant stage it hangs at BPEL CA : [java] 09/09/23 09:57:37 Notification ==>Binding web application(s) to site default-web-site begins...
And there is no further progress.
Looking in deploy_bpel.log under SOA_HOME/install, the following appears to be the command that is executing without response:
Calling redeploy command D:\oracle\product\10.1.3.1\soa\ant\bin\ant -lib D:\oracle\product\10.1.3.1\soa\ant/lib -f D:/oracle/product/10.1.3.1/soa/bpel/system/services/install/ant-tasks/redeploy.xml -Doc4jAdminPassword=<password> -Dinstall.oc4jAdminPassword=<password> -Dinstall.http.host=risdev08 -Dinstall.hostname=risdev08 -Dinstall.admin.port=6005 -Dinstall.opmn.requestport=6005 -Dinstall.oc4j.adminID=oc4jadmin -Dinstall.oc4j.instance=oc4j_soa -Dinstall.oc4j.primary=false
In the oc4j_soa folder under opmn/logs, the following messages appear at about the same time as the BPEL CA hangs:
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor47]
    [Unloading class sun.reflect.GeneratedMethodAccessor26]
    [Unloading class sun.reflect.GeneratedSerializationConstructorAccessor12]
    [Unloading class sun.reflect.GeneratedMethodAccessor21]

The upgrade of a clustered environment should be done one node at a time, without running in clustered mode.

  • P2V SQL Server 2008 R2 HYPER-V Clustering

    Hi all,
I have a current SQL Server 2008 environment with the following requirements:
1. SERVER A with a 1-node failover cluster (physical server)
2. 4 shared disk storage volumes (SAN)
3. The connectivity is HBA
I am supposed to migrate the existing environment into Hyper-V, with the following details:
1. provision a CSV on the same storage as the existing 4 shared disk volumes
2. SERVER B (Hyper-V, to be deployed for a future SQL Server cluster)
The scenario is that we want to migrate Server A to Server B.
Please correct my plan, which is as follows:
- We want to P2V the existing server into SERVER B (Hyper-V).
Here are my questions:
1. Is it possible for the VHD produced by the P2V process to be attached to CSV storage?
2. Is it possible to re-map the shared storage volumes from the previous environment into Hyper-V?
Sorry if my thread is disorganized; I am a novice in this area and need your kind advice on the best practice to solve this issue.
Please tell me if anything is unclear.
    Best Regards,
    ari

First question is why do you want to P2V? It does not take that much effort to create a new VM and then use SQL tools to back up and restore the databases into the new environment. Then you are using known and proven tools instead of trying to make everything work from a P2V.
When you P2V, the process will create virtual hard drives from each physical drive. If you want to make your SQL Server into a VM, it does not make sense to keep the storage on pass-through disks. It is better to use VHDs for the storage of a VM.
Having single-node clusters does not make much sense, unless the idea is to immediately add a second node when available.
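If you take the backup/restore route, a minimal sketch from a command prompt (instance names, database name, and UNC path are all hypothetical):

    sqlcmd -S OLDSQL -Q "BACKUP DATABASE [AppDB] TO DISK = N'\\backuphost\sql\AppDB.bak' WITH INIT"
    sqlcmd -S NEWSQLVM -Q "RESTORE DATABASE [AppDB] FROM DISK = N'\\backuphost\sql\AppDB.bak' WITH RECOVERY"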
    . : | : . : | : . tim

  • Telemetry & Zone Clusters

Does anyone know a good source for configuring cluster telemetry, specifically with zone clusters? I can't find much in the cluster documentation or by searching Oracle's website. The sctelemetry man page wasn't very useful. The Sun Cluster Essentials book provides a brief example, but not with zone clusters.
I want to use this feature to monitor memory/CPU usage in my various zone clusters. In our environment, we will have a few three-node clusters with all applications running inside zone clusters, with the "active" cluster nodes being staggered across the 3 nodes.
Lastly, is telemetry really worth the hassle? We are also deploying Ops Center (whose capabilities I don't really know yet). I briefly used an older version of xVM Ops Center at my last gig, but only as a provisioning tool. So with Ops Center and the myriad of DTrace tools available, is telemetry worth messing with?
    Thx for any info,
    Chuck

That's correct. I checked with the feature's author, and telemetry pre-dates the introduction of zone clusters. What I got back was: "SC can only do cpu.idle monitoring for a zone itself. Anything below that is not monitored, including RG/RS configured inside zones."
    Tim

  • 9ir2 Single Node RAC + RH80

    Hi all.
I'm trying to install a single-node RAC based on the single-node RAC for Linux document on Metalink.
The document was written against an Oracle 9iR1 database and RH7.1. I have followed the steps, and I have a working kernel I built myself, the raw devices, and softdog hacked to be testing-only. But the service does not come up.
I don't know if someone has done a single-node RAC installation and can help me choose the right pieces.
I don't have RHAS2.1, but I do have a UnitedLinux 1.0 Server, patched.
I want to try the RAC technology, but here (Ecuador, South America) it is difficult to find FireWire cards and disks, so I'm working on a single-node installation.
Any help will be appreciated.
    Fernando

Hi,
Database and GI versions are 11.2.0.2.2. These are not multi-node RAC configurations; at any given time only a single instance exists for any given database, something like ACTIVE and PASSIVE in hardware clusters such as VCS (Veritas Cluster Server).
I agree with you on the failover scenario in multi-node (2 or more) RAC environments. In single-node clustering only one instance exists; like services in multi-node RAC, here the whole instance will be re-created on an available node.
Thanks,

  • Moving active node resources to the inactive node

We have a 2-node, majority-disk Windows 2008 R2 failover cluster. All resources and services are currently on Node A, the active node; Node B is the inactive node. We are about to perform a datacenter shutdown test. Our application manager would like us to make Node B inactive during the datacenter shutdown test, but he believes we can make Node B active afterwards if we do it this way:
    1.  power down Node B (inactive node)
    2. power down Node A (active node)
    3. power up Node A (active node)
    4. power up Node B (inactive node)
By doing it this way, he believes Node B will automatically become the active node.
I do not believe this is correct. I believe a failover has to occur to make Node B active. So, my proposal is:
1. Power down Node A (active node). This should cause a failover of resources and services to Node B.
2. Once all resources and services are on Node B, power Node B down. Now both nodes are powered down, and Node B, as the last holder of the cluster resources and services, is the active node.
3. Power up Node B (now the new active node).
4. Power up Node A (the new inactive node).
    Which is correct?
    Thanks in advance

Earlier question on ownership... In a two-node cluster, ownership does not mean much. In multi-node clusters, it is often used with the anti-affinity property to try to keep certain resources from running on the same node. For example, if you have a four-node host cluster running two VMs that are also in a cluster, you would most likely not want both of the SQL VMs to reside on the same node of the cluster. Using anti-affinity and preferred owners, you can pretty much ensure the two VMs never end up on the same node.
When SQL is installed in a cluster, SQL is installed on two or more nodes of the cluster, and it is actually running on each node on which you have installed it. In your case, both nodes are running SQL. By default, if something happens to the SQL instance running on one node, it will try to restart on the same node, i.e. it will not immediately fail over. There is a retry count and a time limit set up on the resource. If the retry count is reached within the specified time limit, then the resource, in your case the SQL instance, will fail over to the other node of the cluster.
    In the Failover Cluster Manager console you will see a selection option for Cluster events.  That will show you all the event log entries that have occurred on both nodes of your cluster relating to cluster events.
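Either way, rather than relying on power-off ordering, the deterministic approach is to move the clustered roles explicitly before the shutdown; a hedged PowerShell sketch (the group name is hypothetical):

    Import-Module FailoverClusters
    # see which node currently owns each clustered role
    Get-ClusterGroup
    # move a role to Node B before powering Node A down
    Move-ClusterGroup -Name "SQL Server (MSSQLSERVER)" -Node NodeB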
    .:|:.:|:. tim

Maybe you are looking for