Cluster Upgrade

Hi Experts,
I have an 11.1 cluster with a 10.2 database, without ASM.
My plan is to upgrade the cluster to 11.2 and convert the database to ASM, so the plan would be:
1. Upgrade the 11.1 cluster to 11.2 Grid Infrastructure (rolling)
2. Upgrade the database to 11.2
3. Make sure the database is good after the upgrade and check that the application is fine
4. Configure devices for ASM and create the ASM instance and disk group using ASMCA
5. Convert the existing database from CFS to ASM (rough sketch below)
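For step 5 my rough idea, just a sketch and not tested, is an RMAN image-copy migration; the +DATA disk group name and the device paths are only placeholders:

    # create the disk group from the ASM instance (or do the same in ASMCA);
    # run with the Grid home environment and the ASM SID set
    sqlplus / as sysasm
    SQL> CREATE DISKGROUP DATA NORMAL REDUNDANCY
           DISK '/dev/oracleasm/disk1', '/dev/oracleasm/disk2';   -- placeholder devices
    SQL> EXIT

    # copy the database files from CFS into ASM and switch to the copies
    rman target /
    RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';
    RMAN> SHUTDOWN IMMEDIATE;
    RMAN> STARTUP MOUNT;
    RMAN> SWITCH DATABASE TO COPY;
    RMAN> RECOVER DATABASE;
    RMAN> ALTER DATABASE OPEN;
    # control files, spfile, temp files and online redo logs still have to be
    # moved/recreated in +DATA separately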
So my questions are:
Can we configure ASM using ASMCA after we have upgraded the Grid Infrastructure without the ASM option?
Can we move the cluster configuration files (OCR and voting disks) into ASM after the upgrade?

Hi,
I think yes. You first create an ASM disk group for the OCR and voting files (an external redundancy disk group holds one voting file, normal redundancy needs three failure groups, and high redundancy needs five), and then you can move the OCR and the voting disks into that ASM disk group.
Check : http://docs.oracle.com/cd/E11882_01/rac.112/e16794/votocr.htm
and
http://martincarstenbach.wordpress.com/2010/01/29/how-to-move-ocr-and-voting-disks-into-asm/
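As a rough example only (run as root from the new 11.2 Grid home; +OCRVOTE and the old CFS path are placeholders for your own names):

    # move the OCR into ASM, then drop the old CFS location
    $GRID_HOME/bin/ocrconfig -add +OCRVOTE
    $GRID_HOME/bin/ocrconfig -delete /cfs/cluster/ocr_file   # placeholder old location
    # move the voting files in a single step
    $GRID_HOME/bin/crsctl replace votedisk +OCRVOTE
    # verify
    $GRID_HOME/bin/ocrcheck
    $GRID_HOME/bin/crsctl query css votedisk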
Regards
Mahir M. Quluzade
www.mahir-quluzade.com

Similar Messages

  • Rolling Cluster Upgrade (OES2 to OES11) - GPT partition

    We've got an OES2 SP3 cluster that we'll be rolling-cluster upgrading to OES11 SP2.
    We are currently at max capacity for several of the NSS volumes (2 TB).
    If I'm reading/interpreting the OES11 SP2 docs correctly:
    AFTER the cluster upgrade is complete, the only way I can get to larger volumes will be to create new LUNs on the SAN and initialize those with GPT. Then I could use the NSS Pool Move feature to get the existing 2 TB volumes onto the larger setup?
    Is that correct?
    Or is there a better way that doesn't require massive downtime?

    Originally Posted by konecnya
    In article <[email protected]>, Kjhurni wrote:
    > We are currently at max capacity for several of the NSS volumes (2 TB).
    My understanding was that you could bind multiple partitions to
    create NSS volumes up to 8 TB. But I'd be hesitant to do that for a
    clustered volume as well.
    > I could do the NSS Pool Move feature to get the existing 2 TB
    > volumes to the larger setup?
    > Or is there a better way that doesn't require massive downtime?
    My first thought is the migration wizard, so that the bulk of the copy can
    be done while the system is live but in the quieter times. Then the final
    update with downtime should be much faster.
    But then do you really need downtime for the Pool Move feature?
    https://www.novell.com/documentation...a/bwtebhm.html
    Certainly indicates it can be done live on a cluster with the move
    process being cluster aware.
    Andy of
    http://KonecnyConsulting.ca in Toronto
    Knowledge Partner
    http://forums.novell.com/member.php/75037-konecnya
    Migration = pew pew (re-doing IPs etc. and copying 4 TB of data is ugly, especially when it's lots of tiny stuff).
    haha
    Anyway, when I asked about a better way that doesn't require massive downtime, what I meant was:
    Is there a better way vs. Pool Move that doesn't require massive downtime (in other words, the "other way" having the massive downtime, not the Pool Move)?
    Choice A = Pool Move = no downtime.
    But let's say that's not a good option (for whatever reason) and someone says use Choice B. But Choice B ends up requiring downtime (like the data copy option).
    I just didn't know if Pool Move required that you create the partition ahead of time (so you can choose GPT) or if it kinda does it all for you on the fly (I'll have to read up when I get to that point).
    I'm not terribly keen on having multiple DOS partitions, although that technically would work. It just always scares me. It's only temporary for the next 8 months anyway while we migrate from OES to NAS, but I'm running out of space and am on an unsupported OS anyway.

  • What are best practices for rolling out cluster upgrade?

    Hello,
    I am looking for your input on approaches to implementing a production RAC upgrade without having a spare set of RAC servers. We have a 2-node 11.1 RAC database in production. We are planning to upgrade to 11.2, but only have a single database that the pre-production integration can be verified on. Our concern is that the integration may behave differently on RAC vs. the single database instance. How does everybody else approach this problem?

    You want to test a RAC upgrade on a non-RAC database. If you ask me, that is a risk, but it depends on many things:
    Application configuration - if your application is configured for RAC, FAN etc., you cannot test it on non-RAC systems.
    Cluster upgrade - if your standalone database is RAC One Node, you can probably test your cluster upgrade there. If you have a non-RAC database, then you will not be able to test the cluster upgrade or CRS.
    Database upgrade - there are differences when you upgrade a RAC vs. a non-RAC database which you will not be able to test.
    I think the best way for you is to convert your standalone database to a RAC One Node database and test that; it will take you closer to a multi-node RAC (see the rough rconfig sketch below).
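    Just as a rough, untested sketch of that conversion with the 11.2 rconfig tool (the sample XML name and the paths below may differ in your version and have to be edited for your environment):

      # copy the sample conversion XML shipped with the database home and edit it
      cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml /tmp/convert.xml
      # in /tmp/convert.xml set the source/target Oracle home, SID, node list and
      # shared storage details; run it first with Convert verify="ONLY" to validate
      $ORACLE_HOME/bin/rconfig /tmp/convert.xml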

  • EM for Coherence - Cluster upgrade

    Hi Gurus,
    I noticed that there is a "Coherence Node Provisioning" process in EM12c, and it says "You can also update selected nodes by copying configuration files and restarting the nodes." Will EM internally check the service HA status before updating (stopping/starting) each node? "NODE-SAFE" should be the minimum HA status criterion to meet to ensure there is no data loss.
    Thanks in advance
    Hysun

    Thanks for your hints, but it didn't work either. Maybe because the metaset uses the disks' DID names and those are not available when the node is not booted as part of the cluster.
    What I hope will work is this:
    - deactivate the zone's resource group
    - make a backup of the non-global zone's root
    - restore the backup to a temporary filesystem on the node's boot disk
    - mount the temporary filesystem as the zone's root (via vfstab)
    - upgrade this node including the zone
    - reboot as part of the cluster (the zone should not start because of autoboot=false and the RG being deactivated)
    - acquire access to the zone's shared disk resource
    - copy the content of the zone's root back to its original place
    - activate the zone's resource group
    - upgrade the other node
    - and of course backups, backups and even more backups at the right moments :-)
    I will test this scenario as soon as I can find the time for it. If I am successful I will post again.
    Regards, Paul

  • CCM Cluster upgrade/rebuild

    I am upgrading my cluster from CCM 4.0 to 4.1. I am looking for any advice that would make the process go smoothly. I will open a TAC case for backup.
    My OS version is 2000.2.7sr4.
    Thanks in advance.

    The only recommendation that I have is to have a backup just in case something goes wrong. I have done many, many upgrades and only about 5% of them have had problems.
    The TAC case should be enough.

  • [asr9k cluster upgrade procedure]

    Dear CSC (and hopefully Xander):
    What is the proper way of upgrading an asr9k cluster?
    Do I have to break the cluster and upgrade both 9ks separately, then rebuild the cluster?
    Or do you just treat the cluster as one box, so that when you upgrade one of them, both are upgraded simultaneously?
    (Is there a document that describes this procedure for a cluster specifically?)
    Thanks in advance!
    c.

    Hello Carlos,
    you can proceed with the following Cisco recommendations, thanks to Lenin Pedu:
    https://supportforums.cisco.com/docs/DOC-34114#13_Cluster_RackByRack_Upgrade_
    HTH,
    Michel.

  • Patch for cluster upgrade from 10201-10203

    Hi,
    I want to upgrade my cluster from 10.2.0.1 to 10.2.0.3.
    Can somebody help me find the upgrade patch from 10.2.0.1 to 10.2.0.3?
    Thanks
    Bala

    A patch for which operating system? And why do you want to patch to 10.2.0.3?
    If you have a Metalink account you can download the latest patches and the documentation for particular patches;
    search the Metalink Knowledge tab for the documentation.
    For the Linux OS, download the patch from Metalink (Oracle Support): select the Patches & Upgrades tab and search for the patch I already mentioned.
    For Windows and Mac OS you can download directly from OTN:
    http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
    Oracle Database 10g Release 2 (10.2.0.4) for Mac OS X on Intel x86-64
    Oracle Database 10g Release 2 (10.2.0.4) for Microsoft Windows Vista x64 and Microsoft Windows Server 2008 x64
    Oracle Database 10g Release 2 (10.2.0.3/10.2.0.4) for Microsoft Windows Vista and Windows 2008
    Once the patch set binaries are on the server, the apply itself is roughly as sketched below.
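    A rough outline of applying a 10.2.0.x patch set (Linux example; the zip file name is a placeholder, and for RAC you would normally set cluster_database=FALSE for the dictionary upgrade and back to TRUE afterwards):

      # install the patch set binaries into the existing 10.2 home (software only)
      unzip p<patchset_number>_10203_<platform>.zip   # placeholder file name
      cd Disk1
      ./runInstaller
      # then upgrade each database's data dictionary
      sqlplus / as sysdba
      SQL> STARTUP UPGRADE
      SQL> @?/rdbms/admin/catupgrd.sql
      SQL> SHUTDOWN IMMEDIATE
      SQL> STARTUP
      SQL> @?/rdbms/admin/utlrp.sql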

  • RAC 9i to 11g 3-node cluster upgrade

    Hi,
    I have been given the requirement to migrate a 9i RAC 3-node database (9i cluster) to an 11gR2 RAC database. I am looking for a document or the steps to do this activity; any help on this is highly appreciated.
    Should I upgrade the cluster first and then upgrade each node one by one, or should I upgrade the cluster and upgrade all the homes in one go?
    Thanks,
    Satish Abraham.J

    Hi,
    You may want to take a look at this first:
    Upgrade 9i to 11gR2

  • Sun cluster upgrade

    Has anyone done an upgrade from the Sun Cluster 3.0 07/01 release to the 12/01 release?
    Were there any problems in the upgrade, and what was the approach?
    Any suggestions?

    This upgrade is delivered as a patch set and some
    additional packages. The upgrade procedure is
    basically the same as a normal patch procedure.
    You might consider getting the latest core Sun Cluster
    patch as well, just to be sure you are at the latest rev.
    The additional packages provide new features. The
    most popular is the Generic Data Service which can
    really save development time for simple agents.
    -- richard

  • Database mounts and opens only in exclusive mode after cluster upgrade to 11gR2 on Windows

    Hi,
    I just finished the 2-node upgrade of my cluster from 11.1.7 to 11.2.0.1 on Windows. During the upgrade everything worked fine.
    Now, when I connect to my database I can see that only one of the 2 nodes has the database open. When I check the log I can see the first node opened the database in exclusive mode, so the second one cannot open it because an instance is already working with it.
    How can I change the mount of the database to shared mode so the cluster can open the database on both nodes?
    I would like to connect locally to close it and then mount/open it in shared mode, but
    I'm not able to connect in SQL*Plus locally any more. Why?
    What has changed? The listener setup is not the same; now I have 3 listeners, one for each node and one global.
    I'm sure I'm missing something, but I do not find the info in the documents.
    Any help will be welcome

    I finally found the problem.
    After the upgrade, ORACLE_HOME was not set correctly.
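    For anyone else hitting the same symptom, a rough checklist (assuming the usual causes, an instance started from the wrong home or with cluster_database=FALSE; ORCL below is a placeholder name):

      # on each node, make sure the environment and the Windows service point at the new 11.2 home,
      # then check how the instance was started
      sqlplus / as sysdba
      SQL> SHOW PARAMETER cluster_database
      SQL> -- if it is FALSE the instance mounts the database exclusively and node 2 cannot open it
      SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
      SQL> EXIT
      # restart the database through the clusterware so both instances come up
      srvctl stop database -d ORCL
      srvctl start database -d ORCL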

  • Rolling Cluster Upgrade - removing old NetWare node?

    According to the docs, you remove the old NetWare node from the tree, and then delete the Cluster Node object, the Server object, and all corresponding objects related to the downed NetWare server.
    I'm assuming by "Cluster Node" object they mean you go into the cluster object (the thingy that looks like 3 red balls) and find the "cluster node" that looks like an NCP server object with a red ball on it?
    Secondly,
    after you delete all those objects, are you still supposed to see it in the cluster?
    Meaning you go into iManager, Clusters -> Cluster Manager, and it still shows that old server as belonging to the cluster, just in an "unknown" state?

    Well, I've got an SR open at this point. Something is definitely wrong with the cluster. eDir shows all the correct information, but when you go into iMangler -> Clusters -> Cluster Manager it has the following problems:
    a) It shows a new resource called CN (with a triangle/pyramid icon) and shows "eDir sync" with a yellow status icon. But you don't see this "resource" anywhere else.
    b) The cluster node shows up, but when you click on it, it shows an IP address that's one less than what it really is (i.e., 10.10.1.9 when it's really 10.10.1.10). It does this for EVERY cluster node.
    But if you go into Cluster Options instead and click on the cluster node, it shows the correct information there. The eDir objects have the correct IP, etc.
    I vaguely remember running into this years ago when we tried to remove/reinsert a cluster node on NetWare.
    Where in the world does the cluster hold/get its information? It can't be from eDir because eDir shows up okay, so where the cluster thinks these other resources and IPs are is beyond me.

  • Upgrading a S&S 5.3.0 failover cluster to 5.3.7

    Hi!
    We need to upgrade our S&S to 5.3.7. Currently we have S&S and DMM 5.3.0 in a failover cluster.
    In pre-prod, where we have no cluster, just a single DMM and a single S&S, we upgraded the S&S easily.
    But facing the procedure for upgrading the S&S in the failover cluster, after reading the official Cisco docs, we have a few doubts:
    1) We know that we'll need to break the cluster, reverting to standalones, before upgrading. But since we just need to upgrade the S&S to 5.3.7 (DMM remains 5.3.0), is it enough if we break only the S&S cluster and revert it to standalones, while keeping the DMM failover cluster with no changes?
    2) Once we have upgraded the active/primary, I don't understand point 4 of the upgrade procedure, where it says "Reimage the secondary DMM and Show and Share appliances. Install the same software release that you upgraded the primary appliances to."
    What does "reimage" mean? Does it mean that we'll need to install the secondary unit from zero to 5.3.7 and then, when both are 5.3.7, re-create the failover cluster? Isn't it enough to break the cluster, upgrade each member to 5.3.7, and then re-create the cluster again? Or does it mean that the secondary unit will be unusable after breaking the cluster?
    Does any of you know of a more specific guide (or have your own experience) for upgrading a failover cluster?
    Many thanks from Spain,
    DAVID

    The Skype extension for Firefox has a long history of being problematic in Firefox. Various older versions of that extension have been "blocklisted" by Mozilla from time to time due to causing a high number of Firefox crashes in the last 3 or 4 years.
    http://kb.mozillazine.org/Problematic_extensions
    You don't need that Skype extension in Firefox to be able to use Skype or to use Firefox.

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72 GB RAM, dual CPU etc...)
    1x Dell MD3220i iSCSI with dual 1 GB connections to each server (24x 146 GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 expansion array with 12x 2 TB 7.2k drives in RAID 10 - Tier 2 storage, large VMs, files etc...
    ~25 VMs running all manner of workloads, SQL, Exchange, WSUS, Linux web servers etc...
    1x DPM 2012 SP1 backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue as we only get around 125 MB/s over the dual 1 GB iSCSI connections to each physical server (tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6 TB of 12 TB RAID 10 space).
    Migrating to 10 GB server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual port 10GB NIC team in each server and virtualize cluster, live migration, vm and management traffic (with QoS of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities.  Use our existing 2TB drives for capacity and purchase sufficient SSD's to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity are out of our reach (RDMA, extra 10 GB NICs etc...).
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active?
    If CSVs are active/passive, it's suggested that you should have a CSV for each node in your cluster? How in production do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management etc...).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected IO?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance to anyone who can be bothered to read through all that and help me out! I'm sure there are more questions I've forgotten, but those will certainly get us started.
    Also, lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can use a direct SAS connection with a 3-node cluster of course (4-node, 5-node etc.). Sure, it would be much faster than running with an additional SoFS layer (with SAS fed directly to your Hyper-V cluster nodes, all reads and writes are local, travelling down the SAS fabric; with an SoFS layer added you'd have the same amount of I/O targeting SAS plus Ethernet, with its huge latency compared to SAS, sitting between the requestor and your data residing on the SAS spindles, and the I/Os wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS layer). The reason SoFS is recommended is that the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favor of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used within this design would actually scale beyond 2 hosts. But you'll have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from the multiple hypervisor nodes is done through the storage fabric (in your case that's SAS) and only metadata updates go through the coordinator node over the Ethernet connectivity. Redirected I/O would be used in two cases only: a) no SAS connectivity from the hypervisor node (but an Ethernet one is still present) and b) broken-by-implementation backup software that keeps access to the CSV via the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for references:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so basically with the interconnection design you'll be using there's virtually no point in having one CSV per hypervisor. There are cases when you'd still probably do this. For example, if you had all-flash and combined spindle/flash LUNs and you know for sure you want some VMs to sit on flash and others (not so I/O hungry) to stay on "spinning rust". One more case is a many-node cluster. With it, multiple nodes basically fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even if ODX is present it's not going to help). Again, it's a place where SoFS "helps", as having an intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts for the two SoFS nodes only instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense to create a concept of a "local LUN" (and thus a "local CSV"), as reads targeting this LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (basically feeding DAS to Hyper-V and SoFS to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that's NOT your case, so it DOES NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster in redirected mode for a very short period of time. Microsoft says NEVER. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so data would be incremental after the first initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • CUCM 8.6 upgrade to 9.1(2)

    Hello all,
    I've done some 8.5 upgrades to 9.1, so I'm quite familiar with the refresh upgrade process and the additional server downtime vs., say, an 8 to 8.5 upgrade.
    On reading the upgrade guide for CUCM 9.1(1), I stumbled across this line on page 5:
    "You cannot install upgrade software on your node while the system continues to operate."
    In the past, when doing a Linux to Linux upgrade where no refresh was required, this certainly wasn't the case. You couldn't make changes and EM should be turned off, but the upgrade was always going on in the background and a switch version was run when complete.
    Has this changed?
    Can someone chime in who has done an 8.6 to 9.1(x) upgrade on this?
    I've got a really tight outage/downtime window to get a cluster upgraded and, with 2 servers still on MCS, I don't think I can do it if I can't get a change freeze in place and get the Pub upgraded without taking it down. Having to start and finish the Pub, subs and the physicals in the window might not be possible even with a parallel upgrade.
    Thanks
    Jon

    Hi Jon,
    +5 to my friend Aman for his good notes here :)
    This has NOT changed... here is the note from the upgrade guide:
    Linux to Linux (L2) upgrade
    There is very little server downtime during an L2 upgrade.
    An L2 upgrade is accomplished by installing the new software release in the inactive
    partition while the node continues to run and operate on the existing software
    in the active partition.
    The software releases are switched on reboot. The reboot can be either automatic,
    after the new software release is installed, or initiated manually at a later time through
    an administrator command. Some examples of an L2 upgrade are upgrades from
    Release 6.1(3) to 7.1(5), from 7.1(2) to 8.0(3), or from 8.6(1) to 9.1(1).
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/elmuserguide/9_1_1/license_migration/CUCM_BK_CBF8B56A_00_cucm-license-upgrade-guide.pdf
    Cheers!
    Rob

  • The "Upgrade existing installation" option does not appear for me

    The "Upgrade existing installation to a clustered installation" option does not appear for me.
    I am installing SAP 4.7 ext. 110 on a cluster built with Windows 2003 Enterprise and SQL Server 2000 Enterprise. The SAP manual tells me to install SQL Server locally on node A of the cluster, then install Service Pack 3 on node A, then install SAP on node A. After that it asks me to upgrade SQL Server using the "Upgrade existing installation to a clustered installation" option, but that option does not appear enabled for me.

    Here is a short (not necessarily complete) list of Cluster Option Requirements
    - The SQL Server binaries are installed on a local non-
      shared drive (Only Node A).
    - The disks for the database files and the log file
      belong to the SQL Server Cluster Group (MSSQL).
    - On node B there is no SQL Server with the same name (named
      instance name or default instance name) installed.
      Check the registry on node B for the branches
      HKLM\SOFTWARE\Microsoft\Microsoft SQL Server
       and
      HKLM\SOFTWARE\Microsoft\MSSQLServer
      (see the reg query example after this list).
      Uninstall the SQL Server on node B if it exists, and
      delete the registry branches on node B if they still
      exist.
    - The executing user for the setup has appropriate
      rights on both nodes
      (domain administrator).
    - The SQL Server database files (master, msdb, etc.) are
      on a shared disc.
    - Node B is available via private and/or public net.
    - The Cluster service is running on node A and on node B.
    - You are using the right CD (not mat. number 51011908)
      or have set the version by means of the .reg file
      sqlverfix.reg (see note 377430).
    - All needed discs (SQL DB, SQL Log, Quorum) are on
      node A.
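    If it helps, a quick way to do that registry check from a command prompt on node B (assuming the standard HKLM paths listed above):

      reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server"
      reg query "HKLM\SOFTWARE\Microsoft\MSSQLServer"

    If either query returns anything, SQL Server remnants are still registered on node B.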
    Please check all the requirements and run the cluster upgrade of SQL Server again.
    Best regards
      Clas
