Moving DB to cluster

I have a customer that wants to move the SQL DB from SCCM to a cluster.
I have the procedure for how to do this, but the question is: what about the collation that SCCM requires?
If the SQL cluster does not have the correct collation, can we create a new instance and define the collation there? I thought collation was a SQL Server setting.

If the SQL cluster does not have the correct collation, can we create a new instance and define the collation there? I thought collation was a SQL Server setting.
Collation is set per instance but can also be set per DB -- it is not a system-wide setting. I would highly recommend that if they go down this route they create a separate instance no matter what, so that resources (like memory) can be allocated specifically
and separately to the ConfigMgr DB, and so that the sysadmin requirement for the site server's computer account does not become an issue (note, though, that the site server requiring local admin permissions on all systems/nodes in the cluster hosting the SQL instance
will most likely become an issue here).
I'm with Garth here also -- doing this only provides on-paper benefits while increasing complexity, negatively affecting perf, and not truly increasing availability.
Jason | http://blog.configmgrftw.com | @jasonsandys
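To make the collation point concrete, here is a minimal pre-flight sketch. The required value is ConfigMgr's documented collation; the `actual` value is a placeholder standing in for the result of a real `SELECT SERVERPROPERTY('Collation')` query run with sqlcmd against the target instance:

```shell
# Minimal sketch of a collation pre-flight check for a ConfigMgr DB move.
# In practice 'actual' would come from:
#   sqlcmd -S server\instance -h -1 -Q "SELECT SERVERPROPERTY('Collation')"
required="SQL_Latin1_General_CP1_CI_AS"   # collation ConfigMgr requires
actual="SQL_Latin1_General_CP1_CI_AS"     # placeholder for the query result
if [ "$actual" = "$required" ]; then
  echo "instance collation OK for ConfigMgr"
else
  echo "collation mismatch ($actual): create a new instance with $required"
fi
```

If the existing cluster instance has the wrong collation, this is exactly why a new, dedicated instance (with the collation chosen at install time) is the safer route.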

Similar Messages

  • VM will not boot after moving using Failover Cluster Manager - "a disk read error occurred......"

    My current configuration:
    3-node cluster using clustered shared storage and about 22 VMs. The host servers are running 2012 Datacenter while all guests are running 2012 Standard. The SAN is EqualLogic and we are using HIT Kit 4.5.
    I have a CSV that is running out of space, so I created another CSV so that I could move some of the VMs to a new home. I tested this by creating a test VM and moving it successfully 3 times. I then moved an actual
    LIVE VM and while it seemed to move OK, it will now not start. The message is "a disk read error occurred Press ctrl+alt+del to restart". I moved the test VM and it failed as well.
    I have read several things about this, but nothing seems to relate to my specific issue. I have verified that VSS is working and free of errors as well. From the Settings menu for the VM, if I select "Inspect" on the drive,
    the properties all look fine. It is a VHDX and both the current file size and maximum disk size seem correct.
    The VMs were moved using the "Move - Virtual Machine Storage" option within Failover Cluster Manager.
    Suggestions?
    Thanks.

    Let's see if I can answer all of those, and I appreciate the brainstorming. This really needs to work, correctly.
    1. The storage is moving.
    2. VMs and SAN are on the same device.
    3. No, my Clustered Shared Volume (CSV) is out of room (more on that later).
    4. No, I actually have 2 SANs grouped together. However, I'm moving the VMs from one CSV to another CSV on the same SAN. The EqualLogic PS6110 is the one I am trying to move VMs around on; the other SAN, an EqualLogic PS6010, is not involved in any way except
    that it is in the same SAN group.
    5. No errors during the move; it took about 5-10 minutes, no error messages. Note, I did a test and it worked GREAT 3 times. Now both a live VM and the test VM are doing the same thing.
    6. No, the machine is not too large. The test machine was a 50 GB drive, just 2012 Standard installed with updates. The live VM was a 75 GB VM that was my Trend Micro server, our anti-virus host.
    7. Expand the existing CSV? Yes, I should be able to, but there is an issue there. The volume was expanded correctly: EqualLogic sees the added space and Failover Cluster Manager sees the added space, but Disk Management only
    sort of does. When looking at Disk Management, there are 2 areas that tell you a little bit about the drive, the top part and the bottom part. The top part only shows 500 GB, the original size, while the bottom part
    says that it is 1 TB in size. I called Dell's technical support, and after they looked at it I was told by the technician that they had seen this a couple of times and the only way to fix it was to move all the VMs to another CSV and delete the troubled
    CSV. I thought about adding more space to the troubled CSV, but it's on a production server with about 12 VMs running on it and I did not want to take a chance. The Trend VM was running on CSV-1 and working fine.
    I must admit that the test VM was on CSV-2. I moved the test VM from CSV-2 to CSV-3 back and forth several times with no errors. The Trend server was on CSV-1 and was moved to CSV-3, but it failed. Again, I then moved
    the test VM from CSV-2 to CSV-3 and it failed the same way. I could not test the test VM on CSV-1 due to CSV-1 not having enough space.
    8. I did disable the network from the VM to see if that mattered; it did not.
    9. I have not yet had a chance to connect the VHDX to a new VM, but I will do that in about an hour, hopefully. Once I am able to test that suggestion I will post the results as well.
    Again, thanks for all the suggestions and comments, as I'd rather have lots to look at and try. I hope I answered them well enough.
    Kenny
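The size mismatch in point 7 boils down to a simple comparison: the LUN (disk) was grown, but the NTFS volume on it was never extended into the new space. A sketch, using the numbers from the post:

```shell
# Sketch of the mismatch in point 7: the LUN was grown to 1 TB but the volume
# still reports its original 500 GB. Values are taken from the post.
disk_gb=1024        # size the SAN and Failover Cluster Manager see
volume_gb=500       # size the top pane of Disk Management still shows
if [ "$volume_gb" -lt "$disk_gb" ]; then
  echo "volume ($volume_gb GB) smaller than disk ($disk_gb GB): the partition was never extended into the new space"
fi
```

When the two numbers disagree like this, extending the volume (rather than evacuating and deleting the CSV) is typically the first thing to try, though as the post notes, Dell recommended the evacuate-and-delete route for this particular case.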

  • Moving Physical SQL Cluster into Virtual SQL Cluster based on Hyper-V Failover

    Hello All. I have a SQL cluster based on physical hardware that has three instances as well. I have set up a Hyper-V failover cluster (2012 R2) and have built a virtual/guest SQL cluster (2012 R2) upon it. Now, I intend to move
    the instances/databases from the physical SQL cluster to the virtual SQL cluster.
    1. Is this supported? If so, I would appreciate any guidance on it.
    2. Is P2V of a SQL cluster supported in a Hyper-V failover cluster based on Windows Server 2012 R2?

    Hi Sir,
    Please refer to the following blog regarding moving the SCCM SQL database to another SQL Server:
    http://blogs.technet.com/b/configurationmgr/archive/2013/04/02/how-to-move-the-configmgr-2012-site-database-to-a-new-sql-server.aspx
    It is quoted from a similar thread:
    https://social.technet.microsoft.com/Forums/en-US/a1558842-cdf5-4e5f-8f10-d660e96eae1b/migration-sql-for-sccm-2012?forum=configmanagermigration
    But as this seems to be a migration of a System Center product, I would suggest you post the question to the System Center forum:
    https://social.technet.microsoft.com/Forums/en-US/home?forum=configmanagermigration%2Coperationsmanagergeneral&filter=alltypes&sort=lastpostdesc
    Best Regards,
    Elton Ji

  • Unable to Migrate VM's after moving from older cluster

    Hello Everyone,
    I have migrated 4 VMs from a 2008 R2 SP1 failover cluster to a 2012 R2 failover cluster. The migration went off without a hitch, at least until I tried to live migrate one of the VMs to another node in the cluster. Below is the error I
    get when attempting to live migrate the VM:
    Cluster resource 'Virtual Machine Configuration TutorTrac' of type 'Virtual Machine Configuration' in clustered role 'TUTOR' failed. The error code was '0x2' ('The system cannot find the file specified.').
    Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster
    Manager or the Get-ClusterResource Windows PowerShell cmdlet.
    The volume which stores the VHDs for the VMs can be migrated to all the nodes without incident. Does anyone know if this is a common problem? Does it have a "common" solution?
    Thanks

    Hi Sir,
    1) You need to validate your cluster using the cluster validation feature inside Failover Cluster Manager. It would be better if you can bring all your cluster resources (VMs) down first, as validation tests all the resources and doing this in production
    will result in service downtime. Run all recommended tests against all the cluster nodes.
    2) Make sure that within Hyper-V Manager on all cluster nodes you keep the virtual network names the same; for example, if you have created server_network as one of your virtual networks for a VM on one Hyper-V host, then this name should be the same on all nodes
    in the cluster.
    3) Also make sure that no ISO file is attached to the VM when you trigger live migration; if the ISO is not shared between all nodes or present on the destination node, the cluster will throw an error during live migration.
    Best Regards,
    Elton Ji
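Point 2 can be sketched as a name comparison across nodes. The switch names below are invented for illustration (note the deliberate near-miss on node2, which is exactly the kind of typo that produces "file not found" style failures on live migration):

```shell
# Hypothetical sketch of point 2: the virtual switch names must be identical
# on every node of the cluster. Node and switch names here are made up.
node1_switches="server_network management"
node2_switches="servernetwork management"   # note the missing underscore
for sw in $node1_switches; do
  case " $node2_switches " in
    *" $sw "*) echo "$sw: present on both nodes" ;;
    *)         echo "$sw: MISSING on node2, rename the virtual switch to match" ;;
  esac
done
```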

  • Moving a CUCM cluster

    Hi,
    I have a few questions. We have a CUCM cluster: one Publisher, two Subscribers, and one voice mail server, running on Linux servers from Cisco, version 6.2.
    Is it possible to move the servers into a datacenter in another country and run the servers on VMware?
    Can you install surveillance applications on a Cisco Linux CUCM server, like IBM's TSM?
    Has anyone done this before, so that there are some examples of successful moves?
    Regards
    Stellan Sveman

    Brent,
    I don't know if you are implementing UC on UCS B or C series servers, but here are some links that have helped me in the past:
    UC on UCS CUCM 8 (UCS C-Series focused)
    http://www.youtube.com/watch?v=6_WPGjtZCKY
    Cisco Unified Communications on the Cisco Unified Computing System
    http://www.cisco.com/en/US/prod/collateral/voicesw/ps6790/ps5748/ps378/solution_overview_c22-597556.html
    Unified Communications Virtualization Sizing Guidelines
    http://docwiki.cisco.com/wiki/Unified_Communications_Virtualization_Sizing_Guidelines
    I hope these links help, and I apologize if you already have the above information.

  • Sapjup cannot find correct shares on Windows 2008 R2 cluster

    Hello Guru's,
    I have successfully installed CRM 5 ABAP+Java on Windows 2008 R2 based hardware. This is an HA setup.
    I am now in the process of upgrading to CRM 7.01. Unfortunately, the Java upgrade program SAPJup runs into an error (share \\Host_A\sapmnt does not exist) that I cannot solve up to now.
    This is my setup:
    I have two application servers: Host_A and Host_B. Both servers form a Microsoft failover cluster. The shared disk for the ABAP and Java central services is the F:\ drive; the cluster name is Cluster_X.
    The Central Instance is installed on the local disk G:\ of Host_A; the Dialog Instance on local disk G:\ of Host_B.
    As instructed in the manual, I have moved the SAP cluster group to Host_A and started the upgrade process on the G:\ drive of the same host.
    During the phase PREPARE/INIT/INPUT_SAPSERVICESID_PWD_HA, the SAPJup program tries to map the \\Host_A\sapmnt share to a drive letter ('net use' command). This step results in an error.
    I can explain this error, since the proper share in my opinion should be \\Cluster_X\sapmnt instead of \\Host_A\sapmnt.
    In Windows 2003 this was not a problem, because \\Cluster_X\sapmnt and \\Host_A\sapmnt could both be accessed and referred to the same disk location when the cluster group was active on Host_A. However, in Windows 2008 R2 this is no longer the case, and \\Host_A\sapmnt is not a valid share.
    Has any one of you run into this same problem, or have suggestions on how to solve it?
    Rob Veenman
    SAP Technology.

    I owe you an answer for the issue we had.
    I opened a message with SAP and got acknowledgement that this is indeed a bug in SAPJup. SAP is working on a solution.
    The workaround (which was successful) was as follows:
    1. I moved the SAP cluster group to Host_B (the DI host).
    2. I created g:\sapmnt on Host_A and copied the content from the shared drive \\Cluster_X\sapmnt to this local drive.
    3. I shared directory g:\sapmnt with share name sapmnt on Host_A and gave SAP_<SID>_GlobalAdmin read/write access.
    Now SAPJup can continue. At the next interrupt, you can remove the local directory g:\sapmnt and move the cluster disk back to Host_A.
    I will give an update and close this thread as soon as I get a definitive solution from SAP AG for this issue.
    Rob Veenman
    SAP Technical Consultant.
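The failure behind this thread can be sketched as a share-existence check: on Windows 2008 R2 the sapmnt share exists only under the cluster network name, not under the node name. The share list below is simulated rather than queried from a live host:

```shell
# Sketch of the share check behind the SAPJup failure. On Windows 2008 R2 only
# the cluster network name exposes the sapmnt share; the node name does not.
# Names come from the post; the share list is simulated, not queried.
available_shares='\\Cluster_X\sapmnt'   # what a share enumeration would show
want='\\Host_A\sapmnt'                  # what SAPJup tries to map via 'net use'
case "$available_shares" in
  *"$want"*) result="found" ;;
  *)         result="missing" ;;
esac
printf '%s is %s\n' "$want" "$result"
```

This is why the workaround of creating a temporary local g:\sapmnt share on Host_A satisfies SAPJup: it makes the node-name path resolvable again for the duration of the upgrade step.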

  • Active-Passive Failover cluster in Windows Server 2008 R2

    Hi,
    As per the client requirements, I built an active-passive Oracle 11g R2 database cluster on Windows Server 2008 R2.
    (We used Microsoft clustering, as the client doesn't have a Fail Safe licence.)
    As per the configuration, I followed the steps below:
    a) The OS vendor configured the active-passive Windows cluster
    b) Installed Oracle 11g R2 on both nodes with the same ORACLE_HOME name
    c) Created the listener and configured it with the cluster IP on Node1
    d) Created the database from Node1; the physical files are all located on shared storage
    e) Created the Oracle service with the same name on the 2nd node and copied all the files like spfile, tnsnames.ora, listener.ora, and the password file to Node2
    f) Configured the listener with the same Oracle SID
    g) Tested database failover from Node2 with listener registration
    h) Opened Failover Cluster Manager to configure the Oracle database and listener services in the cluster group
    Now I am facing a problem with cluster group movement. Whenever I try to move the group, the listener service is not brought up by Cluster Manager, because the quorum is not included in the group and the quorum disk does not move to the failover node with the Oracle cluster group. As per my understanding, the quorum holds information about the cluster IP, which has been configured with the listener.
    But when I shut down one node, all the resources successfully move to the other node and the cluster is able to bring them all online. I guess at that time the quorum also moves, and thus the cluster is able to bring the listener online.
    Please suggest how to resolve this issue, i.e. how can I bring the listener up on the failover node without any dependency on the quorum?
    Thanks a lot for your help.

    hello
    I was going through your post, and I am also doing the same thing here at our organisation for Oracle 10g R2.
    Can you please send me any docs you have for configuring Oracle in Windows clusters?
    And can you please elaborate on this point:
    e) Create the Oracle service with the same name on the 2nd node and copy all the files like spfile, tnsnames.ora, listener.ora, and the password file to Node2.
    Please send me the details at [email protected] or you can contact me at 08054641476.
    Thanks in advance.

  • Hyperv - 2008 R2 cluster migration

    I want to migrate a Hyper-V 3-node cluster to another domain in a different forest. Is this possible? If so, please help me with the steps.

    Hi,
    the best way will be to rebuild the cluster in the new domain.
    Here is some information:
    Windows 2008 failover Cluster domain migration
    http://social.technet.microsoft.com/forums/windowsserver/en-US/079cf6e9-9597-4265-b240-afa7972b2da7/windows-2008-failover-cluster-domain-migration
    Moving a Hyper-V cluster to a new AD Domain
    http://namitguy.blogspot.de/2012/03/moving-hyper-v-cluster-to-new-ad-domain.html
    Hope that helps
    Regards
    Sebastian

  • Migrating a Sun Cluster Running Oracle to New Hardware

    Has anyone attempted this? Essentially we are moving a Sun Cluster from one location to hardware at another location while maintaining the same node names. From what I can tell, I need to (on an install LAN):
    1) Load the OS
    2) Configure the IPs
    3) Install Sun Cluster
    4) Install Oracle Parallel Server/RAC
    5) Restore the data on a per node basis
    6) Restore the shared data
    7) Adjust, tweak, and run
    Are there any pitfalls or suggestions on the approach? The shop is relatively new to clustering, much less Oracle clustering, and the original cluster was installed by admins long gone.

    I would say that Apple should be able to update your 36-month maintenance agreement with an OSXS 10.4 serial number.
    As far as I know, the structure of 10.3 and 10.4 serial numbers is different (which wasn't the case between 10.2 and 10.3), so I'm short of a technical answer here.
    Maybe you could try:
    /System/Library/ServerSetup/serversetup -setServerSerialNumber xxxx-xxx-xxx-x-xxx-xxx-xxx-xxx-xxx-xxx-x
    in a Terminal window on the server. It's theoretically the same as using Server Admin, but maybe this could help.

  • Deploy Application to cluster

              Hi,
              I deployed a WAR file to a clustered server (WLS 6.1 SP3) with the admin server console,
              but I did not see the physical file (JAR/WAR/exploded format) copied to each clustered server.
              How should I know the files were correctly deployed to the clustered servers? Thanks
              

              Each cluster instance should get a copy either at deploy or restart time, but
              I forget the location since I moved to a WL7 cluster. On the other hand, even if
              a cluster instance retrieves a physical copy successfully, it does not necessarily
              mean the deploy was successful. I don't care where each instance stores
              the local copy as long as the web applications can be accessed via each individual instance;
              this is the way I verify my production cluster network.
              "david" <[email protected]> wrote:
              >
              >"Wenjin Zhang" <[email protected]> wrote:
              >>
              >>A simple method is to access one of your JSP or HTML page in the WAR
              >>file through
              >>each managed server. If you have instances as host1:port1 and host2:port2
              >>and
              >>you have an application "testwar" which has a JSP test.jsp, try http://host1:port1/testwar/test.jsp
              >>and http://host2:port2/testwar/test.jsp.
              >>
              >>
              >>"david" <[email protected]> wrote:
              >>>
              >>>Hi,
              >>> I deployed a war file to clustered server(wls 6.1 sp3),with admin
              >>>server console,
              >>>but I did not see the physical file(jar/war/exploded format) copied
              >>to
              >>>each clustered,how
              >>>should I
              >>>know files were correctly deployed to clustered server, Thanks
              >>>
              >>
              >
              >so the physical file (no matter what format) actually has only one
              >copy, located on the admin server, and the clustered servers didn't
              >get their own copy, right? Thanks
              >
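The per-instance check in the quoted reply can be sketched as a loop over the managed servers. Hosts and ports are placeholders, and the actual HTTP request is stubbed out so the sketch is a dry run:

```shell
# Sketch of verifying a clustered deployment by hitting the same JSP through
# every managed server directly. host/port values are placeholders.
urls=""
for instance in host1:7001 host2:7002; do
  url="http://$instance/testwar/test.jsp"
  # a real check would be: curl -fsS "$url" >/dev/null && echo "$instance OK"
  urls="$urls $url"
done
echo "would check:$urls"
```

The point of the check is that each request goes to an individual instance's own port, bypassing any load balancer, so a success proves that particular instance can serve the application regardless of where it keeps its local staging copy.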
              

  • Cluster reconfiguration

    Hi all,
    We have moved one Sun Cluster 3.0 (Solaris 8, with HA-Oracle) to a new datacenter, so all the storage has changed from one EMC Symmetrix to a new one.
    We decided to start all the services manually (boot -x, import disk groups, etc.) to study the new situation and work out how to reconfigure the cluster.
    Any idea of the problems we can encounter? As far as we can see, we have to discover the new disks (scdidadm -r) and change the quorum device.
    Bye

    We don't see both Symmetrix arrays now.
    When the migration was done, both DMXs were synchronized. We stopped one node and moved it to the new datacenter. After that, the application was stopped and the disks were split. Then the application was started again, importing the disk groups.
    The problem is the following: we've got a stopped cluster, but all the DID configuration has changed (including the quorum disk) because the new disks are available through a new path (corresponding to the new fiber path).
    For example:
    # scdidadm -l
    3 peai01:/dev/rdsk/c5t500604843E0C1412d11 /dev/did/rdsk/d3
    That disk doesn't exist and it is now named "c5t500604843E0C2713d11s2". That happens with all the disks.
    To run 'scdidadm -r' we need the cluster up and running, but that's not the situation.
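The device rename described above can be made explicit by comparing the two paths: the LUN slice (d11) is unchanged, but it now sits behind a new target WWN, which is why every DID mapping needs re-discovery. The two paths are taken from the post:

```shell
# Sketch of the path change: same LUN number (d11), new target WWN, so the
# DID namespace no longer matches. Paths are the ones quoted in the thread.
old="peai01:/dev/rdsk/c5t500604843E0C1412d11"
new="peai01:/dev/rdsk/c5t500604843E0C2713d11"
old_wwn=$(echo "$old" | sed 's/.*c5t\(.*\)d11$/\1/')
new_wwn=$(echo "$new" | sed 's/.*c5t\(.*\)d11$/\1/')
echo "target WWN changed: $old_wwn -> $new_wwn"
# 'scdidadm -r' is what refreshes these mappings, which is exactly the
# chicken-and-egg problem above: it normally needs the cluster running.
```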

  • Phone migration between 8.6 clusters

    Hello:
    Hoping someone has experience with this migration and can answer this.
    Situation:
    Existing phones are on 8.6 cluster A, company A, moving to another cluster that is also 8.6: cluster B, company B.
    In order to move these phones successfully onto the other cluster, I have two questions after reading the pre-8.0 rollback procedure:
    * In order to clear the ITL, you set the rollback-to-pre-8.x service parameter on cluster A.
    1. Do I need to actually revert to pre-8.0 software on cluster A from an inactive partition? Yes? No?
    2. Can you reset only the phones that you want to migrate on cluster A after setting the parameter, or do you have to reset ALL the phones in cluster A?
    - I am hoping that after you set the service parameter, you can just run a job to reset those phones that you want to migrate. This is a company acquisition, so I only need to migrate n phones.
    regards,
    John

    Hi Brian:
    Thanks for the information. I thought I would test part of this out in the cluster I am in, which is cluster B.
    I changed the enterprise parameter, without applying config or resetting.
    I reset TVS and TFTP on all nodes.
    I reset my phone.
    I still saw the ITL file on my phone, though. I went back to Enterprise Parameters and the setting was still False.
    So I guess I forgot to save the setting.
    I repeated the same process, in order this time:
    changed the enterprise parameter
    save
    reset TVS on all nodes
    reset TFTP on all nodes
    reset phone
    ITL file: it is still there, but I cannot delete it or do anything from the phone. It shows configuration (signed). When I select the ITL file it has numbers there, and all the TVS entries are blank. I moved the phone onto another cluster and it booted up. I took the phone back to the original cluster and it would not boot up.
    So I deleted the ITL file from the phone at that point so it could boot up on its original cluster. I think for smaller deployments and remote sites, I will be using the manual process. In my scenario, I am migrating phones on a site-by-site basis in a customer environment onto another cluster (via acquisition).
    Thanks Brian.
    regards,
    John

  • JCO with poolmanager in a clustered server environment

    Hi experts: 
    The issue I am having is the following: we are creating connection pools for each user that logs in.  In other words, a connection pool would be created for both userids XXXX and YYYY if both logged into the system.  As per best practices, all connections are being managed by one PoolManager.  We release connections as we go.
    This works fine for a single server.  However, now that we are moving to the cluster, we are trying to figure out the best way to implement.  The best option seems to be to implement the PoolManager as a cluster singleton, and then do a JNDI lookup every time we want to access.  That way, if a managed server goes up or down, we don't have to worry about it.  Any other option we could think of was sketchy, like trying to create the connection pool on each managed server with our code, since servers could (in theory) be added or removed at any time (including after the connection pools were established for a user), and as soon as a server is added to the cluster, the load balancers would begin routing requests to it.
    Long story short, as much as JCO has been used, we can't really find many examples of people using the PoolManager as a singleton.
    So can this be done? And if so, are there any tricks to getting this accomplished?
    thanks,
    chris.

    I am not familiar with anything called Quartz, but I think this issue should be handled by the task scheduler itself.
    In the place I work, the task scheduler we use (an in-house developed one) has the following approach:
    Once a task is posted it is in the "posted" state, and once a batch server (that's what we call the service that executes it) picks a task up, it changes the state to "executing". Once the execution is complete it changes the state to "ready". If an exception occurs, it aborts the operation and sets the state to "error".
    A batch server can pick up only tasks in the "posted" state, so two services will not pick up the same task.
    By the way, tasks in the error state can be reset to the posted state by the user.
    You probably need a solution like this: either develop one, or find one that considers the existence of multiple execution services.
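The state flow described in that reply can be sketched in a few lines; the function and state names here are illustrative only, not from any real scheduler:

```shell
# Toy sketch of the state flow: posted -> executing -> ready (or error), where
# only "posted" tasks are eligible for pickup, so two batch servers can never
# grab the same task. Names are illustrative only.
state="posted"
pick_up() { if [ "$state" = "posted" ]; then state="executing"; else echo "refused: state=$state"; fi; }
finish()  { [ "$state" = "executing" ] && state="ready"; }
pick_up            # first batch server: posted -> executing
pick_up            # second batch server: refused, task already claimed
finish             # execution done: executing -> ready
echo "final state: $state"
```

In a real multi-server setup the state transition from "posted" to "executing" would of course need to be atomic (e.g. a conditional UPDATE in the database), which is the property that actually prevents double pickup.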

  • Naming convention best practice for PDB pluggable

    In OEM, the auto-discovery for a PDB produces a name using the cluster as the prefix and the PDB name as the suffix, such as:
    odexad_d_alpcolddb_alpcolddb-scan_PDBODEXAD
    If that PDB is moved to another cluster, I imagine that name will not change, but the naming convention will have been violated.
    Am I wrong, and does anyone have suggestions for a best practice for naming PDBs?

    If the PDB moves to another cluster, OEM would auto-discover it in the new cluster.  So it would "assign" it a new name. 
    As a separate question, would you be renaming the PDB (the physical name) when you move it to another cluster ?
    Hemant K Chitale
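The naming pattern in the question can be sketched from the single example given (the field split below is an assumption based on that one name):

```shell
# Sketch of the OEM auto-discovery target name from the question:
# <db unique name>_<scan/cluster name>_<PDB name>. The split into fields is
# an assumption inferred from the single example in the post.
db="odexad_d_alpcolddb"
scan="alpcolddb-scan"
pdb="PDBODEXAD"
target="${db}_${scan}_${pdb}"
echo "$target"
```

Because the cluster/scan name is embedded in the target name, re-discovery in a new cluster necessarily yields a new name, which is Hemant's point above.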

  • NEED ARCHIVE SOLUTION NOW!!!

    We are ready to cut the check for purchasing FCS, but after seeing a demo of FCS, we were informed that a Superloader 3A LTO3 wouldn't work for archiving. Is that correct, or did I hear that incorrectly?
    We work in a sports facility and have hundreds of hours of game footage from our own Jumbotrons and broadcast feeds. Our goal is to have the current season's footage and the previous year's footage up on our SAN and in FCS. When we move on to the next season, we want to archive the footage from 2 years prior, but keep the proxy files within FCS so we can search them.
    If the Superloader 3A LTO3 does work, when you go to archive clips onto it, does it know when to switch tapes? And after the autoloader is full and you put the LTO3 tape on a shelf, will FCS know that it isn't in the autoloader?

    FCS will not manage your tape library directly, you will have to employ another software product to do that for you, Time Navigator or Bak Bone can get your volumes across to tape and catalogue them. To make use of the 'archive' function in FCS for this scenario, you have to create an 'archive device' and then ensure that is backed up. When you archive assets in FCS, they'd be moved to the archive device, the backup would then run, and after it completes you could then delete the files from the archive volume. FCS will think they are still there and maintain access to your proxy content and metadata, but you would have to restore them from tape before you ran restore in FCS.
    Unless you must use tape, you should definitely consider going disk-to-disk for your backups, which is much more elegant. We use Object Matrix MatrixStore to manage our backups directly from Final Cut Server. MatrixStore is software that allows us to create a cluster of secure redundant storage out of Xserves and RAID hardware. You don't have to use expensive, new, or particularly fast hardware for this; we used our old PPC metadata controllers and Xserve RAIDs to build a cluster.
    Integration with FCS is built in, and offers you some advantages over tape, for instance we set a production status 'For archive', and all assets within are moved to the cluster, but the proxy content is still viewable and asset metadata still searchable. Then, restoring individual assets is just a right click away, or restore the whole production in the same way.
