9i - AS3.0 cluster on SAN

Hi all,
I am testing Linux and Oracle for our company, trying to prove that the combination is stable and worth implementing in production. I am also trying to set up a neat architecture by booting diskless off the SAN over Fibre Channel. Anyway...
I am having a dilemma with my architecture:
Our DBAs have a standard for installing Oracle: /opt/oracle/version_here. I have 2 x RHEL 3 AS servers that have access to the same disks on the SAN, with mount points /ocfs1 and /ocfs2.
I'm trying to set up a failover configuration. Server_A and Server_B will both have /opt/oracle/9.x as ORACLE_HOME. Server_A will be responsible for the DB in /ocfs1 and Server_B for the DB in /ocfs2, and if Server_A fails/crashes/dies/put_your_scene_here :) Server_B mounts /ocfs1, recovers the DB, and continues with operations. But I wish it were as easy as I just described!
I tried sharing the same ORACLE_HOME between both machines. With ext3 it's not a good idea to have two servers write to the same partition (we need file locking, etc.), so I formatted my /opt/oracle with OCFS, but it's TOO slow: we've been installing 9i since this morning and it's still at 18%. So I don't think running from an OCFS ORACLE_HOME with ocfs-1.0.11 is a good idea; until we see OCFS2 I'm stuck with this problem. The company doesn't want to go with RAC. The DBA assigned to this project with me tested Data Guard and says that failover has to be done manually, etc. Anyway, do you have any suggestions? Worst case, I need the command line for a Data Guard failover so I can include it in the Red Hat cluster manager shell script for automatic failover.
Any feedback would be appreciated: architecture, solutions, suggestions...
Thanks,
Edmond Baroud

The documentation states that regular files are not supported on OCFS, but they should work. I've been using OCFS with regular files without any performance issues. I suspect your issue is actually the installer trying to write to the same ORACLE_HOME from both nodes. I have always kept ORACLE_HOME on the local disks; this allows patching each node without taking the database down. Try configuring it this way, or install the software as standalone and then convert it to a RAC install.
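For the manual Data Guard failover you mention, here is a rough sketch of what a Red Hat cluster failover script could run against the standby instance via SQL*Plus, assuming a 9i physical standby (the exact sequence depends on the standby type and on gap handling, so treat this as a starting point and check the Data Guard documentation before scripting it):
-- terminate managed recovery and apply the remaining redo
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
-- convert the standby into the new primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
-- bounce the instance and open it as the primary
SHUTDOWN IMMEDIATE;
STARTUP;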

Similar Messages

  • SQL 2008 R2 SP1 Cluster SAN Migration

    Dears,
    I am unsure about the procedure and the steps that must be followed when doing a SAN migration for my SQL Server 2008 R2 SP1 cluster.
    I have already done the SAN migration for the cluster quorum, but I am unsure about all the remaining steps, such as:
    Someone told me about disk signatures; what should I do about those?
    What are the steps for MSDTC migration?
    How do I migrate the SQL instance?
    What about master and the other system databases?
    What about application databases, logs, and tempdb?
    What are the commands below needed for?
    ALTER DATABASE msdb
    MODIFY FILE (NAME = MSDBData, FILENAME = 'H:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBData.mdf');
    ALTER DATABASE msdb
    MODIFY FILE (NAME = MSDBLog, FILENAME = 'H:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MSDBLog.ldf');
    SELECT name, physical_name AS CurrentLocation, state_desc
    FROM sys.master_files WHERE database_id = DB_ID(N'msdb');
    ALTER DATABASE model
    MODIFY FILE (NAME = modeldev, FILENAME = 'H:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\model.mdf');
    ALTER DATABASE model
    MODIFY FILE (NAME = modellog, FILENAME = 'H:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\modellog.ldf');
    Please help with this, as I am worried about losing my data!

    Someone told me about disk signatures; what should I do about those?
    - If you can present disks from both SANs at the same time, then this shouldn't be an issue. At a high level, you add the new disks in and remove the old ones from the cluster when you're finished with them. The disks are just seen as new disks, not swapped ones.
    What are the steps for MSDTC migration?
    - Personally, I've simply deleted and re-created the MSDTC. There might be a nicer way to do this, but re-creating it works.
    How do I migrate the SQL instance?
    - You mean elements that are on a SAN-attached drive but aren't part of the cluster? You can stop the instance and copy the files (use xcopy to preserve permissions), then swap the drive letters around and restart the instance.
    What about master and the other system databases?
    For the system databases I would:
    - bring the new disk into the cluster, give it a temporary drive letter, and make sure SQL Server has a dependency on it (look at how the current drives are set up)
    - stop the services
    - copy the folders/files to the new drive using xcopy, preserving permissions
    - give the current drive an unused drive letter (via Cluster Manager)
    - change the new drive's letter to match the original drive (via Cluster Manager)
    - remove the old drive (remove dependencies before doing this)
    - bring the server online
    What about application databases, logs, and tempdb?
    - You can do this online while SQL Server is running; just bring the new disk into the cluster and follow the Microsoft-recommended approach outlined here
    What are the commands below needed for?
    - Those are for msdb and model; you'll have already handled those databases along with the other system databases.
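    For the application databases, here is a minimal sketch of the online move for one user database, assuming (purely for illustration) a database called AppDB with logical file names AppDB_Data and AppDB_Log and a new LUN mounted as J: - the names and paths are placeholders:
    ALTER DATABASE AppDB MODIFY FILE (NAME = AppDB_Data, FILENAME = 'J:\SQLData\AppDB.mdf');
    ALTER DATABASE AppDB MODIFY FILE (NAME = AppDB_Log, FILENAME = 'J:\SQLLogs\AppDB_log.ldf');
    ALTER DATABASE AppDB SET OFFLINE;
    -- copy the .mdf/.ldf files to the new paths, then:
    ALTER DATABASE AppDB SET ONLINE;
    -- verify the new locations
    SELECT name, physical_name, state_desc FROM sys.master_files WHERE database_id = DB_ID(N'AppDB');
    tempdb only needs the MODIFY FILE statements plus a service restart, since its files are recreated on startup.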

  • Oracle10g RAC with ASM for stretch cluster

    Assuming suitable network between sites is in place for RAC interconnect (e.g. dark fibre / DWDM), does it make sense (or is it possible) to stretch a RAC cluster across 2 sites, using ASM to mirror database files between SAN storage devices located at each site? The idea being to combine local high availability with a disaster recovery capability, using hardware that is all active during normal operation (rather than say a single RAC cluster on one site with Data Guard to transport data to the other site for DR).
    Or, for a stretch cluster, would SAN / OS implemented remote mirroring be a better idea? (I'd have thought this is likely to incur even more overhead on the network than ASM, but that might be dependent on individual vendors' implementations.)
    Any thoughts welcome!
    Rob

    Please refer to the thread Re: 11GR2 ASM in non-rac node not starting... failing with error ORA-29701
    and this doc http://docs.oracle.com/cd/E11882_01/install.112/e24616/presolar.htm#CHDHAAHE
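    For background on the ASM option: an extended-distance cluster would typically use a normal-redundancy disk group with one failure group per site, so every extent is mirrored across the two SANs. A minimal sketch (the disk group name and disk paths below are made up for illustration):
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP site_a DISK '/dev/rdsk/siteA_lun1', '/dev/rdsk/siteA_lun2'
      FAILGROUP site_b DISK '/dev/rdsk/siteB_lun1', '/dev/rdsk/siteB_lun2';
    From 11g onwards you would usually also add a small quorum failure group on a third site (e.g. an NFS mount) for a voting file, and set preferred read failure groups so each node reads from its local SAN.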

  • Sun Cluster 3.1 I/O error

    Hi,
    I have 2 cluster nodes running Solaris 9/05 with Sun Cluster 3.1. After a migration from Hitachi AMS1000 storage to Sun StorageTek 9985V, when I shut down one node in the cluster, the mounted volumes on the second node give I/O errors. I have already installed the new patches for the OS, cluster and SAN, but the problem still persists. Please help me.
    Regards,
    Arun

    Arun,
    You say you migrated to the 9985v - did you do that with backup and restore or with a replication technology? If it was the latter, you might have inadvertently copied over some SCSI reservation keys. Otherwise, I can't see any reason for the problem.
    SCSI keys can be removed (with extreme care) using the scsi and pgre commands in the /usr/cluster/lib/sc directory.
    Tim
    ---

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system, for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72GB RAM, dual CPU etc...)
    1x Dell MD3220i iSCSI with dual 1Gb connections to each server (24x 146GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 expansion array with 12x 2TB 7.2k drives in RAID 10 - Tier 2 storage, large VMs, files etc...
    ~25 VMs running all manner of workloads: SQL, Exchange, WSUS, Linux web servers etc...
    1x DPM 2012 SP1 backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125MB/s over the dual 1Gb iSCSI connections to each physical server (we've tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6TB of 12TB RAID 10 space).
    We are migrating to 10Gb server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual-port 10Gb NIC team in each server and virtualize the cluster, live migration, VM and management traffic (with QoS of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities. Use our existing 2TB drives for capacity and purchase sufficient SSDs to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity (RDMA, extra 10Gb NICs etc...) are out of our reach.
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active?
    If CSVs are active/passive, it's suggested that you should have a CSV for each node in your cluster? How in production do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management etc...).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected IO?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance to anyone who can be bothered to read through all that and help me out! I'm sure there are more questions I've forgotten but those will certainly get us started.
    Also, lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can of course use a direct SAS connection with a 3-node cluster (or 4-node, 5-node etc.). It will be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes stay local, travelling down the SAS fabric; with a SoFS layer added, you have the same amount of I/O targeting the SAS, plus Ethernet (with its huge latency compared to SAS) sitting between the requestor and your data on the SAS spindles, with I/Os wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS layer. The reason SoFS is recommended is that the final SoFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you'll have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favour of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts, but you'd have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from the multiple hypervisor nodes goes through the storage fabric (in your case that's SAS) and only metadata updates go through the coordinator node over the Ethernet connection. Redirected I/O would be used in two cases only: a) no SAS connectivity from the hypervisor node (but Ethernet is still present), and b) broken-by-implementation backup software keeping access to the CSV via the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for references:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnect design you'll be using there's virtually no point in having one CSV per hypervisor. There are cases where you'd still do this. For example, if you had both all-flash and combined spindle/flash LUNs and you knew for sure you wanted some VMs to sit on flash and others (not so I/O hungry) to stay on "spinning rust". Another case is a many-node cluster: there, multiple nodes basically fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even if ODX is present it's not going to help). Again, this is a place where SoFS "helps", as having an intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts between just the two SoFS nodes instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense to create the concept of a "local LUN" (and thus a "local CSV"), as reads targeting that LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (feeding basically DAS to Hyper-V and SoFS to avoid expensive SAS JBODs and SAS spindles). It's the same thing VMware is doing with their VSAN on vSphere. But again, that's NOT your case, so it does NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster into redirected mode for a very short period of time at most - for Windows Server 2012 Microsoft says it never does. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so the data transferred would be incremental after the initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • Can I hook up a NAS via Fibre Channel?

    I am building a Linux NAS to be accessed by my Macs. I've purchased fibre cards for all of my macs, an LSI fibre card for my Linux machine, and a McData 4400 fibre switch. I'm trying to configure my fibre network so I can share files from my NAS. I've got the fabric set up, but everything that I read is talking about using SAN software to connect to a SAN, not a NAS. I can't find any information at all on how to do what I want to do, or whether or not it's possible. Can someone please help me figure out how to set up this Fibre Channel network, or at least figure out if it's possible or not? Thank you!
    -Ryan

    Is what you want technically possible? Probably.
    In an earlier era, you could have a network interface or an image scanner or other devices connected to a host via SCSI, i.e. via a storage bus. There was all manner of odd SCSI-connected gear. (Think of SCSI as expensive multi-host USB with big expensive cables and expensive peripheral devices, and you'll have the general idea. And FWIW, the SCSI command set is the underpinning of USB mass storage.)
    You would end up writing a whole lot of driver code for the devices and hosts you have, too.
    Which is where folks end up with commercial SAN solutions, or with NAS solutions, and not with using a SAN as a network.
    The various Fibre Channel controller vendors discussed network interface designs for their FC controllers, but never seemed to sort them out.
    If you want to roll your own storage arrays akin to the [Apple Xsan|http://www.apple.com/xsan] or the HP [MSA|http://h18000.www1.hp.com/storage/diskstorage/msa_diskarrays/sanarrays/index.html] or [EVA|http://h18000.www1.hp.com/storage/diskstorage/eva_diskarrays/evaarrays/index.html] SAN arrays, well, have at it. You'll likely end up needing to write host disk drivers, as well as the firmware within the SAN controller. Networking drivers might be a bit more tricky; I haven't looked at those device interfaces in a while, and you'd need to tie those drivers into the host network stacks. Once you have some or all of that working, the hosts can see the block storage out on the SAN. If you need sharing, you'll need to sort out a cluster or SAN file system to run atop the block storage or atop the network connection you've built; a file system that can coordinate distributed access.
    Possible? Sure. On a budget? Probably not.

  • Guest Snapshot/Disconnected Network Continues. NOT FIXED.

    After about a gazillion patches, disabling TCP Chimney, changing backup schedules, updating drivers, updating hotfixes...
    I STILL have VMs whose network cards become disconnected; see
    KB2263829 (this does not fix the issue, btw)
    Clearly it is related to DPM guest snapshots. 
    3 Node Cluster
    EqualLogic SAN
    ASM (Auto snapshot manager) installed
    Disabled TCP Chimney on all
    Reordered Binding on Hosts
    Applied above KB
    It happens about every other day... sometimes it goes a week without any issue... but it still happens. I have followed every KB/MVP recommendation, from disabling TCP Chimney on the Hyper-V hosts, clients and DPM, to reordering the binding order of the NICs. No joy.
    I guess PSS is the only next step (other than just forgetting snapshots altogether)... That should only take a few weeks/months... LOL. Anyway, just for information: IT'S NOT FIXED. :)
    Good luck with it :)

    BTW - disabling TCP Chimney would do nothing. It is the TCP Task Offloading settings that you need to pay attention to, not Chimney.
    Can you tell us what is happening at the time the VM goes offline?
    Does the Hyper-V server have a dedicated management NIC? (A physical NIC holding the IP you use for management, with no virtual networks attached; this would also be where DPM would stream from.)
    Also, are you using any NIC teaming drivers? If so, break the team and use non-teamed drivers if possible.
    And what is the OS of the VMs that go offline?
    I am trying to better understand your symptoms. 
    For example, if it is a connection to a database that is being lost, most likely a process is interrupting the connection and the application is not reconnecting.
    Lots of things can be happening.
    Brian Ehlert (hopefully you have found this useful) http://ITProctology.blogspot.com
    Learn. Apply. Repeat.
    Disclaimer: Backup, test your backup, try new things. Attempting change is of your own free will.

  • DFS redesign ideas?

    Hi,
    Basically I am looking for a few ideas on how to redesign our file servers.
    We have multiple physical file servers and a few virtual servers, and what is replicated and what is not is quite confusing. Total storage size is around 6TB, made up of home directories and shared resources - no particularly special file types etc. Using DFS with home directories does, however, mean that I essentially need only a single point of reference in order to be supported by Microsoft, as per:
    http://blogs.technet.com/b/askds/archive/2010/09/01/microsoft-s-support-statement-around-replicated-user-profile-data.aspx
    What I am thinking about doing is consolidating everything onto 4 servers.
    We have a large single site with a few remote sites. The remote sites have had their links upgraded to 1Gb and we have been removing our server infrastructure from these areas due to not having environmental/physical space/security in place.
    On our main site we have two separate buildings which each contain a SAN (Not linked to each other).
    Microsoft's guides show concepts of using a DFS Failover Cluster in a main site with replication to a single server at a remote site.
    I could follow this model within just one site, but since I have two equally sized SANs on the main site, the issue is that I would like to spread the load across both SANs. Therefore, if I have anything running on a single server as primary, I am creating a SPOF.
    What I am thinking of doing is create 2 x 2 node DFS Failover Clusters (One in each building connected to that building's SAN).
    This means:
    I can load balance the primary DFS shares at the cluster level (SANs)
    Rapid failover can occur if needed between individual nodes within a cluster
    The single point of failure (Storage) in just using a single DFS cluster is eliminated
    However I am not sure if this is supported or recommended?

    Not sure if I'm misunderstanding things here, but the way I see it you have three levels at which you need to provide redundancy:
    DFS namespace servers - I would suggest you host these on the domain controllers you already have; the actual file load will be minimal and they will provide adequate failover. It also has the benefit of keeping your namespaces all under the same parent domain: \\domainname\share\foldertarget
    DFS folder targets - These point to shares on the actual file servers that do the heavy lifting; one copy of each share per SAN, with DFS Replication to keep them in sync.
    The actual file servers - A standard cluster running a clustered file service role.
    This way you have each SAN serving files via a 2-node file cluster (active/passive).
    Each file cluster replicates across to the other via DFS Replication (active/active).
    Let your domain controllers handle the actual exposing of DFS shares (active/active).
    The only issue will be keeping on top of DFS replication between your two file server clusters, to make sure users in the same building do not see different files depending on which file server cluster they are currently using.
    Hope this helps.

  • OS Migration from Windows 2003 to 2008 Server

    Hello Experts,
    I wanted to know if we can migrate our OS from Windows 2003 to Windows 2008 without a system copy/database refresh.
    Here is what we have currently installed in Windows 2003 Server
    OS: Windows 2003 Server
    SID: ECP
    Database Oracle: 10g
    SAP: ECC 6.0
    Cluster: MSCS
    SAN: NetApp storage, with snapshots via SnapDrive technology
    Local Drive:
    C - Windows 2003 Related files
    D - SAP and Oracle Executable files
    Shared Drive:
    E drives - where the SAP data files reside (sapdata1...n)
    F drives - Oracle logs (mirrorlog, origlog, sapbackup, oraarch, etc..)
    S drives - message server related files, including the Kernel/Global/Profile directories
    Here is our plan to migrate to windows 2008 server.
    While ECC production keeps running, we decided to build SAP on two separate servers with the same specs, on Windows 2008 with MSCS, then install a fresh NetWeaver/ECC system with the SAME SID and build the initial Microsoft clustering for Oracle and SAP just as we did on the Windows 2003 server - making it look like the Windows 2003 ECP system from the SAP and Oracle perspective, except for the OS.
    Here is the plan.
    Since it is the
    - same System ID (SID)
    - E, F drives are shared
    With NetApp SnapDrive technology we have the option to disconnect the E and F drives and connect them to the new Windows 2008 R1 server.
    We will also copy all of the global directory files from the old server (Windows 2003) to Windows 2008 (job logs, spool logs, etc.)
    Question 1:
    Now my question would be: is this a good practice for migrating our OS from Windows 2003 to 2008, given the technology we have at hand?
    Question 2:
    When we install a fresh SAP system using SAPinst, does SAPinst place any Windows-related settings/config automatically into the database?
    Question 3:
    Am I missing any other key facts?
    Any advice would be much appreciated.
    Thanks In Advance
    Kumar

    You can do that approach but keep in mind that
    - you need to install Oracle 10.2.0.4 directly using a different DVD (Note 1303262 - Oracle on Windows Server 2008). If the source system is still on 10.2.0.2 the procedure is not going to work
    - after you switch the disks from the old server to the new one, all permissions and Windows ACLs are effectively gone. Windows stores ACLs as SIDs, not as names, so you will need to replace the old entries with new ones on the disks you present to the new server. This is also true for other files (job logs etc.)
    - the SAP kernel uses OPS$ to connect to the database; I'd delete and recreate that user after the database is mounted and open.
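    To illustrate the OPS$ point: SAP ships a script (oradbusr.sql) that recreates the OPS$ user, but a minimal hand-rolled sketch looks roughly like the following, assuming the standard OPS$ mechanism, an ABAP schema called SAPSR3 and a hypothetical DOMAIN\ecpadm account - adjust all names to your installation and check the SAP notes on OPS$ users before running anything:
    DROP USER "OPS$DOMAIN\ECPADM" CASCADE;
    CREATE USER "OPS$DOMAIN\ECPADM" IDENTIFIED EXTERNALLY;
    GRANT CONNECT, RESOURCE TO "OPS$DOMAIN\ECPADM";
    -- the SAPUSER table holds the credentials the SAP kernel uses to connect as the schema owner
    CREATE TABLE "OPS$DOMAIN\ECPADM".SAPUSER (USERID VARCHAR2(256), PASSWD VARCHAR2(256));
    INSERT INTO "OPS$DOMAIN\ECPADM".SAPUSER VALUES ('SAPSR3', '<password>');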
    Markus

  • Migrate from server core 2008 r2 hyper-v with failover cluster volumes to server core 2012 r2 hyper-v with failover cluster volumes on new san hardware

    We are getting ready to migrate from server core 2008 r2 hyper-v with failover cluster volumes on an iscsi san to server core 2012 r2 hyper-v with failover cluster volumes on a new iscsi san.
    I've been searching for a "best practices" article for this but have been coming up short.  The information I have found either pertains to migrating from 2008 r2 to 2012 r2 with failover cluster volumes on the same hardware, or migrating
    to different hardware without failover cluster volumes.
    If there is anyone out there that has completed a similar migration, it would be great to hear any feedback you may have on your experiences.
    Currently, my approach is as follows:
    1. Configure new hyper-v with failover cluster volumes on new SAN with new 2012 r2 hostnodes and 2012 r2 management server
    2. Turn off the virtual machines on old 2008 r2 hyper-v hostnodes
    3. Stop the VMMS service on the 2008 r2 hostnodes
    4. copy the virtual machine files and folders over to the new failover cluster volumes
    5. Import vm's into server 2012 r2 hyper-v.
    Any feedback on the plan I have in mind would be helpful.
    Thank you,
    Rob

    Hi Rob,
    Yes, I agree that a "file copy" can achieve the migration.
    You can also try the Copy Cluster Roles wizard:
    https://technet.microsoft.com/en-us/library/dn530779.aspx
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • Sun Cluster 3.3 Mirror 2 SAN storages (storagetek) with SVM

    Hello all,
    I would like to know if you have any best practice for mirroring two storage systems with SVM on Sun Cluster without corrupting/losing data on the storage.
    I have currently enabled multipathing on the FC (stmsboot); after that I configured the cluster and created the SVM mirror with the DID devices.
    I have some questions where I want to know if there is going to be any problem.
    a) 4 quorum votes. As I have 2 nodes and 2 storage arrays (and I need to know which one is up), I have 4 votes, so the cluster needs 3 votes in order to start. Is there any solution to this, like cldevice combine?
    b) The mirror is at the SVM level, so when a failover happens the metasets move to the other node. Is there any chance that the mirror starts from the second SAN instead of the first and causes some kind of corruption? Is there some way to better protect the storage?
    c) The StorageTek has an option for snapshots; is there a good way of using this feature or not?
    d) Is there any problem with failing over global filesystems (the global option in mount)? The only thing that may write to this filesystem is the application itself, which belongs to the same resource group, so when it needs to fail over it will stop all the processes accessing this filesystem and it should be OK to unmount it.
    Best regards to all of you,
    PiT

    Thank you very much for your answers Tim, they are really very helpful. I only have some comments on them so they are fully answered.
    a) All answered for me. I think I will add the vote from only one storage array, and if that array goes down I will tell the customer to check the quorum status and add the second array as the quorum device. The quorum server is not a bad idea, but if the network is down for some reason I think bad things will happen, so I don't want to rely on that.
    b) I think you are clear enough.
    c) I think you are clear enough! (Just as I thought this would happen with the snapshots...)
    d) Finally, if this filesystem is on a metadevice that has been started from the first node, and the second node is proxying to the first node for the metaset disks, is there any chance of the filesystem/metaset group being locked so that it cannot be taken?
    Thanks in advance,
    Pit
    (I will also look at the document you mention - a lot of thanks.)

  • SAN boot disk in a cluster node?

    As far as I remember, SC requires a local boot disk.
    Is it possible to use a SAN boot disk? Can this configuration work while simply not being supported by Sun, or is there a technical limitation?
    I am thinking of a low-price configuration with two diskless X2100s + HBAs and SAN storage. Possible?
    thanks
    -- leon

    As far as I remember, SC requires a local boot disk.
    Is it possible to use a SAN boot disk? Can this configuration work while simply not being supported by Sun, or is there a technical limitation?
    The rule for boot disks goes like this:
    Any local storage device supported by the base platform as a boot device can be used as a boot device for a server in the cluster as well. A shared storage device cannot be used as a boot device for a server in a cluster. It is recommended to mirror the root disk. Multipathed boot is supported with Sun Cluster when the drivers associated with SAN 4.3 (or later) are used in conjunction with an appropriate storage device (i.e. the local disks on an SF V880 or a SAN-connected fibre storage device).
    So your boot disk can be in a SAN as long as the base platform supports it as a boot disk and it is not configured as a shared LUN in the SAN (i.e. it is not visible to nodes other than the one that uses it as its boot disk).
    I am thinking of a low-price configuration with two diskless X2100s + HBAs and SAN storage. Possible?
    You need to check the support matrix of the storage device you plan to use to see whether it is supported as a boot device for the X2100 + HBA. If the answer is yes, you just have to make sure that the LUN is only visible to that X2100.
    Greets
    Thorsten

  • Windows 2008 R2 Cluster - migrating data to new SAN storage - Detailed Steps?

    We have a project where some network storage is falling out of warranty and we need to move the data to new storage. I have found separate guides for moving the quorum and for replacing individual drives, but I can't find an all-inclusive, detailed step-by-step guide for this process, and I know it has to happen all the time.
    What I am looking for is detailed instructions on moving all the data from the current SAN storage to the new SAN storage, start to finish. It is all server side; there is a separate storage team that will present the new storage. I'll then have to install the multipathing driver, and I am looking for a guide that picks up at that point.
    The real concern here is that this machine controls a critical production process and we don't have a proper setup with storage and everything to test with, so it's going to be a little nerve-racking.
    Thanks in advance for any info.

    I would ask Hitachi. As I said, the SAN vendors often have tools to assist with this sort of thing. After all, in order to make the sale they often have to show how to move the data with the least amount of impact on the environment. In fact, I would have thought that the inclusion of this type of information would have been a requirement for the purchase of the SAN in the first place.
    Within the Microsoft environment, you are going to be dealing with generic tools, and using generic tools the steps will be similar to what I said:
    1. Attach the new storage array to the cluster and configure the storage.
    2. Use robocopy or Windows Server Backup for file transfers. Use database backup/recovery for databases (see the sketch after these steps).
    3. Start with applications that can take multiple interruptions as you learn what works for your environment.
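    For the database piece in step 2, here is a minimal sketch of a backup/restore move - assuming, purely for illustration, a SQL Server database named ProdDB and new cluster disks mounted as N: and O: (you never state what the cluster hosts, so the database name, logical file names and paths are all placeholders):
    BACKUP DATABASE ProdDB TO DISK = 'N:\Backup\ProdDB.bak' WITH INIT;
    RESTORE DATABASE ProdDB FROM DISK = 'N:\Backup\ProdDB.bak'
        WITH REPLACE,
        MOVE 'ProdDB_Data' TO 'N:\SQLData\ProdDB.mdf',
        MOVE 'ProdDB_Log' TO 'O:\SQLLogs\ProdDB_log.ldf';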
    Every environment is going to be different.  To get into much more detail would require an analysis of what you are using the cluster for (which you never state in either of your posts), what sort of outages you can operate with, what sort of recovery
    plan you will put in place, etc.  You have stated that your production environment is going to be your lab because you do not have a non-production environment in which to test it.  That makes it even trickier to try to offer any sort of detailed
    steps for unknown applications/sizing/timeframes/etc.
    Lastly, the absolute best practice would be to build a new Windows Server 2012 R2 cluster and migrate to a new cluster.  Windows Server 2008 R2 is already out of mainstream support. That means that you have only five more years of support on your
    current platform, at which point in time you will have to perform another massive upgrade.  Better to perform a single upgrade that gives you a lot more length of support.
    . : | : . : | : . tim

  • Accessing Windows 2008 File Service Cluster with Windows 2012R2 File Services Cluster using same SAN

    I am in the process of converting my existing Windows 2008 host servers (a 2-host cluster) to a Windows 2012 R2 (2-host) cluster.
    All 4 servers will be accessing the same SAN via iSCSI.
    Currently the Windows 2008 host servers are running both file server roles and Hyper-V in a clustered environment.
    How can I transfer the file server roles from 2008 to 2012 R2 while still keeping access to the same SAN and LUNs?
    Thanks
    --Steven B.

    You would have a short outage, but all it takes is to set up the 2012 R2 cluster with the appropriate roles, then remove the LUNs from the 2008 cluster and present them to the 2012 R2 cluster. You could initially build the 2012 R2 cluster with just its witness disk sitting on the iSCSI SAN, then move the other LUNs over at your leisure.
    .:|:.:|:. tim

  • Windows 2003 File Share 4-Node Cluster: Do cluster resources need to be brought offline prior to removing/unmapping any LUNs from the SAN end?

    Hello All,
    Recently, on a 4-node Windows 2003 file share cluster, we encountered a problem where trying to remove a few shares (that were no longer in use) directly from the SAN end crashed the entire cluster (i.e., other shares also lost their SAN connectivity). I assumed the cluster resources needed to be brought offline prior to removing them from the SAN, but I've been advised that these shares were not the root - they were 'mount points' created within a share - and hence there was no need to bring any cluster resources offline.
    Please can someone comment on the above and provide detailed steps on how to go about reclaiming SAN space from specific shares on a W2003 cluster?
    p.s., let me know if you need any additional information.
    Thanks in advance.

    Hi Alex,
    The problem started when SAN support reclaimed a few storage LUNs by unmapping them from our clustered file servers. When they reclaimed the unused LUNs, other SAN drives also disappeared, causing the unavailability of the file shares.
    Internet access is not enabled on these servers. The servers in question are running 64-bit Windows Server 2003 SP2. This is a four-node file share cluster. When the unused LUNs were pulled, the entire cluster lost its SAN connectivity. The Windows cluster service would not start on any of the cluster nodes. To resolve the problem, all four cluster nodes were rebooted, after which the cluster service started on all nodes and the resources came online.
    Some of the events at the time of problem occurrence were,
    Event ID     : 57                                                      
    Raw Event ID : 57                                                      
    Record Nr.   : 25424072                                                
    Category     : None                                                    
    Source       : Ftdisk                                                  
    Type         : Warning                                                 
    Generated    : 19.07.2014 10:49:46                                     
    Written      : 19.07.2014 10:49:46                                     
    Machine      : ********                                             
    Message      : The system failed to flush data to the transaction log.   
    Corruption may occur.                                                  
    Event ID     : 1209   
    Raw Event ID : 1209                                                    
    Record Nr.   : 25424002                                                
    Category     : None                                                    
    Source       : ClusDisk                                                
    Type         : Error                                                   
    Generated    : 19.07.2014 10:49:10                                     
    Written      : 19.07.2014 10:49:10                                     
    Machine      : ***********                                             
    Message      : Cluster service is requesting a bus reset for device      
    \Device\ClusDisk0.                                                     
    Event ID     : 15     
    Raw Event ID : 15                                                      
    Record Nr.   : 25412958                                                
    Category     : None                                                    
    Source       : Disk                                                    
    Type         : Error                                                   
    Generated    : 11.07.2014 10:54:53                                     
    Written      : 11.07.2014 10:54:53                                     
    Machine      : *************                                            
    Message      : The device, \Device\Harddisk46\DR48, is not ready for access yet.                                                            
    Let me know if you need any additional info, many thanks.
