VDI virtualization servers: a separate cluster or a part of a common cluster?

We've bought a new storage system, and I'm in the process of rebuilding our Hyper-V virtualization farm. I intend to create a highly available solution based on Hyper-V (Windows Server 2012 R2) clustering technology.
We have only four servers connected to the storage system. I need at least three of them to create a robust cluster for server virtual machines. In addition, I need to deploy a pilot VDI farm hosting at least 50 workstation VMs, which will need an entire physical server.
The question is: should I deploy all four servers as part of a single four-node Hyper-V cluster? Or should I deploy three servers in a three-node cluster for hosting server VMs and create a single-node cluster for VDI (assuming it will be extended with new servers in the future)? The first design is much more robust, but it requires that I give full control over the SC VMM server to the person who manages the VDI installation, which I don't want to do for security reasons.
Any ideas?
Evgeniy Lotosh
MCSE: Server Infrastructure, MCSE: Messaging

You can perfectly well use a single Hyper-V cluster; just make sure you have properly configured storage, as IOPS is what makes or breaks a VDI configuration. Modern CPUs have enough horsepower to handle virtually any workload, and RAM is cheap these days.
Create a four-node symmetric cluster and make sure the hosts are under-provisioned, so that VDI VMs can move to the other nodes without stalling on CPU.
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.
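The under-provisioning advice above amounts to N-1 sizing: the surviving hosts must be able to absorb a failed host's VMs without starving them of CPU or RAM. A minimal sketch of that check; all host and VM figures below are illustrative assumptions, not numbers from this thread:

```python
def fits_after_failure(nodes: int, ram_per_node_gb: int, required_gb: int) -> bool:
    """True if all VMs still fit in RAM when one node is down (N-1 sizing)."""
    return required_gb <= (nodes - 1) * ram_per_node_gb

# Hypothetical: 4 hosts with 256 GB each; 50 VDI VMs at 4 GB plus 400 GB of server VMs.
print(fits_after_failure(4, 256, 50 * 4 + 400))  # 600 GB vs 768 GB surviving -> True
```

The same arithmetic applies per core for CPU, though for VDI density RAM is usually the harder limit.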

Similar Messages

  • Help me spend 20k on VDI/Virtualization/all-in-one Cluster. What would you do?

    I want to buy hardware for VDI, virtualization, and general cloud services (the hardware will be colocated). I need some help/advice on my current setup. I already have a (semi-successful) IT company: a customer base, advertising, cash flow, connections, etc.

    For starters I will be using one colo location; the DCs in my country are among the most reliable in the world, and so is the infrastructure. My DC is one of the bigger ones, and I will either go with a private rack or with two separate ones shared with my clients (so I will be the only one placing hardware). There are no natural disasters or things like that, and DCs going down is extremely rare, and only happens with startups/smaller shops. The $20,000 is purely for hardware. I have already decided to go with the Windows Server/Hyper-V platform, because of the new technologies that make "decent" low-budget setups possible. I cannot afford a SAN, so I will need a different storage solution that utilizes the Server 2012 R2 capabilities.

    I might have been a little unrealistic regarding my requirements, and they are more goals anyway. Let me try again: my main goal is virtualization and delivering VDI (zero clients). Second to that, I would like to be able to deliver some cloud services (backup, hosting, sharing, remote desktop, etc.) to eventually offer an all-in-one package. I understand that with my budget I will not immediately be able to offer all these things at maximum performance and reliability. However, I should be able to lay a solid foundation for future investments, and probably even be able to offer VDI at a small scale?

    Some things to take into account before sharing your advice: these services will not be offered publicly. I will start with my current customer base and work up from there. Most of my customers have minimal storage requirements, so I'm okay with making concessions on the amount of storage I can offer, as long as I can easily add more storage in the future. Most clients have very small bandwidth (8-60 Mb download); that's why I want to build my architecture around the zero-client model. That way I can at least offer all of my clients my services. The downside of this is of course that I am very dependent on latency, which is why I think local SSDs (combined with HDDs) could be my answer. I am aware of the limited SSD lifespan, but I will be choosing only drives that have proven to sustain some heavy-duty punishment (check THIS out: hardcore SSD endurance test, 600 TB so far, and not one has failed, not even the TLC-based ones). If each of the consumer-grade SSDs in that test can sustain over half a petabyte of non-stop writes without breaking a sweat (except for the TLC maybe), and probably make it to a petabyte without dying, they will serve my cause just fine.

    Now, I've made some decisions so far. The E3 does not offer enough RAM for future expansion, so I will be going for the E5 platform. Since dual-processor motherboards are not that much more expensive than single-socket ones, I will probably go with dual socket: very flexible in terms of expansion, and not that much more expensive compared to single socket.

    Now the big question for me at this point is: can I use the DP E5 nodes for storage as well, by adding local storage to every node, and use clustering for redundancy and sharing? What are the cons versus dedicated storage servers? Would it be better to use separate nodes just for storage? If so, why? That would cost me a lot more money, and the all-in-one option would be a lot more efficient. If I'm going the all-in-one route, I will probably go with 10 GbE as well, and put each node in at least a 2U form factor for more drives, expansion possibilities, and ease of management. I am aware that I will have a smaller number of nodes in total, but because each node serves so many purposes I still think this would be the best route. Especially if you consider that eventually I will need to step up to 10 GbE anyway, and in the future I can always buy more nodes for extra redundancy. I can start with nodes that are only half-filled and work up from there, and if one of the nodes does go down, I could temporarily move the CPU, RAM, etc. to the remaining nodes to give them some more power until I fix the problem with the failed node.

    So, that is my plan as of now; feel free to criticize me on anything you would do differently. If something in my setup is impossible or impractical, please say so and tell me what would be a better alternative. Many thanks for the responses so far. Regards, Cloudbuilder

    Hi,
    It is hard for me to answer this.
    I am not familiar with storage, but I think shared storage is needed if you want to build a high-availability cluster.
    BGDS
    Paul

  • Hyper-V cluster Backup causes virtual machine reboots for common Cluster Shared Volumes members.

    I am having a problem where my VMs are rebooting while other VMs that share the same CSV are being backed up. I have provided all the information that I have gathered to this point below. If I have missed anything, please let me know.
    My HyperV Cluster configuration:
    5 Node Cluster running 2008R2 Core DataCenter w/SP1. All updates as released by WSUS that will install on a Core installation
    Each Node has 8 NICs configured as follows:
     NIC1 - Management/Campus access (26.x VLAN)
     NIC2 - iSCSI dedicated (22.x VLAN)
     NIC3 - Live Migration (28.x VLAN)
     NIC4 - Heartbeat (20.x VLAN)
     NIC5 - VSwitch (26.x VLAN)
     NIC6 - VSwitch (18.x VLAN)
     NIC7 - VSwitch (27.x VLAN)
     NIC8 - VSwitch (22.x VLAN)
    The following additional hotfixes were installed per MS guidance (either during the original build or when troubleshooting a stability issue in Jan 2013):
     KB2531907 - Was installed during original building of cluster
     KB2705759 - Installed during troubleshooting in early Jan2013
     KB2684681 - Installed during troubleshooting in early Jan2013
     KB2685891 - Installed during troubleshooting in early Jan2013
     KB2639032 - Installed during troubleshooting in early Jan2013
    Original cluster build was two hosts with quorum drive. Initial two hosts were HST1 and HST5
    Next host added was HST3, then HST6 and finally HST2.
    NOTE: HST4 hardware was used in different project and HST6 will eventually become HST4
    Validation of cluster comes with warning for following things:
     Updates inconsistent across hosts
      I have tried to manually install "missing" updates and they were not applicable
      Most likely cause is different build times for each machine in cluster
       HST1 and HST5 are both the same level because they were built at same time
       HST3 was not rebuilt from scratch due to time constraints and it actually goes back to Pre-SP1 and has a larger list of updates that others are lacking and hence the inconsistency
       HST6 was built from scratch but has more updates missing than 1 or 5 (10 missing instead of 7)
       HST2 was most recently built and it has the most missing updates (15)
     Storage - List Potential Cluster Disks
      It says there are Persistent Reservations on all 14 of my CSV volumes and thinks they are from another cluster.
      They are removed from the validation set for this reason. These iSCSI volumes/disks were all created new for
      this cluster and have never been a part of any other cluster.
     When I run the Cluster Validation wizard, I get a slew of Event ID 5120 from FailoverClustering. Wording of error:
      Cluster Shared Volume 'Volume12' ('Cluster Disk 13') is no longer available on this node because of
      'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the
      volume is reestablished.
     Under Storage and Cluster Shared Volumes in Failover Cluster Manager, all disks show online and there is no negative effect of the errors.
    Cluster Shared Volumes
     We have 14 CSVs that are all iSCSI attached to all 5 hosts. They are housed on an HP P4500G2 (LeftHand) SAN.
     I have limited the number of VMs to no more than 7 per CSV as per best practices documentation from HP/Lefthand
     VMs in each CSV are spread out amongst all 5 hosts (as you would expect)
    Backup software we use is BackupChain from BackupChain.com.
    Problem we are having:
     When a backup kicks off for a VM, all VMs on the same CSV reboot without warning. This normally happens within seconds of the backup starting.
    What I have done to troubleshoot this:
     We have tried rebalancing our backups
      Originally, I had backup jobs scheduled to kick off on Friday or Saturday evening after 9pm
      2 or 3 hosts would be backing up VMs (Serially; one VM per host at a time) each night.
      I changed my backup scheduled so that of my 90 VMs, only one per CSV is backing up at the same time
       I mapped out my Hosts and CSVs and scheduled my backups to run on week nights where each night, there
       is only one VM backed up per CSV. All VMs can be backed up over 5 nights (there are some VMs that don't
       get backed up). I also staggered the start times for each Host so that only one Host would be starting
       in the same timeframe. There was some overlap for Hosts that had backups that ran longer than 1 hour.
      Testing this new schedule did not fix my problem. It only made it more clear. As each backup timeframe
      started, whichever CSV the first VM to start was on would have all of their VMs reboot and come back up.
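As a sketch, the one-VM-per-CSV-per-night layout described above can be generated mechanically. VM and CSV names below are invented for illustration:

```python
from collections import defaultdict

def schedule_backups(vm_to_csv, nights=5):
    """Assign each VM a night so that, as long as no CSV holds more VMs
    than there are nights, no two VMs sharing a CSV back up the same night."""
    seen_per_csv = defaultdict(int)   # how many VMs of this CSV are placed so far
    schedule = defaultdict(list)      # night -> list of VMs
    for vm, csv in sorted(vm_to_csv.items()):
        night = seen_per_csv[csv] % nights
        seen_per_csv[csv] += 1
        schedule[night].append(vm)
    return schedule

plan = schedule_backups({"vm1": "CSV1", "vm2": "CSV1", "vm3": "CSV2", "vm4": "CSV1"})
print(dict(plan))  # {0: ['vm1', 'vm3'], 1: ['vm2'], 2: ['vm4']}
```

With up to 7 VMs per CSV and only 5 nights, some same-CSV overlap is unavoidable; the modulo wrap-around makes that overlap explicit rather than accidental.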
     I then thought maybe I was still overloading the network, so I decided to disable all of the scheduled backups
     and run one manually. Kicking off a backup of a single VM will, in most cases, cause the reboot of common
     CSV members.
     Ok, maybe there is something wrong with my backup software.
      Downloaded a Demo of Veeam and installed it onto my cluster.
      Did a test backup of one VM and I had no problems.
      Did a test backup of a second VM and I had the same problem. All VMs on same CSV rebooted
     Ok, it is not my backup software. Apparently it is VSS. I have looked through various websites. The best troubleshooting
     site I have found for VSS in one place is on BackupChain.com (http://backupchain.com/hyper-v-backup/Troubleshooting.html)
     I have tested almost every process on their list and I will lay out the results below:
      1. I have rebooted HST6 and problems still persist
      2. When I run VSSADMIN delete shadows /all, I have no shadows to delete on any of my 5 nodes
       When I run VSSADMIN list writers, I have no error messages on any writers on any node...
      3. When I check the listed registry key, I only have the built-in MS VSS writer listed (I am using software VSS)
      4. When I run VSSADMIN Resize ShadowStorge command, there is no shadow storage on any node
      5. I have completed the registration and service cycling on HST6 as laid out here and most of the stuff "errors"
       Only a few of the DLL's actually register.
      6. HyperV Integration Services were reconciled when I worked with MS in early January and I have no indication of
       further issue here.
      7. I did not complete the step to delete the Subscriptions because, again, I have no error messages when I list writers
      8. I removed the Veeam software that I had installed to test (it hadn't added any VSS Writer anyway though)
      9. I can't realistically uninstall my HyperV and test VSS
      10. Already have latest SPs and Updates
      11. This is part of step 5, so I already did this. This seems to be a rehash of various other strategies
     I have used the VSS Troubleshooter that is part of BackupChain (Ctrl-T) and I get the following error:
      ERROR: Selected writer 'Microsoft Hyper-V VSS Writer' is in failed state!
      - Status: 8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
      - Writer Failure code: 0x800423f0 (<Unknown error code>)
      - Writer ID: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      - Instance ID: {d55b6934-1c8d-46ab-a43f-4f997f18dc71}
      VSS snapshot creation failed with result: 8000FFFF
    VSS errors in the event viewer. Below are representative errors I have received from various nodes of my cluster:
    I have various of the below spread out over all hosts except for HST6
    Source: VolSnap, Event ID 10, The shadow copy of volume took too long to install
    Source: VolSnap, Event ID 16, The shadow copies of volume x were aborted because volume y, which contains shadow copy storage for this shadow copy, was force dismounted.
    Source: VolSnap, Event ID 27, The shadow copies of volume x were aborted during detection because a critical control file could not be opened.
    I only have one instance of each of these and both of the below are from HST3
    Source: VSS, Event ID 12293, Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error].
    Source: VSS, Event ID 8193, Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult.  hr = 0x80070057, The parameter is incorrect.
    So, basically, everything I have tried has resulted in no success towards solving this problem.
    I would appreciate any assistance that can be provided.
    Thanks,
    Charles J. Palmer
    Wright Flood

    Tim,
    Thanks for the reply. I ran the first two commands and got this:
    Name                               Role  Metric
    Cluster Network 1                     3   10000
    Cluster Network 2 - HeartBeat         1    1300
    Cluster Network 3 - iSCSI             0   10100
    Cluster Network 4 - LiveMigration     1    1200
    When you look at the properties of each network, this is how I have it configured:
    Cluster Network 1 - Allow cluster network communications on this network and Allow clients to connect through this network (26.x subnet)
    Cluster Network 2 - Allow cluster network communications on this network. New network added while working with Microsoft support last month. (28.x subnet)
    Cluster Network 3 - Do not allow cluster network communications on this network. (22.x subnet)
    Cluster Network 4 - Allow cluster network communications on this network. Existing but not configured to be used by VMs for Live Migration until MS corrected. (20.x subnet)
    Should I modify my metrics further, or are the current values sufficient?
    I worked with an MS support rep because my cluster (once I added the 5th host) stopped being able to live migrate VMs and I had VMs host jumping on startup. It was a mess for a couple of days. They had me add the Heartbeat network as part of the solution
    to my problem. There doesn't seem to be anywhere to configure a network specifically for CSV so I would assume it would use (based on my metrics above) Cluster Network 4 and then Cluster Network 2 for CSV communications and would fail back to the Cluster Network
    1 if both 2 and 4 were down/inaccessible.
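The failback reasoning above follows directly from the metrics: cluster/CSV traffic favours the enabled network with the lowest metric. A quick sketch using the values from the listing (role 0 meaning the network is excluded from cluster use):

```python
# (name, role, metric) copied from the cluster network listing above
networks = [
    ("Cluster Network 1",                 3, 10000),
    ("Cluster Network 2 - HeartBeat",     1,  1300),
    ("Cluster Network 3 - iSCSI",         0, 10100),
    ("Cluster Network 4 - LiveMigration", 1,  1200),
]

# Networks open to cluster traffic, lowest metric (most preferred) first
csv_order = sorted((n for n in networks if n[1] != 0), key=lambda n: n[2])
print([name for name, _, _ in csv_order])
# ['Cluster Network 4 - LiveMigration', 'Cluster Network 2 - HeartBeat', 'Cluster Network 1']
```

This matches the poster's conclusion: Network 4 first, then Network 2, falling back to Network 1.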
    As to the iSCSI getting a second NIC, I would love to, but management wants separation of our VMs by subnet and role, which is why I need the 4 VSwitch NICs. I would have to look at adding an additional quad-port NIC to my servers, and I would have to use half-height cards in 2 of my 5 servers for that to work.
    But, on that note, it doesn't appear to actually be a bandwidth issue. I can run a backup for a single VM and get nothing on the network card (It caused the reboots before any real data has even started to pass apparently) and still the problem occurs.
    As to BackupChain, I have been working with the vendor and they are telling me the issue is with VSS. They also say they support CSVs (see http://backupchain.com/Hyper-V-Backup-Software.html). Their tech support has been very helpful, but unfortunately nothing has fixed the problem.
    What is annoying is that every backup doesn't cause a problem. I have a daily backup of one of our machines that runs fine without initiating any additional reboots. But most every other backup job will trigger the VMs on the common CSV to reboot.
    I understood about the updates, but I had to "prove" it to the MS tech I was on the phone with, hence I brought it up. I understand on the storage as well. Why give a warning for something that is working, though? I think it is just a poor indicator, and the report doesn't explain that.
    At a loss for what else I can do,
    Charles J. Palmer

  • Cluster Network Randomly Failing on Hyper-V Cluster

    Please let me know if there is a more appropriate forum. I am having a really strange issue that is seemingly random. I have a 3 host cluster that are all identical hardware and running Hyper-V Server 2012 R2. The networking is as follows and each network
    is a different VLAN/Subnet:
    3 Cluster networks for virtual machines
    1 Cluster network for cluster traffic/management
    1 Heartbeat network
    2 iSCSI networks for storage
    All of the networks are perfectly fine except for one which seems to fail on a random node at a random time during the day (so far, a maximum of once per day).
    If I start to live migrate virtual machines that are on the failed network, the cluster network comes back up. The cluster networks are teamed using SCVMM and they are switch independent and running the Dynamic teaming algorithm. We have tried changing the
    network switches to see if it was faulty network hardware and things ran fine for one day and then just happened again today so we've ruled that out. The only error message I get is 1127 which is the error stating that the cluster network has gone into a failed
    state, which doesn't help much. I've run the cluster validation tool for networking several times and it always passes 100%. What I am worried about is hardware incompatibilities, as I am using Dell servers (PowerEdge R720) that have Broadcom NICs in them.
    We have 12 Ethernet ports in each server and they are all identical hardware: four integrated Broadcom ports, another four from a Broadcom quad add-on NIC, and another four from an Intel quad add-on NIC. All are server-grade NICs. The
    only problem I've had in the past is with VMQ, which we've had to disable as a workaround, but that has always stabilized our virtual networks. In any case, all of the cluster networks for virtual machines are set up identically and only this particular one randomly
    fails on any one of the three hosts (it has happened at least once on each node now).
    I am wondering if anyone has had this experience before. I have read that there are some nasty compatibility issues between Broadcom and Hyper-V  but I am wondering if someone could give me some ideas to find out how to narrow this down since the event
    logs don't seem to be speaking in obvious terms to me.
    Please let me know if you have any suggestions on how to narrow down what's causing this or if there is more information that I could provide. In the meantime, I'm going to try and take note of which virtual machines are running on the host that has the
    network fail just in case there's some correlation there but that could take a while to accrue any useful data and our users aren't too happy with the instability...
    Thank you in advance for your time and sorry for the lengthy post!

    Since I made the change last Friday evening, 4/10, I haven't experienced the issue. I won't be completely convinced that this resolved it until I monitor for at least one more week since it didn't actually present itself for the first time until I was already
    one week into live deployment. Also, the link below is much more eloquent than how I put it and describes my issue exactly. Coupled with the KB article someone posted in that article's comments section (the same one I posted earlier here), it is what led me to check the VMQ status through PowerShell, which is much better than going through the registry to do it (I'm running Hyper-V Server 2012 R2, which is like Core, so I don't have the GUI options shown in the article).
    http://alexappleton.net/post/77116755157/hyper-v-virtual-machines-losing-network
    I could try updating the driver but there is mention in the comments of this post that driver updates have yet to resolve this issue so we may still be waiting on Broadcom for a fix. Please confirm otherwise if anyone has any information.

  • Programmatically create array from common cluster items inside array of clusters

    I have seen many questions and responses on dealing with arrays of clusters, but none that discuss quite what I am looking for. I am trying to programmatically create an array from common cluster items inside an array of clusters. I have a working solution but am looking for a cleaner approach.

    I have an array of clusters representing channels of data. Each cluster contains a mixture of control data types, i.e., names, types, range, values, units, etc. The entire cluster is a typedef made up of other typedefs (such as the type, range, and units) and native controls like numeric and boolean. One array is a "block" or module; one cluster is a channel of data.

    I wrote a small VI to extract all the data with the same units and "pipe" them into another array so that I can process all the data from all the channels with the same units together. It consists of a loop to iterate through the array, in which there is an unbundle by name and a case structure with a case for each unit. Within a specific case, there is a build array for that unit, and all the other non-relevant shift registers pass through. As you can see from the attached snapshots, the effort to add an additional unit grows, as each non-relevant case must be wired through. It is important to note that there is no default case.

    My question: is there a cleaner, more efficient, and more elegant way to do this?
    Thanks in advance!
    Attachments:
    NI_Chan units to array_1.png ‏35 KB
    NI_Chan units to array_2.png ‏50 KB
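Since LabVIEW diagrams can't be shown as text here, a rough Python stand-in for the pattern being asked about: keying an accumulator on the unit string itself removes the need for a per-unit case (and for wiring non-relevant cases through) entirely. Field names and values are invented for illustration:

```python
from collections import defaultdict

# Each dict stands in for one channel cluster from the array
channels = [
    {"name": "ch0", "units": "V", "value": 1.2},
    {"name": "ch1", "units": "A", "value": 0.3},
    {"name": "ch2", "units": "V", "value": 4.8},
]

# One pass, no case structure: an unseen unit gets a new bucket automatically
by_units = defaultdict(list)
for ch in channels:
    by_units[ch["units"]].append(ch["value"])

print(dict(by_units))  # {'V': [1.2, 4.8], 'A': [0.3]}
```

In LabVIEW terms, something like a map (or variant attributes) keyed by the unit string plays the same role, which is what makes adding a new unit free.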

    nathand wrote:
    Your comments made me curious, so I put together a quick test. Maybe there's an error in the code (below as a snippet, and attached as a VI) or maybe it's been fixed in LabVIEW 2013, but I'm consistently getting faster times from the IPE (2-3 ms versus 5-6ms for unbundle/index). See if you get the same results. For fun I flipped the order of the test and got the same results (this is why the snippet and the VI execute the tests in opposite order).
    This seems like a poster child for using the IPES!  We can look at the index array + replace subset and recognize that it is in place, but the compiler is not so clever (yet!).  The bundle/unbundle is a well-known "magic pattern" so it should be roughly equivalent to the IPES, with a tiny penalty due to overhead.
    Replace only the array operation with an IPES and leave the bundle/unbundle alone and I wager the times will be roughly the same as using the nested IPES.  Maybe even a slight lean toward the magic pattern now if I recall correctly.
    If you instantly recognize all combinations which the compiler will optimize and not optimize, or you want to exhaustively benchmark all of your code then pick and choose between the two to avoid the slight overhead.  Otherwise I think the IPES looks better, at best works MUCH better, and at worst works ever-so-slightly worse.  And as a not-so-gentle reminder to all:  if you really care about performance at this level of detail: TURN OFF DEBUGGING!

  • How can we run multiple IOP servers as separate services in IOP 4.0.5?

    We have two different IOP servers running on the same Windows machine, and we want to install an NT service for both. When we install the service as Oracle Integrated Operational Planning for the first server and then try to do it for the second, it results in an error that the service can't be installed. Any idea how we can achieve this?

    The ISServer.properties file contains a variable 'Server.ApplicationName'. The value must be different for the two instances. Please check whether the value is the same in your case; try changing it and post a message if the problem persists.
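As a sketch, the fix described above would look like this in each instance's ISServer.properties (the application names below are made up for illustration):

```properties
# First server's ISServer.properties
Server.ApplicationName=IOPServer1

# Second server's ISServer.properties
Server.ApplicationName=IOPServer2
```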

  • Monitor Servers from separate SCOM instances

    Quick question regarding SCOM 2012. I have had to design a single physical SCOM 2012 R2 setup for a company, with SQL also installed locally. This was against my recommendations, but business and cost requirements have dictated it. We have 2 datacentres,
    DC1 and DC2. DC1 is the primary and DC2 is the DR site. They are looking for some kind of high availability; can anyone advise if the following is possible:
    2 single server SCOM instances, one at DC1 and the other at DC2 both with SQL installed locally
    Monitor the servers at DC1 and DC2 from both instances using multi-homed agents
    Would this be possible? I appreciate this would add administrative overhead, but I can't think of any other way to do this, due to only having a single server at each DC.

    For building a high-availability SCOM 2012 R2 solution, you should build two management servers: one (MS1) located in site DC1 and the other (MS2) in site DC2. SCOM agents in DC1 report to MS1 and SCOM agents in DC2 report to MS2.
    All agents should be configured to auto-failover to the other management server. For high availability of the DB, you may refer to the AlwaysOn Availability Groups solution.
    http://technet.microsoft.com/en-us/library/hh920812.aspx
    Roger

  • SAP Installation in Cluster for ECC Ehp4 SR1 in one cluster

    Dear Experts,
    Platform: Windows 2008, SQL Server 2008, ECC EhP4 SR1
    In our project, we are implementing ECC 6.0 EhP4 SR1. Prior to this release, we used to have the ABAP & Java (dual) stack in a single installation. But going through installation guides and posts on SDN, I understood that ABAP and Java now have to be installed separately with different SIDs.
    We have installed the development system with separate SIDs for the ABAP stack and the Java stack in the same box. Similarly, we did it for the quality system also.
    Now we need to install the production system, which will be in a cluster. Earlier we have done cluster installations where the ABAP & Java stacks would come in the same cluster. But now, since ABAP and Java have to be installed with separate SIDs, can I install both the ABAP and Java stacks in the same cluster with two nodes, or do we need a separate cluster for ABAP and Java? As of now, we only have two nodes available; if we need separate clusters for ABAP and Java, we need to get two additional nodes.
    So, let me know whether we can use the existing two nodes to create a cluster and install ABAP and Java with different SIDs on it. I have gone through the installation guide and understood that it can be done. Hence, let me know whether I can go with a single cluster; if so, what would be the advantages and disadvantages?
    Thanks & Regards,
    Sharath

    Hi Sharath Babu,
    The ASCS & SCS instances must be installed and configured to run on two MSCS nodes in one MSCS cluster. Of course, ESR is also on the same lines.
    In brief, for each SAP system you have to install one central instance and at least one dialog instance.
    For example, if your local instance on both nodes should have below listed items
    <drive>:\usr\sap\<SID>\SYS
    <drive>:\usr\sap\<SID>\ASCS20
    <drive>:\usr\sap\<SID>\SCS10
    And the above folders are junction points to the central instance that is built on the SAN drive with a similar set of folders:
    <drive>:\usr\sap\<SID>\SYS
    <drive>:\usr\sap\<SID>\ASCS20
    <drive>:\usr\sap\<SID>\SCS10
    Regards
    Sekhar
    Edited by: sekhar on Nov 27, 2009 10:25 AM

  • Win Server 2012 Failover Cluster - Error when adding disk onto a cluster (The error code was '0x1' ('Incorrect function.').)

    Hi Techies
    I'm currently running two Windows Server 2012 VMs and would like to test failover clustering for one of our FTP servers.
    I've added an additional partition on both servers, formatted and online, but cannot bring the disk online from the cluster manager.
    Assistance would be greatly appreciated
    Thank you
    Jabu

    You posted this in the Exchange Forum, your best bet for an answer would be to post this in the Windows Server Forum.
    https://social.technet.microsoft.com/Forums/en-US/home?forum=winserverClustering
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread

  • How can I get to a specific position in a cluster? (do a jump in a cluster)

    I have a cluster of two strings and one integer: String.name, String.value, Integer.time. I want to search for a specific String.name and then start working with the cluster at that position. Is this possible with a simple unbundle by name, searching in a while loop for the needed name, and then wiring the cluster to the output? I can't and don't want to change the structure of the cluster itself (too many dependencies)! Any suggestions are welcome.

    Herby Hu:
    Not too hard to do. You are on the right track. I would recommend passing your original cluster array into a while loop, indexing it there, and comparing the String.name value to the test value. If the test value equals String.name, exit the loop and use the while loop's i index as the output for indexing your cluster array.
    If the loop exits without a match, the test value was never found and you will need a way to deal with that case (assign an index of -1 and report back to the user that the value was not found, or something similar; just don't try to index the cluster array with a -1 index).
    You will need a shift register and a true/false selector. Initialize the shift register to -1 before starting the while loop; whenever test value = String.name is true, use the selector to write the i index into the shift register.
    The while loop is the most efficient way I can think of to do this, because the array is an array of clusters: you have to index the array before you can unbundle by name, so you can't really use Search 1D Array as far as I know. (Even if a cluster array could be searched for a particular cluster value, Search 1D Array would expect every component of the cluster element to match the search cluster, and all you care about is a name match.)
    Doug De Clue
    LabVIEW programmer
    [email protected]
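    The approach Doug describes (a linear scan with a -1 "not found" sentinel) translates directly to text-based languages; a minimal Python sketch, using hypothetical field names matching the post:

```python
def find_by_name(cluster_array, test_value):
    """Return the index of the first record whose 'name' field matches
    test_value, or -1 if no match is found (the sentinel value the
    shift register is initialized with in the LabVIEW version)."""
    for i, record in enumerate(cluster_array):
        if record["name"] == test_value:  # the unbundle-by-name step
            return i
    return -1

records = [
    {"name": "volts", "value": "3.3", "time": 100},
    {"name": "amps",  "value": "0.5", "time": 101},
]
print(find_by_name(records, "amps"))   # index of the matching record
print(find_by_name(records, "watts"))  # -1: not found
```

    As in the LabVIEW diagram, the caller must check for -1 before using the result to index the array.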

  • I have a modem and a separate router in a part of my house that gets poor reception from the modem. My iPad won't connect to the auxiliary router like my laptop does with ease. How do I get the iPad to connect?

    I have a modem, and a separate (hard-wired from the modem) router in a part of my house that gets poor reception from the modem. My iPad won't connect to the auxiliary router like my laptop does with ease. How do I get the iPad to connect? It does see the router in my Wi-Fi connections list, it just won't connect.

    This article will provide a troubleshooting framework.
    http://support.apple.com/kb/TS1398

  • Real Application Cluster on Sun Solaris 8 and Sun Cluster 3

    Hello,
    we want to install Oracle 9i Enterprise Edition with the Oracle Real Application Clusters option on two nodes. Every node (a 12-CPU SMP machine) will run Sun Solaris 8 and Sun Cluster 3.
    Does this configuration work with Oracle RAC? I found no information about it anywhere. Is there anything I have to pay special attention to during installation?
    Thank you for helping and best regards from Berlin/Germany
    Michael Wuttke

    Forms and Reports services work fine on Solaris 8.
    My problem is on the client side: I have to use Solaris 8 with Netscape for Forms clients, and I wasn't able to make it work with the Java plugin.
    Any solution?
    Mauro

  • 2008 R2 SP1 failover cluster: after connecting to a VM from the Failover Cluster Manager console, I get only the upper quarter of the screen in the console

    Hello :)
    I have to ask, because this issue has been bothering me for a long time.
    From time to time, when I connect to a virtual machine from the Failover Cluster Manager console, I get a Virtual Machine Connection screen (console) that is reduced to only the upper-left quarter of the full screen, with no scroll bar. If I live migrate the VM to another node (or restart the VM on the same node) and reconnect, the console screen is displayed correctly (the whole console is visible).
    It happens regardless of the OS version installed in the guest VM (2003 R2, 2008 R2, 2012, 2012 R2).
    I've checked inside the VM: I can see that integration services are installed and running, but when I click on the console window with the mouse I get the message:
    "Virtual Machine Connection
    Mouse not captured in Remote Desktop session
    The mouse is available in a Remote Desktop session when
    integration services are installed in the guest operating
    system...."
    From the SCVMM console I can see that the IC version on the guest VM is 6.1.7601.17514.
    I would like to know what is going on, whether there is any way to detect this situation, or better yet, how to prevent it.
    Thank you for any idea.
    Best regards
    Nenad

    Hi Nenad,
    As you mentioned, you have tried 2003 R2, 2008 R2, 2012 and 2012 R2 guest VMs, and none of them work properly. On 2008 R2 Hyper-V, Windows Server 2003 guests earlier than R2 with Service Pack 2 are not supported; make sure the 2003 guests are updated to Windows Server 2003 R2 with Service Pack 2.
    For a Server 2012 guest VM on 2008 R2 you must install the following hotfix:
    You cannot run a Windows 8-based or Windows Server 2012-based virtual machine in Windows Server 2008 R2
    https://support2.microsoft.com/kb/2744129?wa=wsignin1.0
    Server 2012 is the last version of Windows supported as a guest operating system on 2008 R2 Hyper-V, so a 2012 R2 guest VM is not supported.
    Please compare the IC version of the problematic VMs with that of the functioning VMs. The following PowerShell command displays the integration services version of every VM on the Hyper-V host:
    PS C:\Users\administrator> get-vm | ft name, IntegrationServicesVersion

    Name       IntegrationServicesVersion
    ----       --------------------------
    TestVM2012 6.2.9200.16433
    SQL01      6.2.9200.16433
    SQL02      6.2.9200.16433
    SCVMM01    6.2.9200.16384
    SCVMM02    6.2.9200.16384
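    Once the versions have been collected, flagging mismatches is a one-liner; a small sketch (VM names and versions here are hypothetical, parsed by hand from output like the above):

```python
def outdated_vms(vm_versions, expected):
    """Return the names of VMs whose integration services (IC)
    version does not match the expected host version."""
    return sorted(name for name, ver in vm_versions.items() if ver != expected)

# Hypothetical data in the shape of the get-vm output above
versions = {
    "TestVM2012": "6.2.9200.16433",
    "SQL01": "6.2.9200.16433",
    "SCVMM01": "6.2.9200.16384",
}
print(outdated_vms(versions, "6.2.9200.16433"))  # ['SCVMM01']
```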

  • Windows Server 2012 Cluster Validation Failing On List All Potential Cluster Disks

    I'm in the process of setting up a new cluster using Windows Server 2012 with a view to migrate from our existing 2008 R2 cluster.  I am having problems validating the storage and the wizard keeps failing on the List All Potential Cluster Disks
    test (which causes all the other storage tests to be cancelled).
    I'm getting the following error:
    Failed while verifying removal of any Persistent Reservation on physical disk 19ba99a4 at node RC-HYPERV-3.riddlesdown.local.
    When I look in validation report, disk 19ba99a4 is the management LUN (Access LUN 31 as termed by the storage software).
    Looking on my 2008 R2 cluster this doesn't appear in Disk Management.
    Any ideas how I can get rid of it or the error that it is causing?

    Hi,
    Thank you for your question.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Best Regards,
    Aiden
    TechNet Subscriber Support
    Aiden Cao
    TechNet Community Support

  • Sql 2008 r2 cluster side by side with sql 2012 cluster on Windows 2008 R2

    We have a SQL Server 2008 R2 active/passive cluster running on a Windows Server 2008 R2 cluster.
    I would like to add a SQL Server 2012 clustered instance to the same Windows 2008 R2 cluster.
    Are there any issues with having a SQL 2008 R2 clustered instance running side by side with a SQL 2012 instance in the same Windows 2008 R2 cluster?
    Are there any pitfalls/"gotchas" to watch for with SQL 2012 cluster install?
    Thank you so much!

    Hello,
    Two instances, one SQL Server 2008 R2 and the other SQL Server 2012, running side by side is fully supported.
    I recommend assigning a different port (not 1433) to the non-default instance to avoid a TCP port conflict between the two instances.
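    Before committing to a static port for the second instance, it may help to confirm nothing is already listening on it; a minimal sketch (the host and port below are placeholders, not values from this thread):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# e.g. check a candidate static port for the new SQL 2012 instance
if port_in_use(1433):
    print("1433 is taken (likely the default instance); pick another port")
```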
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com
