Unplanned failover in a Hyper-V cluster vs unplanned failover in an ordinary (not Hyper-V) cluster

Hello!
Please excuse me if you think my question is silly, but before deploying something in a production environment I'd like to dot the i's and cross the t's.
1) Suppose there's a two-node cluster with the Hyper-V role that hosts a number of highly available VMs.
If both cluster nodes are up and running, an administrator can initiate a planned failover, which will transfer all VMs,
including their system state, to the other node without downtime.
If a cluster node goes down unexpectedly, an unplanned failover fires, which transfers all VMs to the other node
WITHOUT their system state. As far as I understand, this can lead to some data loss.
http://itknowledgeexchange.techtarget.com/itanswers/how-does-live-migration-differ-from-vm-failover/
If, for example, I have an Exchange VM and it is transferred to the second node during an unplanned failover in the Hyper-V cluster, I will lose some data by design.
2) Suppose there's a two-node cluster with a clustered Exchange installation: if one node crashes, the other takes over without any data loss.
Conclusion: it's more disaster-resilient to implement a server role in an "ordinary" cluster than to deploy it inside a VM in a Hyper-V cluster.
Is that correct?
Thank you in advance,
Michael

"And if this "anything in memory and any active threads" is so large that can take up to 13m15s to transfer during Live Migration it will be lost."
First, that 13m15s required to live migrate all your VMs is not the time it takes to move an individual VM.  By default, Hyper-V is set to move a maximum of 2 VMs at a time.  You can change that, but it would be foolish to increase that value if all you have is a single 1 GbE network.  The other VMs will be queued.
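As a toy illustration of that queueing behaviour (this is only a simulation, not Hyper-V's actual scheduler; the real limit is a host setting, exposed in PowerShell as Set-VMHost -MaximumVirtualMachineMigrations):

```python
from collections import deque

def migrate_all(vms, max_concurrent=2):
    """Simulate queued live migrations: at most max_concurrent VMs
    are in flight at once; the rest wait in a FIFO queue."""
    waiting = deque(vms)
    waves = []
    while waiting:
        # Start up to max_concurrent migrations; the rest stay queued.
        wave = [waiting.popleft() for _ in range(min(max_concurrent, len(waiting)))]
        waves.append(wave)
    return waves

# Five VMs with the default limit of 2 migrate in three waves.
print(migrate_all(["vm1", "vm2", "vm3", "vm4", "vm5"]))
# → [['vm1', 'vm2'], ['vm3', 'vm4'], ['vm5']]
```

Raising the limit only helps if the network can actually carry the extra concurrent transfers, which is exactly why increasing it on a single 1 GbE link is unwise.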
Second, you are confusing that amount of time with what is actually happening.  Think of a single VM.  Let's even assume that it has so much memory and is so active that it takes 13 minutes to live migrate (highly unlikely, even on a 1 GbE NIC).  During those 13 minutes, the VM continues to perform normally.  In fact, if something happens in the middle of the LM, causing the LM to fail, absolutely nothing is lost, because the VM is still operating on the original host.
Now let's look at what happens when the host on which that VM is running fails, causing a restart of the VM on another node of the cluster.  The VM is doing its work, reading and writing its data files.  At the instant the host fails, the VM may have some unwritten data buffers in memory.  Because the host fails, the VM crashes, losing whatever it had in memory at that instant.  It is not going to lose 13 minutes of data.  In fact, if you have an application processing data at this volume, you most likely have something like SQL running.  When the VM goes down, the cluster will automatically restart the VM on another node, and SQL will automatically replay its transaction logs to recover to the best of its ability.
Is there a possibility of data loss?  Yes, a very tiny possibility for a very small amount.  Is there a possibility of data corruption?  Yes, a very, very tiny possibility, just like with a physical machine.
The amount of time required for a live migration is meaningless when it comes to potential data loss of that VM.  The potential data loss is pretty much the same as if you had it running on a physical machine, the machine crashed, and then restarted.
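The transaction-log replay described above can be sketched with a toy write-ahead log (a deliberate simplification, not SQL Server's actual recovery code): entries flushed to the durable log survive a crash and can be replayed, while anything that existed only in memory is lost:

```python
class ToyDatabase:
    """Toy write-ahead-log model: a committed write is appended to a
    durable log before being applied in memory; after a crash the
    in-memory state is rebuilt by replaying the log."""

    def __init__(self):
        self.log = []      # stands in for the on-disk transaction log
        self.memory = {}   # in-memory state, lost when the host dies

    def write(self, key, value, flushed=True):
        if flushed:
            self.log.append((key, value))  # made durable before acknowledging
        self.memory[key] = value

    def crash_and_recover(self):
        self.memory = {}                    # the crash wipes RAM
        for key, value in self.log:         # replay the durable log
            self.memory[key] = value
        return self.memory

db = ToyDatabase()
db.write("a", 1)                  # flushed to the log: survives
db.write("b", 2, flushed=False)   # unwritten buffer: lost on crash
print(db.crash_and_recover())     # → {'a': 1}
```

Only the unflushed buffer is lost, which is why the potential loss is tiny and bounded by what had not yet hit the log, not by any migration time.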
"clustered applications DO NOT STOP working during unplanned failover (so there is no recovery time)"
Not exactly true.  Let's use SQL as an example again.  When SQL is installed in a cluster, you install at a minimum one instance, but you can have multiple instances.  When the node on which the active instance is running fails, there is a brief pause in service while the instance starts on the other node.  Depending on outstanding transactions, the last write, etc., it will take a little bit of time for the SQL instance to be ready to handle requests on the new node.
Yes, there is a definite difference between restarting the entire VM (when just the VM is clustered) and clustering the application.  Recovery time is about the biggest issue.  As you have noted, restarting a VM, i.e. rebooting it, takes time.  Because it takes longer, there is a good chance that clients will be unable to access the resource for a period of time, maybe as much as 1-5 minutes depending on a lot of different factors, whereas with a clustered application the clients may be unable to access it for up to a minute or so.
However, the amount of data potentially lost is quite dependent on the application.  SQL is designed to recover nicely in either environment, and it is likely not to lose any data.  Sequential-writing applications will depend on things like disk cache held in memory: large caches mean a higher probability of losing data; no disk cache means there is not likely to be any loss.
.:|:.:|:. tim

Similar Messages

  • Will CMS and cluster group (quorum) failover work if FSW server is down?

    I have following servers in same site and same location:-
    1) NODE1 - mailbox server 1 in CCR (Active)
    2) NODE2 - mailbox server 2 in CCR (Passive)
    3) HUB1 - FSW
    I have OS windows 2003 on all above servers and exchange version is 2007.
    Now, if the FSW server goes down, can I do the following? Please provide detailed information.
    1) CMS failover from NODE1 to NODE2.
    2) Cluster group (quorum) failover from NODE1 to NODE2.
    Thanks,

    Hi,
    The FSW is only used when a node cannot work properly. However, if the Hub server will be down for a long time, it's recommended to move the FSW to another server.
    For more information about FSW, please refer to the following articles:
    http://blogs.technet.com/b/exchange/archive/2007/04/25/3402084.aspx
    http://technet.microsoft.com/en-us/library/bb124922(EXCHG.80).aspx
    Meanwhile, if all of the nodes in the cluster need to be shut down, including the active node, you must first stop the clustered mailbox server. The Windows shutdown process is not Exchange-aware. Therefore, we recommend that you only shut down passive nodes.
    If an active node needs to be shut down or restarted, we recommend that you move the clustered mailbox server to another available node.

  • Why virtual interfaces added to ManagementOS not visible to Cluster service?

    Hello All,
    I'm starting this new thread since the previous one was answered by our friend Udo. My problem, in short, is the following; a diagram will be enough to explain what I'm trying to achieve. I've set up this lab to learn Hyper-V clustering with 2 nodes running Hyper-V Server 2012. Both nodes have 3 physical NICs; 1 in each node is dedicated to managing that node. The other two are used to create a NIC team. Atop that NIC team, a virtual switch is created with -AllowManagementOS $False. Next I created the following virtual interfaces in the host partition and plugged them into the virtual switch created atop the teamed interface. These virtual interfaces should serve the various networks needed.
    For the SAN I'm running a Linux VM that hosts an iSCSI target, and the clustering service has no problem with that; all tests pass OK.
    The problem is that the virtual interfaces added to the hosts do not appear as available networks to the cluster service; instead, it only shows the management NIC as the available network.
    This is making it difficult to understand how to set up a cluster of 2 Hyper-V Server nodes. Can someone help, please?
    Regards,
    Shahzad.

    Shahzad,
    I've read this thread a couple of times and I don't think I'm clear on the exact question you're asking.
    When the clustering service goes out to look for "Networks", what it does is scan the IP addresses on each node. Every time it finds an IP in a unique subnet, that subnet is listed as a network. It can't see virtual switches and doesn't care about
    virtual vs. teamed vs. physical adapters or anything like that. It's just looking at IP addresses. This is why I'm confused when you say, "it won't show virtual interfaces available as networks". "Networks" in this context are IP subnets.
    I'm not aware of any context where a singular interface would be treated like a network.
    If you've got virtual adapters attached to the management operating system
    and have assigned IPs to them, the cluster should have discovered those networks. If you have multiple adapters on the same node using IPs in the same subnet, that network will only appear once and the cluster service will only use
    one adapter from that subnet on that node. The one it picked will be visible on the "Network Connections" tab at the bottom of Failover Cluster Manager when you're on the Networks section.
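A rough model of that discovery logic, using Python's ipaddress module (my own sketch of what Eric describes, not the actual cluster service code): every interface IP is reduced to its subnet, and each unique subnet becomes one "network":

```python
import ipaddress

def discover_networks(node_interfaces):
    """Collect the subnet of every interface IP across all nodes and
    keep each unique subnet once, mimicking cluster 'networks'."""
    networks = set()
    for node, cidrs in node_interfaces.items():
        for cidr in cidrs:
            networks.add(ipaddress.ip_interface(cidr).network)
    return sorted(networks, key=str)

# Hypothetical two-node layout: two adapters per node, two subnets total.
nodes = {
    "Node1": ["131.107.0.50/24", "10.0.0.50/24"],
    "Node2": ["131.107.0.150/24", "10.0.0.150/24"],
}
print(discover_networks(nodes))
# → [IPv4Network('10.0.0.0/24'), IPv4Network('131.107.0.0/24')]
```

Two unique subnets yield two cluster networks, regardless of how many adapters sit in each subnet on each node.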
    Eric Siron Altaro Hyper-V Blog
    I am an independent blog contributor, not an Altaro employee. I am solely responsible for the content of my posts.
    "Every relationship you have is in worse shape than you think."
    Hello Eric and friends, 
    Eric, much appreciated about your interest about the issue and yes I agree with you when you said... "When the clustering service goes out to look for "Networks",
    what it does is scan the IP addresses on each node. Every time it finds an IP in a unique subnet, that subnet is listed as a network. It can't see virtual switches and doesn't care about virtual vs. teamed vs. physical adapters or anything like that. It's
    just looking at IP addresses. This is why I'm confused when you say, "it won't show virtual interfaces available as networks". "Networks" in this context are IP subnets. I'm not aware of any context where a singular interface would be treated
    like a network."
    By networks I meant to say subnets. Let me explain what I've configured so far:
    Node 1 & Node 2 installed with 3x NICs. All 3 NICs/node plugged into same switch. 
    Node1:  131.107.0.50/24
    Node2:  131.107.0.150/24
    A Core Domain controller VM running on Node 1:   131.107.0.200/24 
    A JUMPBOX (WS 2012 R2 Std.) VM running on Node 1: 131.107.0.100/24
    A Linux SAN VM running on Node 2: 10.1.1.100/8 
    I planned to configure the following networks:
    (1) Cluster traffic:   10.0.0.50/24    (IP given to the virtual interface for cluster traffic on Node1)
         Cluster traffic:   10.0.0.150/24   (IP given to the virtual interface for cluster traffic on Node2)
    (2) SAN traffic:       10.1.1.50/8     (IP given to the virtual interface for SAN traffic on Node1)
         SAN traffic:       10.1.1.150/8    (IP given to the virtual interface for SAN traffic on Node2)
    Note: the cluster service has no problem accessing the SAN VM (10.1.1.100) over this network; it validates the SAN settings and comes back OK. This is an indication that the virtual interface is working fine.
    (3) Migration traffic: 172.168.0.50/8  (IP given to the virtual interface for migration traffic on Node1)
         Migration traffic: 172.168.0.150/8 (IP given to the virtual interface for migration traffic on Node2)
    All these networks (virtual interfaces) are made available through two virtual switches which are configured EXACTLY identical on both Node1/Node2.
    Now, after finishing the cluster validation steps (which all come back OK), when the Create Cluster wizard starts, it shows only one network, i.e. the network of the physical Layer 2 switch: 131.107.0.0/24.
    I wonder why it won't show the other networks (10.0.0.0/8, 10.1.1.0/8 and 172.168.0.0/8).
    Regards,
    Shahzad
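One thing worth double-checking in the addressing above: with an /8 mask, the SAN interface 10.1.1.50 falls into 10.0.0.0/8, which overlaps the 10.0.0.x cluster addresses, and overlapping subnets can make network discovery collapse them or behave oddly. Python's ipaddress module shows the overlap, using the IPs from the post (the /24 on the cluster interface is taken from item (1) above):

```python
import ipaddress

cluster = ipaddress.ip_interface("10.0.0.50/24").network  # 10.0.0.0/24
san = ipaddress.ip_interface("10.1.1.50/8").network       # 10.0.0.0/8

# The /8 SAN subnet contains the entire /24 cluster subnet.
print(cluster, san, san.overlaps(cluster))
# → 10.0.0.0/24 10.0.0.0/8 True
```

If the intent was three separate networks, narrower masks (e.g. /24 everywhere) would keep the subnets disjoint.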

  • Apache Plugin is not working with Cluster Entry in httpd.conf

    Hi Guys,
    We have a very particular issue that is happening here in our production environments. Before explaining the issue to you, I will just give a brief of our architecture.
    We are using Weblogic Portal Platform SP4.
    Our current Production Domain Consists of 1 Admin Server and 2 Managed Servers. The managed servers are in a cluster.
    We have two Linux boxes on which the weblogic servers are running. These two machines have IPs: x.x.x.58 and x.x.x.59. Now, the Admin Server and managed Server 1 are started on the machine x.x.x.58 and managed server 2 is started on the machine x.x.x.59. Both the managedServers listen on port 8007.
    This entire setup mentioned in the above paragraph is at a location A (lets say).
    Now, the requests are proxied to the managed server cluster using BEA's Apache plugin residing on Apache. Apache is running on two Linux boxes that are present at a location B.
    Location B is connected to Location A by a dedicated WAN Link.
    Now, before the two Apache servers there is a physical Load Balancer that balances the requests between the two Apache boxes.
    This means that whenever there is a request from the Internet to the weblogic cluster, the load balancer first sends it to either of the Apache servers and then the BEA Apache plugin on that Apache forwards the request via the 2 mbps WAN link to the weblogic cluster.
    Now following is a description of the issue we are facing. We do not know whether the issue is with the plugin or with WebLogic Server.
    In order that the plugin load balances the requests between the servers in the cluster and for failover, the following entry was put in httpd.conf on both the Apache Servers.
    WebLogicCluster y.y.y.11:8007,y.y.y.13:8007
    Note: Because of the WAN link there is natting used. This implies y.y.y.11 is natted to x.x.x.58 and y.y.y.13 is natted to x.x.x.59.
    Here in lies the issue. When we use this plugin entry the following issue occurs.
    1.     First we go to the home page and login.
    2.     The user is then redirected to a Portal Desktop fitting those credentials.
    3.     Now on this Portal Desktop, invariably whenever we click any of the desktop's links, an HTTP 403 is thrown. This error is thrown because on the desktop there are entitlements so that it cannot be accessed without a user logging in with credentials. But this error indicates that the server is losing the session and entitlement details.
    Why does this happen?
    As a result, we cannot use that plugin entry. The current entry we have is
    WebLogicHost y.y.y.11
    WebLogicPort 8007
    As you can see, here we are only redirecting to one managed server. This entry is not what we want, obviously, because if that server fails, the plugin won't know which other server to pass the request on to. Also, in this case there is no load balancing.
    Our issues are:
    1. How do we resolve this plugin entry issue? How can we use the cluster entry without encountering the HTTP 403 error?
    2. Consider the following sequence of steps.
    - Request for the home page is served by Apache Server 1
    - The user logs in; the login request is sent by the load balancer to Apache Server 1, and the plugin on Apache Server 1 sends it to the cluster via the WAN link. The internal mechanism then creates the session ID, primary server and secondary server, logs the user in, and sends the response back via Apache Server 1.
    - Now, the user clicks a link on the desktop. The request now goes to the load balancer and it sends it to Apache Server 2.
    The question now is: does the plugin on Apache Server 2 correctly recognize the session and send the request to the primary server for that session?
    Please give me some direction in resolving this issue.

    From your Xorg.0.log:
    (II) fglrx(0): DRIScreenInit done
    (II) fglrx(0): Kernel Module Version Information:
    (II) fglrx(0): Name: fglrx
    (II) fglrx(0): Version: 8.49.7
    (II) fglrx(0): Date: May 12 2008
    (II) fglrx(0): Desc: ATI FireGL DRM kernel module
    (WW) fglrx(0): Kernel Module version does *not* match driver.
    (EE) fglrx(0): incompatible kernel module detected - HW accelerated OpenGL will not work
    You have to quit X, then rmmod and modprobe the kernel module - you're still using the old one.  (a reboot would technically fix it too)
    # rmmod fglrx
    # modprobe fglrx
    # dmesg | tail
    The dmesg output should say you've successfully loaded the 8.501 driver.

  • Hyperlink of a public image (hyperlink or image) cannot be saved on Windows Server 2012 with SharePoint 2010

    A hyperlink of a public image (hyperlink or image) cannot be saved on Windows Server 2012 with SharePoint 2010. Is this a bug?
    thanks for any reply.
    Rosone

    It is not a bug. You might be using IE on Windows Server 2012, and the browser might be restricting your site actions from responding properly.
    Check this in a different browser, or access the site from a different OS.
    Adnan Amin MCT, SharePoint Architect | If you find this post useful kindly please mark it as an answer.

  • Hyper-V on Windows 8.1 64 bit, "Could not send keys to the virtual machine"

    Hi,
    I newly installed Windows 8.1 (not an upgrade; a fresh install from the corporate network) on a Dell Precision T7600 workstation and enabled the Hyper-V feature with the matching management tools. I created a couple of VMs with 2 GB RAM and 2 virtual CPUs and am trying to install Windows Server 2k8R2 and 2k12 on them. After a network boot, during the first couple of dialog boxes of Windows Setup, I am unable to use the keyboard. When I try to boot from an ISO, the VM gets stuck on a black screen saying "Starting Windows". In either case, if I use the VM connection window menu to paste from the clipboard or send Ctrl+Alt+Del, I get the error:
    Virtual Machine Connection
    Could not send keys to the virtual machine.
    Interacting with the virtual machine's keyboard device failed.
    I searched online for this error and did not find others experiencing it.
    I tried shutting down and resetting both VMs, and I stopped and restarted the Hyper-V service. Neither helped. I am not sure what other steps I can take at this point.
    Also, I configured a virtual switch as external network on one of the two Intel network adapters. Both connect to a gigabit switch. "Allow management operating system to share this network adapter" is checked. Both VMs are on the same virtual switch.
    Senior Soft. Dev. Eng. | Microsoft IT | Microsoft Corporation

    Hi,
    Can you find any information in the event log about this issue?
    Based on my research, it seems that a DLL file related to Hyper-V may be corrupted.
    I suggest you try the following command to repair your system.
    1. Open an elevated command prompt: press Win+X, then click Command Prompt (Admin).
    2. At the command prompt, type the following command, and then press ENTER:
    Dism /Online /Cleanup-Image /RestoreHealth
    Regards,
    Kelvin hsu
    TechNet Community Support

  • Hyper-v manager in windows 8.1 cannot connect to hyper-v server. RPC server unavailable

    I am setting up a new server. I installed Hyper-V Server 2012 R2 on it and want to manage it remotely with Hyper-V Manager from a Windows 8.1 computer. I am unable to communicate with the server. Hyper-V Manager gives me the following error: RPC Server Unavailable. Unable to establish communication between "the hypervserver" and "the W8.1station". Both computers are located in the same subnet and both are on the same domain.
    I configured the server with a fixed IP and DNS, set it to accept pings, joined it to the domain, and set it up to allow remote access and management. Although I do not think it is necessary, I added my domain account to the Administrators group on the Hyper-V server.
    I can ping the server and remote into it from the W8.1 station without problems. From the server I can also ping the W8.1 station as well as external websites (DNS functionality is OK).
    I have played with the firewall on both the Hyper-V server and the management station and have not been able to get them to communicate. Blaming the Hyper-V server for all this and the great amount of time I have already spent on the problem, I borrowed a Windows 7 Pro computer, installed RSAT, and enabled Hyper-V Manager on that computer. I launched the manager AND CONNECTED TO THE HYPER-V SERVER WITH NO PROBLEMS AT ALL!
    I have read a lot about this already, but most people having this issue are setting up servers in a workgroup environment, and almost nobody is using Windows 8.1 as the management station. Can someone PLEASE give me a hand with this problem?
    Thanks to you all !!!

    Hi reirem,
    Please use the "hvremote" script to verify the configuration for errors via the command "hvremote /show /target:othercomputername".
    Here is the HvRemote script link:
    http://code.msdn.microsoft.com/Hyper-V-Remote-Management-26d127c6
    Best Regards
    Elton Ji
    We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time.
    Thanks for helping make community forums a great place.

  • Impact of NOT having SSO Cluster

    Hello Everyone,
    I wanted to understand the points below about SSO clustering.
    * Impact of NOT having an SSO cluster
    * Benefits of having an SSO cluster
    * Is there a step-by-step guide available for SSO clustering?
    * Are there any prerequisites or important pitfalls when performing SSO clustering?
    Thanks in advance for all your suggestions.
    Thanks,
    Prashant
    Please mark this post accordingly if it answers your query or is helpful.

    * Impact of NOT having an SSO cluster --> SSO is the key service required to run a BizTalk environment. That is why it is recommended to have SSO as a clustered service in production, along with your SQL instance as a cluster, for high availability of the service.
    If SSO is running on only one physical machine and that machine goes down, your whole BizTalk environment stops working and you have downtime. That is why it is always recommended to cluster SSO.
    * Benefits of having an SSO cluster --> same as the point above.
    * Step-by-step guide available for SSO clustering -->
    https://msdn.microsoft.com/en-us/library/aa559783.aspx?f=255&MSPPError=-2147217396
    * Are there any prerequisites or important pitfalls when performing SSO clustering? --> No. It's very straightforward; please follow the guide. Any Windows admin can perform the clustering of this service.
    Make sure you always keep the backup in a safe place.
    Greetings, HTH
    Naushad Alam
    When you see answers and helpful posts, please click Vote As Helpful, Propose As Answer, and/or
    Mark As Answer
    alamnaushad.wordpress.com

  • Diskgroup not mounted during cluster startup

    Hi,
    I have a 2-node RAC (11gR2) on VMware 7.1.4. The OS is Solaris 10.
    I have registered 2 instances in the cluster.
    srvctl add database -d dbrac -o /u01/app/oracle/product/11.2.0/dbhome_1 -a "extdg,nordg"
    srvctl add instance -d dbrac -i dbrac2 -n vmsol2
    srvctl add instance -d dbrac -i dbrac1 -n vmsol1
    After registering the 2 instances, the instances initially came up automatically whenever I executed ./crsctl start cluster.
    But now the database instances on both nodes are not coming up; only ASM is up.
    While checking the ASM disk groups, I found that EXTDG and NORDG were not mounted, so I manually mounted the disk groups and started the database:
    SQL> alter diskgroup nordg mount;
    Diskgroup altered
    SQL> alter diskgroup extdg mount;
    Later I tried removing the database configuration from the cluster using SRVCTL and added the database to the cluster again.
    srvctl remove instance -d dbrac -i dbrac1
    srvctl remove instance -d dbrac -i dbrac2
    srvctl remove database -d dbrac
    srvctl add database -d dbrac -o /u01/app/oracle/product/11.2.0/dbhome_1 -a "extdg,nordg"
    srvctl add instance -d dbrac -i dbrac2 -n vmsol2
    srvctl add instance -d dbrac -i dbrac1 -n vmsol1
    Still, the database does not start during cluster startup.
    Why is the disk group not getting mounted while the cluster is starting? Can someone help me?
    Regards,
    Mat

    Hi,
    Thank you for your reply.
    Previously, the disk groups mounted without any modification to the spfile after I executed the following commands:
    srvctl add database -d dbrac -o /u01/app/oracle/product/11.2.0/dbhome_1 -a "extdg,nordg"
    srvctl add instance -d dbrac -i dbrac2 -n vmsol2
    srvctl add instance -d dbrac -i dbrac1 -n vmsol1
    But suddenly this issue started happening.
    I do not have Metalink access. Could you please help me?
    Regards,
    Mat.

  • Windows could not start the Cluster Service on Local computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2.

    Dear Technet,
    Windows could not start the Cluster Service on Local computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2.
    My cluster suddenly disappeared, so I tried to restart the Cluster Service. When trying to restart the service, the above-mentioned error comes up.
    I even tried to remove the cluster through PowerShell, but that couldn't happen either because the Cluster Service is not running.
    Help me please.. thank you.
    Regards
    Shamil

    Hi,
    Could you confirm which account you use to start the Cluster Service? The Cluster Service requires a domain user account.
    The server cluster Setup program changes the local security policy for this account by granting a set of user rights to it. Additionally, this account is made a member of the local Administrators group.
    If one or more of these user rights are missing, the Cluster Service may stop immediately during startup, or later, depending on when it requires the particular user right.
    Hope this helps.

  • Hyper-V client in Win10 Enterprise Technical Preview does not boot to PXE via Legacy Network Adapter after an OS is installed using Gen 1 BIOS

    Using Hyper-V within Windows 10 Enterprise Technical Preview, a Windows 7 legacy BIOS (Generation 1) client does not boot to PXE via the Legacy Network Adapter after an operating system has been installed in the virtual client.
    The Hyper-V client will boot to PXE via the Legacy Network Adapter only when no other boot device is available and no OS is installed.
    Hyper-V seems to ignore the BIOS startup order in the client's settings and will always boot from the VHD before the Legacy Network Adapter, even if the Legacy Network Adapter is first in the startup list.
    If the VHD is deleted and a new VHD added to the client, it will boot to PXE using the Legacy Network Adapter. Again, once an OS is installed, it will not boot to PXE via the Legacy Network Adapter.
    Gen 2 BIOS/UEFI works as expected and boots to the PXE network every time.

    Hi shaunmoncrieff,
    Thank you for the information.
    Currently I am not able to test this using Windows 10 on a physical machine.
    You may try to submit this information through the Windows Feedback tool; once I have tested, I will update the results here.
    Best regards
    Michael Shao
    TechNet Community Support

  • SPF is not supported SCVMM cluster problems, when repairing ?

    SPF is not supported SCVMM cluster problems, when repairing ?

    See:
    *http://forums.sdn.sap.com/thread.jspa?threadID=2056183&tstart=45#10718101

  • How Can we force the Hyper-V Replica to use a dedicated network between two hyper-v hosts?

    Dear Team,
    How can we force Hyper-V Replica to use a dedicated network between two hyper-v hosts?
    For live migration, we can choose the desired network in Hyper-V Manager (Use these IP addresses for Live Migration).
    I have two 10G adapters teamed together; the virtual switch is created on top as a converged network with several vNICs (ManagementOS, LiveMigration, Backup, etc.).
    Each network is set with a specific VLAN to isolate the traffic.
    The two Hyper-V hosts are on the same LAN and domain.
    Thank you.
    Regards,

    I have accomplished this by using a DNS entry specifying an IP address on that dedicated network. That will force traffic down that network.
    John Johnston

  • OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS OVMSRVR. Its server is not in a cluster. [Thu Dec 04 01:33:10 EST 2014]

    Hi Guys,
    I'm trying to create a repository. I have a single-node OVM server and have presented two LUNs (Hitachi HUS110, direct-attached via FC).
    I've created a server pool and unchecked the "clustered server pool" option. I can see the LUNs (as physical disks in Oracle VM Manager), but when creating the repository I get this error:
    "OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS OVMSRVR. Its server is not in a cluster. [Thu Dec 04 01:33:10 EST 2014]"
    Did I miss any steps? I'd appreciate anyone's input.
    Regards,
    Robert

    Hi Robert,
    did you actually add the OVS to the server pool, that you created?

  • 0x0000007a error for VMs on a Hyper-V server in a failover cluster

    Hi All, 
    This is my first time posting a question on this forum. Will provide more details if necessary. 
    Would appreciate any help for the below problem: 
    We have two servers running Windows Server 2012 R2 Datacenter. One of them (Host2) had all its VMs crash with error code 0x0000007a. While the OS of the host itself was OK in a sense, trying to run any application or open Windows Explorer would result in a hang; we can still use Ctrl+Alt+Del to log off.
    Both servers have been installed with similar Windows Update patches, and all the configurations are the same. However, Host01 is running fine.
    Booting into Safe Mode lets us run the server fine. We did a chkdsk /r and it found some errors on the disk; I restarted to let it try to fix the issue, but the problems remain.
    The Dell hardware check came up with nothing.
    Thank you!
    Anton

    Hi
    Can you check if there are any hardware errors logged in Event Viewer?
    Perhaps you have a faulty memory module. I had a similar issue with my host, and it ended up being a combination of reseating the RAM and replacing the quad-port network adapter.
    Are both your Dells on the latest SUU?
    Hope this helps. Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread.
