Clustering with Hyper-V

Hello,
I have a 2-node cluster (Server 2012 R2) with Hyper-V installed. Both hosts receive their updates from SCCM 2012. Node1 gets its updates on Sunday and Node2 gets its updates on Tuesday. The VMs get their updates on another day of the week. My issue
is that if I have VMs on NODE1 and NODE1 receives its updates and requires a restart, the VMs do not move gracefully to NODE2. If I log onto a VM that has moved to NODE2, it asks me why the VM shut down unexpectedly. Is there a way to make
all the VMs live migrate to NODE2 before NODE1 restarts due to updates? Thanks
Pat

Hi Pat,
If you restart a node before you perform a live migration, the VMs on the restarted node will perform a quick migration rather than a live migration, hence the warnings.
You could initiate a live migration using PowerShell or another scripting method (perhaps even Orchestrator, if you have the full System Center suite) before the updates are installed; then the restart won't cause issues.
Hyper-V: How To Initiate a Live Migration using PowerShell
http://social.technet.microsoft.com/wiki/contents/articles/hyper-v-how-to-initiate-a-live-migration-using-powershell.aspx
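For example, a minimal sketch (node name is a placeholder) that drains the node, which live migrates its clustered VMs to the other node, then resumes it after patching:
# Live migrate all roles off NODE1 before patching it.
Suspend-ClusterNode -Name "NODE1" -Drain -Wait
# ...install updates and restart NODE1...
# Bring the node back into the cluster and fail roles back.
Resume-ClusterNode -Name "NODE1" -Failback Immediate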
According to MS it is possible to integrate SCCM with Cluster-Aware Updating (CAU), but this is beyond my knowledge. Microsoft provides a nice FAQ for CAU that is certainly worth reading:
http://technet.microsoft.com/en-us/library/hh831367.aspx
 "CAU and Configuration Manager can work together to deliver synergistic value. By using the public plug-in interface architecture in CAU, Configuration Manager can leverage the cluster awareness of CAU. This allows
a customer who already has Configuration Manager deployed to use the cluster awareness capabilities of CAU while taking advantage of the Configuration Manager infrastructure, such as distribution points, approvals, and the Configuration console."
Do you use SCVMM with the Hyper-V cluster? If so, another option is to integrate SCCM with SCVMM and use the compliance element to patch hosts and live migrate VMs.
Kind Regards
Michael Coutanche

Similar Messages

  • Need some help with 2012 R2 Clustering Permissions (with Hyper-V)

    Could someone please point me in the direction of  some sensible documentation regarding how 2012 cluster permissions work?
    I have a hosted Powershell application that sits in a C# web service wrapper. It automates everything to do with Hyper-V and works 100% of the time.
    I want the account it is running under (via impersonation) to be able to automate the creation of cluster shared volumes and the addition of VMs as clustered roles.
    I've written the code and I am confident it works, but I cannot get my head around how the permissions are supposed to be set up.
    Ideally, I want a single account per hyper-v node, that can:
    a.) Administer Hyper-V locally (there seems to be a group for this bit - this is working OK).
    b.) Create and add clustered shared volumes within the cluster.
    c.) Can add and remove cluster (VM) roles.
    d.) Can add and remove data (VHDs) on the CSV themselves.
    I'd ideally like this account not to be a domain admin, with its rights pared down to those listed above.
    I can't even begin to explain how lost I've gotten trying to get this to work. Even if I do add a new domain admin, for instance, it doesn't even seem to be capable of adding cluster roles or deleting files from a CSV. I think I really need to take a step back.
    Is what I want to do even possible?
    Thank you. 

    As far as storage: Your script would create the VHDX files on C:\ClusterStorage\Volume#
    There is no special permission required for this other than perhaps local admin rights. I can create, delete, and mount VHDX files via Windows Explorer on any system in the domain within the CSV folder set.
    UNC: \\NODE\C$\ClusterStorage\Volume#
    As far as creating and deleting VMs on the cluster use the PowerShell commands as normal.
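    For example (names are hypothetical), a minimal sketch of that flow, run in a local admin context on a node:
    # Create the VHDX directly on the CSV, build the VM, then make it a clustered role.
    New-VHD -Path "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -SizeBytes 60GB -Dynamic
    New-VM -Name "VM01" -MemoryStartupBytes 2GB -VHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx"
    Add-ClusterVirtualMachineRole -VMName "VM01"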
    One option would be to use Group Policy Preferences to deliver a domain user account to the Local Administrators Group on the Hyper-V Nodes. That user account would be the one that would be used to run your required PS commands in a local admin context.
    Another option would be to provision a local admin account with the same UN/Pwd across all nodes and use that for your PowerShell needs.
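    For example, provisioning a hypothetical account by hand on each node (Group Policy Preferences just automates the same membership change):
    net localgroup Administrators CONTOSO\svc-hv /add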
    Philip Elder Microsoft Cluster MVP Blog: http://blog.mpecsinc.ca

  • 3 Node hyper-V 2012 R2 Failover Clustering with Storage spaces on one of the Hyper-V hosts

    Hi,
    We have 3x Dell R720s, each with 5x 600GB 3.5" 15K SAS drives and 128 GB RAM. I was wondering if I could set up a Hyper-V 2012 R2 failover cluster with these three, with the shared storage for the CSV provided by one of the Hyper-V hosts running Storage Spaces
    (is Storage Spaces supported on Hyper-V?), or use 2-node failover clustering and the third box as a standalone Hyper-V host, or Server 2012 R2 with Hyper-V and Storage Spaces.
    Each server comes with quad-port 1G and dual-port 10G NICs, so I can dedicate the 10G NICs to iSCSI.
    I don't have a SAN or a 10G switch, so it would be a crossover-cable connection between the servers.
    Most of the VMs would be non-HA. Exchange 2010, SharePoint 2010 and SQL Server 2008 R2 would be the only VMs running as HA VMs. The CSV for the Hyper-V failover cluster would be provided by Storage Spaces.

    I thought I was trying to do just that, with 8x 600GB in RAID-10 using the hardware RAID controller (on the 3rd server) and creating CSVs out of that space so as to provide better storage performance for the HA VMs.
    1. Storage Server: 8x 600GB RAID-10 (for CSVs to house all HA VMs running on the other 2 servers). It may also run some local VMs that have very little disk I/O.
    2. Hyper-V-1: Will act as primary Hyper-V host for the 2x Exchange and Database Server HA VMs (the VHDXs would be stored on the Storage Server's CSVs on top of the 8x 600GB RAID-10). May also run some non-HA VMs using the local 2x 600GB in RAID-1.
    3. Hyper-V-2: Will act as the Hyper-V host when the above HA VMs fail over to it (when Hyper-V-1 is down for any reason). May also run some non-HA VMs using the local 2x 600GB in RAID-1.
    The single point of failure for the HA VMs (the non-HA VMs are non-HA, so it's OK if they are down for some time) is the Storage Server. The Exchange servers here are DAG peers to the Exchange servers at the head office, so if the Storage Server mainboard
    goes down (disk failure is mitigated using RAID; other components such as RAM or the mainboard may still fail, but their failure rate is relatively low), the local Exchange servers would be down but Exchange clients would still be able to do their email-related tasks using
    the HO Exchange servers.
    Also, the servers are under 4-hour mission-critical support, including entire server replacement within the 4-hour period.
    If you're OK with your shared storage being a single point of failure then sure, you can proceed the way you've listed. However, you'll still route all VM-related I/O over Ethernet, which is obviously slower than running VMs from DAS (with or without a virtual SAN
    LUN-to-LUN replication layer), as DAS has higher bandwidth and lower latency. Also, with your scenario you exclude one host from your hypervisor cluster, so running VMs on a pair of hosts instead of three gives you much worse performance and resilience:
    with 1 of 3 physical hosts lost, the cluster would still be operable, while with 1 of 2 lost, all VMs booted on a single node could give you inadequate performance. So make sure your hosts are heavily underprovisioned, as every single node should be able to
    handle ALL the workload in case of disaster. Good luck and happy clustering :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Clustering Free Hyper-V or Windows Server 2012 Standard with Windows Server 2012 Datacenter

    Hi,
    Is anyone running a cluster which has Datacenter 2012 clustered with either Standard 2012 or Hyper-V 2012?
    I have seen posts here and elsewhere that say it is possible, but when I rang the Microsoft Licensing team to discuss licensing in this scenario they advised me that the hardware and OS on each clustered server must be identical.
    Now I'm really confused.
    Thanks,
     

    Please repost in the proper forum:
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverhyperv
    This forum is for the product Virtual Server 2005, not Hyper-V.

  • Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager

    Hi,
    Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery
    Manager?
    I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same "System Center 2012 R2 VMM" installation. My goal is to eventually move one of these clusters to a remote
    DR site. Both sites are connected/will be connected to each other through dark fibre.
    I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.
    Now, every time I attempt to use SCVMM to do a live migration of a VM that is protected by Hyper-V Replica to another host within the same cluster, the Migration VM Wizard gives me the following "Rating Explanation" error:
    "The virtual machine virtual machine name which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."
    When I ignore the error and do the live migration anyway, it completes successfully with the info above. There doesn't seem to be any impact on the VM or its replication.
    When a Host Shuts-down or is put into maintenance, the VM Migrates successfully, again, with no noticeable impact on users or replication.
    When I stop replication of the VM, the error goes away.
    Initially, I thought this error was because I attempted to manually configure
    the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).
    However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. This error does not seem to have any impact on the high-availability of
    my VM or on Replication of this VM. Live migrations still occur successfully and replication seems to carry on without any issues.
    However, it now has me concerned that a live migration may one day occur and break replication of my VMs between both clusters.
    I have searched, and searched and searched, and I cannot find any mention in official or un-official Microsoft channels, on the compatibility of these two features. 
    I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html
    Please confirm to me: Are Hyper-V Live Migration and Hyper-V Replica compatible
    with each other?
    If they are, any link to further documentation on configuring these services so that they work in a fully supported manner will be highly appreciated.
    D

    This can be considered a minor GUI bug.
    Let me explain. Live Migration and Hyper-V Replica are supported together on both Windows Server 2012 and 2012 R2 Hyper-V.
    This is because we have the Hyper-V Replica Broker role (in a cluster) that is able to detect, receive and keep track of the VMs and their synchronizations. The replication configuration follows the VM itself.
    If you try to live migrate a VM within Failover Cluster Manager, you will not get any message at all. But VMM (as you can see) gives you an error, though it should rather be an informative message.
    Intelligent Placement (in VMM) is responsible for putting everything in your environment together to give you tips about where the VM can best run, and that is why we are seeing this message here.
    I have personally reported this as a bug. I will check on this one and get back to this thread.
    Update: just spoke to one of the PMs of HRM and they can confirm that live migration is supported - and should work in this context.
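    If you want to double-check replication health yourself after a live migration, a quick sketch (VM name is a placeholder):
    # Inspect replication state/health, then pull replication statistics for the VM.
    Get-VMReplication -VMName "VM01" | Select-Object VMName, State, Health, Mode
    Measure-VMReplication -VMName "VM01"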
    Please see this thread as well: http://social.msdn.microsoft.com/Forums/windowsazure/en-US/29163570-22a6-4da4-b309-21878aeb8ff8/hyperv-live-migration-compatibility-with-hyperv-replicahyperv-recovery-manager?forum=hypervrecovmgr
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Generate a report with date range and year as POV with Hyp Planning ?

    Hi everybody,
    I am starting with Hyperion Planning and I need your help, please.
    I have to create some forms. In those forms, the final user is supposed to be able to display data between 2 dates and for a specific year.
    My first problem: I don't know how you can display data in a form between 2 dates for one specific year. I just have one dimension YEAR and one PERIOD, so if I select them as PAGE, the final user will just be able to choose the month and the year for his form, and not display data between 2 dates.
    My second problem is with the dimensions YEAR, SCENARIO and VERSION. I don't want to put the dimensions VERSION and SCENARIO as PAGE (it is easier for the final user to just choose a year than to choose a year, scenario and version) but as POV, with a relationship to the dimension YEAR (because if the user chooses YEAR = actual_year (2012), the VERSION and the SCENARIO won't be the same as if the user chooses YEAR = last_year). IF YEAR = next_year, VERSION=Propuesta, SCENARIO=Forecast
    IF YEAR = actual_year, VERSION=Propuesta, SCENARIO=Forecast AC
    IF YEAR = last_year, VERSION=Actual, SCENARIO=Real
    How can I do that?
    Thank you for your help

    I am not sure if you are using RAS or Enterprise SDK, but here are some code snippets to set range report parameters:
    For scheduling:
    // oReport is IReport object holding the crystal report.
    oReportParameter = oReport.getReportParameters().get(i);
    currentRangeValue = oReportParameter.getCurrentValues().addRangeValue();
    currentRangeValue.getFromValue().setValue(dateParameter);
    currentRangeValue.getToValue().setValue(dateParameter);
    For viewing:
    ParameterFieldRangeValue pfrv = new ParameterFieldRangeValue();
    pfrv.setBeginValue(dateTimeValue);
    pfrv.setEndValue(dateTimeValue1);
    pfrv.setLowerBoundType(RangeValueBoundType.inclusive);
    pfrv.setUpperBoundType(RangeValueBoundType.inclusive);
    pf.getCurrentValues().add(pfrv);
    f.add(pf);
    f is the Fields object; pass it to the viewer.

  • Mouse not working in Virtual machine (win2k3 32 and 64 bit) on Windows Server 2008 RC1 With Hyper-V Technologies

    I have installed two virtual machines (win2k3 32-bit and win2k3 64-bit) on Windows Server 2008 RC1 with Hyper-V Technologies (Beta) Enterprise. The mouse is getting lost inside the VM (using CTRL+ALT+LEFT ARROW returns mouse control to the host OS).
    Is there any setting to make the mouse work? The Insert Integration Services Setup Disk operation succeeds.

    All credit to mike_something.
    The problem is that the old mousedriver is hooked to the mouse device class.
    To remove this dependency, go into the registry and navigate to the key
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E96F-E325-11CE-BFC1-08002BE10318}
    Remove the value msvmmouf from the UpperFilters Regvalue.
    Reboot....
    tadaaa!!
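    If PowerShell is available and you'd rather script the UpperFilters edit (a sketch; export the key first as a backup):
    # Remove 'msvmmouf' from the UpperFilters multi-string value of the mouse class key.
    $key = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E96F-E325-11CE-BFC1-08002BE10318}"
    $filters = (Get-ItemProperty -Path $key -Name UpperFilters).UpperFilters
    Set-ItemProperty -Path $key -Name UpperFilters -Value ($filters -ne "msvmmouf")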
    If you wish you can take out the drivers completely by deleting these registry hives completely :
    The driver:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\msvmmouf
    The Service:
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VPCMap
    The CopyHook Shell Extension: ( for folder access )
    HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{30C14BAC-122C-42ED-B319-1139DBF48EB8}\InProcServer32
    HKEY_CLASSES_ROOT\Directory\shellex\CopyHookHandlers\VPCCopyHook
    After a reboot you can delete the "Virtual Machine Additions" folder from program files...
    Good Luck!

  • Virtualized Windows 8.1 with Hyper-V as a home PC?

    Hello everyone,
    I'm getting ready to reimage my Windows 8.1 home PC as it has been unreliable of late and I simply don't have the time to invest troubleshooting the various oddities of its behavior. Since this is something I tend to do somewhat regularly, though, I began
    to wonder if maybe this time I should do something different. I'm considering installing Hyper-V and configuring a virtual machine to use as my daily PC, then making that machine's virtual disk my bootable volume.
    I know this probably isn't ideal but there's two big reasons that make this sound like an attractive option to me. First and foremost, I tinker with my PC all the time. As a consequence, I break things on a regular basis. That's usually fine, but eventually
    I simply want to get on with using my PC and so I have to invest time reimaging, repairing, etc. to get my system back to normal. Virtualizing my system would mean I could accomplish this incredibly easily, simply by reverting to an earlier snapshot.
    The other point is that I am currently attending school to become a network administrator and, well, what better way to acquaint myself better with Hyper-V than to use it on a daily basis! I'm already fairly familiar with virtualization having used VMWare
    Workstation/Player and Oracle VirtualBox rather extensively at work and for school.
    Anyway, this sounds wonderful in my head but I have some concerns I can't really find answers for...
    1. It doesn't sound like I can simply install Hyper-V as my "host" OS without installing Windows 8 in non-virtualized form. I don't really want a Windows 8 host on which I run a Windows 8 guest VM. I want to simply boot to a virtualized Windows 8 environment that I can manage from the hypervisor, with no mucking about with a host OS.
    2. If the above is possible, how much of a performance overhead is this going to have on my system? My PC is more than well equipped to handle a VM or two, but as I use it for gaming and other resource-intensive applications, I would not want to do this if there's, say, a 5%+ performance loss.
    Has anyone done this? Any recommendations?

    I don't know much about Hyper-V, but I use VMware virtual machines all the time for tinkering.  That's a good way to keep from doing potentially damaging things to your bootable host system.  I can restore a VM to an earlier configuration in an eyeblink - VERY handy - and Hyper-V does offer the same thing: snapshots (called checkpoints in the newer tooling).
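    For example (VM name is a placeholder):
    Checkpoint-VM -Name "DailyPC" -SnapshotName "before-tinkering"                  # snapshot before tinkering
    Restore-VMSnapshot -VMName "DailyPC" -Name "before-tinkering" -Confirm:$false   # roll back in one step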
    Keep in mind a virtual machine will have lower performance than the host system, so unless you just have overwhelming resources and power perhaps you'll want to keep to using your host system for most things.
    There are a lot of good habits and things you can do to keep a system stable.  I noticed you mentioned having to reinstall Windows quite often.  I find that unacceptable, and in fact every system I've run - going all the way back to Windows
    95 - I've only ever installed once.  I'm not kidding.  I've even restored Windows from backup onto a replacement system on two different occasions and was back up and running with a fully configured system in very short order - saving all the hassle
    of reinstallation, copying of data, etc.  I've documented how I manage systems in my books.
    First and foremost, I'd suggest NEVER running an automatic "cleaner" application.  Would you invite a stranger into your home to just delete whatever they feel like?  Of course not.
    But there are a few things you should do from time to time to keep a system running healthy, including bettering your default security environment and running a SysInternals tool called Autoruns to look over all the programs that have installed themselves
    to run in the background.  I have MOST of those disabled, because they're simply not needed!
    To beef up security, probably the best thing you can do is to download something called the MVPS hosts file, which blocks named access to literally tens of thousands of parasite web sites.  The key is that it does so passively, meaning you're not running
    any more software.  It just causes Windows to locally resolve parasite web site names to address 0.0.0.0.  It's very effective.  Beyond that, reconfigure Internet Explorer to not run ActiveX from any site in the Internet Zone.  Finally,
    replace Windows' own security software with something better - I prefer Avast! myself.
    We've already discussed using virtual machines to do your poking around.  That's really effective.
    I encourage you to get into virtualization.  It opens up whole new realms of possibilities.
    Good luck!
    -Noel
    Detailed how-to in my eBooks:  
    Configure The Windows 7 "To Work" Options
    Configure The Windows 8 "To Work" Options

  • Windows Server 2003 R2 Backup with Hyper-V in DPM

    Hello,
    I am trying to back up multiple Windows Server 2003 R2 VMs on Hyper-V (Windows Server 2012 R2) with DPM.
    When I try to do this I get the following errors:
    On the Hyper-V Host:
    Source: VSS ID:8229
    A VSS writer has rejected an event with error 0x800423f4, The writer experienced a non-transient error.  If the backup process is retried,
    the error is likely to reoccur.
    . Changes that the writer made to the writer components while handling the event will not be available to the requester. Check the event log for related events from the application hosting the VSS writer.
    Operation:
       PrepareForSnapshot Event
    Context:
       Execution Context: Writer
       Writer Class Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Name: Microsoft Hyper-V VSS Writer
       Writer Instance ID: {0b5be050-91e2-41a6-9207-414f1291f975}
       Command Line: C:\Windows\system32\vmms.exe
       Process ID: 5312
    Source: Hyper-V-VMMS ID: 18014
    Checkpoint operation for 'VMName' was cancelled. (Virtual machine ID D605FC49-D379-4C60-A730-485C8E9279DC)
    Source: Hyper-V-VMMS ID: 18012
    Checkpoint operation for 'VMName' failed. (Virtual machine ID D605FC49-D379-4C60-A730-485C8E9279DC)
    Source: Hyper-V-VMMS ID: 10150
    Could not create backup checkpoint for virtual machine 'VMName': Element not found. (0x80070490). (Virtual machine ID D605FC49-D379-4C60-A730-485C8E9279DC)
    Source: Hyper-V-Worker ID: 3280
    'VMName' could not initiate a checkpoint operation: Element not found. (0x80070490). (Virtual machine ID D605FC49-D379-4C60-A730-485C8E9279DC)
    Source: Hyper-V-VMMS ID: 16010
    The operation failed.
    The Output of vssadmin list writers on the Hyper-V Host looks like this:
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {0b5be050-91e2-41a6-9207-414f1291f975}
       State: [8] Failed
       Last error: Non-retryable error
    In the Windows Server 2003 R2 VM there are no errors in the eventlog.
    Only the information, that VSS is started and stopped after a while.
    When I do a backup with ntbackup inside the VM, everything runs fine.
    When I create a checkpoint with Hyper-V, everything runs fine, too.
    The integration services are up to date, and get-vmintegrationservice -vmname VMName shows everything is enabled and OK.
    On the Hyper-V host there are 13 VMs. 3 of them are running Windows Server 2003 R2 and 2 of them are showing this error, so it seems not to be a general problem with Windows Server 2003 R2.
    I tried the backup on a different Hyper-V host with no success.
    How can I narrow down the cause of this error?
    Thanks.
    greets,
    torsten

    Thank you to everyone in this thread!
    After digging through VSS and Hyper-V dead-ends, the solution above worked for me in a W2012 R2 environment.
    I only had this problem trying to back up W2012 R2 Hyper-V VMs stored on a CSV, hosted on a W2012 R2 Host Cluster, backing up from a Host node.  I could back up from within the guests fine.  Both Windows Server Backup and DPM 2012 R2
    gave VSS errors on the cluster hosts. 
    In fact, Windows Server Backup complains with 80780189 that it won't back up an application with files on a CSV, which is a common cluster configuration. This is another caveat in a minefield of MS caveats buried in fine print, which often negate
    configurations touted by MS Marketing: things that sound great on a sell sheet but in practice are impractical or just not production-ready because of these restrictions.
    I disabled Backup in Integration Services, performed an offline backup from DPM 2012 R2 (which did briefly affect serving the workloads on the VM), which succeeded, and then re-enabled Backup in Integration Services.  DPM then continued to
    sync successfully after that for a day.  However, the problem returned after a period of heavy DPM backup activity, so I redistributed backups through various Protection Groups to spread the DPM load over a longer period, and this seems to
    have solved the problem.
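    For reference, the disable/re-enable step can be scripted (VM name is a placeholder; the integration service name below is as it appears on 2012 R2):
    Disable-VMIntegrationService -VMName "VM01" -Name "Backup (volume checkpoint)"
    # ...run the offline backup from DPM...
    Enable-VMIntegrationService -VMName "VM01" -Name "Backup (volume checkpoint)"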
    In general I had lots of problems installing, configuring, and now operating DPM, so the experience has not been smooth, but most of the problems have now been solved.  I find you have to "baby" this product and not push it too hard,
    otherwise things start breaking.
    Thanks again everyone!
    DPM Error ID 30111  0x800423F4
    DPM-EM Event ID 3106

  • SAP ECC 6.0 installation in windows 2008 clustering with db2 ERROR DB21524E

    Dear Sir,
    I am installing SAP ECC 6.0 on Windows 2008 clustering with DB2.
    I got one error in the "Configure the database for MSCS" phase. The error is DB21524E 'FAILED TO CREATE THE RESOURCE DB2 IP PRD' THE CLUSTER NETWORK WAS NOT FOUND.
    DB2_INSTANCE=DB2PRD
    DB2_LOGON_USERNAME=iil\db2prd
    DB2_LOGON_PASSWORD=XXXX
    CLUSTER_NAME=mscs
    GROUP_NAME=DB2 PRD Group
    DB2_NODE=0
    IP_NAME = DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    IP_NETWORK=public
    NETNAME_NAME=DB2 NetName PRD
    NETNAME_VALUE=dbgrp
    NETNAME_DEPENDENCY=DB2 IP PRD
    DISK_NAME=Disk M:
    TARGET_DRVMAP_DISK=Disk M
    Please help me, since I am already running late with this installation; I need to run the db2mscs utility to create the resource.
    Best regards,
    Manjunath G

    Hello Manjunath.
    This looks like a configuration problem.
    Please check if IP_NETWORK is set to the name of your network adapter and
    if your IP_ADDRESS and IP_SUBNET are set to the correct values.
    Note:
    - IP_ADDRESS is a new IP address that is not used by any machine in the network.
    - IP_NETWORK is optional
    If you still get the same error, debug your db2mscs.exe call.
    See the answer from Adam Wilson:
    Can you run the following and check the output:
    db2mscs -f <path>\db2mscs.cfg -d <path>\debug.txt
    I suspect you may see the following error in debug.txt:
    Create_IP_Resource fnc_errcode 5045
    If you see fnc_errcode 5045: error 5045, which is a Windows error, means ERROR_CLUSTER_NETWORK_NOT_FOUND. It occurs because Windows couldn't find the MSCS network called "public", as indicated by IP_NETWORK. The IP_NETWORK parameter must be set to the name of an
    MSCS network, so run the Cluster Admin GUI and expand Cluster Configuration -> Network to view all available MSCS networks and check whether "public" is one of them.
    However, the IP_NETWORK parameter is optional and can be commented out. In that case the first MSCS network detected by the system is used.
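    Based on your original file, the change would look something like this (assuming '#' marks a comment in db2mscs.cfg; alternatively, set IP_NETWORK to the exact network name shown in the Cluster Admin GUI):
    IP_NAME=DB2 IP PRD
    IP_ADDRESS=192.168.16.27
    IP_SUBNET=255.255.0.0
    #IP_NETWORK=public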
    Best regards,
    Hinnerk Gildhoff

  • Exporting data clusters with type version

    Hi all,
    let's assume we are saving some ABAP data as a cluster to the database using the EXPORT ... TO DATABASE functionality, e.g.
    EXPORT VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    Some days later, the data can be imported
    IMPORT VBAK TO LS_VBAK VBAP TO LT_VBAP FROM DATABASE INDX(QT) ID 'TEST'.
    Some months or years later, however, the IMPORT may crash: Since it is the most normal thing in the world that ABAP types are extended, some new fields may have been added to the structures VBAP or VBAK in the meantime.
    The data are not lost, however: Using method CL_ABAP_EXPIMP_UTILITIES=>DBUF_IMPORT_CREATE_DATA, they can be recovered from an XSTRING. This will create data objects apt to the content of the buffer. But the component names are lost - they get auto-generated names like COMP00001, COMP00002 etc., replacing the original names MANDT, VBELN, etc.
    So a natural question is how to save the type info ( = metadata) for the extracted data together with the data themselves:
    EXPORT TYPES FROM LT_TYPES VBAK FROM LS_VBAK VBAP FROM LT_VBAP TO DATABASE INDX(QT) ID 'TEST'.
    The table LT_TYPES should contain the meta type info for all exported data. For structures, this could be a DDFIELDS-like table containing the component information. For tables, additionally the table kind, key uniqueness and key components should be saved.
    Actually, LT_TYPES should contain persistent versions of CL_ABAP_STRUCTDESCR, CL_ABAP_TABLEDESCR, etc. But it seems there is no serialization provided for the RTTI type info classes.
    (In an optimized version, the type info could be stored in a separate cluster, and being referenced by a version number only in the data cluster, for efficiency).
    In the import step, the LT_TYPES could be imported first, and then instances for these historical data types could be created as containers for the real data import (here, I am inventing a class zcl_abap_expimp_utilities):
    IMPORT TYPES TO LT_TYPES FROM DATABASE INDX(QT) ID 'TEST'.
    DATA(LO_TYPES) = ZCL_ABAP_EXPIMP_UTILITIES=>CREATE_TYPE_INFOS( LT_TYPES ).
    assign lo_types->data_object('VBAK')->* to <LS_VBAK>.
    assign lo_types->data_object('VBAP')->* to <LT_VBAP>.
    IMPORT VBAK TO <LS_VBAK> VBAP TO <LT_VBAP> FROM DATABASE INDX(QT) ID 'TEST'.
    Now the data can be recovered with their historical types (i.e. the types they had when the export statement was performed) and processed further.
    For example, structures and table-lines could be mixed into the current versions using MOVE-CORRESPONDING, and so on.
    My question: Is there any support from the standard for this functionality: Exporting data clusters with type version?
    Regards,
    Rüdiger

    The IMPORT statement works fine if the target internal table has all fields of the source internal table, plus some additional fields at the end (something like an append structure of VBAK).
    Here is the snippet used.
    TYPES:
      BEGIN OF ty,
        a TYPE i,
      END OF ty,
      BEGIN OF ty2.          " ty2 = ty plus an extra field appended at the end
        INCLUDE TYPE ty.
    TYPES:
        b TYPE i,
      END OF ty2.
    DATA: lt1 TYPE TABLE OF ty,
          ls  TYPE ty,
          lt2 TYPE TABLE OF ty2.
    ls-a = 2. APPEND ls TO lt1.
    ls-a = 4. APPEND ls TO lt1.
    " Export rows of the narrow type and import them into the wider type.
    EXPORT table = lt1 TO MEMORY ID 'ZTEST'.
    IMPORT table = lt2 FROM MEMORY ID 'ZTEST'.
    I guess the IMPORT statement would also behave fine if the current VBAK had more fields than the older VBAK.

  • Problem with clustering with JBoss server

    Hi,
    Its a HUMBLE REQUEST TO THE EXPERIENCED persons.
    I am new to clustering. My objective is to attain clustering with load balancing and/or failover in JBoss server. I have two JBoss servers running on two different IP addresses, which form my cluster. I could successfully perform farm (all/farm) deployment
    in my cluster.
    I believe that if clustering is enabled and one of the servers (s1) goes down, then the other (s2) will serve the requests coming to s1. Am I correct? Or is that true only in the case of "failover clustering"? If it is correct, what are all the things I have to do to achieve it?
    As I am new to the topic, can anyone explain how a simple application (say, getting a value from a user and storing it in the database; assume everything is in a WAR file) can be deployed with load balancing and failover support, rather than going into clustering EJBs or anything difficult to understand.
    Kindly help me in this matter. At least give me some hints and I'll learn from there, because I couldn't find a step-by-step procedure explaining which configuration files are to be changed (and how) to achieve this, nor books explaining this beyond the usual theoretical concepts.
    Thanking you in advance
    with respect
    abhirami

    Hi,
    In this scenario you can use a load balancer instead of failover clustering.
    I would suggest you create an Apache proxy to redirect requests across the many JBoss instances.
    Rgds,
    Kathir

  • CSA 5.1 Agent Installation on Microsoft Clusters with Teamed Broadcom NICs

    I'm searching all over Cisco.com for information on installing the CSA 5.1 agent on Microsoft clusters with teamed Broadcom NICs, but I can't find any information other than "this is supported" in the installation guide.
    Does anyone know if there is a process or procedure that should be followed to install this? For example, some questions that come to mind are:
    - Do the cluster services need to be stopped?
    - Should the cluster be broken and then rebuilt?
    - Is there any documentation indicating this configuration is approved by Microsoft?
    - Are there case studies or other documentation on previous similar installations and/or lessons learned?
    Thanks in advance,
    Ken

    Ken, you might just end up being the case study! Do you have a non-production cluster to test with?
    If not, and you've already completed pilot testing, you probably have an idea of what you want to do with the agent. Do you have to stop the cluster for other software installations? I guess you might ask MS about breaking the cluster, since it's their cluster.
    The only caveat I've seen with teamed NICs is that when the agent tries to contact the MC it may time out a few times. You could probably increase the polling time if this happens.
    I'd create an agent kit that belongs to a group in test mode with minimal or no policies attached, test it first, and install it on one of the nodes. If that works OK, you could gradually increase the policies and rules until you are comfortable that it is tuned correctly, and then switch to protect mode.
    Hope this helps...
    Tom S

  • Clustering with wl6.1 - Problems

    Hi,
              After reading a bit about clustering with weblogic 6.1 (and thanks to
              this list), I have done the following:
              1. Configure machines - two boxes (Solaris and Linux).
              2. Configure servers - weblogic 6.1 running on both at port 7001.
              3. Administration server is Win'XP. Here is the snippet of config.xml
              on the Administration Server:
              <Server Cluster="MyCluster" ListenAddress="192.168.1.239"
              Machine="dummy239" Name="wls239">
              <Log Name="wls239"/>
              <SSL Name="wls239"/>
              <ServerDebug Name="wls239"/>
              <KernelDebug Name="wls239"/>
              <ServerStart Name="wls239"/>
              <WebServer Name="wls239"/>
              </Server>
              <Server Cluster="MyCluster" ListenAddress="192.168.1.131"
              Machine="dummy131" Name="wls131">
              <Log Name="wls131"/>
              <SSL Name="wls131"/>
              <ServerDebug Name="wls131"/>
              <KernelDebug Name="wls131"/>
              <ServerStart Name="wls131"
              OutputFile="C:\bea\wlserver6.1\.\config\NodeManagerClientLogs\wls131\startserver_1029504698175.log"/>
              <WebServer Name="wls131"/>
              </Server>
              Problems:
              1. I can't figure out how I set the "OutputFile" parameter for the
              server "wls131".
              2. I have NodeManager started on 131 listening on port 5555. But when
              I try to start server "wls131" from the Administration Server, I get
              the following error:
              <Aug 16, 2002 6:56:58 AM PDT> <Error> <NodeManager> <Could not start
              server 'wls131' via Node Manager - reason: '[SecureCommandInvoker:
              Could not create a socket to the NodeManager running on host
              '192.168.1.131:5555' to execute command 'online null', reason:
              Connection refused: connect. Ensure that the NodeManager on host
              '192.168.1.131' is configured to listen on port '5555' and that it is
              actively listening]'
              Any help will be greatly appreciated.
              TIA,
              

    I have made some progress:
              1. The environment settings on 131 were missing. I executed setEnv.sh
              to setup the required environment variables.
              2. nodemanager.hosts (on 131) had the following entries earlier:
                   # more nodemanager.hosts
                   127.0.0.1
                   localhost
                   192.168.1.135
              I changed it to:
                   #more nodemanager.hosts
                   192.168.1.135
              3. The Administration Server (135) did not have any listen Address
              defined (since it was working without it), but since one of the errors
              thrown by NodeManager on 131 was - "could not connect to
              localhost:70001 via HTTP", I changed the listen Address to
              192.168.1.135 (instead of the null).
              4. I deleted all the logs (NodeManagerInternal logs on 131) and all
              log files on NodeManagerClientLogs on 135.
              5. Restarted Admin Server. Restarted NodeManager on 131.
              NodeManagerInternalLogs on 131 has:
                   [root@]# more NodeManagerInternal_1029567030003
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting listenAddress to '192.168.1.131'>
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting listenPort to '5555'>
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting WebLogic home to '/home/weblogic/bea/wlserver6.1'>
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting java home to '/home/weblogic/jdk1.3.1_03'>
                    <Aug 17, 2002 12:20:33 PM IST> <Info> <[email protected]:5555> <SecureSocketListener: Enabled Ciphers >
                    <Aug 17, 2002 12:20:33 PM IST> <Info> <[email protected]:5555> <TLS_RSA_EXPORT_WITH_RC4_40_MD5>
                    <Aug 17, 2002 12:20:33 PM IST> <Info> <[email protected]:5555> <SecureSocketListener: listening on 192.168.1.131:5555>
                   And the wls131 logs contain:
                   [root@dummy131 wls131]# more config
                   #Saved configuration for wls131
                   #Sat Aug 17 12:24:42 IST 2002
                   processId=18437
                   savedLogsDirectory=/home/weblogic/bea/wlserver6.1/NodeManagerLogs
                   classpath=NULL
                   nodemanager.debugEnabled=false
                   TimeStamp=1029567282621
                   command=online
                   java.security.policy=NULL
                   bea.home=NULL
                   weblogic.Domain=domain
                   serverStartArgs=NULL
                   weblogic.management.server=192.168.1.135\:7001
                   RootDirectory=NULL
                   nodemanager.sslEnabled=true
                   weblogic.Name=wls131
                   The error generated for the client (131) was:
                   [root@dummy131 wls131]# more wls131_error.log
                   The WebLogic Server did not start up properly.
                   Exception raised:
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.
                   DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   Reason: Fatal initialization exception
                   and the output on the admin server (135) is:
                    <Aug 17, 2002 12:24:42 PM IST> <Info> <[email protected]:5555> <BaseProcessControl: saving process id of Weblogic Managed server 'wls131', pid: 18437>
                   Starting WebLogic Server ....
                   Connecting to http://192.168.1.135:7001...
                    <Aug 17, 2002 12:24:50 PM IST> <Emergency> <Configuration Management> <Errors detected attempting to connect to admin server at 192.168.1.135:7001 during initialization of managed server (192.168.1.131:7001). The reported error was: < weblogic.security.acl.DefaultUserInfoImpl > This condition generally results when the managed and admin servers are using the same listen address and port.>
                   <Aug 17, 2002 12:24:50 PM IST> <Emergency> <Server> <Unable to
              initialize the server: 'Fatal initialization exception
                   Throwable: weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   '>
                   The WebLogic Server did not start up properly.
                   Exception raised:
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   Reason: Fatal initialization exception
               6. Now from the client (131) error, I thought it was something to do with security. So I tried to start WebLogic manually (connected as the same user). Curiously enough, it does start (it threw some errors for some EJBs, but I got the final message):
                   <Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
              <ListenThread listening on port 7001>
                   <Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
              <SSLListenThread listening on port 7002>
                   <Aug 17, 2002 12:30:40 PM IST> <Notice> <WebLogicServer> <Started
              WebLogic Admin Server "myserver" for domain "mydomain" running in
              Production Mode>
              7. As you can see the domain on the client (131) is "mydomain". But
              shouldn't the Admin server be 192.168.1.135, since this is what I have
              configured for the NodeManager? Or is it that the error occurs because
              the Admin server Node Manager is configured to work with is 135, while
              in the default scripts the admin server is itself? I'm confused :-)
              Help, anyone?
              
