Shared disk for WebLogic cluster

Hi Gurus,
I've googled the subject but didn't find anything specific.
Could somebody explain:
Is it possible/certified to create a WebLogic domain on a clustered file system (e.g. OCFS2)?
I would like to build a cluster with a common domain home.
Regards,
Mikhail

Similar Messages

  • ACFS recommendations for WebLogic cluster

    Hi,
    Can someone help me find step-by-step material on using ACFS with WebLogic clustering?
    Originally I started a thread here:
    Re: ACFS recommendations for WebLogic cluster
    Thanks in advance for your valuable time.
    Datla

    The first question you need to ask yourself is: for what purpose do I need this type of filesystem when using a WebLogic cluster?
    The first thing that comes to mind is the migration of so-called singleton services, such as JMS servers, persistent stores and JTA.
    For example, when a machine fails, we must bring the services running on the failed machine up on other machines. The JTA
    service plays a critical role in recovery from a failure scenario where transactions are involved. In-flight transactions can hold locks
    on underlying resources. If the transaction manager is not available to recover these transactions, resources may hold on to these
    locks for a long time, which makes it difficult for an application to function properly. JTA service migration is possible only if the server's
    default persistent store (where the JTA logs are kept) is accessible to the server to which the service will migrate.
    This is where a shared storage mechanism comes into play: it stores the files which need to be accessible from every server. The ACFS
    concept that is useful in this scenario is the mount model (http://download.oracle.com/docs/cd/E14072_01/server.112/e10500/asmfilesystem.htm#CACJGEAC).
    To set up a mount point you can follow the steps presented here: http://download.oracle.com/docs/cd/E15051_01/wls/docs103/cluster/service_migration.html#wp1049463
    This link contains the steps to configure JMS migration: http://download.oracle.com/docs/cd/E15051_01/wls/docs103/cluster/service_migration.html#wp1065826
    This link contains the steps to configure JTA migration: http://download.oracle.com/docs/cd/E15051_01/wls/docs103/cluster/service_migration.html#wp1054024
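    A rough sketch of the shared-mount side of this, assuming an ACFS volume has already been created (the device name and mount point below are placeholders, not from the thread):

    # run on each cluster node
    mkdir -p /u01/shared/wls_stores
    mount -t acfs /dev/asm/wlsvol-123 /u01/shared/wls_stores
    # then point the migratable server's default/custom persistent store directory
    # (JMS stores, JTA tlogs) at /u01/shared/wls_stores in the WebLogic console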

  • Shared Disks For RAC

    Hi,
    I plan to use shared disks to create an Oracle RAC using ASM. What options do I have? OCFS2, or any other option?
    Can someone point me to a document on how I can use shared disks for RAC?
    Thanks.

    javed555 wrote:
    I plan to use shared disks to create Oracle RAC using ASM. What options do I have?
    You have two options:
    1. Create shared virtual, i.e. file-backed, disks. These files will be stored in /OVS/sharedDisk/ and made available to each guest.
    2. Expose physical devices directly to each guest, e.g. an LVM partition or a multipath LUN.
    With both options, the disks show up as devices in the guests and you would then provision them with ASM, exactly the same way as if your RAC nodes were physical.
    OCFS2 or NFS are required to create shared storage for Oracle VM Servers. This is to ensure the /OVS mount point is shared between multiple Oracle VM Servers.
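    For the first option, a minimal sketch of creating and attaching a file-backed shared disk might look like the following (image name, size and guest device are made up; 'w!' is the Xen/Oracle VM notation for a write-shareable disk):

    # create a 10 GB image in the shared repository
    dd if=/dev/zero of=/OVS/sharedDisk/asmdisk1.img bs=1M count=10240
    # then reference it from each guest's vm.cfg, for example:
    #   disk = [ ..., 'file:/OVS/sharedDisk/asmdisk1.img,xvdb,w!' ]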

  • Shared data for a cluster

    Hi,
    What is the best way to have some shared data within a cluster? This data needs
    to be in memory only, no need for DB persistence. But it can be updated (in memory
    only) and updates should be visible to any of the servers within the cluster.
    For example - I want to create a hashtable to be shared within the cluster and the
    hashtable can be modified from time to time.
    One option is to use an entity bean, but this data is not really required to persist
    in the DB. Is there any other option?
    thanks
    - saurabh

    Hi Saurabh,
    What is the best way to have some shared data
    within a cluster? This data needs to be in memory
    only, no need for DB persistence. But it can be
    updated (in memory only) and updates should be
    visible to any of the servers within the cluster.
    For example - I want to create a hashtable to be
    shared within the cluster and the hashtable can be
    modified from time to time.
    Tangosol Coherence does just that:
    :: http://www.tangosol.com/coherence.jsp
    It even supports the same Hashtable API!
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "Saurabh" <[email protected]> wrote in message
    news:3ed7741f$[email protected]..
    >

  • WFC without Shared Disks for SCVMM Failover

    An SCVMM failover cluster requires us to first have a WFC in place. If shared disk is the limitation, can a WFC be created without shared disks, just to fulfill the purpose of providing a WFC for the SCVMM failover cluster?

    Thanks for the reply.  Actually, design is not my decision unfortunately.
    I have gone ahead with a disk witness and used a shared VHDX as the quorum disk. I faced some issues when I live migrated, but once I enabled Guest Services (which is disabled by default) on both nodes, it started migrating fine.
    Please do share your thoughts on it, especially with respect to shared VHDX and live migration, and on the overall strategy, since MS always prefers a disk witness to a file share witness.

  • Shared disks for clusterware

    Hi:
    I am setting up Oracle Clusterware. I am trying to set it up on two nodes. This is on AIX, and I am trying the latest version, 11gR2. I am selecting the first option (Install and Configure Grid Infrastructure for a Cluster).
    I am trying to setup shared disk by mounting a file system of one system on the other using NFS.
    Here are my questions.
    I am creating user "oracle" on both the boxes. I am adding all the groups (like dba, oinstall etc.)
    Here are my questions:
    The NFS mount I have which is accessible by both the boxes should be used only for OCR (Oracle Cluster Registry) and Voting Disk (CF). I do not have ASM.
    Is it correct?
    Also
    The Oracle Base (e.g. /u01/app/grid) and Software Location (e.g. /u01/app/11.2.0/grid) can be local to each individual node.
    Is it correct?
    Those are the two questions I have. Also, will this work?
    I have tried to put everything on the shared disk (installation files, OCR, CF, base, inventory, etc.) but it looks like the second box (the one from where I did not install) is having problems, because it tries to change the user and group ownership of a few files when I run orainstRoot.sh and root.sh.
    Please let me know
    Thanks

    Hi,
    The NFS mount I have which is accessible by both the boxes should be used only for OCR (Oracle Cluster Registry) and Voting Disk (CF). I do not have ASM.
    Is it correct?
    Use the documentation for Oracle GI 11.2:
    http://docs.oracle.com/cd/E11882_01/install.112/e24614/storage.htm#CDEJIDFB
    Why not use ASM? It is safer and gives better performance.
    Also
    The Oracle Base (e.g. /u01/app/grid) and Software Location (e.g. /u01/app/11.2.0/grid) can be local to each individual node.
    Is it correct? Those are the two questions I have. Also, will this work?
    For Grid Infrastructure for a cluster installations, the Grid home must not be placed under one of the Oracle base directories, or under Oracle home directories of Oracle Database installation owners, or in the home directory of an installation owner. During installation, ownership of the path to the Grid home is changed to root. This change causes permission errors for other installations.
    http://docs.oracle.com/cd/E11882_01/install.112/e24614/preaix.htm#BABFCFIH
    Please follow the installation guide; all your questions here are easily answered in the documentation:
    http://docs.oracle.com/cd/E11882_01/install.112/e24614/preaix.htm
    Regards,
    Levi Pereira
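    For reference, a shared NFS mount for the OCR and voting files is usually made permanent with an /etc/fstab entry. A rough Linux-style illustration is below; the server name, export path and mount point are placeholders, and the exact mount options are platform-specific, so follow the storage guide linked above for the AIX values:

    # placeholder names; options shown are the Linux ones documented for Clusterware files on NFS
    nfsserver:/export/crs  /u02/oradata/crs  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600,actimeo=0  0 0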

  • Doubts about shared disk for RAC

    Hi All,
    I am really new to RAC. Even after reading various documents, I still have many doubts regarding the shared storage and file systems needed for RAC.
    1. Clusterware has to be installed on a shared file system like OCFS2. Which type of hard drive is required for OCFS2 so that it can be accessed from all nodes?
    Does it have to be an external hard drive, or can we use any simple hard disk for shared storage?
    If we use an external hard drive, does it need to be connected to a separate server altogether, or can it be connected to any one of the nodes in the cluster?
    Apart from the shared drives, approximately what size of hard disk is required for each node (for just a testing environment)?
    I'd sincerely appreciate a reply!
    Thanks in advance.

    Clusterware has to be installed on shared storage. RAC also requires shared storage for the database.
    Shared storage can be managed via many methods.
    1. Some sites using Linux or UNIX-based OSes choose to use RAW disk devices. This method is not frequently used due to the unpleasant management overhead and long-term manageability for RAW devices.
    2. Many sites use cluster filesystems. On Linux, Oracle offers OCFS2 as one (free) cluster filesystem (OCFS on Windows). Other vendors also offer add-on products for some OSes that provide supported cluster filesystems (such as GFS, GPFS, VxFS, and others). Supported cluster filesystems may be used for Clusterware files (OCR and voting disks) as well as database files. Check Metalink for a list of supported cluster filesystems.
    3. ASM can be used to manage shared storage used for database files. Unfortunately, due to architecture decisions made by Oracle, ASM cannot currently be used for Clusterware files (OCR and voting disks). It is relatively common to see ASM used for DB files and either RAW or a cluster filesystem used for Clusterware files. In other words, ASM and cluster filesystems and RAW are not mutually exclusive.
    As for hardware--I have not seen any hardware capable of easily connecting multiple servers to internal storage. So, shared storage is always (in my experience) housed externally. You can find some articles on OTN and other sites (search Google for them) that use firewire drives or a third computer running openfiler to provide the shared storage in test environments. In production environments, SAN devices are commonly employed to provide concurrent access to storage from multiple servers.
    Hope this helps!
    Message was edited by:
    Dan_Norris
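    As a concrete illustration of option 3: once a shared device (for example an iSCSI LUN exported from an openfiler box) is visible on every node, it is commonly stamped for ASM with ASMLib. The device and label names below are made up:

    oracleasm createdisk DATA1 /dev/sdb1   # run once, on one node
    oracleasm scandisks                    # run on the remaining nodes
    oracleasm listdisks                    # verify the label is visible everywhere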

  • VMWare Fusion & Shared Disks for RAC install

    Does anyone have any advice on how best to install a 2-node 10g RAC on a MacBook Pro? I've searched the forums here and haven't found much on this subject.
    I'm having problems setting up the shared disks to be used for the voting disk and ASM.
    Cheers,
    Mike

    This link might help you in what you are looking for.
    http://www.apple.com/itpro/profiles/rotech/
    Good luck,
    --MM
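    One approach that comes up in RAC-on-VMware write-ups is to put the voting/ASM disks in their own virtual disks on a separate SCSI bus and disable disk locking so both VMs can open them. The .vmx settings below are the ones commonly cited; I have not verified them on Fusion specifically, so treat them as a starting point rather than a recipe:

    disk.locking = "FALSE"
    diskLib.dataCacheMaxSize = "0"
    scsi1.present = "TRUE"
    scsi1.sharedBus = "virtual"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "shared_asm1.vmdk"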

  • Problem configuring front end host for weblogic cluster

    hi,
    I am using WebLogic 8 SP4. I have a cluster of WLI servers for which I am trying to configure a front-end host, using Apache HTTP Server 2.0.55.
    I copied the mod_wl_20.so file to the modules directory and added the following to the httpd.conf file:
    LoadModule weblogic_module modules/mod_wl_20.so
    <IfModule mod_weblogic.c>
    WebLogic http://myIP:9191,http://myIP:9192,http://myIP:9193
    ErrorPage http://myerrorpage1.mydomain.com/
    MatchExpression *.jsp
    MatchExpression *.xyz
    </IfModule>
    <Location /weblogic>
    SetHandler weblogic-handler
    PathTrim /weblogic
    </Location>
    However, the error page gets displayed and not the default WebLogic page.
    When I change the ErrorPage URL to that of my cluster URL (i.e. http://myIP:9191), the WebLogic default page gets displayed. I mean to say that the same URL does not work with the WebLogicCluster parameter but works with
    the ErrorPage parameter.
    If somebody who has configured a front-end host using the Apache HTTP server could clarify, it would be of great help.
    Thanks in advance

    The error.log displays the following (sorry, I forgot to add it above):
    [Thu Nov 24 15:46:40 2005] [notice] Apache/2.0.55 (Unix) configured -- resuming normal operations
    [Thu Nov 24 15:46:56 2005] [error] Port number specified in WebLogicCluster parameter specified in httpd.conf is not an integer less than 65535
    [Thu Nov 24 15:46:56 2005] [error] CONFIG_ERROR [line 1344 of ap_proxy.cpp]: Port number specified in WebLogicCluster parameter specified in httpd.conf is not an integer less than 65535

  • Setting up a shared disk for Time Machine

    Folks,
    I have a 24" Al iMac with an external 750GB disk allocated to time machine.
    This machine has a single account with administrative privileges "me".
    I have a second 20" snow iMac that I would like to backup to the same disk.
    It has 4 accounts, one of which has administrative privileges (also called "me").
    How do I need to configure sharing on the Al iMac to allow time machine on the snow iMac to see the drive on the Al iMac, and be able to backup/restore to this volume?
    I have searched the forums here but cannot seem to find this information.
    Thanks in advance.
    - ljm

    Turn on File Sharing in the "Sharing" preference pane on both computers (you might want to specify which users have access via file sharing). Then, on the snow iMac, connect to your aluminum iMac and mount the external drive so you can access it. Go to the TM settings, select the aluminum iMac's drive that is now shared and mounted, and start the initial backup. It will create a "sparsebundle" disk image on the backup drive and use that for the snow iMac's backup. Every time it writes to the image, it will mount it over the network, do a backup, and then eject it. Sometimes it won't back up to the image if you're not connected to the aluminum iMac; to solve this, select the mounted shared drive as a "login item" in the snow iMac's user preferences (for each user). This will create the connection on login, and backups should proceed accordingly.
    Message was edited by: Topher Kessler

  • How to make iTunes watch a shared disk for new music?

    I have a small home network with a shared disk with all the music we purchased ripped neatly onto it.
    I would like my iTunes software to watch this disk, let me play music from there, and create playlists in order to update my iPod.
    I do not wish to manually import every time, as existing songs are imported over and over, and if I clear the library and re-import, the playlists are erased.
    Please suggest a solution.
    Thanks

    Shalom, Ofer.
    iTunes doesn't offer any sort of folder watching feature.
    About "If the folder is added once more - I get duplicates":
    That shouldn't happen,
    unless your files on the server are WMAs or you have messed-up settings.

  • Extreme + Shared Disk through a NAT?

    Hi,
    I've done a search on this topic but can't find a thread that answers my question - so I beg forgiveness if this has been asked and answered many times...
    I've got an Extreme base station hosting a shared disk for a small business LAN. It works great.
    This past weekend I put in a D-Link router into one of the offices to help reduce traffic noise (I've got an Ethernet-based interface for outboard automation and it was previously very laggy). The router does exactly what I need it to - except I can't use the Airport Disks menu to mount the Airport Disk.
    The Disk shows up in the menu but returns a "Connection Failed" dialog box. I can connect to the Disk directly through Finder > Go > Connect to Server
    I'm thinking some ports need to be opened up.
    Anyone know where I can find that info?
    Or perhaps something else is going on?
    Thanks.
    - patrick

    Duane wrote:
    Please describe exactly how the D-Link and AirPort Extreme base station (AEBS) are connected.
    Hi Duane,
    Thanks for taking this on...
    PowerMac & Control Surface (both static IP) -> D-Link (NOT DHCP, in the 10.0.0.1 range)
    then
    DLink -> Netgear GigaE 20-port switch
    the AEBS goes:
    AEBS -> same Netgear switch
    then
    Netgear Switch -> Firewall/Router (static IP in 192.168.1.1 range) -> DSL
    Are you trying to go through the D-Link to get to the disk attached to the AEBS?
    Yup. Punch through the D-Link to the AEBS.
    Is the D-Link configured so that its DHCP server is enabled or disabled?
    Disabled.
    - pi
    Message was edited by: Patrick Inhofer

  • Use storage as shared disk

    We have storage connected to one server using an optical port. Can I make it a shared disk for a future Oracle Clusterware install (for 2 nodes)? Thanks.

    I probably wouldn't use iSCSI in this manner for a production database unless I had no other choice. My biggest issue is that one more piece is added to the configuration that can fail that can affect availability. If that other host goes down, so does the shared storage that is mounted to the database server. I might do this as a short-term solution to get me immediate access to disk, but I would then immediately procure additional disk storage and get that into the configuration as soon as it arrives.
    Cheers,
    Brian

  • Mapping shared disk mount in Linux

    Hi ,
    We are using a shared disk for our SOA clustered environment. Everything is working fine, and disk sharing between the two servers is fine. The issue is that whenever we reboot the servers, the mount of the node 1 disk on node 2 always disappears and I have to manually mount the disk on node 2. Is there any way to automate mounting the disk after a reboot on node 2?
    OS : OEL 5.4(64bit).
    11G SOA/WLS

    hi,
    it's a bit unclear to me - would you please describe how you managed to map the other drives/volumes?
    Please post:
    a listing of /dev/mapper/*
    the content of /etc/rc.local
    Are you using multipath or ASMLib?
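    If the disk is exported from node 1 over NFS (an assumption - the thread does not say how it is shared), the usual way to make the mount survive a reboot on node 2 is a permanent /etc/fstab entry rather than a manual mount, for example:

    # placeholder export path and mount point; _netdev delays mounting until the network is up
    node1:/u01/shared  /u01/shared  nfs  rw,hard,intr,_netdev  0 0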

  • Hyper-V cluster Backup causes virtual machine reboots for common Cluster Shared Volumes members.

    I am having a problem where my VMs are rebooting while other VMs that share the same CSV are being backed up. I have provided all the information that I have gathered to this point below. If I have missed anything, please let me know.
    My HyperV Cluster configuration:
    5 Node Cluster running 2008R2 Core DataCenter w/SP1. All updates as released by WSUS that will install on a Core installation
    Each Node has 8 NICs configured as follows:
     NIC1 - Management/Campus access (26.x VLAN)
     NIC2 - iSCSI dedicated (22.x VLAN)
     NIC3 - Live Migration (28.x VLAN)
     NIC4 - Heartbeat (20.x VLAN)
     NIC5 - VSwitch (26.x VLAN)
     NIC6 - VSwitch (18.x VLAN)
     NIC7 - VSwitch (27.x VLAN)
     NIC8 - VSwitch (22.x VLAN)
    The following additional hotfixes were installed per MS guidance (either during the build or when troubleshooting a stability issue in Jan 2013):
     KB2531907 - Was installed during original building of cluster
     KB2705759 - Installed during troubleshooting in early Jan2013
     KB2684681 - Installed during troubleshooting in early Jan2013
     KB2685891 - Installed during troubleshooting in early Jan2013
     KB2639032 - Installed during troubleshooting in early Jan2013
    Original cluster build was two hosts with quorum drive. Initial two hosts were HST1 and HST5
    Next host added was HST3, then HST6 and finally HST2.
    NOTE: HST4 hardware was used in a different project and HST6 will eventually become HST4
    Validation of the cluster comes with warnings for the following things:
     Updates inconsistent across hosts
      I have tried to manually install "missing" updates and they were not applicable
      Most likely cause is different build times for each machine in cluster
       HST1 and HST5 are both the same level because they were built at same time
        HST3 was not rebuilt from scratch due to time constraints; it actually goes back to pre-SP1 and has a larger list of updates that the others are lacking, hence the inconsistency
       HST6 was built from scratch but has more updates missing than 1 or 5 (10 missing instead of 7)
       HST2 was most recently built and it has the most missing updates (15)
     Storage - List Potential Cluster Disks
      It says there are Persistent Reservations on all 14 of my CSV volumes and thinks they are from another cluster.
      They are removed from the validation set for this reason. These iSCSI volumes/disks were all created new for
      this cluster and have never been a part of any other cluster.
     When I run the Cluster Validation wizard, I get a slew of Event ID 5120 from FailoverClustering. Wording of error:
      Cluster Shared Volume 'Volume12' ('Cluster Disk 13') is no longer available on this node because of
      'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the
      volume is reestablished.
      Under Storage and Cluster Shared Volumes in Failover Cluster Manager, all disks show online and there is no negative effect from the errors.
    Cluster Shared Volumes
     We have 14 CSVs that are all iSCSI attached to all 5 hosts. They are housed on an HP P4500G2 (LeftHand) SAN.
     I have limited the number of VMs to no more than 7 per CSV as per best practices documentation from HP/Lefthand
     VMs in each CSV are spread out amongst all 5 hosts (as you would expect)
    Backup software we use is BackupChain from BackupChain.com.
    Problem we are having:
     When a backup kicks off for a VM, all VMs on the same CSV reboot without warning. This normally happens within seconds of the backup starting.
    What I have done to troubleshoot this:
     We have tried rebalancing our backups
      Originally, I had backup jobs scheduled to kick off on Friday or Saturday evening after 9pm
      2 or 3 hosts would be backing up VMs (Serially; one VM per host at a time) each night.
      I changed my backup schedule so that, of my 90 VMs, only one per CSV is backing up at the same time
       I mapped out my Hosts and CSVs and scheduled my backups to run on week nights where each night, there
       is only one VM backed up per CSV. All VMs can be backed up over 5 nights (there are some VMs that don't
       get backed up). I also staggered the start times for each Host so that only one Host would be starting
       in the same timeframe. There was some overlap for Hosts that had backups that ran longer than 1 hour.
      Testing this new schedule did not fix my problem. It only made it clearer. As each backup timeframe
      started, whichever CSV the first VM to start was on would have all of its VMs reboot and come back up.
     I then thought maybe I was still overloading the network, so I decided to disable all of the scheduled backups
     and run them manually. Kicking off a backup on a single VM will, in most cases, cause the reboot of common
     CSV members.
     Ok, maybe there is something wrong with my backup software.
      Downloaded a Demo of Veeam and installed it onto my cluster.
      Did a test backup of one VM and I had no problems.
      Did a test backup of a second VM and I had the same problem. All VMs on same CSV rebooted
     Ok, it is not my backup software. Apparently it is VSS. I have looked through various websites. The best troubleshooting
     site I have found for VSS in one place is on BackupChain.com (http://backupchain.com/hyper-v-backup/Troubleshooting.html)
     I have tested almost every process on their list and I will lay out the results below:
      1. I have rebooted HST6 and problems still persist
      2. When I run VSSADMIN delete shadows /all, I have no shadows to delete on any of my 5 nodes
       When I run VSSADMIN list writers, I have no error messages on any writers on any node...
      3. When I check the listed registry key, I only have the build in MS VSS writer listed (I am using software VSS)
      4. When I run the VSSADMIN Resize ShadowStorage command, there is no shadow storage on any node
      5. I have completed the registration and service cycling on HST6 as laid out here and most of the stuff "errors"
       Only a few of the DLL's actually register.
      6. HyperV Integration Services were reconciled when I worked with MS in early January and I have no indication of
       further issue here.
      7. I did not complete the step to delete the Subscriptions because, again, I have no error messages when I list writers
      8. I removed the Veeam software that I had installed to test (it hadn't added any VSS Writer anyway though)
      9. I can't realistically uninstall my HyperV and test VSS
      10. Already have latest SPs and Updates
      11. This is part of step 5 so I already did this. This seems to be a rehash of various other strategies
     I have used the VSS Troubleshooter that is part of BackupChain (Ctrl-T) and I get the following error:
      ERROR: Selected writer 'Microsoft Hyper-V VSS Writer' is in failed state!
      - Status: 8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
      - Writer Failure code: 0x800423f0 (<Unknown error code>)
      - Writer ID: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      - Instance ID: {d55b6934-1c8d-46ab-a43f-4f997f18dc71}
      VSS snapshot creation failed with result: 8000FFFF
    VSS errors in event viewer. Below are representative errors I have received from various Nodes of my cluster:
    I have various of the below spread out over all hosts except for HST6
    Source: VolSnap, Event ID 10, The shadow copy of volume took too long to install
    Source: VolSnap, Event ID 16, The shadow copies of volume x were aborted because volume y, which contains shadow copy storage for this shadow copy, was force dismounted.
    Source: VolSnap, Event ID 27, The shadow copies of volume x were aborted during detection because a critical control file could not be opened.
    I only have one instance of each of these and both of the below are from HST3
    Source: VSS, Event ID 12293, Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an
    unexpected error.
    Source: VSS, Event ID 8193, Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult.  hr = 0x80070057, The parameter is incorrect.
    So, basically, everything I have tried has resulted in no success towards solving this problem.
    I would appreciate any assistance that can be provided.
    Thanks,
    Charles J. Palmer
    Wright Flood

    Tim,
    Thanks for the reply. I ran the first two commands and got this:
    Name                                 Role  Metric
    Cluster Network 1                       3   10000
    Cluster Network 2 - HeartBeat           1    1300
    Cluster Network 3 - iSCSI               0   10100
    Cluster Network 4 - LiveMigration       1    1200
    When you look at the properties of each network, this is how I have it configured:
    Cluster Network 1 - Allow cluster network communications on this network and Allow clients to connect through this network (26.x subnet)
    Cluster Network 2 - Allow cluster network communications on this network. New network added while working with Microsoft support last month. (28.x subnet)
    Cluster Network 3 - Do not allow cluster network communications on this network. (22.x subnet)
    Cluster Network 4 - Allow cluster network communications on this network. Existing but not configured to be used by VMs for Live Migration until MS corrected. (20.x subnet)
    Should I modify my metrics further, or are the current values sufficient?
    I worked with an MS support rep because my cluster (once I added the 5th host) stopped being able to live migrate VMs, and I had VMs jumping between hosts on startup. It was a mess for a couple of days. They had me add the Heartbeat network as part of the solution
    to my problem. There doesn't seem to be anywhere to configure a network specifically for CSV so I would assume it would use (based on my metrics above) Cluster Network 4 and then Cluster Network 2 for CSV communications and would fail back to the Cluster Network
    1 if both 2 and 4 were down/inaccessible.
    As to the iSCSI getting a second NIC, I would love to, but management wants separation of our VMs by subnet and role, which is why I need the 4 VSwitch NICs. I would have to look at adding an additional quad-port NIC to my servers and would have to
    use half-height cards for 2 of my 5 servers for that to work.
    But, on that note, it doesn't appear to actually be a bandwidth issue. I can run a backup for a single VM and see nothing on the network card (it apparently causes the reboots before any real data has even started to pass), and still the problem occurs.
    As to BackupChain, I have been working with the vendor and they are telling me the issue is with VSS. If you go to this page (http://backupchain.com/Hyper-V-Backup-Software.html)
    they say they support CSVs. Their tech support has been very helpful but unfortunately, nothing has fixed the problem.
    What is annoying is that not every backup causes a problem. I have a daily backup of one of our machines that runs fine without initiating any additional reboots. But almost every other backup job will trigger the VMs on the common CSV to reboot.
    I understood about the updates, but I had to "prove" it to the MS tech I was on the phone with, hence why I brought it up. I understand about the storage as well. Why give a warning for something that is working, though? I think it is just a poor indicator
    if the report doesn't explain that.
    At a loss for what else I can do,
    Charles J. Palmer
