Hyper-V Guest Disk Configuration and SAN placement

In a nutshell, I have a test 2012 R2 cluster (2-node) attached by fibre to a SAN.  The SAN at present is configured with 2 volumes, one RAID 10 and the other RAID 5.
2 questions really.  Is it best practice for each guest to have two virtual hard disks, one for the OS and the other for data?  If so, would it be appropriate to put the OS VHDX on the RAID 10 volume and the data on the RAID 5 volume?
I'm trying to get a general rule of thumb rather than cater for specifics such as SQL Server.  I'm really looking at file servers and the like initially.
Ian

Hi Ian, I have a similar setup in production: 5 CSV volumes, 2 of them using SATA disks and 3 high-performance SAS, based on an EMC VNX. I have classed these storage volumes as Gold and Bronze.
We have some guests that are file servers, so the OS is on the fast disk and the data is on the slower SATA disk.
I used SCVMM to storage-migrate and split the VHDX files up across CSV volumes.
I did see this in an article on Hyper-V, and it's a fully supported practice. I think it's more down to requirements than best practice: make use of any slower disk you have rather than wasting the high-performance storage on applications that don't require it.
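As a sketch, the same split can also be done with the Hyper-V PowerShell module directly (the VM name and CSV paths below are made-up examples; SCVMM's storage migration does the equivalent with more orchestration):

```powershell
# Move only the data VHDX of a running VM to a different CSV,
# leaving the OS disk where it is (paths and VM name are examples).
Move-VMStorage -VMName 'FILESRV01' -VHDs @(
    @{ 'SourceFilePath'      = 'C:\ClusterStorage\Gold\FILESRV01\Data.vhdx';
       'DestinationFilePath' = 'C:\ClusterStorage\Bronze\FILESRV01\Data.vhdx' }
)
```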
Hope that helps
Cheers
Mark

Similar Messages

  • Disk configuration and workflow help needed for lab video workstation

    Hi All,
    Setting up a video editing workstation for a research lab that will use Premiere to edit AVCHD Progressive clips (sometimes with 2 streams side-by-side, but usually single-camera) and export them to .mp4 for later viewing by video coders. We won't be using After Effects or adding anything to the videos other than some text (titles, maybe subtitles).
    The other purpose of this workstation is to act as a file server and backup system for other machines in the lab. Coders will be viewing the exported videos via other networked machines and working with Microsoft Office files that will be stored on the workstation's other HDDs. I'll have a physical backup drive and cloud backup via CrashPlan.
    I've built a machine that is probably overkill, but the client (my wife) wanted it to be "fast," and the purpose of the machine might change in the future:
    i7-4770K (overclocked a bit)
    16GB RAM
    Asus Z87-Pro
    GeForce GTX 660
    I have the OS (W7) and programs on a 256 GB Samsung 840 Pro SSD and currently have two 1TB Velociraptors to use for the Premiere workflow. I'm trying to figure out how to proceed with the purchase of the rest of the drives, and I want to keep the Premiere drives separate from the large storage drives from the lab that are networked and synced to cloud backup.
    Following the recommendations for a three-disk configuration I've picked up on these forums, I could set it up like this:
    C: (256GB SSD) (OS, programs, pagefile)
    D: (1TB HDD) (media, projects)
    E: (1TB HDD) (previews, media cache, exports)
    F: (4TB HDD) (backups of media, projects, and exports and storage of other research files)*THIS DRIVE WOULD BE SHARED ON THE NETWORK
    G: (4TB external HDD) (backup of F & drive that backs up to CrashPlan)
    but it seems that would be a waste of the speed of the second 10k velociraptor. If I added another SSD and RAIDed the Velociraptors it would be:
    C: (256GB SSD) (OS, programs)
    D: (Two 1TB Velociraptors in RAID 0) (media, projects)
    E: (256GB SSD) (media cache, pagefile)
    but would I then need to add another dedicated HDD for previews and exports, or could I store those on the networked F: from above (which would be previews, exports, backups of media and projects, and storage of other research files) without taking a speed hit?
    It seems overkill to have a dedicated drive for exports and previews (let's make that the new F:), then have them copy to the first 4TB drive (now G:), then back that up to the second 4TB drive (now H:), then back that up to CrashPlan. However, people might be accessing that network drive at any time, and I don't want that to slow any part of the video process down.
    I appreciate any advice y'all can give me!

    Hi Jim,
    Thanks for the encouraging response. I'm leaning toward the non-SSD option at this point. 
    To make sure I understand, are you suggesting I try using the Velociraptor Raid 0 in the 2 disk configuration suggested by Harm's Guidelines for Disk Usage chart? Like this:
    C: (256 GB SSD) (OS, Programs, Pagefile, Media Cache)
    D: (1TB x2 in RAID 0) (Media, Projects, Previews, Exports)?
    Where I'm still confused there, and in looking at Harm's array suggestions for 5 or more drives, is how performance is affected by having simultaneous read/write operations happening on the same drive, which is what I understood was the reason for spreading out the files on multiple drives. Maybe I don't understand how Premiere's file operations work in practice, or maybe I don't understand RAID 0 well enough.
    In the type of editing we'll be doing (minimal) aren't there still times when Premiere will be trying to read and write from the D: drive at the same time, for example during export? Wouldn't the increased speed benefits of RAID 0 for either read or write alone be defeated by asking the array to do both simultaneously?
    Maybe the reason the Media Cache is on the SSD in the above configuration is because that is what will be read while writing to something like Exports? But that wouldn't make sense given Harm's chart, which has the Media Cache also located on the array....
    Another question is, given that the final home of the exported videos will be on the big internal drive (4TB) anyway, could I set it up like this:
    C: (SSD) (OS, Programs, Pagefile, Media Cache)
    D: (2TB RAID 0) (Media, Projects, Previews)
    E: (network shared 4TB HDD) (Exports + a bunch of other shared non-video files)
    so I don't end up having to copy the exported videos over to the 4TB drive? Do you think it would render significantly faster to the RAID than it would to the 7200 rpm 4TB drive? I'd like to cut out the step of copying exported videos from D: to E: all the time if it wasn't necessary.
    Thanks again.

  • Windows Server 2012 - Hyper-V - iSCSI SAN - All Hyper-V Guests stops responding and extensive disk read/write

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2-node cluster connected to an iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
    HP ProLiant G7, 24 GB RAM, 2 teamed NICs dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. This is the primary host, and normally all VMs run on this host.
    HP ProLiant G5, 20 GB RAM, 1 NIC dedicated to Virtual Machines and Management, 2 teamed NICs dedicated to iSCSI storage. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts, and the scheduled ShadowCopy (previous versions of files) is switched off.
    iSCSI SAN:
    QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB SSDs in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - The host NICs dedicated to storage and the storage itself are connected to this switch, and nothing else is connected to it.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
    We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation
    Normally this setup works just fine, and we see no real difference in startup, file copy and processing speed in LoB applications compared to a single host with two 10,000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during a file repair.
    Our Problem
    Our problem is that for some reason all of the VMs stop responding, or respond very slowly; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
    Symptoms (i.e. this happens, or does not happen, at the same time)
    If we look at Resource Monitor on the host, we often see an extensive read from a VHDX of one of the VMs (40-60 MByte/s) and a combined write to many files in \HarddiskVolume5\System Volume Information\{<some GUID, no file extension>}.
    See image below.
    The combined network speed to the iSCSI SAN is about 500-600 Mbit/s.
    When this happens it is usually during or after a VSS ShadowCopy backup, but it has also happened during hours when no backup should be running (i.e. during daytime, when the backup finished hours ago according to the log files). However, there are no correspondingly extensive writes to the backup file created on the external hard drive, and this does not seem to happen during all backups (we have checked manually a few times, but it is hard to say, since this error does not seem to leave any traces in Event Viewer).
    We cannot find any indication that the VMs themselves detect any problem, and we see no increase in errors (for example storage-related errors) in the event log inside the VMs.
    The QNAP uses about 50% processing power on all cores.
    We see no dropped packets on the switch.
    (I have split the image to save horizontal space).
    Unable to recreate the problem / find definitive trigger
    We have not succeeded in recreating the problem manually by, for instance, running chkdsk or defrag in VMs and hosts, copying and removing large files to VMs, or running CPU- and disk-intensive operations inside a VM (for instance scanning and repairing a database file).
    Questions
    Why do all VMs stop responding, and why are there such intensive reads/writes to the iSCSI SAN?
    Could it be anything in our setup that cannot handle all the read/write requests? For instance the iSCSI SAN, the hosts, etc.?
    What can we do about this? Should we use Multipath I/O instead of NIC teaming to the SAN, limit bandwidth to the SAN, etc.?

    Hi,
    > All VMs are using dynamic disks (as recommended by Microsoft).
    If this is a testing environment, it’s okay, but if this is a production environment, it’s not recommended. Fixed VHDs are recommended for production instead of dynamically expanding or differencing VHDs.
    Hyper-V: Dynamic virtual hard disks are not recommended for virtual machines that run server workloads in a production environment
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx
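    As a sketch, a dynamically expanding disk can be converted to fixed with the Hyper-V PowerShell module on Server 2012 (the paths below are examples; the VM must be shut down first, and the new file re-attached afterwards):

    ```powershell
    # Convert a dynamic VHDX to a fixed-size copy; the source file
    # is left untouched until you swap the disks over.
    Convert-VHD -Path 'C:\ClusterStorage\Volume1\VM1\disk.vhdx' -DestinationPath 'C:\ClusterStorage\Volume1\VM1\disk-fixed.vhdx' -VHDType Fixed
    ```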
    > This is the primary host and normaly all VMs run on this host.
    According to your posting, you have Cluster Shared Volumes in the Hyper-V cluster, so why not distribute your VMs across the two Hyper-V hosts?
    Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster
    http://technet.microsoft.com/en-us/library/jj612868.aspx
    > 2 teamed NIC dedicated to iSCSI storage.
    Use Microsoft Multipath I/O (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. (At least, it was not supported up to Windows Server 2008 R2. Although Windows Server 2012 has a built-in NIC teaming feature, I have not found an article declaring that Windows Server 2012 NIC teaming supports iSCSI connections.)
    Understanding Requirements for Failover Clusters
    http://technet.microsoft.com/en-us/library/cc771404.aspx
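    For reference, a minimal sketch of switching the iSCSI paths to MPIO on Server 2012 (run on each host; a reboot is usually required after installing the feature):

    ```powershell
    # Install the MPIO feature and let MSDSM claim iSCSI devices.
    Add-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    # Optional: round-robin load balancing across the two storage NICs.
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    ```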
    > I have seen that using MPIO suggests using different subnets; is this a requirement for using MPIO,
    > or is this just a way to make sure that you do not run out of IP addresses?
    What I found is: if possible, isolate the iSCSI and data networks that reside on the same switch infrastructure through the use of VLANs and separate subnets. Redundant network paths from the server to the storage system via MPIO will maximize availability and performance. Of course you can put these two NICs in separate subnets, but I don’t think it is necessary.
    > Why should it be better to not have dedicated wiring for iSCSI and Management?
    It is recommended that the iSCSI SAN network be separated (logically or physically) from the data network workloads. This ‘best practice’ network configuration optimizes performance and reliability.
    Check that and modify cluster configuration, monitor it and give us feedback for further troubleshooting.
    For more information please refer to following MS articles:
    Volume Shadow Copy Service
    http://technet.microsoft.com/en-us/library/ee923636(WS.10).aspx
    Support for Multipath I/O (MPIO)
    http://technet.microsoft.com/en-us/library/cc770294.aspx
    Deployments and Tests in an iSCSI SAN
    http://technet.microsoft.com/en-US/library/bb649502(v=SQL.90).aspx
    Hope this helps!
    Lawrence
    TechNet Community Support

  • Servers with local disk space and SAN disk: is it possible?

    Hello,
    I am evaluating Oracle VM. We have some servers with local 15,000 rpm HDDs. We would also like to use a SAN. Is it possible to have some domUs on local hard disks and some on the SAN?
    Can I enable HA for a domU on the SAN?
    Thanks in advance for any reply!
    Mario Giammarco

    > I am evaluating Oracle VM. We have some servers with local 15,000 rpm HDDs. We would also like to use a SAN.
    > Is it possible to have some domUs on local hard disks and some on the SAN?
    You can have some domUs on local disks and some on the SAN, but a domU on local storage cannot be migrated to another server. A domU on the SAN can be migrated to other servers.
    > Can I enable HA for a domU on the SAN?
    Yes.

  • Hyper-V Failover Cluster Configuration Confirmation

    Dear All,
    I have created a Hyper-V Failover Cluster, and I would like you to confirm that the configuration I have done is okay and that I have not missed anything that is mandatory for a Hyper-V Failover Cluster to work.  My configuration is below:
    1. Presented Disks to servers, formatted and taken offline
    2. Installed necessary features, such as failover clustering
    3. Configured NIC Teaming
    4. Created cluster, not adding storage at the time of creation
     - Added disks to the cluster
     - Added disks as CSV
     - Renamed disks to represent respective CSV volumes
     - Assigning each node a CSV volume
     - Configured quorum automatically which configured the disk witness
     - There were two networks so renamed them to Management and Cluster Communication
     - Exposed Management Network to Cluster and Clients
     - Exposed Cluster Communication Network to Cluster only
    5. Installed Hyper-V
     - Changed Virtual Disks, Configuration and Snapshots Location
     - Assigned one CSV volume to each node
     - Configured External switch with allow management option checked
    1. For a minimum configuration, is this enough?
    2. If I create a virtual machine and make it highly available from the Hyper-V console, would it be highly available and would it live-migrate, etc.?
    3. Are there any configuration changes required?
    4. Please suggest how it can be made better.
    Thanks in advance

    Hi ,
    Please refer to the following steps to build a Hyper-V failover cluster:
    Step 1: Connect both physical computers to the networks and storage
    Step 2: Install Hyper-V and Failover Clustering on both physical computers
    Step 3: Create a virtual switch
    Step 4: Validate the cluster configuration
    Step 5: Create the cluster
    Step 6: Add a disk as CSV to store virtual machine data
    Step 7: Create a highly available virtual machine 
    Step 8: Install the guest operating system on the virtual machine
    Step 9: Test a planned failover
    Step 10: Test an unplanned failover
    Step 11: Modify the settings of a virtual machine
    Step 12: Remove a virtual machine from a cluster
    For details please refer to the following link:
    http://technet.microsoft.com/en-us//library/jj863389.aspx
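    As a rough sketch, steps 4 through 6 above can also be scripted with the Failover Clustering PowerShell module (node names, cluster name and IP below are examples):

    ```powershell
    # Validate first, then create the cluster and bring in storage.
    Test-Cluster -Node HV01, HV02
    New-Cluster -Name HVCLUSTER -Node HV01, HV02 -StaticAddress 192.168.1.50
    # Add all disks presented to both nodes, then promote one to CSV.
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name 'Cluster Disk 1'
    ```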
    Hope it helps
    Best Regards
    Elton Ji

  • 2008 R2 Hyper-V guest with static IP always loses network connectivity after every restart - no problem with DHCP

    Hello,
    We are running a 2008 R2 domain with one physical DC and another running in a VM on a Hyper-V host (2008 R2 Standard). The host has 4 NICs and is configured to use one physical NIC for itself (management), and the Hyper-V guest is configured to use another dedicated physical NIC (through a Microsoft virtual switch) just for itself.
    I noticed that after setting the Hyper-V guest up with a static IP address, all works fine only until a guest restart. When the guest boots up, the IP address is still configured correctly in the IPv4 properties, but there is no network connectivity at all, and in fact the guest shows a running APIPA config in the ipconfig /all output. That situation continues until I remove the virtual NIC from the Hyper-V guest, remove the virtual switch from the dedicated NIC on the host, and then reconfigure it (using the same settings as before). Very annoying.
    For the time being I switched the virtual DC (the problematic Hyper-V guest) to a DHCP IP and configured the DHCP server (running on the physical DC machine, not on the Hyper-V host) to hold a reservation for the Hyper-V guest, so it always gets the same "static" IP configuration.
    Is there some kind of problem/bug with using static IPs on (2008 R2) Hyper-V guests? Is there a hotfix for static IP config in a Hyper-V guest environment?
    Both 2008 R2 OSes (host and guest) are up to date with all updates (synced with Microsoft, not WSUS).
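    For reference, the static configuration described above can be re-applied from an elevated prompt inside the guest (interface name and addresses taken from this post):

    ```powershell
    # Set the static IPv4 address, mask and default gateway on the guest NIC.
    netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.5 255.255.255.0 192.168.1.254
    # Set the primary and secondary DNS servers statically.
    netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.1.5 primary
    netsh interface ipv4 add dnsservers name="Local Area Connection" 192.168.4.5 index=2
    ```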

    OK, I'm not at the office now, but I took the time to test out the restart scenarios on the problematic virtual guest remotely.
    No dice; same as before: everything works fine when the guest has its IP configured in DHCP mode (IP reservation of 192.168.1.5 for a specific MAC address), and it doesn't work after a restart in static IP mode (same address; works before the restart of the guest).
    I also took "arp -a" outputs at each step from the host server, and they always showed only a single host (192.168.1.5 = VDC = the problematic virtual guest) assigned to that IP address, always with the same MAC, so that pretty much rules out ARP/MAC troubles and issues with switches/routers getting spoofed. The problem is most likely within the virtual guest (WS2008R2) or the host running the same OS.
    Here are outputs:
    A) VDC has IP configured in DHCP mode - always same, survives through restart (all works)
    Ethernet adapter Local Area Connection:
    Connection-specific DNS Suffix . : CD.lan
    Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
    Physical Address. . . . . . . . . : 00-15-5D-01-D3-00
    DHCP Enabled. . . . . . . . . . . : Yes
    Autoconfiguration Enabled . . . . : Yes
    Link-local IPv6 Address . . . . . : fe80::b9af:6679:3142:8799%13(Preferred)
    IPv4 Address. . . . . . . . . . . : 192.168.1.5(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Lease Obtained. . . . . . . . . . : Thursday, January 30, 2014 5:34:48 PM
    Lease Expires . . . . . . . . . . : Friday, February 07, 2014 5:35:26 PM
    Default Gateway . . . . . . . . . : 192.168.1.254
    DHCP Server . . . . . . . . . . . : 192.168.4.5
    DHCPv6 IAID . . . . . . . . . . . : 268440925
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-6F-5F-C2-00-15-5D-01-D3-00
    DNS Servers . . . . . . . . . . . : 192.168.1.5
    192.168.4.5
    NetBIOS over Tcpip. . . . . . . . : Enabled
    ARP -a output from host server at that time:
    Interface: 192.168.1.4 --- 0xc
    Internet Address Physical Address Type
    192.168.1.5 00-15-5d-01-d3-00 dynamic
    B) VDC has IP configured in static mode - BEFORE RESTART (all works)
    Ethernet adapter Local Area Connection:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
    Physical Address. . . . . . . . . : 00-15-5D-01-D3-00
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    Link-local IPv6 Address . . . . . : fe80::b9af:6679:3142:8799%13(Preferred)
    IPv4 Address. . . . . . . . . . . : 192.168.1.5(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . : 192.168.1.254
    DHCPv6 IAID . . . . . . . . . . . : 268440925
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-6F-5F-C2-00-15-5D-01-D3-00
    DNS Servers . . . . . . . . . . . : 192.168.1.5
    192.168.4.5
    NetBIOS over Tcpip. . . . . . . . : Enabled
    ARP -a output from host server at that time:
    Interface: 192.168.1.4 --- 0xc
    Internet Address Physical Address Type
    192.168.1.5 00-15-5d-01-d3-00 dynamic
    C) VDC has the same IP configured in static mode - AFTER RESTART (no more network connectivity at all, LAN in Public zone)
    Windows IP Configuration
    Host Name . . . . . . . . . . . . : VDC
    Primary Dns Suffix . . . . . . . : CD.lan
    Node Type . . . . . . . . . . . . : Hybrid
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . : CD.lan
    Ethernet adapter Local Area Connection:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter
    Physical Address. . . . . . . . . : 00-15-5D-01-D3-00
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    Link-local IPv6 Address . . . . . : fe80::b9af:6679:3142:8799%13(Preferred)
    Autoconfiguration IPv4 Address. . : 169.254.135.153(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.0.0
    Default Gateway . . . . . . . . . : 192.168.1.254
    DHCPv6 IAID . . . . . . . . . . . : 268440925
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-6F-5F-C2-00-15-5D-01-D3-00
    DNS Servers . . . . . . . . . . . : 192.168.1.5
    192.168.4.5
    NetBIOS over Tcpip. . . . . . . . : Enabled
    ARP -a output from host server at that time:
    Interface: 192.168.1.4 --- 0xc
    Internet Address Physical Address Type
    192.168.1.5 00-15-5d-01-d3-00 dynamic
    Throughout the testing, the Hyper-V host IP configuration and ipconfig output stayed the same.
    Network Connection #2 is the only one the host uses (it is not shared with Hyper-V guests).
    Network Connection #4 is assigned to the Microsoft Virtual Switch, which is why it doesn't show up in the results below:
    Windows IP Configuration
    Host Name . . . . . . . . . . . . : HYPER-V
    Primary Dns Suffix . . . . . . . : CD.lan
    Node Type . . . . . . . . . . . . : Hybrid
    IP Routing Enabled. . . . . . . . : No
    WINS Proxy Enabled. . . . . . . . : No
    DNS Suffix Search List. . . . . . : CD.lan
    Ethernet adapter Local Area Connection 3:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : HP Ethernet 1Gb 4-port 331i Adapter #3
    Physical Address. . . . . . . . . : 9C-8E-99-52-15-91
    DHCP Enabled. . . . . . . . . . . : Yes
    Autoconfiguration Enabled . . . . : Yes
    Ethernet adapter Local Area Connection 2:
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : HP Ethernet 1Gb 4-port 331i Adapter #2
    Physical Address. . . . . . . . . : 9C-8E-99-52-15-90
    DHCP Enabled. . . . . . . . . . . : No
    Autoconfiguration Enabled . . . . : Yes
    Link-local IPv6 Address . . . . . : fe80::dc78:8a3b:38a5:7af3%12(Preferred)
    IPv4 Address. . . . . . . . . . . : 192.168.1.4(Preferred)
    Subnet Mask . . . . . . . . . . . : 255.255.255.0
    Default Gateway . . . . . . . . . : 192.168.1.254
    DHCPv6 IAID . . . . . . . . . . . : 312250009
    DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1A-67-52-8F-9C-8E-99-52-15-93
    DNS Servers . . . . . . . . . . . : 192.168.4.5
    192.168.1.5
    NetBIOS over Tcpip. . . . . . . . : Enabled
    Ethernet adapter Local Area Connection:
    Media State . . . . . . . . . . . : Media disconnected
    Connection-specific DNS Suffix . :
    Description . . . . . . . . . . . : HP Ethernet 1Gb 4-port 331i Adapter
    Physical Address. . . . . . . . . : 9C-8E-99-52-15-93
    DHCP Enabled. . . . . . . . . . . : Yes
    Autoconfiguration Enabled . . . . : Yes
    On Monday I will install more test guests on the Hyper-V host (WS2008R2), in a variety of flavors like 7 x64, 8.1 x64 and WS2012R2, and see if they show similar problems with static IP configuration when utilizing a dedicated NIC from the host server.
    Don't get me wrong, I can live with the virtual DC running on a DHCP IP reservation (which is based on MAC), because the virtual DC pretty much requires the physical PDC (which also hosts DHCP in my network) to be present for safety reasons ... however, I prefer a static IP configuration on all servers, hence my question and surprise that it doesn't work.

  • UCSM 2.1 Local disk configuration policy and raid volumes

    Hi!
    If I use "Any Configuration" as the local disk configuration policy and do the RAID settings directly on the RAID card, am I able to have two RAID volumes on a C-Series server under UCSM management?
    What I would like to do with a C240 M3 with 6 local disks: a 2-disk RAID 1 and a 4-disk RAID 0.
    So i would use:
    "Any Configuration—For a server configuration that carries forward the local disk configuration without any changes."
    As UCS servers Raid guide indicates:
    "Maximum of One RAID Volume and One RAID Controller in Integrated Rack-Mount Servers
    A rack-mount server that has been integrated with Cisco UCS Manager can  have a maximum of one RAID volume irrespective of how many hard drives  are present on the server. "
    Is this paragraph a limitation of the GUI (not being able to set several volumes), or a hard fact that cannot be worked around with "Any Configuration"?

    I did some testing on this issue:
    Changed the Local Disk Configuration policy to "Any Configuration"
    Two virtual disks can be created from the RAID card's WebBIOS
    These disks are visible to the Red Hat installation
    UCSM shows "Any Configuration" for the Storage Local Disk policy
    Actual Disk Configuration shows faulty information - is WebBIOS the only place to check the RAID status?
    Next step: I'll do the same for production

  • Ask the Expert: ISE 1.2: Configuration and Deployment with Cisco expert Craig Hyps

    Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about how to deploy and configure Cisco Identity Services Engine (ISE) Version 1.2 and to understand the features and enhanced troubleshooting options available in this version, with Cisco expert Craig Hyps.
    October 27, 2014 through November 7, 2014.
    The Cisco Identity Services Engine (ISE) helps IT professionals meet enterprise mobility challenges and secure the evolving network across the entire attack continuum. Cisco ISE is a security policy management platform that identifies users and devices using RADIUS, 802.1X, MAB, and Web Authentication methods and automates secure access controls such as ACLs, VLAN assignment, and Security Group Tags (SGTs) to enforce role-based access to networks and network resources. Cisco ISE delivers superior user and device visibility through profiling, posture and mobile device management (MDM) compliance validation, and it shares vital contextual data with integrated ecosystem partner solutions using Cisco Platform Exchange Grid (pxGrid) technology to accelerate the identification, mitigation, and remediation of threats.
    Craig Hyps is a senior Technical Marketing Engineer for Cisco's Security Business Group with over 25 years networking and security experience. Craig is defining Cisco's next generation Identity Services Engine, ISE, and concurrently serves as the Product Owner for ISE Performance and Scale focused on the requirements of the largest ISE deployments.
    Previously Craig has held senior positions as a customer Consulting Engineer, Systems Engineer and product trainer.   He joined Cisco in 1997 and has extensive experience with Cisco's security portfolio.  Craig holds a Bachelor's degree from Dartmouth College and certifications that include CISSP, CCSP, and CCSI.
    Remember to use the rating system to let Craig know if you have received an adequate response.
    Because of the volume expected during this event, Craig might not be able to answer each question. Remember that you can continue the conversation in the Security sub-community shortly after the event. This event lasts through November 7, 2014. Visit this forum often to view responses to your questions and the questions of other community members.
    (Comments are now closed)

    1. Without more specifics it is hard to determine actual issue. It may be possible that if configured in same subnet that asymmetric traffic caused connections to fail. A key enhancement in ISE 1.3 is to make sure traffic received on a given interface is sent out same interface.
    2. Common use cases for using different interfaces include separation of management traffic from user traffic such as web portal access or to support dedicated profiling interfaces. For example, you may want employees to use a different interface for sponsor portal access. For profiling, you may want to use a specific interface for HTTP SPAN traffic or possibly configure IP Anycast to simplify reception and redundancy of DHCP IP Helper traffic. Another use case is simple NIC redundancy.
    a. Management traffic is restricted to eth0, but a standalone node will also have the PSN persona, so the above use cases can apply to interfaces eth1-eth3.
    b. For dedicated PAN / MnT nodes it usually does not make sense to configure multiple interfaces, although ISE 1.3 does add support for SNMP on multiple interfaces if it is needed for separation. It may also be possible to support NIC redundancy, but I need to do some more testing to verify.
    For PSNs, NIC redundancy for RADIUS as well as the other use cases for separate profiling and portal services apply.
    Regarding the Supplicant Provisioning issue, the flows are the same whether wireless or wired. The same identity stores are supported as well. The key difference is that wireless users are directed to a specific auth method based on the WLAN configuration, while Cisco wired switches allow multiple auth methods to be supported on the same port.
    If RADIUS proxy is required to forward requests to a foreign RADIUS server, then the decision must be made based on basic RADIUS attributes or things like NDG. ISE does not terminate the authentication requests; that is handled by the foreign server. ISE does support advanced relay functions such as attribute manipulation, but I recommend reviewing the requirements with a local Cisco or partner security SE if trying to implement provisioning for users authenticated via proxy. Proxy is handled at the Authentication Policy level; CWA and Guest Flow are handled in the Authorization Policy.  If you need to authenticate a CWA user via external RADIUS, then you need to use a RADIUS Token Server, not RADIUS Proxy.
    A typical flow for a wired user without 802.1X configured would be to hit default policy for CWA.  Based on successful CWA auth, CoA is triggered and user can then match a policy rule based on guest flow and CWA user identity (AD or non-AD) and returned an authorization for NSP.
    Regarding AD multi-domain support...
    Under ISE 1.2, if need to authenticate users across different forests or domains, then mutual trusts must exist, or you can use multiple LDAP server definitions if the EAP protocol supports LDAP. RADIUS Proxy is another option  to have some users authenticated to different AD domains via foreign RADIUS server.
    Under ISE 1.3, we have completely re-architected our AD connector and support multiple AD Forests and Domains with or without mutual trusts.
    When you mention the use of RADIUS proxy, it is not clear whether you are referring to ISE as the proxy or another RADIUS server proxying to ISE.  If you had multiple ISE deployments, then a separate RADIUS Server like ACS could proxy requests to different ISE 1.2 deployments, each with their own separate AD domain connection.  If ISE is the proxy, then you could have some requests being authenticated against locally joined AD domain while others are sent to a foreign RADIUS server which may have one or more AD domain connections.
    In summary, if the key requirement is the ability to join multiple AD domains without mutual trust, then very likely ISE 1.3 is the solution. Your configuration seems to be a bit involved and I do not want to provide design guidance on a paper napkin, so I recommend consulting with your local ATP Security SE to review overall requirements, topology, AD structure, and the RADIUS servers that require integration.
    Regards,
    Craig

  • Trying to optimize eSATA and internal disk configurations

    I'm trying to optimize the HD setup on my dual 2.5GHz G5 with 4.5GB RAM. The major considerations:
    - massive itunes library (260GB), and big iphoto lib (25GB) as well
    - lots of video editing in Final Cut with large capture files and many video exports
    - regular podcasting and other media creation with all my music and photos
    - need for regular COMPLETE backups
    - speed
    Here's the current setup. I have six disks as part of the system
    1. internal 160GB disk (maps simply to a MACHD volume)
    2. internal 250GB disk (maps simply to a COMMONS volume for Democracy player files, torrent downloads etc)
    then on my 4-port eSATA controller card
    4x 500GB SATA drives from Western Digital for a total of 2TB eSATA disk space
    they are in 2 eSATA enclosures from FirmTek
    I'm managing the disks with SoftRAID
    Before I get into the problem, how would YOU use this incredible amount of disk space, considering the goals I have? (video, media storage, backup).
    Now, The problem
    I've been disappointed with the speed of my system and suspect it's my HD configuration. I have enough RAM, right!?
    I've got some raid stripes going on
    2 of the 500GB disks (disk2 and disk3) support two "active" volumes
    a) a striped ATLAS volume of 800GB (holds itunes, documents, iphoto, basically all media files)
    b) a striped VIDEO SCRATCH volume of 200GB (for working files in FCP, iMovie, etc)
    the other 2 of the 500GB disks (disk0 and disk1) support two "clone" volumes
    a) a mirror MACHDCLONE volume on both disk0 and disk1 (to protect the system drive. I run Super Duper 3x per week)
    b) a striped ATLAS_CLONE volume to backup the active ATLAS volume
    the COMMONS volume is not backed up in any way. figure i can live without my Democracy files and torrents, etc.
    My ideas:
    based on my performance observations, my setup above is just wrong, and I don't know where to turn for the best advice. Google is very poor at dealing with such complexity in search results. There are some video advice sites, but they only cover part of my problems. I have a few theories of how I should be using these drives
    1. use the eSATA drives strictly for performance benefits, not for backup. consider a USB2.0 drive for backups and use Mozy for offsite backup
    2. simplify the disk allocation. No single disk should support more than one volume
    3. the video scratch SHOULD be striped in order to benefit from speed. and should be on its own physical disk(s) separate from ANY other function
    So I'm thinking
    a) stripe two of the eSATAs into a single 1 TB array for my media or ATLAS volume
    - this solves me running out of space on the volume (getting closer with the iTunes video downloads every day)
    - it's also just physically easier to deal with. I can SEE what drives make up ATLAS alone
    - will be easier for me to eventually replace the G5 with an MBP running its own eSATA pc card with easy access to the same ATLAS volume
    this still leaves two 500GB eSATA disks around
    b1) I could extend the ATLAS volume to an array including a 3rd eSATA disk for a 1.5TB volume. this would allow me to bring COMMONS files onto ATLAS
    b2) the remaining 500GB eSATA disk can be video scratch
    OR
    c1) dedicate 1 500GB disk to VIDEO SCRATCH
    and
    c2) partition the other 500GB disk as a clone of both COMMONS and the internal System drive
    see how CONFUSING THIS IS ?
    there are too many permutations of things.
    I know I like keeping the system drive simple and internal. Ideally, the second internal disk would mirror this volume, but they do not match in size or brand
    part of me wants to stripe all FOUR eSATA drives into a blazing 2TB masterpiece, but it seems like a bad idea to put VIDEO SCRATCH on the same array as ATLAS
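    One way to cut through the permutations is to just run the capacity numbers for each candidate layout. A quick sketch; the ~7% gap between a drive's decimal-GB label and its formatted size is an assumed round figure, not an exact one:

    ```shell
    # Usable space of an n-drive stripe built from g-GB (as labeled) drives,
    # assuming roughly 7% is lost to base-2 accounting and filesystem overhead.
    cap() { awk -v n="$1" -v g="$2" 'BEGIN { printf "%.2f TB usable\n", n * g * 0.93 / 1000 }'; }

    cap 2 500   # ATLAS striped across two eSATA drives
    cap 3 500   # ATLAS extended to a third drive
    cap 4 500   # the all-four stripe
    ```

    Which is why a "2TB" four-drive stripe shows up as roughly 1.8TB once formatted.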
    Other questions:
    should the iTunes library get its own disk altogether? is striping of benefit here?
    are there some sites that explain HD management well?
    PowerMac G5 2.5GHz 4.5GB RAM   Mac OS X (10.4.9)   also own a blackbook

    Thanks so much for that awesome feedback.
    A few points.
    I have the dual processor G5, not the quad core. Purchased in Jan 2005.
    My RAM pageouts are fine (didn't know what that was until you mentioned it)
    Love the idea of moving COMMONS "outside the box"
    I used to have my system volume boot from an external RAID, but didn't notice a big improvement, and it meant my G5 would ONLY boot if the eSATAs were powered up. I just didn't like that feel. I want the tower to work in a self-contained fashion, even if I don't have access to all media. I want access to the OS and apps.
    I'm unlikely to buy more SATA controllers and enclosures or too many new disks. I'm on a serious budget and want to work with as much of what I have as possible. That said, I just checked out the Drobo and am drooling. I'll wait to see how well it performs for data access (and video) and not just storage.
    It sounds dreamy to stripe all four of the eSATAs into a 1.8TB storage megaplex. I imagine they would scream in an ATLAS_BADASS volume, but then I've got nothing left for VIDEO SCRATCH used to capture and render.
    The VIDEO_SCRATCH doesn't need to be large, and I think that's where I'm having a conflict. My eSATA drives are way too big to use even ONE as a video scratch, much less striping two of those bad boys just for that purpose
    Purchasing a 10K drive for video scratch (or system volume) is not really in the cards yet.
    So here's where I sit now:
    1. My Media Storage
    ++ the 4 eSATA drives (2.0TB raw)
    I go with the badass steroid injected ATLAS volume striped across all four.
    this is my media array and holds all the contents of ATLAS and COMMONS (iTunes, iPhoto, Documents, FCP training videos, ripped DVDs, the works)
    2. My System Volume
    ++ the 250GB internal SATA
    move COMMONS out
    migrate system volume to this disk
    better storage-to-free space ratio
    3. My Scratch Disk
    a) use the now-spare 160GB internal Maxtor (probably weak and slow)
    b) get an external FW800 RAID disk from OWC
    http://eshop.macsales.com/shop/firewire/hard-drives/EliteAL/StripedRAID
    I'd go with the 160GB or 320GB
    4. Backup Plan - level 1 - local
    ++ use my spare external Maxtor 250GB FW drive
    clone the system volume regularly
    ++ get an external FW drive (like the 1TB My Book Premium II from WD)
    clone ATLAS_BADASS regularly
    the WD is just $400
    I know its capacity is lower than my super striped RAID, but I don't know of any cheap way to clone ATLAS_BADASS to a 1.5TB drive
    5. Backup Plan - level 2 - remote
    pay for a Mozy storage account which has unlimited capacity
    upload system and ATLAS_BADASS every few weeks
    any new thoughts? and thanks again!

  • Backup and Restore OCR,Voting Disk and ASM Disks in new SAN-10g RAC

    Dear Friends,
    I am using a 10g R2 RAC setup on Linux.
    My OCR, Voting Disk and ASM disks for DBF files are on a SAN box.
    Now I am reorganising the SAN by scrapping it entirely and creating new LUNs (it's a must),
    so please let me know:
    1) how do I take a backup of the OCR and Voting Disk from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
    2) how do I take a backup of the existing databases from the existing SAN, and how do I restore them to the new LUNs after the SAN reorganisation?
    I will be doing it in Planned downtime only
    Regards,
    DB

    For step 1 you should follow the Metalink doc.
    For step 2, here is a simple backup command script.
    I have done this on Windows for you.
    D:\app\ranjit\product\11.2.0\dbhome_1\BIN>rman target /
    Recovery Manager: Release 11.2.0.1.0 - Production on Wed Feb 8 21:48:47 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    connected to target database: ORCL (DBID=1299593730)
    RMAN> run
    {
    allocate channel c1 device type disk format 'D:\app\ranjit\rman\%U';
    allocate channel c2 device type disk format 'D:\app\ranjit\rman\%U';
    backup database;
    backup current controlfile format 'D:\app\ranjit\rman\%U';
    }
    Regards
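    For step 1, the 10g-era commands look roughly like the sketch below. All paths and the raw-device name are hypothetical; the script only prints each command (via `show`) so the sequence can be reviewed before actually running it as root during the downtime window, and you should still verify against the Metalink note for your exact patch level:

    ```shell
    # Print (not execute) an OCR / voting-disk backup and restore sequence.
    BK=/u01/backup           # hypothetical backup dir - keep it OFF the SAN being scrapped
    VOTE=/dev/raw/raw2       # hypothetical voting-disk raw device

    show() { echo "$@"; }

    show ocrconfig -showbackup                          # list the automatic OCR backups
    show ocrconfig -export "$BK/ocr_pre_move.ocr"       # manual logical export of the OCR
    show dd if="$VOTE" of="$BK/votedisk_pre_move.bak"   # block copy of the voting disk
    # ...after the new LUNs are presented and the raw devices recreated:
    show ocrconfig -import "$BK/ocr_pre_move.ocr"
    show dd if="$BK/votedisk_pre_move.bak" of="$VOTE"
    ```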

  • Second drives on Hyper-V guests failing suddenly

    In past 2 months or so, we have lost 3-4 Hyper-V guest systems that had a second drive attached via the virtual SCSI adapter. Up until this time, these servers have run flawlessly for 3 years+. The drives appear on the host, but if you try to access
    them, they tell you that the disk cannot be found. However, the .vhd is full size and sitting in the SharedCluster storage folder where they belong.
    Even if I create a new server or drive from scratch, in short order the second drive becomes unusable, even if I create it as an IDE device instead of SCSI.
    I have 2 host servers running 2008 R2 Enterprise connected to a Equallogics SAN via iSCSI in a 2 node cluster.
    Oddly, the boot drives seem to not have any issues, on old or new servers. It's only the second drive. Very odd, and scary. Any ideas out there?

    Hi Kevin,
    Please check the event log on each cluster node.
    Have you restarted the cluster?
    Just one CSV? I mean, are the second drive and the boot drive on the same LUN?
    Best Regards
    Elton JI

  • Would like to know if this is correct disk configuration for 11.2.0.3

    Hello, please see the procedure below that I used to allow the Grid Infrastructure 11.2.0.3 OUI to
    recognize my EMC SAN disks as candidate disks for use with ASM.
    We are using EMC PowerPath for our multipathing, as stated in the original problem description. I want to know if this is a fully supported method for
    configuring our SAN disks for use with Oracle ASM, because this is Red Hat 6 and we do not have the option to use the ASMLib driver. Please note that I have
    been able to successfully install the Grid Infrastructure for a 2-node RAC cluster at this point using this method. Please let me know if there are
    any issues with configuring disks using this method.
    We have the following EMC devices which have been created in the /dev directory. I will be using device emcpowerd1 as my disk for the ASM diskgroup I will be
    creating for the OCR and voting device during the grid install.
    [root@qlndlnxraccl01 grid]# cd /dev
    [root@qlndlnxraccl01 dev]# ls -l emc*
    crw-r--r--. 1 root root 10, 56 Aug 1 18:18 emcpower
    brw-rw----. 1 root disk 120, 0 Aug 1 19:48 emcpowera
    brw-rw----. 1 root disk 120, 1 Aug 1 18:18 emcpowera1
    brw-rw----. 1 root disk 120, 16 Aug 1 19:48 emcpowerb
    brw-rw----. 1 root disk 120, 17 Aug 1 18:18 emcpowerb1
    brw-rw----. 1 root disk 120, 32 Aug 1 19:48 emcpowerc
    brw-rw----. 1 root disk 120, 33 Aug 1 18:18 emcpowerc1
    brw-rw----. 1 root disk 120, 48 Aug 1 19:48 emcpowerd
    brw-rw----. 1 root disk 120, 49 Aug 1 18:54 emcpowerd1
    brw-rw----. 1 root disk 120, 64 Aug 1 19:48 emcpowere
    brw-rw----. 1 root disk 120, 65 Aug 1 18:18 emcpowere1
    brw-rw----. 1 root disk 120, 80 Aug 1 19:48 emcpowerf
    brw-rw----. 1 root disk 120, 81 Aug 1 18:18 emcpowerf1
    brw-rw----. 1 root disk 120, 96 Aug 1 19:48 emcpowerg
    brw-rw----. 1 root disk 120, 97 Aug 1 18:18 emcpowerg1
    brw-rw----. 1 root disk 120, 112 Aug 1 19:48 emcpowerh
    brw-rw----. 1 root disk 120, 113 Aug 1 18:18 emcpowerh1
    As you can see, the permissions by default are root:disk and these will be set at boot time. These permissions do not allow the Grid Infrastructure to recognize
    the devices as candidates for use with ASM, so I have to add udev rules to assign new names and permissions at boot time.
    Step 1. Use the scsi_id command to get the unique scsi id for the device as follows.
    [root@qlndlnxraccl01 dev]# scsi_id --whitelisted --replace-whitespace --device=/dev/emcpowerd1
    360000970000192604642533030434143
    Step 2. Create the file /etc/udev/rules.d/99-oracle-asmdevices.rules
    Step 3. With the scsi_id that was obtained for the device in step 1 you need to create a new rule for that device in the /etc/udev/rules.d/99-oracle-
    asmdevices.rules file. Here is what the rule for that one device looks like.
    KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="360000970000192604642533030434143", NAME="asmcrsd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    ( you will need to create a new rule for each device that you plan to use as a candidate disk for use with oracle ASM).
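    If you have many devices, writing one rule per device by hand gets error-prone. A minimal sketch of scripting step 3, assuming you have already collected each device's scsi_id value from step 1 (the second alias/ID pair below is purely hypothetical, and grid:asmadmin should match your own install):

    ```shell
    # Emit one udev rule line per "alias scsi-id" pair read from stdin; append
    # the output to /etc/udev/rules.d/99-oracle-asmdevices.rules on every node.
    emit_rules() {
      while read -r alias id; do
        printf 'KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="%s", NAME="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$id" "$alias"
      done
    }

    emit_rules <<'EOF'
    asmcrsd1 360000970000192604642533030434143
    asmdata1 360000970000192604642533030434144
    EOF
    ```

    Note that the `$name` inside the PROGRAM string is expanded by udev at event time, not by the shell, which is why the single-quoted printf format keeps it literal.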
    Step 4. Reboot the host for the new udev rule to take effect, then verify that the new device entry has been added to the /dev directory with the
    specified name, ownership and permissions required for use with ASM once the host is back online.
    Note: You will need to replicate/copy the /etc/udev/rules.d/99-oracle-asmdevices.rules file to all nodes in the cluster and restart them for the changes to
    be in place so that all nodes can see the new udev device name in the /dev directory on each respective node.
    You should now see the following device on the host.
    [root@qlndlnxraccl01 rules.d]# cd /dev
    [root@qlndlnxraccl01 dev]# ls -l asm*
    brw-rw----. 1 grid asmadmin 65, 241 Aug 2 10:10 asmcrsd1
    Step 5. Now when you are running the OUI installer for the grid installation, when you get to the step where you define your ASM diskgroup, choose
    external redundancy, then click on "Change Disk Discovery Path" and change the disk discovery path as follows.
    /dev/asm*
    At this point you will see the new disk name asmcrsd1 showing as a candidate disk for use with ASM.
    Please let us know if this is a supported method for our shared disk configuration.
    Thank you.

    Hi,
    I've seen this solution in a lot of forums, but I don't agree with it or like it at all, even if we have 100 LUNs of 73GB each.
    The thing is, as on any other Unix flavor, we don't have ASMLib, just EMC PowerPath running on different Unix/Linux flavors, and we don't like udev rules, dm-path and that stuff either.
    Try this as root user
    ls -ltr emcpowerad1
    brw-r----- 1 root disk 120, 465 Jul 27 11:26 emcpowerad1
    # mkdir /dev/asmdisks
    # chown oragrid:asmadmin /dev/asmdisks
    # cd /dev/asmdisks
    # mknod VOL1 b 120 465
    # chmod 660 /dev/asmdisks/VOL*
    (note: the directory is created with mkdir, and VOL1 is created as a block device, b, to match the brw- major/minor 120, 465 listed above)
    repeat above steps on second node
    asm_diskstring='/dev/asmdisks/*'
    talk with the sysadmin and storage admin guys to guarantee naming and persistence on all nodes of your RAC using EMC PowerPath (even after reboot or SAN migration).

  • SCVMM R2 - "Incomplete VM configuration" and other bugs like "failed" status

    Noticed a lot of machines in SCVMM go into an irreversible "INCOMPLETE VM CONFIGURATION" status.  All options to manage the machine are then grayed out.  A couple options like "repair" and "refresh" are available,
    neither of which solves the problem.  These machines are up and running with no related errors in our Hyper-V cluster.  There is also one in a "failed" status with the same symptoms.  
    When is this bug in SCVMM going to be fixed?  This is supposed to be enterprise VM management software?  Right now, I can't even manage 10 virtual machines with SCVMM because of a BUG.  I have to
    go to the failover cluster manager or the Hyper-V manager.  
    I really don't want to wipe out the entire cluster from SCVMM and re-add it, because removing the cluster from management uninstalls the SCVMM agent on all servers.  Needless to say, adding the cluster back to SCVMM management would (ridiculously) require
    a reboot of all servers in the cluster to "reclaim storage."  Just confirmed that this will happen in my test environment.
    If there is a supported way to fix this, please let me know. 
    I am by far not the only person that has noticed this problem (try google), but there are no good supported solutions out there.   

    Hi john,
    Possible causes for an Incomplete VM Configuration status include:
    - A configuration file needed for this virtual machine is missing, was accidentally deleted, or is inaccessible because of insufficient permissions. Such files include the virtual hard disk files (.vhd) and any additional virtual hard disk files in a differencing disk hierarchy (associated with checkpoints), ISO images (.iso), virtual network configuration files (.vnc), and virtual floppy disks (.vfd). A missing configuration file can be the result of unmasking a LUN on a storage area network (SAN) to a different server.
    - A virtual hard disk (.vhd) was deleted without removing the virtual hard disk from the virtual machine in Virtual Machine Properties.
    - A library share was deleted, and an ISO image on the share was linked to a virtual machine deployed on a host.
    - The .vmc file was manually updated or has become corrupted, and Virtual Machine Manager cannot parse the file.
    If a newly discovered virtual machine has Incomplete VM Configuration status, the cause is always a missing virtual hard disk. If other files are missing from the virtual machine, such as an ISO image (.iso), a virtual floppy disk (.vfd), or a virtual network configuration file (.vnc), a job warning is logged without placing the virtual machine in an Incomplete VM Configuration state. To find out more about the issue that caused the Incomplete VM Configuration status, view the job details in the Jobs view of the VMM Administrator Console. For information about using Jobs view, see "How to Monitor Jobs" (http://go.microsoft.com/fwlink/?LinkId=98633) in VMM Help.
    As to solution please refer to following article regarding this state :
    https://technet.microsoft.com/en-us/library/bb963764.aspx
    Best Regards,
    Elton Ji
    1.  The config file for the virtual machine is not missing, the machines work flawlessly in Hyper-V manager and failover cluster manager.
    2.  None of the other solutions apply.
    3.  Why is "repair virtual machine" in the solution for this?  Absolutely does not work.
    Forgot to add, I now have a virtual machine that is showing that it has been migrating for days (in SCVMM only of course).
    What I believe happened is that SCVMM has a hard time recovering from the famous cluster errors for Hyper-V: 1146, 1230, 5120, and 5142.
    In my opinion, the fix is this:
    Have Microsoft fix the buggy SCVMM code.

  • Disk Configurations

    I'm building a new system and have some questions about what sort of disk configuration to put together.  Probably about 90% of my source will be AVCHD (more details in my earlier post).  The articles on this forum are great and have been very helpful but I'm still confused.  The Generic Guideline for Disk Setup talks about distributing access across as many disks as possible but then shows all configurations with more than 4 disks as placing everything except the OS, programs, and pagefile on the same RAID.  A file is distributed across multiple disks in a RAID but it's one logical drive so there must be head contention if more than one file is needed at the same time from that RAID.  Wouldn't a setup like this work better?
    C: [1 Drive] OS, Programs
    D: [RAID 3] Media, Projects
    E: [RAID 0] Pagefile, Media Cache
    F: [1 Drive] Previews, Exports
    Would there be any problems having multiple RAIDs?  In the above example, the RAID 3 would require a hardware controller and the RAID 0 could run off the ICH10R on the motherboard.  Can  ICH10R support multiple RAIDS (more than one RAID 0) and can a hardware controller (say, an Areca) support more than one RAID?  If so, would it be better to run both the RAID 3 and RAID 0 in this example off the Areca?
    To RAID or not to RAID has been helpful but I'm still not clear on everything.  What are the differences between an inexpensive controller like the Areca ARC-1210 and the more expensive models which can cost 4 times as much?  Obviously the more expensive controllers have faster processors and more cache but do you get 4 times the performance?  I'm sure a high-end controller would be helpful if you're editing 4K files or uncompressed HD but I suspect it's not worth the expense for a mostly-AVCHD environment.
    What about using RAID 0 for source media?  I understand the likelihood of problems increases with the number of disks but what does that mean in the real world?  I've been using my current drives (Seagate SCSIs) for about 7 years and have never had a problem.  In fact I've owned computers with hard drives since the early 80s and don't believe I've ever had a disk fail on me.  Of course everything needs to be backed up but how often might I be rebuilding a RAID 0 due to disk failure?  Maybe I've been very lucky or maybe "they don't build 'em like they used to".
    I've been using (parallel)SCSI for over 10 years but no longer believe it's cost effective.  It seems like adding more SATA drives to a RAID would be cheaper than expensive 15K RPM SAS drives.  Does everyone agree with that?  Also, SAS drives are only available in much smaller capacities than SATA drives.
    A hardware controller is required for RAID 3 and strongly recommended for RAID 5 but do they offer an advantage for RAID 0?  What about for RAID 10?  One advantage would be providing extra ports since most motherboards only provide 6 SATA ports.  Does one motherboard offer any better SATA and RAID performance than any other or are they all about the same in that regard?
    Is there any advantage to external RAIDS other than convenience in moving data from one computer to another?  It seems like a controller directly on the bus would be faster than one connected externally.
    Is there any disadvantage to running SATA 3 drives on a SATA 2 controller?  A possible advantage might be the larger cache that some SATA 3 drives have.  Would a 64MB cache help much over a 32MB cache?  I've also heard SATA 3 can increase burst speeds.  If I have two SATA 3 ports, and I'm using one on an SSD for the OS, would it help to use the other port for another drive or might that take away bandwidth from the SSD?
    I've run across some things (don't have links handy) that indicate there may be problems with drives larger than 2 TB.  Is this just for single drives larger than 2 TB, RAIDs larger than 2 TB, or am I confused and this is not an issue?
    What about specific drives that are quiet and perform well?  Quietness is important to me and I worry about building a box, with as many as 10 drives, sounding like an airport runway.  I've heard the Caviar Blues are quieter than the Black but I don't think they perform as well.  I've heard Samsung F3 are both quiet and fast and that's what I'm leaning towards at the moment.  What's with the F4?  Samsung's site says it's “Independently tested as the fastest 3.5” HDD available” yet it also refers to it as an “Eco-Green HDD”, which usually means slow.
    Should I use different drives in RAIDs than standalone?  I've heard “enterprise” models are better for RAIDs because of differences in their firmware error recovery.  These sources say “consumer” models are more likely to time-out in a RAID because they have more aggressive error recovery.  Is this true and should be a concern?
    Roy

    I'm looking forward to an answer too, because I have some of the same questions. I'm currently working with a systems integrator on a quote, and we are hashing out some details about a few things.
    I do a lot of uncompressed 10-bit, as well as some 1080p60 projects. So, for the RAID, to date, I'm going for an Areca ARC-1880ix-16. Funny thing is, there is not much price difference between the 12 and 16 port model. So I'm going to go with the 16-port model, and upgrade the cache to 4GB. Seems well worth it. I'll probably start out with an 8 disk RAID setup, and upgrade it in the future if need be.
    We did toy with the idea to build the RAID around SSDs.... Ouf, imagine having an 8 SSD RAID! (Corsair SSD Force Series 3). But realistically, I'll most likely go with normal SATA III drives. And since they are so inexpensive, I'll probably fill it to the brim. Or most likely, the capacity that the case can handle.
    But Roy has a good point. What about distributing the load on the array? Would it be more appropriate to make 2 RAID groups on the card? To balance the traffic of the media, cache, previews, pagefile and export?
    Roy, I can start answering some of your questions though (My years of being a PC tech comes in handy sometimes, hehe)
    Would there be any problems having multiple RAIDs?  In the above example, the RAID 3 would require a hardware controller and the RAID 0 could run off the ICH10R on the motherboard.  Can  ICH10R support multiple RAIDS (more than one RAID 0) and can a hardware controller (say, an Areca) support more than one RAID?  If so, would it be better to run both the RAID 3 and RAID 0 in this example off the Areca?
    From a technical standpoint, there are no problems running multiple RAIDs. But there would be a performance drawback if the RAID were software only (OS managed). Thankfully, on-board RAIDs do help, but there is still some CPU overhead to deal with for on-board RAID 5, and very minimally for RAID 0. Having a RAID card is always the better option if you can afford it. The performance, manageability and flexibility are unmatched, compared to any on-board motherboard RAID controllers. RAID 0 is simple and does not need many resources. So, yes, you could run the RAID 0 from the on-board controller of the motherboard. It would theoretically "offload" the 8x PCIe lane from extra traffic, but practically, I seriously doubt the disk I/O would exceed the PCIe bandwidth in the first place.
    What about using RAID 0 for source media?  I understand the likelihood of problems increases with the number of disks but what does that mean in the real world?  I've been using my current drives (Seagate SCSIs) for about 7 years and have never had a problem.  In fact I've owned computers with hard drives since the early 80s and don't believe I've ever had a disk fail on me.  Of course everything needs to be backed up but how often might I be rebuilding a RAID 0 due to disk failure?  Maybe I've been very lucky or maybe "they don't build 'em like they used to".
    I use P2 media. So I practice double copies. A working copy on the computer, the other on an external HD as a backup copy. Everyone using solid state media to record on should do the same. Having said that, you know what RAID 0 means?  Zero chance of data recovery if 1 drive fails. The more drives in a RAID, the more likely a problem can arise. Packing drives tightly together will produce more heat if not well ventilated, and will reduce the life expectancy of any drive. I have come across some bad disks in my 20+ years dealing with computers as a tech. Not that many, but enough to not trust them, and enough to practice backups even if I had a RAID 5 or 6 (and a hot spare). Even though I have a backup copy on an external drive for my source media, and even though I try to back up as often as I can, I can still lose other things (ancillary files) in my hypothetical RAID 0 media drive. Worst case? I could lose a day's worth of work, plus whatever time it takes to rebuild and restore everything from the previous night's backup (if I didn't forget). Time is money for most of us. And investing in a proper editing system is something I don't take lightly.
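    To put a rough number on "likelihood of problems increases with the number of disks": if each drive independently has annual failure probability p, a RAID 0 stripe is lost whenever any one member fails, so the array's annual loss probability is 1 - (1-p)^n. A quick sketch; the 3% annual failure rate is an assumed illustrative figure, so substitute your drives' spec-sheet numbers:

    ```shell
    # Annual probability (percent) that an n-drive RAID 0 loses data, given
    # per-drive annual failure probability p; the stripe dies if ANY member dies.
    raid0_loss() { awk -v p="$1" -v n="$2" 'BEGIN { printf "%.1f%%\n", (1 - (1 - p)^n) * 100 }'; }

    raid0_loss 0.03 1   # one drive on its own
    raid0_loss 0.03 4   # 4-drive stripe
    raid0_loss 0.03 8   # 8-drive stripe
    ```

    The risk grows a little less than linearly with drive count, but an 8-drive stripe is still several times more likely to lose everything in a given year than a single disk, which is the real-world argument for pairing RAID 0 with disciplined backups.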
    A hardware controller is required for RAID 3 and strongly recommended for RAID 5 but do they offer an advantage for RAID 0?  What about for RAID 10?  One advantage would be providing extra ports since most motherboards only provide 6 SATA ports.  Does one motherboard offer any better SATA and RAID performance than any other or are they all about the same in that regard?
    There are no major advantages to using a RAID 0 or 10 on a standard add-on hardware RAID controller, other than to free up ports on the motherboard, or to have a higher disk count in your RAID. But higher-end RAID cards with bigger caches will be faster. On-board RAIDs do have some overhead, but for RAID 0, it's not as drastic as RAID 5. Motherboard SATA RAIDs with the same chipset, for all intents and purposes, are basically the same performance. There may be small variations from one manufacturer to another, but nothing real-world measurable.
    Is there any advantage to external RAIDS other than convenience in moving data from one computer to another?  It seems like a controller directly on the bus would be faster than one connected externally.
    Convenience is subjectively proportional to your needs and disk quantity inside the computer casing. hehe What stops me from having more than 16 drives in my system is the casing size for HDs and possible heat dissipation issues. I try to have a system that is self contained, and avoid using an external enclosure if I can. But regardless, the speed of internal and external ports on a RAID card is the same.
    I've run across some things (don't have links handy) that indicate there may be problems with drives larger than 2 TB.  Is this just for single drives larger than 2 TB, RAIDs larger than 2 TB, or am I confused and this is not an issue?
    Not an issue with Windows 7.
    Frederic

  • VSS timeouts on Hyper-V guest, Exchange Replication Service, CU5

    I'm having a problem with a Hyper-V guest that is running Exchange 2013 CU5.  I will say I have had issues with this VM since installing CU5.
    Note:  I am doing a backup from within the VM guest, not from the host.
    The scheduled backup takes a long time to complete, an entire weekend to back up 150GB to an iSCSI disk.  In addition the CPU time is very high (58%) while this is happening.  Attempting to open the backup manager window consistently makes the CPU
    time hit 99%.  When this happens Outlook clients will fail.  When backup manager opens it will continually say "Reading data; please wait..."  If the backup manager happened to already be open, the backup job will say "Volume 1, 0% of 4 volumes."
    The processes consuming the CPU time are the Microsoft Volume Shadow Copy Service (24%), the Microsoft Block Level Backup service (62%), and the Virtual Disk service (12%). Memory use always hovers around 65%. If I attempt to kill the processes with Task Manager, there is no change. If I use the kill executable, it says the process is not running. I cannot stop the corresponding services either. I cannot stop the backup. I cannot query VSS writer status. I cannot restart the iSCSI service (device in use). Restarting the NAS that contains the iSCSI target does nothing. The only recourse is to restart the server.
    If I restart the server and start a backup fairly soon after, the backup completes normally, in about an hour. During a normal backup, CPU usage is about 30%; the Microsoft Volume Shadow Copy Service and the Virtual Disk service run at 0% CPU time, and the Microsoft Block Level Backup Engine runs at 10%. The scheduled backup is set to start at 9:30 PM. I have also tried changing backup times: if I restart the server at 4 AM and do not run a manual backup, the scheduled backup still performs poorly.
    After some digging I find these errors:
    Log Name:      Application
    Source:        MSExchangeRepl
    Date:          10/14/2014 9:30:41 PM
    Event ID:      2112
    Task Category: Exchange VSS Writer
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Description:
    The Microsoft Exchange Replication service VSS Writer instance 7a465f3f-25ba-45b2-952a-870a6ddc2f2b failed with error code 80070020 when preparing for a backup of database 'Mailbox Database 2123847568'.
    Log Name:      Application
    Source:        VSS
    Date:          10/14/2014 9:30:41 PM
    Event ID:      8229
    Task Category: None
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Description:
    A VSS writer has rejected an event with error 0x00000000, The operation completed successfully.
    . Changes that the writer made to the writer components while handling the event will not be available to the requester. Check the event log for related events from the application hosting the VSS writer. 
    Operation:
       PrepareForBackup event
    Context:
       Execution Context: Writer
       Writer Class Id: {7e47b561-971a-46e6-96b9-696eeaa53b2a}
       Writer Name: MSMQ Writer (MSMQ)
       Writer Instance Name: MSMQ Writer (MSMQ)
       Writer Instance ID: {b8ae6140-7fcb-427d-9493-e070221f752f}
       Command Line: C:\Windows\system32\mqsvc.exe
       Process ID: 1676
    Log Name:      Application
    Source:        MSExchangeRepl
    Date:          10/14/2014 9:30:41 PM
    Event ID:      2024
    Task Category: Exchange VSS Writer
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Description:
    The Microsoft Exchange Replication service VSS Writer (Instance 7a465f3f-25ba-45b2-952a-870a6ddc2f2b) failed with error 80070020 when preparing for a backup.
    As you can see the errors happen almost immediately after the backup starts.
    In addition, the following VSS writers show a last error of "timed out":
    Microsoft Exchange Writer
    Com+ RegDB Writer
    Shadow Copy Optimization Writer
    Registry Writer
    I will also add the following issues I have been experiencing ever since installing CU5:
    1. The version buckets threshold is easily reached. I had to modify the threshold and set limits on email size; it sometimes still happens.
    2. After a restart, the server may have no, or limited, network connectivity, and may or may not show an exclamation point on the network icon. If there is no exclamation point, it can ping other network resources, but inbound requests and pings fail. The event log claims the network was available during boot, but it wasn't, since the server could not communicate with Active Directory. The fix is to disable and re-enable the network adapter, and then all is well.
    I really need some help figuring this out. Again, I never had an issue with this server prior to installing Exchange 2013 CU5.
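    In case it's useful, I've been pulling the `vssadmin list writers` output into a script so I can tally which writers are unhealthy between runs. A rough sketch (the output format in the sample is assumed from memory; adjust the patterns to what your server actually prints):

```python
import re

# Assumed excerpt of `vssadmin list writers` output for illustration.
SAMPLE = """\
Writer name: 'Microsoft Exchange Writer'
   Writer Id: {76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}
   State: [9] Failed
   Last error: Timed out
Writer name: 'Registry Writer'
   Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
   State: [1] Stable
   Last error: No error
"""

def failed_writers(text):
    """Return (writer name, last error) for every writer not reporting 'No error'."""
    failed = []
    for block in re.split(r"(?=Writer name:)", text):
        name = re.search(r"Writer name: '([^']+)'", block)
        err = re.search(r"Last error: (.+)", block)
        if name and err and err.group(1).strip() != "No error":
            failed.append((name.group(1), err.group(1).strip()))
    return failed

print(failed_writers(SAMPLE))  # [('Microsoft Exchange Writer', 'Timed out')]
```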

    Hi,
    The reason I think it is Exchange related is that the error message mentions:
    A VSS writer has rejected an event with error 0x00000000
    And then it indicates the "Microsoft Exchange Replication service VSS Writer":
    The Microsoft Exchange Replication service VSS Writer (Instance 7a465f3f-25ba-45b2-952a-870a6ddc2f2b) failed with error 80070020 when preparing for a backup
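    As an aside, 80070020 is an HRESULT wrapping a plain Win32 error: facility 7 is FACILITY_WIN32 and code 32 is ERROR_SHARING_VIOLATION ("the process cannot access the file because it is being used by another process"), which fits a writer colliding with something that already holds the database files. The decoding is just bit masking (illustrative Python):

```python
def decode_hresult(hr):
    """Break an HRESULT into its failure bit, facility, and wrapped error code."""
    failed = bool(hr & 0x80000000)  # severity bit: 1 = failure
    facility = (hr >> 16) & 0x7FF   # 7 = FACILITY_WIN32
    code = hr & 0xFFFF              # the wrapped Win32 error code
    return failed, facility, code

print(decode_hresult(0x80070020))  # (True, 7, 32) -> ERROR_SHARING_VIOLATION
```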
    Later I found this article:
    How to turn on the Exchange writer for the Volume Shadow Copy service in Windows Small Business Server 2003
    http://support2.microsoft.com/kb/838183/en-us
    It mentions that with the Exchange writer turned on, a system state backup and an Exchange backup will fail if run at the same time:
    The Exchange writer may cause conflicts with the information store backup feature of the Backup utility. The information store backup feature uses online streaming to back up the Exchange databases. If the Exchange writer is registered, the Backup utility
    may log errors if you try to back up the system state and the Exchange information store at the same time. (For example, the Backup utility may log Event ID 8019.)
    To confirm whether this is the cause, please test the following:
    1. Create a backup-once task to back up only a simple file. This is a quick test to confirm whether Windows Server Backup itself is working.
    2. Create a second backup-once task that does a system state backup without anything Exchange related.
    3. If both fail, disable the Exchange writer and repeat tests 1 and 2.
    These tests will take some time; the goal is to determine whether the problem is Exchange related or not. I'll continue the discussion with you if any new clue is found.
    If you have any feedback on our support, please send to [email protected]
