File Server Storage Planning

I am planning a large file server and would like your advice and suggestions. We currently have 4TB of data and, based on growth history, I anticipate we will grow to 25TB within the next 5 years. I am planning our storage infrastructure for this.
Our files are 95% .jpg files and 5% Word documents, Excel documents and other miscellaneous documents. We have 2 servers: one in building A and one in building B (the DR site), connected via gigabit LAN. The plan is to replicate the data from ServerA to ServerB using DFS-R. This will provide high availability in the event of a failure on ServerA. We also have a 3rd server in building B, a DPM 2010 server, which will back up this data.
I am trying to decide between two options and would like your input and advice.
Option 1:  Split our data into 4 volumes and then grow those volumes over time by adding storage and spanning each disk as necessary.
Option 2:  Put all data on the same volume and grow that one volume over time by adding storage and spanning the disk.
Taking into account our scenario, which decision do you recommend and why?

Hi!
You have quite a few questions inside that question. :-)
To evaluate the impact of using DFS Replication, you need to look at the rate of change of the files, not just the total capacity or the number of files. DFS-R has some known limitations (for instance, you can't replicate open files). Also note that, when using DFS, both servers are running and available to users. Users in building A will go to the server in building A and users in building B will go to the server in building B (if you configure your sites and subnets correctly in Active Directory). That could potentially lead to a replication conflict if a user in building A and a user in building B edit the same file at the same time. In that case, the last writer wins the conflict.
For true high availability, you need a File Server Failover Cluster with a SAN for shared storage. Since you mention different buildings, you would need two SAN solutions (one in each building) and a SAN-based solution to replicate the storage between them. With a failover cluster, a given file share runs in only one of the buildings. If the cluster resource is online in building A, users from both buildings will access the share on that server, and the server in building B will only take over that resource in case of a failure of the server in building A.
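For illustration, a minimal PowerShell sketch of standing up such a clustered file server role (FailoverClusters module); the cluster name, node names, addresses and disk label below are all invented placeholders, and the shared storage must already be presented to both nodes:

Import-Module FailoverClusters

# Validate the prospective nodes first; the shared storage must pass validation.
Test-Cluster -Node FS-A, FS-B

# Form the cluster, then add a highly available file server role
# backed by a clustered disk on the (replicated) SAN storage.
New-Cluster -Name FSCLUSTER -Node FS-A, FS-B -StaticAddress 10.0.0.50
Add-ClusterFileServerRole -Name FILES -Storage "Cluster Disk 1" -StaticAddress 10.0.0.51

# The role is online on one node at a time; move it to test failover.
Move-ClusterGroup -Name FILES -Node FS-B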
In regard to using one or multiple volumes, it's really your call. Having multiple volumes will require multiple replication groups (if using DFS-R) or multiple cluster disks (if using a failover cluster). This added configuration might pay off in giving you more flexibility. For instance, you could have different cluster groups brought online on different cluster nodes. It could also influence how you plan your backups. You can mask the complexity of having multiple volumes, and even multiple file servers, from the end users by using DFS Namespaces.
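To make the multi-volume option concrete, here is a hedged sketch using the DFSR and DFSN PowerShell cmdlets from later Windows Server releases (2008-era deployments would use dfsradmin.exe instead). It wires up one replication group for one volume and publishes it in a namespace; repeat per volume. Every server, path and namespace name is invented:

# One replication group per volume keeps the replication units small.
New-DfsReplicationGroup -GroupName "RG-Images1" |
    New-DfsReplicatedFolder -FolderName "Images1" |
    Add-DfsrMember -ComputerName SERVERA, SERVERB

# Point each member at its local copy of the volume.
Set-DfsrMembership -GroupName "RG-Images1" -FolderName "Images1" `
    -ComputerName SERVERA -ContentPath "D:\Images1" -PrimaryMember $true
Set-DfsrMembership -GroupName "RG-Images1" -FolderName "Images1" `
    -ComputerName SERVERB -ContentPath "D:\Images1"
Add-DfsrConnection -GroupName "RG-Images1" `
    -SourceComputerName SERVERA -DestinationComputerName SERVERB

# A domain-based namespace hides the volume layout from users.
New-DfsnRoot -Path "\\CONTOSO\Files" -TargetPath "\\SERVERA\Files" -Type DomainV2
New-DfsnFolder -Path "\\CONTOSO\Files\Images1" -TargetPath "\\SERVERA\Images1"
New-DfsnFolderTarget -Path "\\CONTOSO\Files\Images1" -TargetPath "\\SERVERB\Images1"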
DPM 2010 would work with either solution.
Jose

Similar Messages

  • Hyper-V 2012 High Availability using Windows Server 2012 File Server Storage

    Hi Guys,
    Need your expertise regarding Hyper-V high availability. We set up two Hyper-V 2012 hosts in our infrastructure for our domain consolidation project. Unfortunately, we don't have the shared hardware storage that is said to be a requirement for creating a failover cluster of the Hyper-V hosts to implement HA. Here's the setup:
    Host1
    HP ProLiant DL380 G7
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Host2
    Dell PowerEdge 2950
    Windows Server 2012 Std
    Hyper-V role, Failover Cluster Manager and File and Storage Services installed
    Storage
    Dell PowerEdge 6800
    Windows Server 2012 Std
    File and Storage Services installed
    I'm able to configure the new feature, Shared Nothing Live Migration - I'm able to move VMs back and forth between my hosts without shared storage. But this is a planned, proactive approach. My concern is having my Hyper-V hosts become highly available in the event of a system failure. If host1 dies, the VMs should go/move to host2 and vice versa. In setting this up, I believe I need to enable failover clustering between my Hyper-V hosts, which I already did, but upon validation it says "No disks were found on which to perform cluster validation tests." Is it possible to cluster it using just a regular Windows file server? I've read about SMB 3.0 and I've configured it as well - I'm able to save VMs on my file server, but I don't think my Hyper-V hosts are highly available yet.
    Any feedback, suggestions or recommendations are highly appreciated. Thanks in advance!

    Your shared storage is a single point of failure in this scenario, so I would not consider the whole setup a production configuration. The setup is also slow (all I/O travels down the wire to the storage server; running VMs from DAS is far faster) and expensive (a third server plus an extra Windows license). I would think twice about what you do: either deploy the built-in VM replication technology (Hyper-V Replica) and applications' built-in clustering features that do not require shared storage (SQL Server and database mirroring, for example; by the way, what workload do you run?), or use third-party software that creates fault-tolerant shared storage from DAS, or invest in physical shared storage hardware (an HA model, of course).
    Hi VR38DETT,
    Thanks for responding. The hosts will run a domain controller (one on each host), web filtering software (Websense), anti-virus (McAfee ePO), WSUS and an audit server at the moment. Does Hyper-V Replica give some form of "high availability" to the VMs or the Hyper-V hosts? Also, is a cluster required to implement it? Haven't tried that, but it's worth a try.
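    For reference, here is a minimal, hedged sketch of Hyper-V Replica between two standalone 2012 hosts. No cluster or shared storage is required, but note that Replica gives you a warm standby with manual (planned or unplanned) failover, not the automatic failover of a cluster. Host names, paths and the VM name are placeholders:

    # On the replica side (HOST2): accept inbound replication over Kerberos/HTTP.
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation "D:\ReplicaVMs"

    # On the primary side (HOST1): enable and start replication for one VM.
    Enable-VMReplication -VMName "WSUS01" -ReplicaServerName "HOST2" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "WSUS01"

    # If HOST1 dies, bring the replica up on HOST2 (unplanned failover):
    # Start-VMFailover -VMName "WSUS01"; Start-VM -VMName "WSUS01"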

  • Powermac G4 as media file server/storage?

    Hello everyone,
    Long time reader, first time poster.
    I currently run a 15" PBook G4 1.67GHz with 2 GB of RAM and a 100GB HD, hooked up to 80GB and 250GB external FW drives. I've slowly been transferring all my DVDs and CDs to the drives, not to mention video footage and all the photos I've been taking. Right now I've got maybe 3 or 4 GB free between the 3 drives.
    I'm greedy. I want to keep going. I don't want to trim back. I have this dream of getting ALL of my music and movies onto HD storage, so I can digitally access any of my media at any time.
    I run iTunes off my Pbook to access my library (which is stored on the 250GB external), but it runs SOOO SLOOOWW. iTunes is extremely sluggish and slow to react. Batch editing files (I'm extremely anal about the tags and album art) takes FOREVER. I hate it.
    I blame the external drives. Maybe they're slow? (The 250GB FW is a Maxtor OneTouch drive).
    I do have an old Powermac G4 dual 533mhz with a nearly-dead 80GB HD in it. I was wondering if there was a simple way to load that machine up with the maximum amount of HD storage, and hook it up to the Pbook for my super-optimal-dream-media-storage solution.
    I'd prefer the cheapest and fastest (in terms of access/transfer speed) solution possible. I know I can set up the Powermac as a file server (but I don't know HOW), and I know I could set up the Powermac as a FW target drive and just plug it in (is that really optimal? Also, I have sold the monitor for it ... all I have is the keyboard and mouse.. is there a way to hook up the MacG4 to the Pbook as a monitor? and how much of a headache would it be to repeatedly connect/disconnect to the MacG4 without a monitor?)
    Anyway. I'm also open to solutions which would make the Powermac a wireless storage solution, since I live in a small apartment and love toting the Pbook around to surf and blog in bed, at the table, in the kitchen, etc.. except with my current setup I have to disconnect it from the music storage. So I can't have my Pbook in the kitchen and play wirelessly off iTunes...
    Thoughts? Comments? Suggestions? I know I'm asking like a million questions in one post, but I appreciate all the help in advance.

    Hi, jzn omg!
    Right now I've got maybe 3 or 4 GB free between the 3 drives...
    I run iTunes off my Pbook to access my library (which is stored on the 250GB external), but it runs SOOO SLOOOWW. iTunes is extremely sluggish and slow to react. Batch editing files (I'm extremely anal about the tags and album art) takes FOREVER. I hate it.
    I blame the external drives. Maybe they're slow?
    Your hard drives are severely overloaded. I try to maintain at least 10-15% or more available space on a drive, particularly on a startup drive. I'm not surprised that the applications are "beachballing" - your system hasn't been given sufficient hard drive space from which it can effectively operate. Moreover, your drive data is likely to be significantly fragmented with the system working "overtime" to constantly search for and piece together files before it can use them and execute the next command.
    Gary
    1GHz DP G4 Quicksilver 2002, 400MHz B&W rev.2 G3, Mac SE30   Mac OS X (10.4.5)   5G iPod, Epson 2200 & R300 & LW Select 360 Printers, Epson 3200 Scanner

  • Selecting VHDx as storage for File Server Role (Failover Cluster 2012 R2)

    Is it possible to select an already existing (offline) VHD or VHDX as storage when creating the "File Server" role? The reason I want to do that is that I already have a file server set up as a virtual machine, and it is causing issues, so my company decided to move to a File Server role.
    Thank you
    David

    Hi David,
    Do you mean you configured a file server failover cluster via the High Availability Wizard?
    I think you need to choose a volume shared between the two nodes to achieve high availability.
    Please refer to the following link:
    http://technet.microsoft.com/en-us/library/cc731844(v=WS.10).aspx
    If you do not select a shared volume, I think it is no different from sharing a mounted VHDX file on a standalone file server.
    I would suggest copying these files to a CSV and sharing them from there.
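    A rough sketch of that copy approach, assuming a host with the Hyper-V PowerShell module that can see both the old VHDX and the cluster storage; all paths are placeholders:

    # Mount the existing VHDX read-only so its contents can be copied off.
    Mount-VHD -Path "D:\VMs\OldFileServer.vhdx" -ReadOnly -Passthru

    # Assuming the mounted volume surfaced as E:, copy the data (with security)
    # onto the clustered volume that backs the new File Server role.
    robocopy E:\Shares C:\ClusterStorage\Volume1\Shares /E /COPY:DATSOU /R:1

    Dismount-VHD -Path "D:\VMs\OldFileServer.vhdx"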
    Hope it helps
    Best Regards
    Elton Ji

  • I can't manage file and storage services in server manager.

    I have a Windows 2012 R2 server. I had turned on the File and Storage Services role and was able to configure a single share in Server Manager. A few days later I wanted to create another share, but when I select File and Storage Services within Server Manager I get a message at the top that says "The server has not been queried for data since it appeared offline." Also, there are no shares listed, even though the shared folder that I already created is available from other computers.
    If I try to create a file share anyway, I am asked to choose a server to create the share on, and the server appears in the list with a status of offline.
    Now this may seem like an obvious connection issue; however, I am trying to configure the server locally, not over the network. I can manage other services in Server Manager just fine. I have the WDS and WSUS roles installed, and they can be configured with Server Manager just fine. I only have a problem with File and Storage Services.
    There are no errors in the event log. 
    I tried to remove the File and Storage Services role from the server, but as soon as I uncheck the box for File and Storage Services I get a pop-up window that says:
    The validation process found problems on the server from which you want to remove features. The selected features cannot be removed from the selected server. Click OK to select different features.
    It lists validation results that simply state the name of the server and say "storage services cannot be removed."
    How can I get file and storage services working again?

    Hi,
    How many servers are there in the list? If the offline server is a remote server, please reboot the remote server to see the result. In the meantime, please create a new shared folder on the local server in Windows Explorer to see if the issue still exists.
    Please refer to the article below to share a folder with server manager.
    12 Steps to NTFS Shared Folders in Windows Server 2012
    https://blogs.technet.com/b/keithmayer/archive/2012/10/21/ntfs-shared-folders-a-whole-lot-easier-in-windows-server-2012.aspx#.Ux1ty_mSwXV
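    In the meantime, the share itself can also be created and checked without Server Manager, using the SmbShare cmdlets in 2012 R2; a small sketch with placeholder names and paths:

    # Create the share locally, bypassing the Server Manager UI.
    New-SmbShare -Name "Projects" -Path "D:\Shares\Projects" `
        -FullAccess "CONTOSO\FileAdmins" -ChangeAccess "CONTOSO\ProjectUsers"

    # Confirm the share and its permissions.
    Get-SmbShare -Name "Projects"
    Get-SmbShareAccess -Name "Projects"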
    Regards,
    Mandy

  • HT201238 what will happen to my files if i downgrade my storage plan?

    What will happen to my files if I downgrade my storage plan? Will my files be deleted? Just locked? Just put into read-only mode?

    If you reduce your storage plan, the data over that amount will be deleted from iCloud. It will not be deleted from the device.
    There's no telling where the removal will start. It could be your movies or your email.

  • Backup / File Serving Plan - Your comments?

    A local business would like to have a daily off-site backup, with incremental backups, and do file sharing from a single server across several users. Web-based backup is probably not realistic as some files are 250 megs a pop. We have ordered an Xserve with three 750GB drives which we will use as a file server. My plan is to use Drive 1 for system files, with Drives 2 and 3 set up in RAID 1 for data files that will be mirrored. This will allow the owner to simply pull a drive out, put it in a case, and take it home with him each evening, while providing redundancy and up-time if a drive goes down.
    It doesn't, however, ensure that if somebody deletes 200 pages from a document and mistakenly saves it, there will be any archived copy, so we will continue to use our current Retrospect backup plan, with some changes to frequency.
    My current thinking with regard to the system files is to connect an external hard drive after the initial setup has been finalized with the users added, use SuperDuper to clone the system hard drive, and then disconnect it (after verifying a boot from it). Backing up of that drive would be done manually when needed, after adding users or doing updates that are verified to be "good", while not automatically backing up corrupted prefs or other problems that can tend to creep into a system.
    Any thoughts or comments ?

    My plan is to use Drive 1 for system files, with Drive 2
    and 3 setup in Raid 1 for data files that will be mirrored.
    This will allow the owner to simply pull a drive out, put it
    in a case, and take it home with him each evening, while
    providing redundancy and up-time if a drive goes down.
    I'll be the contrarian on this one. Yes, it will work. Just fine (but see below). That's standard practice for many servers if you need them to be kept up 24/7, and a RAID 1 mirror split is the ONLY way to do an instantaneous clone of a drive. Other solutions (SuperDuper!, CCC, etc.) scan the disk, build a list, and copy files, such that any files that change/disappear/appear during or after the scan and before the copy will be missed. We do exactly that (well, not quite) with our Xserve G5 before doing any software update.
    However, the approach we use is to add a spare drive to the mirror, allow the mirror to rebuild, then split the spare drive from the mirror. We use SoftRAID, and it has worked flawlessly for this (again, see below), and there is even a write-up on the SoftRAID web site (http://www.softraid.com) about this approach.
    Realize, though, that such a mirror split has two issues. First, it is not a backup and it should not be confused with such. No archiving is done for old files, and it's not possible to go back weeks/months to get an inadvertently deleted file. A proper backup strategy (we use Retrospect to tape in an autoloader) is necessary for that.
    Second, even though the mirror split is instantaneous, it still might not give a proper bootable image. Some programs, such as database programs (e.g., cyrus for email) maintain sets of files on disk that need to be consistent, and there needs to be (and is not) coordination between the instant of mirror split and all running services. For this reason, we shut down essential services (this can be scripted) such as email service, prior to the instant of mirror split, do the mirror split, then restart the services. Doing that avoids, for example, cyrus database corruption.
    As mentioned above, we only do this for our OS volume, which is RAID 1 (SoftRAID), with the mirror secondary being a RAID 5 LUN on an Apple Hardware RAID card and the mirror primary being a Seagate Cheetah attached to an ATTO UL4D. We add another mirror secondary (a FireWire drive) to the RAID 1 mirror, allow the mirror to rebuild, shut services down by scripting, split the FireWire drive from the SoftRAID mirror, and restart the services. It takes only an instant and gives a good bootable OS copy. Note that we only use this approach prior to any software change on the Xserve; it allows us to roll back if the software change/update goes badly, as happened once. But that's all we use it for. We count on the RAID 5 LUN for our user data, plus our daily tape backups, to handle failures and accidental deletions, and we count on the RAID 1 mirror of the OS volume to handle failure of the Apple Hardware RAID card (which MUST have the write cache turned off - there is a bug in the card - it fails to fully flush the write cache on graceful power down - doubt Apple will ever fix this bug) and also to improve performance of the OS volume.
    SoftRAID 3.5 is not scriptable, so this has to be done manually. I understand that scriptability is in the works, which would allow us to automate the process. SoftRAID 3.5.1 also has some issues with Intel processors; the SoftRAID 3.6 beta just released resolves those issues (general release of SoftRAID 3.6 is expected shortly before MacWorld).
    One thing for you to think about is all the handling/removal/insertion of the drive, and transporting the drive home each night. That will reduce the life of the drive and of the ADM connector into which the drive plugs.
    I note upon re-reading your post that you use Retrospect, as do we. Be sure, before doing the mirror split (if you use this technique for your OS volume) that you disable Retrospect's schedule prior to the mirror split and then re-enable the schedule after the split. Otherwise, if you later try to boot from the earlier mirror split drive, all scripts with intervening scheduled times will be runnable and, just at the instant you are in a moment of crisis, trying to boot your precious backup copy, Retrospect will be firing off backups, etc. Been there, done that.
    Good luck.
    Russ
    Xserve G5 2.0 GHz 2 GB RAM Mac OS X (10.4.8) Apple Hardware RAID, ATTO UL4D, Exabyte VXA-2 1x10 1u
    Message was edited by: rhwalker (add Retrospect comments)

  • File Server Resource Manager 2012 - Fails to generate storage report - Event ID: 8242 and 602

    I installed the File Server Resource Manager role on a new 2012 file server. When I attempt to run a duplicate-files report on the local volume, I receive an error message: "The report generation task failed with the following errors: Error generating report job with task name ''."
    Event IDs 8242 and 602 are logged in the event viewer.
    Log Name:      Application
    Source:        SRMSVC
    Date:          6/24/2013 11:11:03 AM
    Event ID:      8242
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      xxxxxxxxxxxxxxxxx
    Description:
    Reporting or classification consumer '' has failed.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="SRMSVC" />
        <EventID Qualifiers="32772">8242</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2013-06-24T16:11:03.000000000Z" />
        <EventRecordID>1276</EventRecordID>
        <Channel>Application</Channel>
        <Computer>xxxxxxxxxx</Computer>
        <Security />
      </System>
      <EventData>
        <Data>
        </Data>
        <Data>
    Error-specific details:
       Error: (0x80131501) Unknown error</Data>
        <Binary>2D20436F64653A20434E534D4D4F444330303030303234332D2043616C6C3A20434E534D4D4F444330303030303231322D205049443A202030303030333036302D205449443A202030303030333734382D20434D443A2020433A5C57696E646F77735C73797374656D33325C73726D686F73742E657865202D20557365723A204E616D653A204E5420415554484F524954595C53595354454D2C205349443A532D312D352D313820</Binary>
      </EventData>
    </Event>
    Log Name:      Application
    Source:        SRMREPORTS
    Date:          6/24/2013 11:11:03 AM
    Event ID:      602
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      xxxxxxxxxxxxxxxxxxxx
    Description:
    Error generating report job with the task name ''.
    Context:
     - Exception encountered = System error.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="SRMREPORTS" />
        <EventID Qualifiers="0">602</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2013-06-24T16:11:03.000000000Z" />
        <EventRecordID>1277</EventRecordID>
        <Channel>Application</Channel>
        <Computer>xxxxxx</Computer>
        <Security />
      </System>
      <EventData>
        <Data>Error generating report job with the task name ''.
    Context:
     - Exception encountered = System error.
    </Data>
      </EventData>
    </Event>
    When I click on schedule a new report task, I get an error "Class not registered".
    nada

    Hi,
    When we schedule a new job, we add a scheduled task to the c:\windows\tasks folder.
    The scheduled task will contain the following command line
    "c:\WINDOWS\system32\storrept.exe reports generate /scheduled /Task:"FSRM_Report_Task{GUID.......}"
    There is also a folder on the system drive
    C:\StorageReports\Scheduled
    We also store information in the System Volume Information folder, in the following files:
    c:\System Volume Information\SRM\Settings\ReportSettings.xml (we use .old and .alt extensions)
    c:\System Volume Information\SRM\reports\reportX.xml (where X is an incrementing number; when writing to these files, we also use .old and .alt extensions)
    When experiencing issues relating to scheduled report jobs, you will want to examine these files and also check for NTFS permission issues on these locations.
    Make sure you check the volume that you will be running the report on.
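    As a cross-check on the failing GUI/storrept.exe path, the same report can be defined with the FSRM cmdlets that ship with 2012; a hedged sketch with an invented report name, namespace and schedule:

    Import-Module FileServerResourceManager

    # Weekly duplicate-files report against one volume.
    $schedule = New-FsrmScheduledTask -Time (Get-Date "03:00") -Weekly Sunday
    New-FsrmStorageReport -Name "DupReport" -Namespace @("E:\") `
        -ReportType DuplicateFiles -Schedule $schedule

    # Run it on demand instead of waiting for the schedule.
    Start-FsrmStorageReport -Name "DupReport"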

  • [Access rights] File information is different between local and server storage

    Good morning all,
    We have several small issues and I am asking myself (and you) whether this could have an impact.
    My iMac has 3 partitions:
    1) OS X 10.6.2
    2) Users' data
    3) Boot Camp
    All of my users use a network account and their homes are automounted. That means when they save a file to their home, the file is saved on the server. The same applies to /Library/Preferences/... .
    I discovered something interesting:
    When I create and save a file on the local iMac, then press Cmd+I (the File Info window), under the Sharing & Permissions section I can see my user name with read & write permission, *and my user name is followed by "(me)"*.
    If I do exactly the same in a server folder and then press Cmd+I, the File Info window shows me the same information, *but without the "(me)"*.
    Could someone explain what that "(me)" means, what impact it has and, above all, *why the "(me)" does not appear on the server side, given that the network accounts are created on the server and not locally*?
    Many thanks for your advice

    Hello,
    Does anyone have an idea about the "(me)"?
    To summarize:
    I have a file on my local laptop.
    If I press Cmd+I and look at "Sharing & Permissions", I can see my user name followed by "(me)".
    If that same file is synchronized to the file server and I press Cmd+I on that file, under "Sharing & Permissions" I can still see the same information, except that the "(me)" no longer follows my user name.
    Do you know why "(me)" is not displayed on remote files?
    Thanks

  • Need some help with a file server!

    Hi,
    I'm wanting to get a Mac mini or the Mac mini server (whichever is needed) and connect all of the random external drives I have lying about to it. Basically, I'm wanting to set up a mini file server that I can access from anywhere in the world. I have enough space with the external drives to store my music library and other general storage, and to maintain a backup of my OS drive and the portable external that I carry around with me, but I don't know how I'd set it up so that I could access my hard drives and use them normally from anywhere.
    Also, with Mail Server, I'd like it if I could store all of my emails on the server computer and be able to access them on my MacBook. I don't know what I need, though. For my needs, do I even need the server program, or can I just use Lion (or even Snow Leopard if I choose) by itself? And if I did need the server, would I need it on just the server computer or on both the server and my MacBook?
    Sorry if I haven't explained this very well; if you don't fully understand, just ask me and I'll try to rephrase it.
    Thanks in advance
    mr meister

    Be careful what you wish for.
    Either Mac OS X, or Mac OS X Server can act as a simple file server for your LAN.
    Granting access to external/remote users is largely a matter of setting access controls in your router to allow external clients to access your machine, but you have to consider the security implications of doing so - how do you make sure that you, and only you, access your data, and not the local script kiddie down the street - or even some hacker in China?
    HOWEVER, as simple as that may be, performance is going to be your issue.
    Local disks in your machine are typically connected on a bus that runs at several gigabits per second.
    Even the server on your LAN is connected to your client over, typically, a gigabit connection.
    However, your internet connection is likely to be measured in megabits per second... or two orders of magnitude lower than a local connection. You're really not going to want to use this for normal usage - e.g. accessing files - it's probably only practical for copying files to/from your machine.
    As for mail, there are a myriad of issues in running your own mail server, especially if you don't have your own domain and static IP addresses. I'd seriously defer that question until you're more settled in your server plans.

  • G4 Quicksilver as a file server

    I'm planning to make my G4 Quicksilver dual 1GHz / 1 GB into a file server connected via Ethernet to my other Intel machines.
    I have various SATA HDs that I plan to populate the G4 with, interfacing to an OWC Firmtek SeriTek/1V4 4-port 1.5 Gb/s eSATA PCI/PCI-X host adapter card.  Then I intend to run the G4 headless, using screen sharing via Ethernet to control the machine.
    Questions:
    1.  Any experiences out there with that OWC card interfacing with up to four internal SATA HDs?  Is there something else that might be better?
    2.  Will the power supply of the G4 be capable of handling the power requirements of up to four HDs?  I've seen various discussions here about G4/G5 PSU issues and want to make sure my configuration won't cause problems.  Access to the HDs will be exclusively for reading data off one unit at a time to the external network, with no HD-to-HD transfers internal to the machine or any writing of data to the HDs.
    3.  I'm planning to run a minimal install of OS X 10.5.8 on one of the HDs, no applications, etc.  Should I use 10.5.8, or are there any benefits to running an earlier version such as Tiger or Panther?  I have install disks for all of them, so no problem there.
    4.  Any issues with headless control of the G4?
    5.  Anything else I should be aware of?
    Would appreciate any thoughts or advice on this.

    Thanks for the info, BD.  I hadn't thought about the heat issue; I will have to watch that.  Will the drives spin down if not accessed for a while?
    I already have two OWC FW 800 SATA docks that I daisy-chain for my working storage, video editing, Photoshop, etc.
    The G4 will be used as a data archive system and not for daily updates, so the price of the RAID unit and its capabilities are beyond my needs.  I have duplicates of the disks I'll be putting in the G4, so backup should be okay.
    I tried out the G4 with its current ATA/IDE drives, 10.5.8, and screen sharing from my iMac worked without a display connector attached to the G4, so that seems to be okay.  It wasn't a full-size screen, but menu commands were functional.
    I'd still be interested in hearing from anybody who has experience with the OWC Firmtek SATA card.

  • Windows 2008 R2 Multi-Site (geo) Cluster File Server

    We need to come up with a new HA file server (user drive data) solution complete with DR. It needs to be 2008 R2, cater for about 25TB of data, and be suitable for 500 users (nothing high end on I/O). I don't want to rely on DFS for any form of resilience
    due to its limitations for open files. We have two active-active data centers (a third can be used for file share quorum).
    We could entertain:
    1)
    Site1 - 2 x HP ProLiants with MSA storage, replicating with something like DoubleTake to a third HP Proliant at site 2 for DR.
    2)
    Site1 - 2 x HP ProLiants with local storage and VSA or HP StoreVirtual array (aka LeftHand), using SAN replication to site 2 where we could have a one or two node config of the same setup.
    Ideally I would like all 3/4 nodes in these configurations to be part of the same multi-site cluster to ensure resources like file shares are in sync. With two pieces of storage across this single cluster (either DoubleTake or SAN replication to local disks in DR), will this work? How will the cluster/SAN fail over the storage?
    We do have VMWare 5.0/1 (not 5.5 yet). We don't have Hyper-V yet either. Any thoughts on the above, and possible alternatives welcome. HA failover RTO we'd like in seconds. DR longer, perhaps 30 mins.
    Thanks in advance for any thoughts and guidance.

    For automated failover between sites, the storage replication needs to have a way to script the failover so you can have a custom resource that performs the failover at the SAN level before the disks come online. 
    DoubleTake has GeoCluster which should accomplish this. I'm not sure about how automated Lefthand's solution is for multi-site clusters.
    VMware has Site Recovery Manager, though this is really an assisted failover and not really an automatic site failover solution. It's automated so that you can failover between sites at the push of a button, but this would need to be a planned failover.
    RTO of seconds might be difficult to accomplish as you need to give the storage replication enough time to reverse direction while giving the MS cluster enough time to bring cluster applications online. 
    When planning your multi-site cluster, I'd recommend going with 2 nodes on each site and then using a file share witness quorum on your 3rd site. If you only had one node on the remote site, the primary site would never be able to fail over to the remote site without manually overriding the quorum, as 1 node isn't enough to gain enough votes for quorum. With 2 nodes on each site and an FSW, each site has the opportunity to gain enough votes to maintain quorum should one of the sites go down.
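    A short sketch of that 2+2 layout with a file share witness, assuming all four nodes are already built and the third site exposes a share; all names are placeholders:

    # Four nodes across two sites, no default storage selection.
    New-Cluster -Name GEO-FS -Node S1N1, S1N2, S2N1, S2N2 -NoStorage

    # Put the witness share in the third datacenter so whichever site
    # survives can still reach a majority of votes.
    Set-ClusterQuorum -NodeAndFileShareMajority "\\SITE3\ClusterWitness"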
    Hope this helps.
    Visit my blog about multi-site clustering

  • Scale Out File Server for Applications using Shared VHDX

    I'm just trying to get a definitive answer to the question of whether we can use a shared VHDX in an SOFS cluster that will be used to store VHDX files.
    We have a 2012 R2 RDS solution and store the User Profile Disks (UPDs) on an SOFS cluster that uses "traditional" storage from a SAN. We are planning on creating a new SOFS cluster and wondered if we can use a shared VHDX instead of a CSV as the storage that will then be used to store the UPDs (one VHDX file per user).
    Cheers for now
    Russell

    Sure you can do it. See:
    Deploy a Guest Cluster Using a Shared Virtual Hard Disk
    http://technet.microsoft.com/en-us/library/dn265980.aspx
    Scenario 2: Hyper-V failover cluster using file-based storage in a separate Scale-Out File Server
    This scenario uses Server Message Block (SMB) file-based storage as the location of the shared .vhdx files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
    The following table describes the physical host prerequisites.
    Cluster type: Scale-Out File Server
    Requirements:
    - At least two servers that are running Windows Server 2012 R2.
    - The servers must be members of the same Active Directory domain.
    - The servers must meet the requirements for failover clustering. For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate Hardware for a Failover Cluster.
    - The servers must have access to block-level storage, which you can add as shared storage to the physical cluster. This storage can be iSCSI, Fibre Channel, SAS, or clustered storage spaces that use a set of shared SAS JBOD enclosures.
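    For illustration, attaching a shared .vhdx to both guest-cluster nodes on 2012 R2 looks roughly like this; the file must live on a CSV or on an SOFS share, and the VM names, size and path are invented:

    # Create the data disk on cluster storage, then attach it to both guests
    # with persistent reservations enabled (this is what makes it "shared").
    New-VHD -Path "C:\ClusterStorage\Volume1\UPD01.vhdx" -SizeBytes 100GB -Dynamic

    foreach ($vm in "GUEST-FS1", "GUEST-FS2") {
        Add-VMHardDiskDrive -VMName $vm `
            -Path "C:\ClusterStorage\Volume1\UPD01.vhdx" `
            -SupportPersistentReservations
    }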

  • 2 Hyper-V Servers with Failover Cluster and a single File Server and .VHDs stored on a SMB 3 Share

    I have 2 x M600 Dell blades (100 GB local storage and 2 NICs) and a single R720 file server (2.5 TB local SAS storage and 6 NICs). I'm planning a lab/developer environment using 2 Hyper-V servers with failover clustering and a single file server, putting all .VHDs on an SMB 3 share on the file server.
    The idea is to have an HA solution, live migration, etc., storing the .VHDs on an SMB 3 share:
    \\fileserver\shareforVHDs
    Is it possible? How will the cluster understand \\fileserver\shareforVHDs as a cluster disk and offer HA on it?
    Or will I have to "re-think", forget about VHDs on an SMB 3 share, and deploy using iSCSI?
    Does Storage Spaces make a difference in this case?
    All based on Windows 2012 R2 Std, English version.

    You can do what you want to do just fine. Hyper-V on Windows Server 2012 R2 can use an SMB 3.0 share instead of block storage (iSCSI/FC/etc.). See:
    Deploy Hyper-V over SMB
    http://technet.microsoft.com/en-us/library/jj134187.aspx
    There would be no shared disk and no CSV, just an SMB 3.0 folder that both hypervisor hosts have access to. Much simpler to use. See:
    Hyper-V recommends SMB or CSV?
    http://social.technet.microsoft.com/Forums/en-US/d6e06d59-bef3-42ba-82f1-5043713b5552/hyperv-recommends-smb-or-csv-
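    A minimal sketch of the share side, assuming the 2012 R2 SmbShare module on the R720; Hyper-V over SMB needs full control for both hosts' computer accounts at both the share and the NTFS level, and every name below is a placeholder:

    # Share with full access for both Hyper-V hosts' machine accounts.
    New-SmbShare -Name "VMs" -Path "D:\VMs" `
        -FullAccess 'CONTOSO\HV1$', 'CONTOSO\HV2$', 'CONTOSO\HyperVAdmins'

    # Matching NTFS permissions, e.g. with icacls:
    # icacls D:\VMs /grant "CONTOSO\HV1$:(OI)(CI)F" "CONTOSO\HV2$:(OI)(CI)F"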
    You'll have a limited solution, however, as your single physical server acting as the file server would be a single point of failure.
    You can use Storage Spaces just fine, but you cannot use Clustered Storage Spaces with the disks where they are: for that you'd have to take the SAS spindles out of your R720 box and mount them in a SAS JBOD (make sure it's certified). That way you get rid of active components (CPU, RAM) and keep a more robust, all-passive SAS JBOD as your physical shared storage. Better than a single Windows-running server, but for true fault tolerance you'd need 3 SAS JBODs. Not exactly cheap :) See:
    Deploy Clustered Storage Spaces
    http://technet.microsoft.com/en-us/library/jj822937.aspx
    Storage Spaces, JBODs, and Failover Clustering – A Recipe for Cost-Effective, Highly Available Storage
    http://blogs.technet.com/b/storageserver/archive/2013/10/19/storage-spaces-jbods-and-failover-clustering-a-recipe-for-cost-effective-highly-available-storage.aspx
    Using Storage Spaces for Storage Subsystem Performance
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn567634.aspx#enclosure
    Storage Spaces FAQ
    https://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx
    An alternative would be to use a virtual SAN (similar to VMware VSAN); in that case you can get rid of physical shared storage entirely and use cheap high-capacity SATA spindles (and SATA SSDs!) instead of expensive SAS.
    Hope this helped :)

  • Windows Server 2012 R2 Scale out file server cluster using server disks as CSV

    Hi,
    My question is whether I can create a Scale-Out File Server cluster with a CSV using the disks that come with the servers. We have 2 servers with 2 arrays each: 1 array for the OS files and 1 array that we could use for the CSV.
    Regards.

    Hi,
    an SOFS needs some kind of shared storage; in the old days this could be an iSCSI or FC SAN, and now also a shared SAS JBOD with Clustered Storage Spaces.
    If you have 2 servers with "local" disks, you need some sort of software to create a shared disk layer out of those local disks, like StarWind or DataCore.
    Scale-Out File Server for Application Data Overview
    http://technet.microsoft.com/en-us/library/hh831349.aspx
    Check out Step 1: Plan for Storage in Scale-Out File Server.
    Oh, I forgot the usual fourth option: some kind of clustered RAID controller, like HP or Dell offer in some solutions.
    Udo
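    As a rough sketch of the shared-SAS route Udo describes (Clustered Storage Spaces), assuming a certified JBOD that both nodes can see; the friendly names are placeholders, and purely local arrays will not show up as poolable here:

    # Pool only the disks both nodes can see (local arrays won't qualify).
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SofsPool" `
        -StorageSubSystemFriendlyName "Clustered*" `
        -PhysicalDisks $disks

    # Carve a mirrored virtual disk to become the CSV behind the SOFS share.
    New-VirtualDisk -StoragePoolFriendlyName "SofsPool" -FriendlyName "CsvDisk" `
        -ResiliencySettingName Mirror -UseMaximumSize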
    Udo, clustered RAID controllers still require SAS disks mounted into an external enclosure *OR* everything mounted into a single chassis for the Cluster-in-a-Box scenario (but that's for OEMs). The only two benefits somebody would get from them are a) the ability to provide RAID LUs to Clustered Storage Spaces (non-clustered RAID controllers would be used as SAS controllers only, in pass-thru mode) and b) the ability to have caches synchronized, so a VM moved from one physical host to another would not start from the "cold" state. Please see the LSI Syncro reference manual for details:
    Syncro 8i
    http://www.lsi.com/downloads/Public/Syncro%20Shared%20Storage/docs/LSI_PB_SyncroCS_9271-8i.pdf
    "LSI Syncro CS solutions are designed to provide continuous application uptime at a fraction 
    of the cost and complexity of traditional high availability solutions. Built on LSI MegaRAID 
    technology, the Syncro CS 9271-8i enables OEMs and system builders to use Syncro CS 
    controllers to build cost-effective two-node Cluster-in-a-Box (CiB) systems and deliver high 
    availability in a single self-contained unit.
    Syncro 8e
    http://www.lsi.com/downloads/Public/Syncro%20Shared%20Storage/docs/LSI_PB_SyncroCS_9286-8e.pdf
    "LSI Syncro CS solutions are designed to provide continuous application uptime at a fraction of the cost and complexity of traditional high availability solutions. Built on LSI MegaRAID technology, the Syncro CS 9286-8e solution allows a system administrator to build a cost-effective, easy to deploy and manage server failover cluster using volume servers and an off-the-shelf JBOD. Syncro CS solutions bring shared storage and storage controller failover into DAS environments, leveraging the low-cost DAS infrastructure, simplicity, and performance benefits. Controller-to-controller connectivity is provided through the high-performance SAS interface, providing the ability for resource load balancing, helping to ensure that applications are using the most responsive server to boost performance and help prevent any one server from being overburdened."
    So... Making a long story short: the 8i is for OEMs and the 8e is for end users but requires a JBOD.
    Hope this helped :)
