RAID Storage for iBook

I have an iBook Duo with a Minimax 1TB external drive almost full of videos and data. I am trying to find the best solution for redundancy, speed and cost effectiveness.
Not sure what the best option is... I need at least 2 TB of protected storage and was planning on a RAID 10 setup (two mirrored striped sets) using the software RAID in OS X 10.6, but was told by the Genius Bar that it would be sketchy at best since I'd need to run the HDDs through a hub.
Second option is two G-RAID RAID 0 drives with software mirroring, but again I would have to use a FireWire hub.
Third option is an external RAID enclosure such as a G-Speed, but the cost is overwhelming, and to get one with RAID 10 I'd need to convert eSATA to FireWire/USB to connect it, and I don't know how that will turn out.
Last option is selling my entire system and spending a lot more money on a Mac Pro, since I really don't need a laptop anymore. Can anyone help with options or share your setup before I buy the wrong thing or scrap my system (or, God forbid, my HDD fails!)?
Thank you!!
Jim
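
(For reference, a hedged sketch of what the software-RAID route would look like from Terminal. The disk identifiers below are hypothetical; check yours with "diskutil list". AppleRAID can nest sets, so RAID 10 here is two mirrors striped together, though whether four disks behind a FireWire hub behave well is exactly what the Genius Bar was warning about:)

diskutil appleRAID create mirror Mirror1 JHFS+ disk2 disk3
diskutil appleRAID create mirror Mirror2 JHFS+ disk4 disk5
diskutil appleRAID create stripe Media JHFS+ disk6 disk7    # disk6/disk7 = the two new mirror sets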

Hey Shock Monster
I think you've stumbled into the wrong forum--you're in the iBook (Colors) forum, and those machines are about 10~11 years old by now.
Perhaps you have a MacBook?
Also, what OS are you running? I'll ask the hosts to move your post to a more appropriate spot so you can get some answers!
~Lyssa

Similar Messages

  • Looking for some RAID storage ideas...can anyone help?

    Hi, I'm looking for some recommendations out there. I'm going to be getting a new PowerMac tower with a cinema display and will also be looking to get some RAID-type storage for it so that I can do lots of video editing and photo storage.
    Ideally, I'd like to get rack-type storage with swappable drives, much like the Xserve RAID, but I don't need it to have server capabilities like the Xserve does, i.e. I'm not going online with anything. This is purely going to work as an external storage tower that two adjacent computers will be directly FireWired to.
    Obviously, the XServe would fit this, but like I said, I don't need the computer/server capability and also would not want to spend $6000 just for a single TB of storage that the XServe has. I'm thinking more along the lines of a swappable drive rack system like the old Avid systems used to have 10 years ago. Does anyone know if Apple makes such a thing? Or if any third party companies do?
    Thanks.

    If you are using typical DV cameras you don't need a striped RAID to capture the video or to edit it. And striped means that if you lose one drive you lose both drives' worth of data.
    The LaCie Bigger and Biggest disks may be suitable solutions; they are hardware-RAIDed internally and come in 1 and 1.5TB models. Using FW800 they are plenty fast for video.

  • RAID storage solution for photographer

    Hi all
    I hope this question is not out of topic here.
    I am looking for a RAID solution for storing my photographs, but I have little knowledge about RAID.
    So far I have been using DVDs, but I realised that after a couple of years some are not readable any more, so this isn't the right solution. No disaster yet, as I have duplicates too, but I need to sort this out quite soon before I lose important files.
    Before I go further: I only have a PB and an old, slow Intel PC recovered from a friend who was throwing it out.
    I did read quite a few reviews of products, but there aren't too many around.
    I believe that I need a Raid1 for data safety.
    One merchant told me he had this solution for me, but can't find any review about it.
    It does look very similar to another brand, which had bad reviews with failure after a few weeks.
    I also cannot find the max capacity details.
    Is there any photographer around who have found a good solution for storage and/or backing up?
    Do you work on files directly from the unit, or transfer files on the PB when needed?
    My PB has a firewire 400 that I use very often and a firewire 800 which I have never used so far.
    But are these little units a good solution, or would a server be better?
    I'm not sure what the advantages of servers are.
    I believe these little units also exist with network connectivity, but I am not sure if there is any advantage for me. I understand it is a bit slower. Can they be accessed online, or do I need a server for that?
    Can I convert my old PC into a server?
    Sorry for the many questions, but I don't want to start spending money on something that doesn't serve me well in the long run.
    I am looking for effectiveness, longevity, price and simplicity in that order, but I don't have too much spare cash to splash.
    I am mainly concerned about Apple dropping the firewire.
    Many thanks in advance.
    If you have a good solution that you are happy about, please let me know.
    Cheers
    Laurent

    Thank you both.
    I am a bit unfamiliar with the term redundancy. Do you mean that I need to back up the RAID stuff?
    I thought that it was already a backup? Or is it just in case the RAID enclosure fails?
    Smokerz, do you mean that you just duplicate your data across a few drives manually?
    I like the Drobo concept, but my main concern is that if the Drobo box fails, one needs to purchase the exact same product (same firmware) to be able to recover the data; or do you have a backup off the Drobo box on a single drive? Although I like the concept of having the ability to use any hard drive lying around, and not having to purchase identical drives. Are these special drives for RAID and servers, or just consumer drives?
    Sorry if my questions might sound a bit stupid, but all the RAID stuff is still a bit abstract for me.
    I like the idea of being able to throw the drive into any type of enclosure. My experience with electronic products is that they are great until something goes wrong, and that is when one discovers the limitations. The ability to access data with maximum compatibility with any other product is a must. I never purchase an enclosure of any sort without multiple connectivity, and it has served me well so far. But I have mainly backed up systems in that way.
    So, while using RAID 1 (as I understand it, other RAID levels just spread the data across a few drives), in case of an enclosure failure, would one be able to put the drives in a single-drive enclosure just until a new enclosure is purchased? Or are the drives formatted to work only in an array? Is there a solution against all troubles, or am I just dreaming?
    I have noticed that DU provides RAID facilities. Is this just for internal drives?
    Many thanks
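    (On the DU question: Disk Utility's software RAID is not limited to internal drives; the same AppleRAID engine works on external FireWire disks. A minimal sketch, with hypothetical disk identifiers:)
    diskutil list                               # find the two external disks
    diskutil appleRAID create mirror Photos JHFS+ disk2 disk3
    diskutil appleRAID list                     # check the mirror's health later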

  • Upgrading a 3-node Hyper-V clusters storage for £10k and getting the most bang for our money.

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72GB RAM, dual CPU etc...)
    1x Dell MD3220i iSCSI with dual 1GB connections to each server (24x 146GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 Expansion Array with 12x 2TB 7.2K drives in RAID 10 - Tier 2 storage, large vm's, files etc...
    ~25 VM's running all manner of workloads, SQL, Exchange, WSUS, Linux web servers etc....
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue, as we only get around 125MB/s over the dual 1GB iSCSI connections to each physical server (we have tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6TB of the 12TB RAID 10 space).
    Migrating to 10GB server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual port 10GB NIC team in each server and virtualize cluster, live migration, vm and management traffic (with QoS of course)
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities.  Use our existing 2TB drives for capacity and purchase sufficient SSD's to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity are out of our reach (RDMA, extra 10GB NICs, etc...).
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active?
    If CSVs are active/passive, it's suggested that you should have a CSV for each node in your cluster? How in production do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management, etc...).
    If the CSV is active/active am I correct in assuming that DPM will backup vm's without causing any re-directed IO?
    Will DPM backups of VM's be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance for anyone who can be bothered to read through all that and help me out!  I'm sure there are more questions I've forgotten but those will certainly get us started.
    Also lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks
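    (For concreteness, the Storage Spaces part of the plan might look roughly like this with the 2012 R2 storage cmdlets; the pool and tier names are hypothetical and the sizes are placeholders, not a sized recommendation:)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "VMPool" -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks
    $ssd = New-StorageTier -StoragePoolFriendlyName "VMPool" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "VMPool" -FriendlyName "HDDTier" -MediaType HDD
    New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "CSV1" -StorageTiers $ssd,$hdd -StorageTierSizes 400GB,6TB -ResiliencySettingName Mirror -WriteCacheSize 10GB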

    1) You can of course use direct SAS connections with a 3-node cluster (or 4-node, 5-node, etc.). It would also be much faster than running with an additional SoFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes stay local, travelling down the SAS fabric. With a SoFS layer added, you have the same amount of I/O targeting SAS, plus Ethernet latency (huge compared to SAS) sitting between the requestor and your data on the SAS spindles, with the I/Os wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SoFS boundary. The reason SoFS is recommended is cost: the final SoFS-based solution is cheaper, because SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you have a pair (at least) of SoFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you compromise performance in favor of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts, but you would have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?).
    2) With 2012 R2, all I/O from multiple hypervisor nodes goes through the storage fabric (in your case that's SAS); only metadata updates go through the coordinator node over Ethernet. Redirected I/O is used in two cases only: a) no SAS connectivity from the hypervisor node (but Ethernet still present), and b) broken-by-implementation backup software keeping access to the CSV through the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for reference:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnect design you'll be using there is virtually no point in having one CSV per hypervisor. There are cases where you would still do this. For example, if you had all-flash and combined spindle/flash LUNs and you knew for sure you wanted some VMs to sit on flash and others (not so I/O-hungry) to stay on "spinning rust". Another case is a many-node cluster, where multiple nodes fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even where ODX is present it's not going to help). Again, this is a place where SoFS "helps": the intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts between the two SoFS nodes only, instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance it DOES make sense there to create the concept of a "local LUN" (and thus a "local CSV"), as reads targeting that LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (basically feeding DAS to Hyper-V and SoFS to avoid expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that's NOT your case, so it does NOT make sense to keep many CSVs with only 3 nodes present or SoFS possibly used.
    4) DPM is going to put your cluster into redirected mode only for a very short period of time; Microsoft says NEVER. See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT are used, so the data transferred is incremental after the initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS

  • RAID-1 for root/swap on Solaris 10 x86

    OK, now that I have solved my problem with the update manager, I am in the process of trying to create RAID-1 volumes for root and swap. I am working my way through the process by following the procedure in the Solaris Volume Manager Administration Guide. I have run into a few errors in the doc, e.g. references to using installboot as opposed to installgrub.
    Now I am trying to create my new volume using the Enhanced Storage tool within the Solaris Management Console. I get as far as trying to create the new volume (RAID-0) with the root slice in it but it fails with metainit telling me that the slice is already mounted on /. (No kidding -- that is the whole idea.)
    Does anyone have a working procedure to create RAID-1 for root and swap?
    (BTW, I am trying to do this from the doc and admin guides without stealing time from folks on the list here but I just keep running into walls.)
    (Once I get that done I hope to use zfs for the rest of the storage.)
    Brian
    Message was edited by: brian -- initial post was truncated.
    brian.lloyd

    I am attempting to set up RAID-1 (mirroring) for root and swap, working from the "Solaris Volume Manager Administration Guide," Part No: 816-4520-12. (Same place you pointed me to, except I downloaded the PDF so I could page through it faster.) On page 105 you will find an entry for "Create a mirror from the root (/) filesystem," which directs you to page 122, "x86: How to Create a RAID-1 Volume From the root (/) File System by Using DCA".
    I have two identical 300GB PATA drives with 20GB root/boot and swap partitions. The slices are all identical as to size and location, i.e. c0d0s0 is root and c1d0s0 will be the mirror for root. (I will use "mirror drive" to refer to c1d0.)
    Proceeding through the steps I have verified I can boot the mirror. I have created the slices. I have used fdisk to put the master boot block on the mirror drive. I have installed grub on the mirror drive.
    At this point I am on step six which reads:
    "Create a new RAID-0 volume on the slice from the previous step by using one of the following
    methods. Only the single slice can be included in the RAID-0 volume. " At that point I decided to try to proceed using the Enhanced Storage Tool from smc.
    As to RAID-0 vs. RAID-1, it is my understanding from the doc that one must first create the RAID-0 volumes, each containing just the single slice, one for each of the two volumes that will be combined to make the RAID-1 mirror, hence my comment about RAID-0.
    I see your point about using the -f option to force the creation of the volume for the mounted root filesystem. One would think that, since they included the option of using the Enhanced Storage Tool, it might work. (I did not see a way to force the creation of the RAID-0 volume there.) Certainly using metainit with the -f flag makes sense.
    Next step will be to go back and punt EST in favor of using metainit from the command line.
    I get the feeling that no one at Sun is actually trying what is in the manuals to ensure correctness.
    Brian
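    (For anyone who hits the same wall, here is a hedged sketch of the metainit sequence for root from the command line, using the device names above; swap is analogous with its own metadevices. This is the documented SVM procedure, not something re-verified on this exact box:)
    metainit -f d11 1 1 c0d0s0     # -f forces creation on the mounted root slice
    metainit d12 1 1 c1d0s0        # matching slice on the mirror drive
    metainit d10 -m d11            # one-way mirror containing the root submirror
    metaroot d10                   # updates /etc/vfstab and /etc/system
    lockfs -fa                     # flush, then reboot with: init 6
    metattach d10 d12              # after the reboot, attach the second submirror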

  • The best option to create  a shared storage for Oracle 11gR2 RAC in OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5, in a production environment? And could you help me create the shared storage? There is no step for this in the Oracle installation guide; there are steps only for ASM disk creation.
    Thank you.

    Here are the names of the partitions and their permissions. The partitions with 146GB, 438GB and 438GB of capacity are my storage. Two of the three disks, the 438GB ones, were configured as RAID 5, and the remaining disk was configured as RAID 0. My storage is a Dell MD3000i, connected to the nodes through Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8844 71039398+ 83 Linux
    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 4079 32764536 82 Linux swap / Solaris
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 17784 142849948+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 53352 428549908+ 83 Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l
    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8796 70653838+ 83 Linux
    /dev/sda2 8797 12875 32764567+ 82 Linux swap / Solaris
    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 17784 142849948+ 83 Linux
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 53352 428549908+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    [root@rac2 ~]#
    Thank you.
    Edited by: user12144220 on Aug 10, 2011 1:10 AM
    Edited by: user12144220 on Aug 10, 2011 1:11 AM
    Edited by: user12144220 on Aug 10, 2011 1:13 AM
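    (A hedged note on the listings above: the shared LUNs come up under different device names on each node; the 146GB LUN is /dev/sde on node 1 but /dev/sdc on node 2. That is exactly what ASMLib labels, or udev rules, are for; the label lives in the partition header, so both nodes see the same name. A minimal sketch, assuming ASMLib is installed and configured:)
    [root@rac1]# oracleasm createdisk DATA1 /dev/sde1
    [root@rac1]# oracleasm createdisk DATA2 /dev/sdf1
    [root@rac1]# oracleasm createdisk FRA1 /dev/sdg1
    [root@rac2]# oracleasm scandisks     # pick up the labels on the second node
    [root@rac2]# oracleasm listdisks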

  • 3rd Party SATA raid cards for internal drives?

    All --
    Apart from the discussion as to if RAID actually benefits a home desktop system, I am wondering if anyone has the lowdown on using 3rd party SATA raid cards to support the INTERNAL hard drives on the Mac Pro series?
    My Mac Pro is still slated to be built at Apple and for now I have the minimum memory and HD spec being requested at Apple, with the plan to upgrade the memory and drives from OWC or another vendor.
    I've toyed with the idea of utilizing Disk Utility's software RAID features (e.g. RAID 0 for scratch disks, RAID 1 for boot, RAID 0+1 for all else.) I've also toyed with the notion of searching for a hardware raid solution which would allow me to transfer the internal SATA cable runs from the motherboard to a host adapter card for an internal multi-channel experience (with options to create and break mirrors to external devices for backup purposes.)
    So....
    Has anyone used, or have experience with, 3rd party hardware RAID controllers which can connect to the internal HD bays? Are there limitations (i.e., does the boot drive HAVE to reside off the internal motherboard controllers, or can an internal hardware controller successfully boot the system) which ought to be noted?
    Finally, in the event that a host adapter card cannot drive the internal bays, can anyone give feedback on hardware SATA cards to power external drive bays with support for Disk Utility (to allow RAID 1 pairings of internal drives to external snapshot-backup drives)?
    Thanks for your time,
    Ian Poulin
    Richmond, Va

    I am wondering if anyone has the lowdown on using 3rd party SATA raid cards to support the INTERNAL hard drives on the Mac Pro series?
    There are many 3rd party controllers that support the internal HDs if an internal iPass connector is used. The problem is that some are bootable but most are not.
    The Areca ARC-1680ix-12 and the HighPoint RocketRAID 4320 are bootable. However, the system cannot be installed via the Apple DVD. Instead the user needs to clone a boot drive with the proper drivers to the boot volume on the controller and then boot from the 3rd party controller.
    The other issue I found is that these controllers do not support Boot Camp. If Boot Camp is desired, my recommendation would be to leave the internal HDs on the Mac Pro internal bus intact and use the 3rd party controller for external storage. This method provides four internal bays that are bootable, support Boot Camp and can be used for system backups. I use the 3rd party controller for external storage for large RAID sets and hot swapping hard disks.
    With the internal bays intact and external hot-swap RAID storage available, the user can support Boot Camp, multiple system volumes and large external RAID sets. From my experience, using a 3rd party controller with the internal HD bays always has some limitations. The user usually does not realize it until later, when Boot Camp does not work, or the computer fails on a system upgrade, or the controller does not work at all with a new version of Mac OS X.
    Staying with the standard internal Mac Pro bay configuration will be the best configuration to avoid compatibility issues with future versions of Mac OS X. It is rumored that the new Snow Leopard may require 64-bit drivers. If that is the case, I would expect most if not all existing 3rd party controller drivers to fail. Some drivers will be upgraded after a few months while others may not. Having the internal Mac Pro SATA controller intact should at least allow the Mac Pro to boot if my guess about compatibility issues is correct.
    can anyone give feedback to hardware SATA cards to power external drive bays with support for Disk Utility (to allow RAID1 pairings of internal drives to external snapshot-backup drives)?
    There are a large number of external controllers that work with Disk Utility. Here are some of my favorites.
    1. FirmTek SeriTek/2SE2-E and the SeriTek/5PM
    http://firmtek.stores.yahoo.net/sata5pm2se2.html
    http://www.amug.org/amug-web/html/amug/reviews/articles/firmtek/5pm/
    2. Sonnet Tempo E4P
    http://www.amug.org/amug-web/html/amug/reviews/articles/sonnet/mac-pro/
    3. DAT Optic eSATA_PCIe8
    http://www.amug.org/amug-web/html/amug/reviews/articles/datoptic/pcie8/
    Have fun!

  • What RAID storage system should I use?

    To set the stage here. I'm somewhat of a newbie to the video industry. I've worked as a videographer for a non-profit for 5 years. At that job, we just skimped by on what we could afford, which wasn't much. I just started a new "professional level" job for a school district and have been given the keys to a fairly substantial budget to get whatever I need to do the job.
    I want to do right by them and not waste money, so I want my purchase decisions to be educated. I'm an intermediate computer user, but have never used raid configurations before, so please be kind. Also, we can really only purchase through a few vendors. B&H is where I'm getting all my other video equipment, so I'm only looking at options available there for my storage needs as well.
    Right now we record using Canon XA10 and XA20 model cameras. I'm hoping to upgrade to XF300's with this new budget, but still we're only talking MXF files, 1920x1080 at 50Mbps 4:2:2. So I'm not dealing with huge uncompressed footage.
    Still I record a fair amount of footage. In the first 2 months on the job I've accumulated about 460GB of raw video, and I don't expect demand to go down in the future.
    Right now, my idea is to purchase two Western Digital 12TB Raid Arrays in Raid 0.
    http://www.bhphotovideo.com/c/product/1053138-REG/wd_wdblwe0120jch_nesn_12tb_my_book_duo.html
    (I should note, I'm using a Windows 7 PC, so I only have access to USB 3.0, not Thunderbolt)
    The first raid array would be my scratch disk, the second would be used for manual backup at the end of every day. (Using a utility like SyncBack)
    Once my projects are complete, and I'm sure I won't need to access them, I'd like to move them off to a 3rd RAID array like this set up in Raid 5 for redundancy.
    http://www.bhphotovideo.com/c/product/1018063-REG/owc_other_world_computing_mercury_qx2_4_by_hw.html#specOWCM3QX2K0GB
    This array would serve primarily as an archival unit, with only occasional transfers to it and use in only rare circumstances where I need access to several month old footage.
    Do you have any suggestions of a better system or workflow?
    Thanks so much!
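    (One hedged aside: Windows 7 already ships with robocopy, which can handle the end-of-day mirror without extra software. The drive letters below are hypothetical, with E: as the scratch array and F: as the backup array; note that /MIR deletes files on F: that no longer exist on E:, so it really is a mirror, not an archive:)
    robocopy E:\ F:\ /MIR /R:2 /W:5 /LOG:C:\logs\daily-sync.log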

    Would be good to raise this over on the:
    Hardware Forum ...
    https://forums.adobe.com/community/premiere/hardware_forum
    Neil

  • Best Raid Configuration for a 8-1tb hhd server

    Hi All,
    I have been trying to figure out what would be the best RAID configuration for my Windows 2012 Essentials server, so I was wondering if I could get some advice on this. Here is what I would like to get out of this configuration.
    I would like to split the 8 1TB drives into 2 virtual disks, one of about 300GB for the OS to be installed on, and the rest to be used for storage. I would like to have good redundancy and fairly good read & write capability.
    I will be using this server for a Small Web Page and Game Hosting, Email Exchange, Critical Application hosting.
    I would like the ability to enlarge the overall raid storage by swapping out my 1Tb drives with 2Tb or larger as the need for more storage arises.
    Any help would be greatly appreciated,
    Thanks

    You may be better off using Storage Spaces and not using RAID at all.
    Robert Pearman SBS MVP | itauthority.co.uk
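    To make that concrete, a minimal Storage Spaces sketch under stated assumptions: the OS stays on its own disk or hardware mirror (Windows cannot boot from a storage space), the remaining drives show up as poolable, and the pool/disk names below are hypothetical:
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "Data" -ResiliencySettingName Mirror -UseMaximumSize -ProvisioningType Fixed
    A mirror space also fits the growth requirement: you can add 2TB drives to the pool later and retire the 1TB ones.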

  • Extra monitors and raid storage on a 17" Macbook Pro

    I have a 17" Macbook Pro, and I want 2 things:
    An external monitor, as well as Thunderbolt RAID storage.
    Is there any way to connect an extra monitor while using the Thunderbolt for raid storage?
    Or do I have to use the eSata card for raid storage?
    I need the real estate on the big screen to get things done, but I also would like to have the Thunderbolt raid storage.
    What are my options? Any help appreciated.
    Thanks.

    So yours is an Early 2011 MBP then http://support.apple.com/kb/SP621
    May I suggest the 27" Apple ThunderBolt Display http://www.apple.com/displays/specs.html plus a ThunderBolt Storage from these http://store.apple.com/us/browse/home/shop_mac/mac_accessories/storage?n=thunder bolt&s=topSellers
    Stefan

  • Storage for RAC & ASM

    I am planning to install Oracle 10g RAC with ASM for our Oracle 10.2.0.4 database on Solaris 10. We have a Sun StorEdge SAN.
    I would like to get your suggestions for the best storage infrastructure for the RAC/ASM.
    Can someone share their storage design - RAID LUN's, Disk layout for the RAC environment? Do you have RAID volumes or individual Disks presented to the O.S? If RAID Volumes, then what RAID level are you using? Since ASM stripes and mirrors (normal or high redundancy), how have you layed out your storage for ASM?
    Should I create one RAID LUN and present it to the operating system, so that ASM diskgroup can be created from that LUN or individual disks can be presented?
    If anyone can point me to any documentation that can put some light on the storage design and layout, it would be really helpful.
    Thanks in advance!

    Refer to these; they may help you.
    http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html
    http://www.orafaq.com/node/66
    http://www.dba-oracle.com/real_application_clusters_rac_grid/cluster_file_system.htm
    Regards,
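    On the one-LUN-versus-many question: when the array already provides RAID, the usual choice is an ASM diskgroup with external redundancy, and presenting several LUNs (rather than one giant one) lets ASM stripe across them. A hedged sketch; the device paths and diskgroup name are hypothetical:
    $ sqlplus / as sysasm
    SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      2  DISK '/dev/rdsk/c3t1d0s4', '/dev/rdsk/c3t2d0s4';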

After creating RAID 5 for hard disks, CIMC shows Moderate fault

    Dear all,
    There are two UCS servers which the model is C240 M3
    When I created RAID 5 for the hard disks, CIMC showed a Moderate fault.
    The warning message is:
    storage virtual drive 0 degraded: please check the storage controller, or reseat the storage drive ucs
    anybody has idea for this case? Thank you!

    Here is a solution. Thank you!
    The warning message appears if you do not have a Battery Backup Unit installed on the server. I just worked with Cisco TAC, and the following is the solution:
    1. Launch KVM Console of the server
    2. Power Cycle the Server
    3. When LSI RAID configuration window displays, press CTRL+H and start the RAID Configuration Utility.
    4. Click on Virtual Drive. Example Virtual Drive 0.
    5. Change Default Write Mode to "Write Through" and click "Change"
    6. Press "Yes" to take the effect.
    7. Exit from the RAID Configuration Utility and restart the server.
    From:
    https://supportforums.cisco.com/discussion/11842501/ucs-c220-m3-megaraid-9266-8i-cache-degraded

  • Pegasus RAID Storage with Thunderbolt opinions / backup solutions?

    I'm trying to get some opinions about the Pegasus RAID Storage with Thunderbolt
    http://www.promise.com/storage/raid_series.aspx?m=192&region=en-global&rsn1=40&r sn3=47
    since I've been researching some info for a friend who is looking for an extremely reliable local storage solution for his record label.
    Does anyone here use this? If so, do you use it with Time Machine? And also what type of RAID setup did you use?
    I was thinking of a RAID 5 setup, since I feel that would be best in the long run: should any drive fail, the backed-up data would still be available on the other drives, even if the performance is a bit slower until the failed drive is replaced. I would like to hear about people's experience with these drives and RAID setups. Also, if anyone wants to suggest other disk options, please feel free to mention them here.
    This would mostly be for "Setup 1" which is mostly audio production dedicated.
    There is also a need for a "Setup 2" which would be mostly business/document dedicated.
    Would it be better to get another smaller local back up solution for Setup 2?
    We're not too keen on using Time Capsule for Setup 2, as their long-term performance doesn't have a great track record.
    We're also going to be using Carbonite to have an offsite/remote backup of everything we have locally, but I'm not too familiar with their services, and if anyone here can comment on that I would appreciate it. I've read up a bit on them and I would probably use their Business-level service.
    Any help, advice, or opinions are welcome and appreciated!
    Thanks!

    You can (re-)install to the Pegasus raid and boot from it.
    However, you'll actually get a faster system if you use an SSD for the system drive and then just use the Pegasus for data files (and Time Machine backups of the SSD ...).
    In any case:
    Be sure to use the Promise Utility (download it from their site) to configure the RAID level etc if you don't want the default RAID 5 configuration.  (For me I prefer RAID 6 if there are more than 4 drives, RAID 1 for 2 drives and RAID 10 for 4 drives – RAID 5 is just too risky; even configured with a spare).
    If you use the Pegasus as your system drive, consider using RAID 10 on it.  You'll "only" get 6 TB storage; but it'll be fast and quite safe (much safer than RAID 5).

  • TS130-11051CU RAID Storage Console

    Anyone have any idea what Windows RAID Storage Console I can use to monitor within Windows 2008R2 Foundation? I have it all configured during bootup just fine, but would like to be able to monitor from with Windows. Especially helpful for Remote Administration...
    Thanx in advance!

    Thanx for getting back to me.
    Sorry, it is for the onboard Intel. I tried various versions of the RST (Rapid Storage Technology) and finally found this one here worked.
    http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS013896
    However it didn't add in the "Service" & consequently I would get an error that the console couldn't connect to the service which wasn't even listed in the installed services. So I had to manually add the service by typing in the following at an elevated command window;
    sc create "Intel RST DM Service" binpath= "c:\program files (x86)\intel\intel(r) rapid storage technology\iastordatamgrsvc.exe"
    Then I had to ensure that the "Intel RST DM Service" was set to Startup Type "Automatic (Delayed Start)".
    All is good now! I wanted to test it out on another system, so I tried the same thing on my own server that I had been procrastinating on for quite some time (Intel M/board with the same Xeon processor & Intel C200 chipset), but there I didn't have to manually add the service; the installer did it itself.
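    (For anyone following along: the delayed start can also be set from the same elevated prompt instead of services.msc; a small sketch:)
    sc config "Intel RST DM Service" start= delayed-auto
    sc query "Intel RST DM Service"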

  • Video Production Storage for Home Environment

    I know what I'm about to talk about is relatively quite small compared to many businesses, but it's quite large for a home environment (and budget!)....
    I've recently started doing some Video Production using kdenlive; my last project was a 12 minute video and all the files for the project totalled ~120gb.
    I want to do more VP work, but I'd like to get some ideas on storage for this kind of work; lots of space, but fast access is also needed. At the moment I'm using a 4-bay external enclosure in RAID 10 connected via USB3, which seems to work OK, but I'm wondering if it would be worth turning that into RAID 6 (more efficient use of raw disks) for archiving projects, and building a smaller internal RAID 1/10 array of SSDs for doing "live" work?
    Do you do this? How do you handle it? What are your suggestions?

    I'm not sure I'm understanding it right, maybe I'm totally missing the point...?
    Do you have the project you're working on on the same storage as the ones you don't currently need? Does that mean you need fast access to the old projects, too?
    My setup and requirements are far below yours, but so far I'm doing fine with 50-300GB projects on a normal internal drive, keeping only the 2-3 projects I've worked on last (<3 that VelociRaptor 1TB HD). Isn't one internal drive usually already a fair bit faster than such a USB3 RAID 10? Depends on what bottlenecks play the lead, I guess...
    In the background, I simply sync that drive together with the rest of my system to my backup "array", which keeps hardlinks to deleted files in folders by date... so I can just delete files from the work drive without fear of losing anything. Those backup arrays aren't RAID either - I simply use backup HDs like it used to work with those C64 "cassette drives": if one is full, I dump the directory tree to a text file for later reference, put a label on the backup HD, put it in a drawer, and stick the next big, cheap, slow, empty HD into my cheap slow USB3 dock. In my case, the redundancy that results from this method turns out just right (all projects end up on at least 2 HDs in the end, only small ones on more than 4). That it works out so well might be partly coincidence though.
    I tried to involve RAID arrays in the past... turned out to make more problems & work than it had advantages in my case. I'm clumsy and also like to misplace hardware on occasion. Without raid there's a lot less room for me to mess things up.
    Last edited by whoops (2013-07-29 11:45:04)
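    (whoops's hardlink scheme can be approximated with rsync's --link-dest; a minimal sketch with hypothetical paths, where unchanged files are hardlinked against the previous run instead of being re-copied:)
    DEST=/mnt/backup
    TODAY=$(date +%F)
    rsync -a --delete --link-dest="$DEST/latest" /mnt/work/ "$DEST/$TODAY/"
    ln -sfn "$DEST/$TODAY" "$DEST/latest"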
