Partitioning on T3 arrays

We are in the process of implementing Sun StorEdge T3 arrays. Can anyone advise how to organise partitions on them, or point to any document that would help? Any information on building RAID on the T3 would be appreciated.
Thanks

http://www.sun.com/blueprints/browsesubject.html
There are several useful documents there, enough to get you through installation and administration:
* Sun StorEdge[tm] T3 Array: Installation, Configuration and Monitoring Best Practices (October 2001)
-by Ted Gregg
In order to fully realize the benefits of the capabilities built into the Sun StorEdge[tm] T3 array, it must be installed, configured, and monitored with best practices for RAS. This article details these best practices. It includes both Sun StorEdge T3 array configuration and host system configuration recommendations, along with brief descriptions of some of the available software installation and monitoring tools.
* Sun StorEdge[tm] T3 Dual Storage Array Part 3 - Basic Management (April 2001)
-by Mark Garner
The final article in the series looks at the configuration of basic management and monitoring functions on the T3 array. It concludes with example Expect scripts that could be used as a starting point for automating your own T3 installations.
* Sun StorEdge[tm] T3 Dual Storage Array Part 2 - Configuration (March 2001)
-by Mark Garner
This second article in the series addresses the installation and configuration of a T3 array partner group. It covers how two single arrays would be reconfigured to form a partner group, how the new devices are created on the host and how VERITAS Volume Manager integrates into the solution.
* Sun StorEdge[tm] T3 Dual Storage Array Part 1 - Installation, Planning and Design (February 2001)
-by Mark Garner
This article looks at the planning and design requirements for the installation of a Sun StorEdge T3 Array partner group. It is the first of three articles which address planning and design, configuration and basic management of a Sun StorEdge T3 Array.
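
A practical note to go with those documents: volumes ("partitions") on the T3 are created from the array's own telnet CLI rather than from the host; the host then simply sees the resulting LUN and partitions it with format or VERITAS Volume Manager. The sketch below is from memory, the volume name and drive ranges are placeholders, so treat it as a rough illustration and confirm the syntax in the T3 Administrator's Guide before running anything:
vol add v0 data u1d1-8 raid 5 standby u1d9    (create a RAID 5 volume on drives 1-8, drive 9 as hot spare)
vol init v0 data                              (initialize the data area - this takes a while)
vol mount v0                                  (make the volume visible to the host)
vol stat                                      (check volume status and initialization progress)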

Similar Messages

  • Partitions on raid-array not recognized on boot

    Hey there,
    I already had my share of problems with my raid setup and kernel/udev updates, but this one is new:
    I have a raid1 setup with several partitions on the array which get recognized as /dev/md0p1, /dev/md0p2, etc.
    My root-device is on /dev/md0p2 by the way.
    After the last kernel update I get dropped into the rescue shell at boot because the message "waiting 10 seconds for device /dev/md0p2" timed out. In the rescue shell I can see that the array is indeed properly assembled (cat /proc/mdstat looks fine), but somehow the corresponding partitions (md0p1, sda1, sdb2, etc.) are not linked in /dev or recognized at all (/proc/partitions).
    However, a single run of "blkid" seems to trigger some udev event, and afterwards all partitions and links magically appear.
    Does anyone have any idea what causes this behaviour and how to fix it? It would be very enlightening to know which part of my system is actually responsible for setting up the partition links (udev?).
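
    A hedged workaround, assuming udev really is the component that creates the partition links (the device name below is the one from this post, and the commands are standard udev/parted tools):
    partprobe /dev/md0                                      # re-read the partition table of the assembled array
    udevadm trigger --subsystem-match=block --action=add    # replay "add" events for all block devices
    udevadm settle                                          # wait for udev to finish processing the events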

    Sorry guys, I've been inundated with problems lately.
    Anyway, to answer your questions, yes I've swapped out everything.
    As it stands right now, I have green lights on everything; both sides even show up on IP and in ARD. The only issue now is that I can't see half of my RAID in Disk Utility from my Xserve. I can format one side and not the other.
    My server is an older G5 single 2GHz, build 7W98 and 2.5GB memory, (for reference).
    Thank you for your input. I appreciate it.
    DaveB

  • Unable to add partition on raid array, device or resource busy.

    Greetings,
    I want to create a disk image of a software RAID on one of my Arch boxes.
    I'm able to create my image with G4U successfully. I'm also able to restore my image without error on my new box.
    When my system boots up, I make sure that my RAID arrays are up by running cat /proc/mdstat.
    I can see that md1 and md2 are 2 of 2 and active RAID 1. But when I look at md0, this is what I get:
    md0 : active raid1 sdb3[1]
               6289344 blocks [2/1] [_/U]
    I try to add the partition sda3 to md0 array with this command:
    mdadm --manage /dev/md0 --add /dev/sda3
    The output of this command gives me this:
    mdadm: Cannot open /dev/sda3: Device or resource busy.
    It seems that this error only occurs on the /dev/md0 (/) array. I'm 100% sure that both my image and my drive (VMware HDD) are good.
    This is my partition table:
    /dev/sda1 /dev/sdb1 = /boot (md1) 100MB
    /dev/sda2 /dev/sdb2 = swap (md2) 2048MB
    /dev/sda3 /dev/sdb3 = / (md0) 8GB
    I have also tried the image creation with Acronis... same error.

    I solved my issue.
    My menu.lst was wrong..
    kernel /kernel26 root=/dev/md0 ro
    I should add this to my menu.lst:
    kernel /kernel26 root=/dev/md0 ro  md=0,/dev/sda3,/dev/sdb3
    Now it works.
    I followed the Arch Linux RAID guide, which says:
    Nowadays (2009.02), with the mdadm hook in the initrd it is no longer necessary to add kernel parameters concerning the RAID array(s).
    Which is wrong, because my distro is 2009.02; if someone can add a note to the wiki it could be useful.
    Thanks for your support
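
    A hedged alternative for anyone else restoring such an image, assuming an Arch install of that era: regenerate mdadm.conf inside the restored system and rebuild the initramfs, so the mdadm hook can assemble md0 itself without kernel parameters (the mkinitcpio preset name depends on your kernel package; it was kernel26 on installs of that period):
    mdadm --examine --scan > /etc/mdadm.conf    # record the arrays found on the restored disks
    mkinitcpio -p kernel26                      # rebuild the initramfs so the hook picks up the new config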

  • Dmraid ignores partitions on primary array

    I have two RAID 0 arrays on my installation. One array is two OCZ SSDs (named Sigma) and the other is a 2TB array (named Chi) for storage purposes. Both arrays were built using the Intel Matrix Storage Manager tool on my motherboard (just above the BIOS), and partitioned with a live session of GParted. Originally when I installed Arch, dmraid did not pick up on the partition of the 2TB array, just the drive itself. I decided to rebuild the 2TB array, and the problem has reversed; dmraid will pick up on the storage partition of the larger array, but ignores the two partitions on the SSD array (it still sees the other array, though). This means that I cannot boot into my installation on the SSD array, and have to run a live session off of a USB drive to troubleshoot. The SSD array has two partitions, one for Windows and another for Arch, so I can dual-boot for gaming and non-open-source software purposes.
    I am currently running a live session of Ubuntu, which acknowledges both arrays and all partitions just fine. All the commands listed below were run from the live session, chrooted into my install on the SSD array.
    Here is the error I get in grub when trying to boot:
    Activating dmraid arrays...
    Waiting 10 seconds for device /dev/mapper/isw_cjcefcajfc_Sigmap2
    Root device '/dev/mapper/isw_cjcefcajfc_Sigmap2' doesn't exist attempting to create it.
    Error: Unable to determine major/minor number of root device '/dev/mapper/isw_cjcefcajfc_Sigmap2'
    Here is the output of dmraid -tay on the Arch install (chrooted):
    isw_bjcehbjhed_Chi: 0 3907039232 striped 2 256 /dev/sdc 0 /dev/sdd 0
    isw_cjcefcajfc_Sigma: 0 250081280 striped 2 256 /dev/sda 0 /dev/sdb 0
    isw_bjcehbjhed_Chip1: 0 3907037184 linear /dev/mapper/isw_bjcehbjhed_Chi 2048
    Here is the output of dmraid -tay on the Ubuntu live session:
    isw_bjcehbjhed_Chi: 0 3907039744 striped 2 256 /dev/sdc 0 /dev/sdd 0
    isw_cjcefcajfc_Sigma: 0 250081792 striped 2 256 /dev/sda 0 /dev/sdb 0
    isw_bjcehbjhed_Chi1: 0 3907037184 linear /dev/mapper/isw_bjcehbjhed_Chi 2048
    isw_cjcefcajfc_Sigma1: 0 197650432 linear /dev/mapper/isw_cjcefcajfc_Sigma 2048
    isw_cjcefcajfc_Sigma2: 0 52428800 linear /dev/mapper/isw_cjcefcajfc_Sigma 197652480
    Here is the output of lspci:
    00:00.0 Host bridge: Intel Corporation 5520/5500/X58 I/O Hub to ESI Port (rev 13)
    00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 1 (rev 13)
    00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 3 (rev 13)
    00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express Root Port 7 (rev 13)
    00:14.0 PIC: Intel Corporation 5520/5500/X58 I/O Hub System Management Registers (rev 13)
    00:14.1 PIC: Intel Corporation 5520/5500/X58 I/O Hub GPIO and Scratch Pad Registers (rev 13)
    00:14.2 PIC: Intel Corporation 5520/5500/X58 I/O Hub Control Status and RAS Registers (rev 13)
    00:14.3 PIC: Intel Corporation 5520/5500/X58 I/O Hub Throttle Registers (rev 13)
    00:1a.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
    00:1a.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
    00:1a.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
    00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
    00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller
    00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 1
    00:1c.2 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 3
    00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Root Port 5
    00:1d.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
    00:1d.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
    00:1d.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
    00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
    00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
    00:1f.2 RAID bus controller: Intel Corporation 82801 SATA RAID Controller
    00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
    02:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress)
    02:00.1 Audio device: ATI Technologies Inc Cypress HDMI Audio [Radeon HD 5800 Series]
    04:00.0 IDE interface: Marvell Technology Group Ltd. 88SE6121 SATA II Controller (rev b2)
    05:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller (rev 12)
    07:02.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)
    Thanks for taking the time to help me troubleshoot my install, I'm looking forward to getting my desktop to run Arch Linux.

    Okay, I'm still not entirely sure what's going on here... but I did manage to fix it using the above idea. I destroyed the system array and rebuilt it from scratch - this gave it a device name higher alphabetically than the second array, and as such the correct set is activated by dmraid, allowing me to boot.
    The bug still exists, though - and indeed dmraid refuses to activate the partitions on the second array. I can access these partitions by running partprobe on the volume after boot.
    Hope this helps someone; sorry I can't actually fix this or figure it out more rigorously.
    John
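
    A hedged addition in case someone lands here with the same symptom: when dmraid activates the array device but not its partitions, kpartx (from the multipath-tools package) can usually create the partition mappings by hand, and partprobe re-reads the tables; the device name below is the one from this thread:
    kpartx -a /dev/mapper/isw_cjcefcajfc_Sigma    # create /dev/mapper entries for the partitions on that set
    partprobe                                     # or simply re-read partition tables on all block devices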

  • Is it possible to create a Windows 7 Partition via Bootcamp while having an internal RAID 0 Setup ?

    Is it possible to create a Windows 7 Partition via Bootcamp while having an internal RAID 0 Setup ?

    Yes, just not on the RAID. Boot Camp Assistant will only partition a single drive containing OS X. You cannot partition a RAID array.

  • K8N DIAMOND: New Raid array and old HD... I'm going crazy!!! Please

    Hi guys
    First, sorry for my poor English...
    My problem is:
    I bought two Raptor 36 GB drives for my new RAID array.
    I have my old Hitachi 250 GB HD connected on SATA 1.
    I connected my two Raptors on SATA 3 and 4, enabled all SATA ports, and enabled RAID for ports 3 and 4.
    I restarted the PC, pressed F10 and set up a stripe array with the two Raptors. All went OK...
    Restarted and booted with my copy of WinXP with SP2 and NF4 RAID drivers.
    Installation found my array and my Hitachi;
    Hitachi with 4 partitions; C (os), D (driver), E (films), F (music)
    I created one partition on my RAID array, and it took the I: letter... So I formatted it and started to copy the OS...
    Errors occur when, after restarting the PC and resetting the boot order to hard disk (the boot order is 1 - NVIDIA array, 2 - Hitachi 250 GB), just before loading the installation, it shows a black screen with an error: "there is an error in your hard disk bla bla bla.. try to control your connection or connect to windows help....bla bla..."
    The RAID array works properly; in fact I tried disconnecting my Hitachi from SATA port 1 and the whole installation works (I'm writing from WinXP on the RAID array).
    I tried to connect the Hitachi to port 1 of the Silicon Image controller... same error!!
    I'm desperate... I have all my life on my Hitachi...
    I think there's a sort of conflict in drive letter assignment... I can't find a solution.
    PLEASE HELP ME!!!

    Glad it worked, I had a feeling it would. 
    Quote
    One question for you.. on G: partition, there's a directory called "Windows", do you suggest me to format this partition??
    You can format it if you want to free up space, but unless you moved things around, the My Documents folder and everything in it is on that partition, along with anything you might have had on the old desktop during that Windows install. You might have something you want there; I usually leave mine for a few months and figure out if I have everything I need.
    Quote
    What I  have to do, if I need to reinstall WIn XP on first partition of raptor array??
    Things should be fine now, as Windows marked the Hitachi drive as G. You should be able to reinstall without issue. But if you have a lot of sensitive info on the Hitachi, I would always disconnect it when doing a fresh install. Once Windows is done installing, hook it back up. Next time you shouldn't have to reconfigure NVRAID after disconnecting and reconnecting.
     

  • 2 partitions on a mirror drive with a Xserve 10.5 server impossible

    Hello, I have installed an Xserve with two 1TB drives but I can't create 2 partitions (1 OS and 1 DATA).
    The option is not available after I create my RAID. (I don't have a RAID card.)
    I have also tried creating 2 partitions on the 2 disks first and MIRRORing them afterwards, but with no luck.
    Thanks

    You cannot partition a RAID array. You can make partitions on each drive prior to making the array, then create two separate arrays pairing one volume from each drive in each array. Of course if you must access both arrays at the same time you'll see a bit of a slowdown since you cannot access two volumes on the same drive at the same time. Normally, one would not configure RAIDs in this way. If you need two arrays then use separate drives for each array.
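
    If it helps, that partition-first-then-mirror layout can also be built from Terminal with diskutil's AppleRAID support; a rough sketch, where the set names and disk identifiers are placeholders to be replaced after checking diskutil list:
    diskutil appleRAID create mirror OSMirror JHFS+ disk0s2 disk1s2      # mirror the two "OS" slices
    diskutil appleRAID create mirror DataMirror JHFS+ disk0s3 disk1s3    # mirror the two "DATA" slices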

  • Raid0 array sata on sas ports

    Hello and thanks for help
    I have a D: partition on a RAID 0 array made of four SATA disks on the SAS ports of a D10 under Win 7. It used to be OK; now I have lost that D: partition and the LSI utility doesn't see the disks. I suppose I messed up the configuration but don't remember at all what to do in the BIOS or elsewhere...
    I have documents I need for yesterday
    Thanks for help

    welcome to the forum!
    did you lose just the partition or the entire array? can you see the partition in diskmgmt.msc?
    please understand that since RAID 0 has no redundancy, you may have lost everything forever. it's next to impossible to recover data from RAID 0 because the data is spread across multiple drives in random chunks.
    ThinkStation C20
    ThinkPad X1C · X220 · X60T · s30 · 600

  • RAID 0 failing on reboot [SOLVED]

    I'm new to Arch and I'm setting up a NAS and I'm stuck on the RAID setup.
    I have an SSD (sdb) for the filesystem (non-RAID) and I'm trying to set up my two 2TB hdds, sda1 and sdc1 in a RAID 0 software array via the Arch Raid wiki.  These drives were pulled out of my old HTPC, but were not previously in a RAID setup.
    The NAS is currently headless (kinda), so I've been doing the setup through ssh.  When I got to the part of the Raid setup that says to securely wipe the drives, my ssh session ended before this was completed. I don't think this has any effect on my problem, I just thought I'd mention it.
    I completed the "Build the array" step:
    # mdadm --create --verbose --level=5 --metadata=1.2 --chunk=256 --raid-devices=5 /dev/md/<raid-device-name> /dev/<disk1> /dev/<disk2> /dev/<disk3> /dev/<disk4> /dev/<disk5>
    When I tried the next step, updating the mdadm.conf file, it said the file was busy.
    I skipped this step and formatted the array successfully, and was able to mount it and copy files to it.
    The next step said "If you selected the Non-FS data partition code the array will not be automatically recreated after the next boot."  I used GPT partition tables with the fd00 hex code, so I felt comfortable skipping that step.
    I also skipped the next step, Add to kernel image. I don't know why, but my files were copying over just fine, so I guess I figured I was done.
    Then I rebooted and it goes into emergency mode.
    After reading up on my problem, I learned more about this process, and I figured my problem was one of the steps I skipped.  So I went back and finished the remaining steps, albeit out of order.
    The first thing I did was update my configuration file.  So from the looks of it, that should be done before putting the filesystem on it.  Do I have to re-format?  I definitely want to avoid that.
    When I read about this problem I thought for sure it was the mkinitcpio hooks that I was missing, so I added mdadm_udev to the HOOKS section of mkinitcpio as well as ext4 and raid456 to the MODULES section, per the wiki.  Then I regenerated the initramfs image and rebooted but the problem remains.
    Here's what I copied from the output of the boot process:
    The first hint of a problem occurs during boot:
    A start job is running for dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device
    Then it times out:
    Timed out waiting for device dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device
    Dependency failed for /mnt/nas.
    Dependency failed for Local File Systems.
    And drops me to emergency mode.
    Here's the relevant excerpts from journalctl -xb:
    kernel: md: bind<sda1>
    kernel: md: bind<sdc1>
    kernel: md: raid0 personality registered for level 0
    kernel: md/raid0:md127: md_size is 7814053888 sectors.
    kernel: md:RAID0 configuration for md127 – 1 zone
    kernel: md: zone0=[sda1/sdc1]
    kernel: zone-offset= 0KB, device-offset= 0KB, size=3907026944KB
    kernel:
    kernel: md127: detected capacity change from 0 to 4000795590656
    kernel: md127: unknown partition table
    so it looks like the kernel sees the RAID, right?
    systemd[1]: Job dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device/start timed out.
    systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device.
    --Subject: Unit dev-disk-by\x2duuid-5e553f3d:c258f28b:d07571ea:ff289d13.device has failed
    --The result is timeout.
    Systemd[1]: Dependency failed for /mnt/nas.
    -- Unit mnt-nas.mount has failed.
    -- The result is dependency.
    systemd[1]: Dependency failed for Local File Systems.
    -- Subject: Unit local-fs.target has failed
    -- Unit local-fs.target has failed.
    -- The result is dependency.
    Any help is appreciated.
    Last edited by lewispm (2013-08-27 11:42:59)

    jasonwryan wrote: What shows up in /proc/mdstat? Before and after manually assembling the array?
    Here's after assembling the array, since it works now:
    $ cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [raid0]
    md127 : active raid0 sda1[0] sdc1[1]
    3907026944 blocks super 1.2 256k chunks
    unused devices: <none>
    jasonwryan wrote: You should stick with mdadm_udev; the alternative is no longer supported...
    I switched back, and it's still working with /dev/md127 in fstab.
    fukawi2 wrote: suggests your /etc/mdadm.conf file isn't right.
    Here's /etc/mdadm.conf:
    ARRAY /dev/md/nas metadata=1.2 name=lewis-nas:nas UUID=5e553f3d:c258f28b:d07571ea:ff289d13
    and the output of
    mdadm --detail --scan
    is identical, since the wiki directed me to generate the mdadm.conf file like this:
    mdadm --detail --scan > /etc/mdadm.conf
    but the array is /dev/md127, not /dev/md/nas - is that the sticking point? Is it the UUID? If so, how can I check?
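
    A hedged way to check, assuming a standard Arch setup: compare what the running array reports against mdadm.conf, then regenerate the config and the initramfs so the mdadm_udev hook can bring the array up under its proper name instead of the md127 fallback (the mkinitcpio preset name depends on your kernel package):
    mdadm --detail /dev/md127 | grep -iE 'name|uuid'    # what the array itself reports
    mdadm --detail --scan > /etc/mdadm.conf             # rewrite the config from what is actually running
    mkinitcpio -p linux                                 # rebuild the initramfs so early userspace sees the new config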

  • Updating Mail Server

    I have an XServe running 10.6.6. There is a Solid State Hard Disk Drive (SSHDD) on which is installed the OS and a 1.9TB RAID 3 array on which mail, calendar, wikis and websites are stored. This server is also an OD Replica.
    In the past, when updating the server, I've split the SSHDD into two partitions. I clone the boot partition, prior to the upgrade, to the other partition. I then upgrade the boot partition. If the update fails (and it HAS - TWICE since 10.6.0), I can boot off the cloned partition and try the update again. (In effect I clone the GOOD partition back to the partition where the update failed, bringing the server back to the state it was in prior to the update.) Usually, the second time works.
    My problem is now the SSHDD is NOT large enough to hold both partitions. And we all know what happens to a server when the OS runs out of hard drive space - it's NOT a pretty picture.
    I should update to 10.7 but am in a quandary how to do this so I have a "fall-back" scenario.
    I should say I have "mailbagging" set up with MXSAVE.com, so I'm not worried about losing mail.
    I see two scenarios:
    SCENARIO #1:
    Clone the existing OS to an external drive - run the update on the server.
    If the update fails, boot off the external drive, clone the good OS from the external drive back to the SSHDD, boot the server off the SSHDD and try the update again (a rough command sketch follows at the end of this post).
    SCENARIO #2:
    I just thought of this one...
    When I realized that the partitions were getting too small I expanded the partition in Disk Utility. So now, rather than two partitions (BOOT and LKG - "Last Known Good") I only have a single boot partition.
    ANYHOW, why couldn't I do that with the existing RAID array - that is, repartition a chunk of the array (I'm using a PITIFULLY SMALL portion of the 1.9 TB anyhow) and drop the cloned OS onto that? Then if the update fails I'd boot off the RAID array partition.
    The point of this post is to ask if what I plan to do makes sense?
    Thanks for any thoughts.
    John
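
    A hedged sketch of Scenario #1 using Apple Software Restore, with placeholder volume names ("Server HD" for the SSHDD boot volume, "Fallback" for the external drive):
    sudo asr restore --source "/Volumes/Server HD" --target /Volumes/Fallback --erase    # block-copy the boot volume to the external, erasing it first
    Run the same command with source and target swapped if you ever need to roll back.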

    Just a follow up on this.
    Today the server crashed.
    When I got in this morning there was the blue screen that shows up during Post but before you get to the desktop. The server must have rebooted last night.
    I had to power down the server and then restart it manually.
    When the server came up the phantom drive problem resurfaced. This is a known issue with XServes using the SSHDD and not only is it not fixed, I THINK one of the recent updates may have made it worse because I was able to "fix" it in the past.
    The USUAL solution was to shut down all the services that use the RAID array, dismount the Phantom Drive and then reboot. I have done that many times in the past. Unfortunately, NO DICE today. Every time I rebooted the phantom drive reappeared.
    As a result of this thread, I created a "Last Known Good" partition on the RAID array. (I added a partition to the RAID and cloned the SSHDD to the new partition.) I booted off the partition on the array, the server booted up normally and now everything is running well.
    I'm going to make ANOTHER partition on the RAID array so I can update the server to 10.6.7. Then I think I'm going to retire the SSHDD.
    So...at least I know this works.

  • Encore 1.5 Transcoding could not start not enough disk space

    Dear all,
    I have a relatively new RAID 0 with four Hitachi (HDS721075KLA330) disks. At the moment I've got 2794.54 GB capacity (quite enough) and 1825.83 GB of free space. When finishing an Encore 1.5 project to make a DVD folder I get the error message: "Transcoding could not start there is not enough disk space available. 0 is available on E:\, 237.55 is required. Please free up disk space." Also messages during saving: "No disk space".
    The weird thing is I hadn't had this error before using the same RAID disks, but for a few days now I have been stuck.
    My OS is Windows XP Professional (SP2), 2 GB RAM, Radeon X1650 video card, Matrox RT.X100 video editing card. Thanks for your help.
    Hans Hermans

    If I remember correctly, this is a bug caused by the sheer size of your target array. In the olden days when 1.5 was the current version, terabyte drives were uncommon; the package cannot actually handle a target this large and throws this error.
    The workaround was to partition the target array into smaller sizes (less than 1 terabyte), after which things should return to normal.
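
    If you go the partition route on XP, a hedged example using diskpart (the disk number is a placeholder; size is given in MB, so this creates a roughly 900 GB partition, safely under the apparent 1 TB limit):
    diskpart                                  (open the disk-partitioning tool)
    list disk                                 (find the disk number of the RAID array)
    select disk 1                             (placeholder - use your array's number)
    create partition primary size=921600      (about 900 GB)
    assign letter=E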

  • RAID 5 performance

    I have 4 320GB disks. I have put them in RAID 5 (Intel ICH10). I have used RAID 0 and RAID 1 before. This is the first time I am using RAID 5. Read performance is excellent, but write is very slow at 15MB/sec. Is that normal for RAID 5? Single-disk write speed for my disks is around 70MB/sec.

    Those write speeds are normal for a RAID5 array using an even number of disks. To get excellent write speeds on software RAID5 controllers like the ICH8R-ICH10R, you need an odd number of disks. The only options are a 3 or 5 disk RAID5 array since there are only a maximum of 6 ports.
    See this thread to get an idea of what I'm talking about. The thread is about the nForce onboard SATA RAID, but the concepts apply to ICHR chipsets as well:
    http://forums.storagereview.net/index.php?showtopic=25786
    To summarize the thread, when dealing with RAID0 or RAID5, to get optimal performance you need to use aligned partitions, ideally created at offsets of 1024KB x (number of usable drives in array). The overall best stripe size to use for the best read, write performance for all filesizes is 32KB. The best cluster size to use when formatting the partition with NTFS is 32KB when storing a variety of filesizes on the array. If you are only dealing with very large files on the array (256MB+), you can get the best performance by using a 128KB stripe size for the RAID array and a 64KB cluster size for the NTFS partition. I store a variety of files on my RAID array, so I use the 32KB stripe/32KB cluster option.
    The next thing you need to do is create an aligned first partition on the array. If you use Windows Vista or later to create a partition, it will create an aligned first partition by default. If you are using Windows XP, you need a utility called diskpar.exe (not diskpart.exe, since XP's diskpart.exe lacks partition alignment capability, while Windows Server 2003's and Vista-and-later's diskpart.exe have it). You can gain slightly more performance by manually aligning the partition yourself using diskpar/diskpart. If you have a 5 disk RAID5 array, you would align the first partition on the array to 4096KB, or 1024KB for every non-parity (i.e. usable) drive in the array. For a 3 disk RAID5 array, you would align on 2048KB.
    Yes, you can get awesome RAID5 write speeds on an Intel onboard RAID controller using the information above. My 5 disk RAID5 array with Samsung F1 500GB drives has a maximum read speed of 350MB/s and write speeds of nearly 300MB/s. They trail off linearly as you get further into the array just as any mechanical HDD's performance does when looking at HDtach/HDTune benchmarks. You're never going to get read and write speeds that match a RAID0 array with the same number of drives, but with proper stripe/cluster/alignment you can get close. For comparison, the same 5 drives in RAID0 have a max read speed of 450MB/s and write speeds over 400MB/s. On ICHR chipsets, RAID0 arrays are not severely hampered by non-aligned partitions, but alignment does help quite a bit. For RAID5, partition alignment is essential for good write performance.
    Diskpart.exe usage (done on a clean drive/array with no partitions):
    1) Open a command prompt window, type diskpart and hit Enter.
    2) Type: list disk and hit Enter. Look for the disk number that corresponds to your RAID array.
    3) Type: select disk 1 (if disk 1 is your RAID array)
    4) Type: create partition primary align=4096 (if you have a 5 disk RAID5 array; use align=2048 if you have a 3 disk array)
    That's it. Format the drive, making sure to select the correct cluster size (at least 32KB). If you created a first partition that didn't fill the drive, any subsequent partitions you create on the drive will be aligned because the first one is aligned.
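
    For the original poster's 4-disk RAID5, a hedged worked example following the same rule: 4 disks minus 1 for parity leaves 3 usable drives, so the first partition would be aligned at 3 x 1024KB = 3072KB:
    create partition primary align=3072
    Then format with a 32KB cluster (and a 32KB stripe on the array) for mixed file sizes, or 128KB stripe / 64KB cluster if you only store very large files, per the notes above.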

  • Advice about a external RAID system for my iMac

    OK, so I have a few questions, and bear with me because while I'd say I'm rather tech-literate, I know very little about RAID arrays.
    I need to expand my external storage and right now I'm looking at getting a RAID enclosure and populating it with disks. The other alternative is a Drobo, but I'll get to that in a second.
    Let's say I got a RAID enclosure which was filled with four 2TB HDDs. I've been looking into my options and for what I'm looking for it would be either RAID5 or RAID10. From what I gather, RAID10 is faster but RAID5 is higher capacity? Are there any other differences in terms of reliability that would make RAID10 better than RAID5?
    I hear that RAID10 repairs faster than RAID5 but for my purposes that's not mission critical as long as the rest of my data isn't inaccessible while the repair is going on.
    My real concern is what's the process of repairing and expanding RAID5 and RAID10 arrays? Let's say I had some extra money and wanted to add two 3TB HDDs to my array. Would it be more difficult with RAID5 or RAID10, or would there be no real practical difference?
    Now, the other reason I ask all of this is because a Drobo requires no management at all for any of these tasks. I basically drop in the drives and let it do its thing. Are RAIDs so much trouble that there's a real benefit to this, or is that really only for people who are just that tech-illiterate?
    I don't know, and that's why I'm asking. I'd really appreciate any advice you guys could give me at this time.
    What I basically want is a drive I can use for file storage that's fast enough that I can pull HD content from it and have it run smooth as glass (which I imagine any FireWire system would be able to). I'm not using it for video editing, I'm probably not going to use it for backups unless you can partition a RAID array in such a way as I can section off 1TB of it just for backups (not necessary, I do have a drive for that).

    I did some research and, from the sound of it, the way I'd increase a RAID5 array is to just swap out drives for higher-capacity drives and let it repair itself, and once I had upgraded them all from 2TB to 3TB I'd have the increased space.
    The problem is the RAID enclosure I was looking at didn't support the ability to do that, for some reason.
    Still no idea about RAID10 but I think after researching it a bit more I'm going with a Drobo S. It'll cost me more (no surprise there) but the lack of any required management on my end really makes up for that. Plus, instead of having to buy four 3TB drives when I decide to increase capacity, I can just buy them one at a time.

  • Sync two calendars without merging data

    New Incredible user (had the phone ~ 1 week), and I cannot find the answer to this. I would like to sync the phone to two calendars (one at home (personal) and one over Exchange (business)). However I only want the phone to display the two calendars without updating information on both. In other words, I want to leave my personal appointments only on my personal calendar and my business appointments on my Exchange calendar.
    Is this possible?
    If I set the phone to sync both calendars, will it merge ALL appointments onto all my calendars?

    If I understand correctly, your goal is to put both system and data volumes on a single volume, i.e., to repartition the RAID into a single volume, and you wish to do that without losing the system volume.
    Does the RAID card permit such an action on the fly? I know that Disk Utility does not, but most likely your RAID is not under DU's software RAID. If this isn't permitted by your card then I don't think you can do what you want without essentially starting over.
    In most cases setting up multiple volumes on a RAID array must be done when the array is set up. The drives are first partitioned then multiple arrays are structured using matching partitions.
    Now if in your situation each "volume" is actually a pair of drives, then it may be possible to do what you want. RAID 5 supposedly does not split data across the drives so essentially joining one "volume" to the other could be feasible.

  • Recommended File System - 3.5 TB

    I'm about to build a large file server (3.5 TB):
      - 8 * 500GB Sata drives
      - Raid 5 (hardware)
    The server will be primarly storing cd/dvd images and large graphics files.
    I'm just starting my research on different file systems. Any recommendations?
    Thanks,
    Chris....

    I was about to test a couple of different file systems and ran into a problem partitioning the drive.
    To recap, I am using 8 * 500GB WD SATA drives on a 3ware 9500 card. I have created one large RAID 5 array (in the 9500 BIOS program).
    The 3ware instructions tell me to use parted to partition the drive (since it is larger than 2TB). However, parted does not support XFS... so I'm looking for another way to partition the drive.
    I used fdisk.  When I start fdisk /dev/sda, it informs me that I have to set the cylinders before I can perform any actions.  I set the cylinders to the maximum fdisk will allow (some very large number).  I then created a partition using 1 as the start cylinder and 3500GB as the stop (which is the size of my RAID array).
    I'm a little concerned about the cylinder size I put in. I know it is wrong, since I don't have any idea what the cylinder size should be. Not sure cylinder size really makes sense across multiple disks, though.
    Also, every time I start fdisk (even to just verify the partition table), it asks me to input the cylinder size.
    Any help/insights into how I can correctly partition the raid array would be appreciated.
    Thanks,
    Chris....
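
    A hedged suggestion, assuming the array really does appear as /dev/sda: parted's file-system argument is only a label and parted never has to create the file system itself, so you can put a GPT label on the disk with parted (fdisk's MS-DOS labels and cylinder prompts simply can't describe 3.5TB) and then run mkfs.xfs separately. The exact start/end syntax varies a little between parted versions, but roughly:
    parted /dev/sda mklabel gpt                # GPT partition table, required for >2TB
    parted /dev/sda mkpart primary 0% 100%     # one partition spanning the whole array
    mkfs.xfs /dev/sda1                         # create the XFS file system on it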
