ZFS raid slow???

I have set up a ZFS raidz with 4 Samsung 500GB hard drives.
It is extremely slow when I mount an NTFS partition and copy everything to ZFS. It's like 100 KB/sec or less. Why is that?
When I copy from the ZFS pool to UFS, I get about 40 MB/sec - isn't that very low considering I have 4 new 500GB disks in RAID? And when I copy from UFS to the ZFS pool I get about 20 MB/sec. Strange, or normal results?
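If it helps with diagnosis: NTFS support on most ZFS-capable systems goes through a FUSE/userland driver, which is often the real bottleneck in an NTFS-to-ZFS copy, so it may be worth timing each side separately. A rough sketch only (the paths and file names below are made up for illustration):

    dd if=/mnt/ntfs/somebigfile of=/dev/null bs=1048576       # raw read speed off the NTFS mount alone
    dd if=/dev/zero of=/pool/ddtest bs=1048576 count=4096     # raw write speed into the raidz pool alone
    zpool iostat -v 5                                         # per-disk throughput while a copy is running

If the first command is also crawling, the pool is not the problem.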


Similar Messages

  • MacPro RAID SLOW?

    I keep reading Disk Whack test results online using AJA's System Test with setups just like mine, and my results are pitifully slow. I have 2 Seagate 750 GB 7200rpm drives set up in a RAID 0 stripe on an Apple RAID Card. With NOTHING on the drive I'm getting roughly a write speed of 78 MB/s and a read speed of about 90 MB/s. Everywhere else I've seen this config tested using all the same parameters, I'm seeing results more than double that. I called Apple support and got nowhere. Any ideas would be GREATLY appreciated. I'm thinking of going with a CalDigit RAID card and drives, but I still want to know what's up with this config first.

    I get almost identical readings on the RAID at 60% capacity and at 0% capacity. 78 write and 90 read.
    Here is my RAID Utility setup - maybe someone can spot an issue in here (a rough dd cross-check from Terminal is sketched after the dump below):
    Mac Pro RAID Card:
    PCI Slot: Slot-4
    Hardware Version: 1.00
    Firmware Version: M-2.0.3.3
    Expansion ROM Version: 0018
    Shutdown Status: Normal shutdown
    Battery Info:
    Firmware Revision: 1.0d19
    First Installed: 3/8/08 12:19 AM
    Last Date Conditioned: 10/22/08 1:41 PM
    State: Working battery
    Fault: Normal battery operation
    Status:
    Charging: Yes
    Conditioning: No
    Connected: Yes
    Discharging: No
    Sufficient Charge: Yes
    Drives:
    Bay 1:
    Product ID: ST3750640AS P
    Serial Number: 5QD1PT1P
    Firmware Revision: 3.BTH
    Type: SATA
    SMART Status: Verified
    Capacity: 698.64 GB
    RAID Sets: R0-1
    Status:
    Assigned: Yes
    Failed: No
    Foreign: No
    Missing: No
    Reliable: Yes
    Roaming: No
    Spare: No
    Bay 2:
    Product ID: ST3750640AS P
    Serial Number: 5QD1P9X5
    Firmware Revision: 3.BTH
    Type: SATA
    SMART Status: Verified
    Capacity: 698.64 GB
    RAID Sets: R0-2
    Status:
    Assigned: Yes
    Failed: No
    Foreign: No
    Missing: No
    Reliable: Yes
    Roaming: No
    Spare: No
    Bay 3:
    Product ID: ST3750640AS P
    Serial Number: 5QD1P96S
    Firmware Revision: 3.BTH
    Type: SATA
    SMART Status: Verified
    Capacity: 698.64 GB
    RAID Sets: R0-2
    Status:
    Assigned: Yes
    Failed: No
    Foreign: No
    Missing: No
    Reliable: Yes
    Roaming: No
    Spare: No
    RAID Sets:
    R0-1:
    RAID Level: Enhanced JBOD
    Capacity: 698.45 GB
    Available Capacity: 0 bytes
    Drives: Bay 1
    Volumes: Vol-R0-1
    Status: Viable (Good)
    R0-2:
    RAID Level: 0
    Capacity: 1.36 TB
    Available Capacity: 0 bytes
    Drives: Bay 2, Bay 3
    Volumes: R2V1, Vol-R0-2
    Status: Viable (Good)
    Volumes:
    R2V1:
    BSD Name: disk3
    Capacity: 698.57 GB
    Read Command Size: 2 MB
    Read Ahead Margin: 16 MB
    RAID Set: R0-2
    Status:
    Degraded: No
    Inited: Yes
    In Transition: No
    Viable: Yes
    Vol-R0-1:
    BSD Name: disk1
    Capacity: 698.45 GB
    Read Command Size: 2 MB
    Read Ahead Margin: 16 MB
    RAID Set: R0-1
    Status:
    Degraded: No
    Inited: Yes
    In Transition: No
    Viable: Yes
    Vol-R0-2:
    BSD Name: disk2
    Capacity: 698.45 GB
    Read Command Size: 2 MB
    Read Ahead Margin: 16 MB
    RAID Set: R0-2
    Status:
    Degraded: No
    Inited: Yes
    In Transition: No
    Viable: Yes
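    For what it's worth, the AJA numbers can be cross-checked with a plain dd from Terminal; a rough sketch only (the volume name and test size are hypothetical, and the read-back can look inflated if the file fits in RAM):
    dd if=/dev/zero of=/Volumes/RAID/ddtest bs=1048576 count=8192     # write an 8 GB test file to the stripe
    dd if=/Volumes/RAID/ddtest of=/dev/null bs=1048576                # read it back
    rm /Volumes/RAID/ddtest                                           # clean up
    If dd roughly agrees with the 78/90 MB/s figures, the limit is in the card, drives or RAID set rather than in the test tool.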

  • RAID slower than no-RAID

    I work on HUGE Photoshop files, and over the years in these forums people have called me CRAZY for not working from RAID, but from regular hard drives. I finally upgraded, after much, much research. Got the SeriTek 5MP with five of the new 1TB Samsung hard drives, as these were, according to OWC, MUCH faster than other hard drives ("no comparison", the salesperson said). I got a RocketRAID card and set up the 5 drives as an eSATA 4TB RAID. I just timed opening a 10GB Photoshop file from the RAID vs from the Seagate startup disk. The RAID 5 is 20 seconds SLOWER with a PSD file than opening from a non-RAID hard drive, and with an uncompressed TIFF it's about 5 seconds slower! I now realize it's slower because RAID 5 requires all these calculations to go on. So what is the huge hullabaloo about RAID 5? And the Samsung drives show no faster speed over any of my other internal drives....
    My question now is: am I better off just doing a JBOD? I could then move the drives around as I want, and occasionally bring copies off-site for even safer backup than RAID 5 provides? Could I then also plug these off-site drives into a docking station attached to my home computer and access my files from there? I know I can't currently do that with the RAID 5....

    Is there any difference in speed when it comes to an internal RAID 0 set by "disk utility" vs, an external RAID 0 attached to a RocketRaid portmultiplier esata card?
    That depends on the card. There are multiple RocketRAID cards and ways to configure them. For example, you could use the actual RAID controller on the card, or you could use JBOD mode and still use Disk Utility to set up and manage the RAID.
    If the specific RocketRAID card you're using is set up to manage the RAID, then you will typically gain a performance edge - the OS only has to dump data to the card and let the card work out the RAID component. If using Disk Utility then the OS has to do the RAID work.
    Can a RAID 0 set up by disk utility be moved between different computers?
    Yes. All the RAID data is stored on the drives, so as long as all drives are available the array should come up on another machine. I wouldn't recommend this as a matter of course, though - there's always an inherent risk in moving drives from one system to another.
    If so, this is my new solution for fast storage/continuous backup/keeping a mirrored copy of everything at home
    I'm not quite seeing your vision. You talk about RAID 0, but go on to refer to 'a mirrored copy'...?
    Given your second paragraph, you're proposing to periodically swap out three internal drives with three others? I wouldn't recommend this at all. It would be far better, in my opinion, to purchase a multi-disk drive enclosure and just use the external device for your Time Machine backup - it's far easier to unplug a FireWire and/or eSATA enclosure and take that home than it is to repeatedly swap out internal drives.
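    For reference, a software stripe of the kind discussed above can also be created and inspected from Terminal with diskutil; a rough sketch only (the disk identifiers are hypothetical, the exact appleRAID syntax varies a little between OS X releases, and creating the set erases both members):
    diskutil list                                                     # confirm which devices are the two empty member disks
    diskutil appleRAID create stripe FastScratch JHFS+ disk2 disk3    # build a RAID 0 set named FastScratch from disk2 and disk3
    diskutil appleRAID list                                           # show the resulting RAID set and its status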

  • ZFS on HW RAID with poor performance. Ideas?

    Hi all,
    I'm trying to figure out why my RAID performs so poorly using ZFS compared to even UFS. Any thoughts on the following would be helpful.
    H/W Config:
    - SunFire v245 w/ 4GB RAM and LSI1064 mirrored 73GB SAS
    - QLogic QLE2462
    - Fibre Channel 4Gb Hardware RAID w/ 1GB ECC cache
    - 16 x 1TB SATA II drives
    - Configured 1 volume RAID-6
    S/W Config:
    - Solaris 08/07 SPARC - 5.10 Generic_120011-14
    - Latest 10_Recommended as of 1/08
    - zpool create fileserv /dev/dsk/c3t0d0s6
    - zfs set atime=off fileserv
    Test Raw:
    To the "RAW" disk or volume I get decent numbers:
    # dd if=/dev/zero of=/dev/rdsk/c3t0d0s6 bs=1048576 count=16384
    # dd if=/dev/rdsk/c3t0d0s6 of=/dev/null bs=1048576 count=16384
    Both c3t0d0 and c3t0d0s6 result in similar numbers:
    5 x 16GB writes took 475.50 seconds, yielding an average of 172.28 MB/s.
    5 x 16GB reads took 384.68 seconds, yielding an average of 212.97 MB/s.
    Test ZFS:
    After creating and mounting a ZFS filesystem on the same volume, I modified my scripts to use dd-created files instead:
    # dd if=/dev/zero of=./16G bs=16777216 count=4096
    Then ran:
    # dd if=/dev/zero of=./16G bs=1048576 count=16384
    # dd if=./16G of=/dev/null bs=1048576
    The result for ZFS on a 14.6TB volume is horrible:
    5 x 16GB writes took 1013.28 seconds, yielding an average of 80.84 MB/s.
    5 x 16GB reads took 980.36 seconds, yielding an average of 83.56 MB/s.
    What would cause me to lose more than half the speed the unit is capable of, just by running ZFS? Thinking it might just be a dd issue, I ran iozone using the same parameters as "milek's blog" at http://milek.blogspot.com and got the same crappy results:
    # iozone -ReM -b iozone-2G.wks -r 128k -s 2g -t 32 -F /test/*
    (snippet:)
    Output is in Kbytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
    Throughput test with 32 processes
    Each process writes a 2097152 Kbyte file in 128 Kbyte records
    Children see throughput for 32 initial writers = 90584.21 KB/sec
    Parent sees throughput for 32 initial writers = 79823.98 KB/sec
    Min throughput per process = 2186.76 KB/sec
    Max throughput per process = 3225.63 KB/sec
    Avg throughput per process = 2830.76 KB/sec
    Min xfer = 1427456.00 KB
    Children see throughput for 32 rewriters = 88526.23 KB/sec
    Parent sees throughput for 32 rewriters = 88380.95 KB/sec
    Min throughput per process = 2264.00 KB/sec
    Max throughput per process = 3152.58 KB/sec
    Avg throughput per process = 2766.44 KB/sec
    Min xfer = 1510656.00 KB
    I can understand that ZFS wants to have the entire "disk", as in a JBOD, but that's just not a viable option in my corporate America world. Does presenting a RAID-controller-managed LUN vs. a JBOD of raw drives really make this much of a difference? I'm just at a loss here as to why a unit that is capable of 4Gbit per channel works as it should via raw access, but performs like a single disk as a ZFS filesystem.
    I would appreciate some assistance in figuring this one out.
    Thanks to all in advance.
    --Tom

    Robert,
    Thanks for the help. Unfortunately, using RAID-Z or Z2 is not an option here as I'd rather rely on my hardware RAID over using software. While doing testing, I set my RAID to pass-thru mode and allowed ZFS to inherit the RAW drives from the RAID Chassis, set up a pool, and did my testing. In all cases, write performance was not much better overall, CPU utilization with compression on was ~30% higher, and my RAM utilization shot up under load. I just don't see that as an option on a heavily hit CVS server where IOPS and CPU/RAM are important for compilations and the like.
    Things hardware RAID gives me that ZFS can't:
    Ease of use - changing failed drives, for example, is just different. It's a complicated filesystem once you get into it.
    Pre-write buffer performance - this is ZFS's bottleneck, after all.
    Fastest I/O - RAID controllers handle more drives better than ZFS, and IOPS are much faster under a H/W RAID.
    There's a whole list of reasons why I choose not to use ZFS RAID-Z/Z2. Why else would Sun continue selling both H/W RAID and JBOD? I will check into the intent logging though, that's a good idea. I appreciate the link to solarisinternals; I had not thought about using their wiki for this.
    Thanks again!
    --Tom
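    For anyone following along: the intent-log idea mentioned above can be tried without rebuilding the pool, assuming the pool version supports separate log devices and a spare fast LUN is available (the device name below is hypothetical):
    zpool add fileserv log c4t1d0      # dedicate a separate device to the ZIL for synchronous writes
    zpool status fileserv              # the new device should appear under a "logs" section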

  • Solaris 10 u5 Samba slow transfer rates?

    Hi!
    I've installed a Solaris 10 x86 (Core2Duo - x64) server, with Samba over ZFS RAID-Z. Samba is a part of an Active Directory domain. I've managed to join it to the domain, to get the users and groups from AD, and to translate them to Unix IDs. Everything works really well. Samba is installed from the packages on the Solaris 10 DVD.
    Only problem I have is the performance :( It's disastrous!
    On 100Mbit Realtek NIC, Samba can manage around 4 MB/s if log level is set to very high (10). If I lower it to 0, then transfer rates go up to 7.5-8.5MB/s and they fluctuate in that interval.
    On the same network, there is a Debian Samba server, and transfer rates go high as 10.5-11.0MB/s.
    The next test I did was switching to the Gbit interface. That increased transfer rates up to 25 MB/s, but that is still 5 times slower than the theoretical limit.
    So the next thing I tried was switching to Blastwave (CSW) Samba instead of SUNW Samba... My transfer rates went back to normal immediately! It was a bit of a shock for me... I could transfer about 10MB/s on the 100Mbit interface, and around 45MB/s on the 1Gbit interface. 45MB/s is the limit of the workstation hard drive I was doing transfers from.
    Sun-packaged (SUNW) Samba is 3.0.28, patched today to the latest patch level, and CSW uses 3.0.23. I used CSW Samba with the exact same smb.conf file. The only problem is, I never managed to connect CSW Samba to ADS on my network :( So I gave up on that, and I'm facing a dilemma. Managers request full speed from the Samba server (comparable to Linux/Windows shares), but I just can't connect to the domain with the CSW package.
    So I'm asking you guys - any ideas what could be the problem with SUNW Samba and performance? Is it just a 3.0.28 vs 3.0.23 issue, or what? Why is there such a big difference in transfer rates? :(
    Please help!

    OK, here goes my smb.conf:
    [global]
    workgroup = MYCOMPANY
    realm = MYCOMPANY.LOCAL
    server string = server4 (Samba, Solaris 10)
    security = ADS
    map to guest = Bad User
    obey pam restrictions = Yes
    password server = server1.mycompany.local
    passdb backend = tdbsam
    log file = /var/samba/log/log.%m
    max log size = 50
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE
    load printers = No
    local master = No
    domain master = No
    dns proxy = No
    idmap uid = 10000-90000
    idmap gid = 10000-90000
    winbind separator = +
    winbind enum users = Yes
    winbind enum groups = Yes
    winbind use default domain = Yes
    [share]
    comment = Share on ZFS Raid-Z
    path = /tank/share
    force user = local_user
    force group = users
    read only = No
    guest ok = Yes
    vfs objects = zfsacl
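    One way to tell whether the bottleneck is SUNW smbd itself rather than ZFS or the wire is to push the same data locally and then through a loopback Samba connection; a sketch only, with made-up file names and user:
    mkfile 1g /tank/share/localtest                                          # plain local write straight onto the ZFS share
    mkfile 1g /tmp/bigfile
    smbclient //localhost/share -U someuser -c "put /tmp/bigfile bigfile"    # same data through smbd, with the NIC out of the picture
    If the loopback put is also slow, the problem is in the SUNW smbd build or its settings, not the network or the RAID-Z.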

  • Why can't I install OS X Lion on a Mac RAID?

    I have a Mac Pro, OS X 10.6.8. Four hard drives, 6 TB in total: a 3 TB striped startup RAID and 3 TB of Time Machine backup. I want to install Lion but get the message that an error has occurred. Any advice please?

    I am not a fan of installing Mac OS X on a RAID. I think you get a better overall performance boost from establishing a Boot Drive, which stops the System from competing with your data reads and writes.
    RAID gets its speed from having the heads sitting right at the next spot to read from, and having the second drive transferring while the first drive is seeking. Mac OS X does a whole lot of "snacking" -- reading (and some writing) smallish bits of stuff from all over the Boot Drive. So it does not really get much of a boost from RAID. And having data and System mixed on the RAID slows down both, because frequent System accesses move the heads away from the data files.

  • ZFS pool frequently going offline

    I am setting up some servers with ZFS raids and am finding that all of them are suffering from I/O errors that cause the pool to go offline (and when that happens everything freezes and I have to power cycle... then everything boots up fine).
    T1000, V245, and V240 systems all exhibit the same behavior.
    Root is mirrored ZFS.
    The RAID is configured as one big LUN (3 to 8 TB depending on the system) and that LUN is the entire pool. In other words, there is no ZFS redundancy. My thinking was that I would let the RAID handle that.
    Based on some searches I decided to try setting
    set sd:sd_max_throttle=20
    in /etc/system and rebooting, but that made no difference.
    My sense is that the troubles start when there is a lot of activity. I ran these for many days with light activity and no problems. Only once I started migrating the data over from the old systems did the problems start. Here is a typical error log:
    Jun 6 16:13:15 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1 (mpt3):
    Jun 6 16:13:15 newserver Connected command timeout for Target 0.
    Jun 6 16:13:15 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1 (mpt3):
    Jun 6 16:13:15 newserver Target 0 reducing sync. transfer rate
    Jun 6 16:13:16 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:16 newserver SCSI transport failed: reason 'reset': retrying command
    Jun 6 16:13:19 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:19 newserver Error for Command: read(10) Error Level: Retryable
    Jun 6 16:13:19 newserver scsi: Requested Block: 182765312 Error Block: 182765312
    Jun 6 16:13:19 newserver scsi: Vendor: IFT Serial Number: 086A557D-00
    Jun 6 16:13:19 newserver scsi: Sense Key: Unit Attention
    Jun 6 16:13:19 newserver scsi: ASC: 0x29 (power on, reset, or bus reset occurred), ASCQ: 0x0, FRU: 0x0
    Jun 6 16:13:19 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:19 newserver incomplete read- retrying
    Jun 6 16:13:20 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:20 newserver incomplete write- retrying
    Jun 6 16:13:20 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:20 newserver incomplete write- retrying
    Jun 6 16:13:20 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:20 newserver incomplete write- retrying
    <... ~80 similar lines deleted ...>
    Jun 6 16:13:21 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:21 newserver incomplete read- retrying
    Jun 6 16:13:21 newserver scsi: WARNING: /pci@1f,700000/pci@0,2/scsi@1,1/sd@0,0 (sd2):
    Jun 6 16:13:21 newserver incomplete read- giving up
    At this point everything is hung and I am forced to power cycle.
    I'm very confused about how to proceed with this... since this is happening on all three systems I am reluctant to blame the hardware.
    I would be very grateful for any suggestions on how to get out from under this!
    Thanks,
    David C

    Which S10 release are you running? You could try increasing the timeout value and see if that helps (see mpt(7d) - mpt-on-bus-time). It could be that when the RAID controller is busy, it takes longer to service something that it is trying to correct. I've seen drives just go out to lunch for a while (presumably, the SMART firmware is doing something) and come back fine, but the delay in response causes problems.
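    A few stock Solaris 10 commands that might help line up the SCSI resets with what ZFS and the fault manager recorded (the timestamps are the useful part):
    zpool status -v        # per-vdev read/write/checksum error counters and any known data errors
    iostat -En             # per-device soft/hard/transport error totals since boot
    fmdump -eV | more      # raw error telemetry logged by FMA, with timestamps to match against /var/adm/messages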

  • Building a GOD BOX for PHOTOSHOP CS 2 - need help on specs

    I will review my situation in detail, but if you would like to also read a linked similar posting, here it is:
    http://www.bigbruin.com/forum/viewtopic.php?t=8269
    Ok so, I made a deal with the local PC shop: I will be trading my time as a photographer for a large portion of the cost of hardware that they would otherwise not be selling. Some things they will have to order, but most of the things I need, even the high-end stuff, they already have. So money is no object.
    I am running into issues with my current system, being that it is nearly 6 years old and I custom built it for under $150, monitor included. Since then I have upgraded my camera equipment as I have become more and more successful with my photography business. I am currently shooting 16.7 megapixel files with my Canon 1DS Mark II and 22 megapixels with a rented Hasselblad with a Phase One digital back. Frankly my 700MHz machine just isn't cutting it. It's time to upgrade.
    Now normally I would go all Mac and get a top-of-the-line G5 quad-processor unit, fully upgraded, however I am still young and don't have good credit despite all my efforts, and they just won't finance me. So this local PC shop is my only hope. Thus I need a top-of-the-line system.
    This computer is going to be expandable so I need to have extra drive bays so a Full Server Tower is in order I think!
    Also I am opting for an aluminum case because it is more conductive than plastic and should help radiate some of the heat this GOD BOX is going to generate.
    I could use some specifics for each of the areas; if anyone has experience with high-end systems and would like to recommend hardware, I am all ears. The information is only a light template ("think of it as a layer"), nothing set in stone. However I would appreciate a swift reply because I will be building the system in less than a week!
    (( STORAGE & SPEED ))
    I heard that having separate hard drives is the way to go to improve performance. Also it will help me keep things organized and easy to reformat if I have a problem with an OS. Here are the specs a friend suggested.
    80 Gig Raptor HD for OS
    180 GIG Serial ATA Raptor HD for Photoshop dedicated Scratch Disk
    180 GIG Serial ATA Raptor HD for working files for Quick accessing (temp files).
    Two 320 Gig Serial ATA hard drives (slower but more reliable) for a mirrored RAID setup - image redundancy is a good thing for photographers, when a corrupt hard drive means missed deadlines, lost clients and lawsuits!! YIKES!
    2 DVD +/- combo drives that can accept dual-layer DVDs - unless anyone has heard of the Maxell holographic optical storage media coming out early January?? 120MB/s transfer speed, 50+ years archive life, up to 1.6 terabytes of storage!! I can't wait until it's in production!
    (( Processor BUS SPEED MOTHER BOARDS AND MEMORY ))
    I need the best, most reliable PC motherboard that can accept the fastest memory for sustained work. I am aware that there are some types of RAM that have higher speeds but can't move as much data at a time. Since much of what I do is batch processing, sustained throughput is a better solution for me than faster acceleration.
    This is strictly a photoshop machine No games no email just photoshop the way computers were made to run.
    What's the best motherboard that supports dual / quad processor systems??? Are there quad-processor systems available for PCs? Can Photoshop CS2, much less Windows XP 64, support such hardware?
    It's got to be 64-bit processors, dual core if possible.
    Someone told me dual-channel DDR PC8000 is the way to go, any comments?
    What's the best CRT monitor for under $1200?
    I specify CRT vs LCD because LCDs still don't have the quality in dynamic range that CRTs have, and the Dmax of an LCD is much worse than the Dmax of a CRT.
    Some obvious peripherals are FireWire 800 and 400 and standard USB 2.0 ports.
    Is there any news of Photoshop accepting more than 4 gigs of RAM in the next update?? If so I may opt for 8 GB.


  • Getting new hard disks to show up?

    I have installed Solaris 10 to my primary hard disk. The OS is up and running fine as far as I can tell. I have also added three more hard disks (250GB PATA IDE drives). The BIOS during bootup sees the drives just fine. (Even the Solaris install saw them...). When I'm in the CDE GUI I can't get the OS to see the extra 3 drives.
    I've tried going into the Solaris Management Console 2.1 and then I choose "This Computer" on the "Open Toolbox" dialog. From there I go to the "Storage" --> "Disks" and I get an error message "Attempt to get a list of all Disks failed with unexpected CIM error: CIM_ERR_FAILED"
    Help! Am I just looking in the wrong spot? What do I need to do to get the 3 additional disks to show up? (I'm trying to get them to show up so I can set up a ZFS raid...)

    jebilbrey:
    With all due respect, none of the various *nix are particularly well suited to "point and click" system administration. One of their most redeeming qualities (to me) is that you learn the innards of the system, and moving icons around isn't that ;)
    Now, as for the format command: as root you type in "format" from the command prompt. If you see the disks with that command, you select the disk you want to partition (it will have a number assigned). From there you select "partition" (I think that's it), then you assign the size for each slice of the disk, with the exception of slice 2, which is the whole drive from block 0 to block n.
    Once you've laid out the disk with the partitions you want, you use the "label" option to write the partition table to the disk.
    If you are going to mount filesystems, you have to use newfs on the partitions you created, then mount them up.
    As I said; not exactly "point and click" system administration.
    Cheers,
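    For the ZFS end goal specifically, the command-line route is roughly the following, assuming the three new disks show up as c1d1, c2d0 and c2d1 (hypothetical names - check what format actually lists first); zpool uses whole disks, so no fdisk/newfs is needed for them:
    devfsadm                                   # make sure device links exist for the newly added drives
    format                                     # note the disk names it lists, then quit without changing anything
    zpool create tank raidz c1d1 c2d0 c2d1     # build a raidz pool from the three new disks
    zfs list                                   # the pool mounts at /tank by default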

  • More questions from E71 owner

    I am really disappointed with this phone- maybe I was expecting too much?
    1. When downloading the PC Suite, the computer is not recognising the USB lead. Also, should it change the appearance of the applications and 'desktop' of the phone?
    2. The downloads for it are so boring - I want something to cheer it up, like emoticon texting - but that download is not compatible. Any suggestions to make it more fun?
    Bored!

    I am a little concerned to hear that the '7300 is closer to the Radeon 9800 on the MDD'. What is the point of all the Intel processors if the video card is not that much faster than a two to three year old ATI card?
    I will have to look for some comparison somewhere to find out. I may have to consider getting the next level video card if possible, if the 7300 is not as good as I thought, and the RAM will have to wait a little longer.
    Also, ADC to DVI converters are available on eBay; I was not thinking of paying the new price for these. Apple's price is way over the top, eBay prices are far better.
    I think adding RAM is more important than anything. Disk space I have with an external FW400 RAID - slow, I know, but it's the best way of protecting those 1000s of digital photos. It's a hardware RAID; do not trust software RAIDs!
    What I may do is look to reduce the noise in my MDD: replace the aging boot disk, new fans etc. And put that on the end of a piece of CAT 7 gigabit Ethernet and use it as file storage; I'll look for Tiger Server prices on eBay. I am used to the noise level in my home office now, and if the Pro makes no noise then I lose nothing. I already use two displays on the MDD, so no difference there either.
    Just one more note: can I take the SuperDrive out of my MDD and put it into the Pro? I have the stock CD-RW in the MDD. DVD burning on my MDD is slooowww, another reason for upgrading.

  • GDM login via command line?

    The subject states my immediate question, but for some back story: I'm very close to building my ultimate NAS/HTPC server. I'm running GNOME Shell for the DE (including GDM, obviously) and XBMC for the HTPC application with the proprietary NVIDIA drivers. The OS is running off a small SSD, with RAIDZ2 for the data.
    What I need is a relatively simple way to control the GUI of the server, which is hooked up to my TV. I've been getting by with my Nexus 7 using x11vnc and "Yatse," which is an XBMC Android wifi remote. It's a bit clumsy, though, as I have to SSH in to start VNC, then VNC in to log in and start XBMC, then start Yatse to control XBMC. I know I can kind of do that with VNC, but Yatse has very cool integration with XBMC, to the point you can browse your library directly in the app.
    What I don't want is auto or passwordless login, since that ZFS RAID not only has video and audio files, but all of the rest of my data.
    I discovered an Android app called "Remote Launcher" that will run pre-set commands on your Linux server (it also has WoL and can even integrate with Tasker). So now I'm thinking maybe with the right commands I can get Remote Launcher to log in to GDM, start and stop XBMC and Netflix Desktop, and power on and shut down the server if necessary. The app's wiki already has the XBMC and reboot type commands, which aren't too tough in the first place. Which brings me back to the subject line: is there a command I can run on the server that will log the specified user in to the desktop? And once I'm logged in, if it's on the lock screen, is there a command to run to again log back in?

    adamrights wrote:
    Can you just run
    sudo systemctl restart gdm.server
    Do you mean gdm.service? Wouldn't that just reload the loginscreen? Or am I mistaken and that will somehow log me in?
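    Not an answer to the GDM-login part, but for the lock-screen part: if the session already exists and is merely locked, systemd's loginctl can unlock it from an SSH session, which might be scriptable from Remote Launcher (a sketch; the session ID will differ on your machine):
    loginctl list-sessions           # find the session ID belonging to the user at the TV
    sudo loginctl unlock-session 2   # unlock that session without typing the password at the screen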

  • Disk IO problem

    Hi, I'm having some problems with a backup job that I've inherited. Today we have three layers of file systems. The topology looks something like this:
    (1st layer)
    Root disks (Oracle home)
    |
    |
    (2nd layer)
    ASM-mounted
    (Data, online logs.. etc)
    |
    |
    (3rd layer)
    ZFS(Raid-5)-mounted disks
    Archivelogs
    The problem is that the ZFS-mounted disks store the archivelogs, and we use RMAN to do a backup of the archivelogs. The dumb thing is, when we do a backup with RMAN we read from the ZFS disks and write a copy of the archivelogs to the same file system. This uses a lot of IOPS.
    Is there a way to tell RMAN not to copy the archivelogs and write them to disk again? Like a normal "mv" command, where the OS just changes the pointer to the data. Or are there any other suggestions for how to lower the IOPS?
    /Regards

    You have several options to bypass this problem:
    1. You can perform an OS move and then register those moved copies in the RMAN catalog.
    2. Reduce the amount of I/O you do by using ZFS lzjb or gzip-1 up to gzip-4 compression, or RMAN compression (low or medium) if you are using 11g R1 or R2.
    3. Ensure that your physical layout is optimal for sequential RMAN I/O and that you are using a 128k ZFS blocksize for the archive and backup locations.
    Regards
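    On the ZFS side, points 2 and 3 come down to a couple of dataset properties; a sketch only, assuming the archivelog filesystem is called tank/arch (hypothetical name - note recordsize only affects files written after the change):
    zfs set recordsize=128k tank/arch                          # favour large sequential RMAN I/O
    zfs set compression=lzjb tank/arch                         # cheap compression to cut physical I/O
    zfs get recordsize,compression,compressratio tank/arch     # confirm the settings and watch the achieved ratio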

  • Raid mode very slow/ crashing after installing driver (Z68A-GD65 (G3))

    I recently purchased a Z68A-GD65 (G3) motherboard, i5-2500K, 8 GB RAM, and a Crucial 64GB m4 SSD drive. The intention was to use the SSD drive for Intel Smart Response Tech.
    Instructions tell me to place the SATA ports in RAID mode, which I did. During installation of Windows 7 (64-bit), I provided the "F6" RAID drivers, which installed. After that point, my hard drive (a WD 250GB SATA II drive) begins to perform incredibly slowly. Once Windows 7 installs, which takes way longer than it should, Windows has problems booting and won't work properly.
    I tried installing Windows 7 without the "F6" RAID driver during install and that worked fine. Everything seemed to be working great. I then installed the Intel Rapid Storage Tech software and rebooted. Now my system is acting up again. It takes far longer to boot, it shuts down as soon as it boots, and it has even blue screened on boot. It is very unstable.
    My SSD is unplugged during all of this. I'm just trying to get Windows 7 to run properly on my one mechanical drive in raid mode.
    Do I have a bad motherboard? Anyone have any ideas?
    Edit: I'm not using the Marvell ports. The hard drive works fine in IDE/AHCI modes... I've been using it for years.
    Thanks

    So it turned out that the issue was the hard drive itself. It had been running flawlessly for years as my system drive. However, the Intel iastor driver exposed a hidden issue with the drive.
    I decided to check the Windows system event log for any clues. In there I saw errors with event id 9 on iastor. They were lined up with the times of the hangs and blue screens. Solutions I found online for this issue did not help. Luckily I have two hard drives in my system, although they are both the same model. I moved my data around and clean installed Windows on the other drive instead. The hangs and blue screens are no longer occurring.

  • Windows Server 2012 Storage Spaces Simple RAID 0 VERY SLOW reads, but fast writes with LSI 9207-8e SAS JBOD HBA Controller

    Has anyone else seen Windows Server 2012 Storage Spaces with a Simple RAID 0 (also happens with Mirrored RAID 1 and Parity RAID 5) virtual disk exhibiting extremely slow read speed of 5Mb/sec, yet write performance is normal at 650Mb/sec in RAID 0?
    Windows Server 2012 Standard
    Intel i7 CPU and Motherboard
    LSI 9207-8e 6Gb SAS JBOD Controller with latest firmware/BIOS and Windows driver.
    (4) Hitachi 4TB 6Gb SATA Enterprise Hard Disk Drives HUS724040ALE640
    (4) Hitachi 4TB 6Gb SATA Desktop Hard Disk Drives HDS724040ALE640
    Hitachi drives are directly connected to LSI 9207-8e using a 2-meter SAS SFF-8088 to eSATA cable to six-inch eSATA/SATA adapter.
    The Enterprise drives are on LSI's compatibility list.  The Desktop drives are not, but regardless, both drive models are affected by the problem.
    Interestingly, this entire configuration but with two SIIG eSATA 2-Port adapters instead of the LSI 9207-8e, works perfectly with both reads and writes at 670Mb/sec.
    I thought SAS was going to be a sure bet for expanding beyond the capacity of port limited eSATA adapters, but after a week of frustration and spending over $5,000.00 on drives, controllers and cabling, it's time to ask for help!
    Any similar experiences or solutions?

    1) Yes, being slow either on reads or on writes is a quite common situation for storage spaces. See references (with some of the solutions I hope):
    http://social.technet.microsoft.com/Forums/en-US/winserverfiles/thread/a58f8fce-de45-4032-a3ef-f825ee39b96e/
    http://blogs.technet.com/b/askpfeplat/archive/2012/10/10/windows-server-2012-storage-spaces-is-it-for-you-could-be.aspx
    http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/64aff15f-2e34-40c6-a873-2e0da5a355d2/
    and this one is my favorite putting a lot of light on the issue:
    http://helgeklein.com/blog/2012/03/windows-8-storage-spaces-bugs-and-design-flaws/
    2) Issues with SATA-to-SAS hardware is also very common. See:
    http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/5d4f68b7-5fc4-4a3c-8232-a2a68bf3e6d2
    StarWind iSCSI SAN & NAS

  • Disk Utility restore on a RAID is extremely slow

    Here is my story: I want to replace the old internal 250GB HDD with a 2x500GB striped RAID set.
    I have 2 G5 Power Macs, so I put the 2x500GB drives in one of them and built the striped RAID. Then I booted it in Target Disk Mode and connected via FireWire to the other G5. The other G5 was booted from the system DVD. I launched Disk Utility and chose Restore from the old 250GB HD to the new RAID set. There are 100 GB of data on the 250GB HDD.
    So far this process has been going for about 5 hours, and the blue line is only at the halfway mark. Is this normal when I restore a single partition onto a RAID, or is something wrong? Why is it sooo slow?

    No, changing/adding them to a RAID will lose all info.
    I haven't actually tried it on a concatenated RAID, but I still suspect the directory structure is so different that it will wipe any info.
    The only way is to back up everything on the HD, then move it back after it's set up for RAID.
    RAID is really nice for the speed, but boy is it ever unreliable. Of all the RAID setups I've had, the longest one went before going bonkers is 6 months, most far less. Never recovered one of them either.
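    A side note on the original restore question: the same copy can be driven with asr from Terminal, and with --erase it can fall back to a block-level copy, which is usually far faster than a file-by-file restore. A sketch only (the volume names are hypothetical, --erase wipes the target, and the exact asr syntax differs between OS X releases - check man asr):
    diskutil list                                                                  # confirm which volume is the old 250GB disk and which is the RAID set
    sudo asr restore --source /Volumes/Old250 --target /Volumes/NewRAID --erase    # block-copy the old volume onto the striped set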
