Defrag Raid?

Is it possible or advisable to defrag a raid 1 set?

There is some value in doing a backup, erase and restore.
Moving up to Leopard, I've rebuilt all my RAIDs.
And there is value: you get a backup. But I am not usually a fan of defragging, unless it measurably improves your audio or video work.
Some files, when rewritten, will be written back as single unfragmented files.
On some systems you'll see mixed results on latency, searches, and writing updates.

Similar Messages

  • CS5 Liquify tool renders Photoshop unusable

    Hello Forum,
    I have downloaded the CS5 trial version and I have problems with the Liquify tool. Whenever I try to liquify a big file (a RAW image) Photoshop becomes unresponsive, the Liquify screen does not pop up, and I need to use the Task Manager to stop the application. I encountered the same problem in Photoshop CS4 and I don't know how to fix it.
    I have a Dell Latitude E6400, Core 2 Duo P8700 @ 2.53 GHz, 4 GB of RAM. Integrated graphics card, Mobile Intel 4 Series Express (Adapter String: Mobile Intel(R) GMA 4500MHD).
    Best regards,
    Alex

    I have the same problem. The resource business is total nonsense. My workstation has 2 Intel Xeon 3.4 GHz processors with 6 cores each (24 logical cores total) and 48 GB of 1333 MHz DDR3 RAM. The OS is 64-bit Windows 7 on a RAID 0 of 4 Photofast SSDs, each reading and writing at 270 MB/s, with total RAID read and write performance that more or less saturates the front-side bus at over 1 GB/s. Data is on a RAID 10. File size is 6000 px by 3000 px, single-layer 8-bit TIFF.
    I have tried all combinations of memory allocation recommended by Adobe. Adobe CS5 runs in 64-bit. The scratch disk is an over-6-TB defragmented RAID 10 in the workstation. No network RAIDs used. The only programs running are Windows 7 and Photoshop CS5.
    All Adobe products have the latest build. The video card is a FireGL 8750 with 2 GB of GDDR5 graphics frame buffer memory and 115.2 GB/s bandwidth. This card has the latest software updates.
    It is a DISASTER in CS5 on Windows 7. It works in CS4 on XP Pro on a machine with a lot less resources.
    The Liquify tool allows the application of the effects, but after you select OK it goes to Render and shows a progress bar for the render action. When the render is completed, which happens fairly fast, the image does not show the results of the Liquify filter. They all disappear.
    P.S.
    I have been working on this issue and have managed to solve it. A program that makes real-time backups was running in the background and stealing resources. Even though the performance panel showed only 1-2% CPU use with 4-5 GB of RAM in use, it was worse than that. As soon as I stopped that process, it all works fine.

  • CS5 Liquify Tool Bugging

    I'm having an issue with the Liquify effect in CS5. When I use the Bloat Tool in Liquify at smaller brush sizes, the tool doesn't do what it's supposed to. It doesn't bloat the image, but rather blurs it, and further strokes over a worked area result in unwanted blurring and clone stamp tool-like results as well as erasing completely after stroking over it a few times.
    Are there any plug-ins, updates, etc. or anything that I can do to fix this problem?
    Thanks


  • Defrag Detects RAID-0 HDD as SSD

    I originally posted this on the Microsoft Community Windows 8.1 forum. A Microsoft Support Engineer responded by telling me it would be better suited to the Microsoft TechNet forum. So here I am. Any help would be appreciated.
    Anyway, I have Windows 8.1 Update 1 installed on an SSD. I have a second SSD that is used for programs that benefit from high read rates. Finally, I have four HDDs that are set up as two RAID-0 arrays: one is used for programs while the other is used for storage. Defrag detects the SSDs as SSDs, and the first RAID-0 array is detected as an HDD. However, the second RAID-0 array is detected as an SSD. As a result, I can't defrag the drive using the GUI utility. (UPDATE: Running Defrag from the command line seems to work.)
    The problematic array is identified as an HDD by Intel's SSD Toolbox and RST driver utility, so I'm not sure why Windows thinks it is an SSD. Rerunning WinSAT did not resolve the issue. Is there a way to override the Windows drive detection?

    Hi DarWun,
    Disk Defragmenter uses the Windows System Assessment Tool (WinSAT) to evaluate the performance of the device by getting its "random read disk score". The performance threshold was determined by some I/O heuristics through WinSAT to best distinguish SSDs from rotating hard disks.
    Here is a blog for reference:
    Windows 7 Disk Defragmenter User Interface Overview
    http://blogs.technet.com/b/filecab/archive/2009/11/25/windows-7-disk-defragmenter-user-interface-overview.aspx
    There is a key in the registry; the Defragmenter will recognize the device as an SSD when the value is above a specific threshold. I recommend you check the value of "randomreaddiskscore". Here is the path:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WinSAT
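    A quick way to inspect that score is from an elevated command prompt; a minimal sketch, assuming the value name "randomreaddiskscore" mentioned above (WinSAT must have run at least once for the key to be populated):

```shell
# Query the WinSAT random-read score the Defragmenter uses for SSD detection
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WinSAT" /v randomreaddiskscore
```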
    Here is a similar discussion (pay attention to Tim's response):
    Why Windows 8 and 8.1 defragment your SSD and how you can avoid this
    http://www.outsidethebox.ms/why-windows-8-defragments-your-ssd-and-how-you-can-avoid-this/#sel=77:7,77:9
    NOTE: This response contains a reference to a third-party World Wide Web site. Microsoft is providing this information as a convenience to you. Microsoft does not control these sites and has not tested any software or information found on these sites.
    Best regards

  • VIA Raid 0: Red screen of Death - fatal error during boot sequence

    I have been using two Maxtor 120GB SATA drives in a RAID 0 array on my Athlon 64 MSI motherboard. I have occasionally had messages saying "Hard disk error. Can't find drive 0". I've assumed that this was a driver issue, because if I power down and wait a while I can reboot and continue without issue (other than chkdsk running during boot-up). However, after the last such error, when I next tried to reboot I got an error message as the RAID was initialising. The error message was in a red box on a black screen and stated that there was a fatal error and that the RAID relationship and all HDDs would be destroyed if I continued. Naturally I switched off at this point, but on rebooting got exactly the same message. Eventually, as I have backups, I continued past this point and sure enough the RAID had gone. So I rebuilt the RAID array and everything seems to be OK.
    However, I'd forgotten that my MSI DVD burner was incapacitated due to a failed firmware update, so I had temporarily replaced it with a DVD-ROM drive to get access to my backups. But the drive won't read all my backup discs; in particular it won't read the Drive C backups.
    Has anyone else had this RAID error? Is it a driver error or is it a disk failing (even though they were new at Christmas)?
    I am using a Superflower power supply which has 22A on the 12V rail
    1GB PC3200 ram by A-Data (CAS 2.5)
    Elderly HP 9100 CDRW
    MSI DR-8 DVD RW
    Ethernet card
    Radeon 9600 128MB
    AMD64 3000 processor
    Freeflow

    RAIDs do not really help benchmarks unless you consider hard drive benchmarks. Most games and benchmark programs load everything they need prior to running, so unless you have a low-memory condition that forces the game or benchmark to use the swap file, the only place you would typically see any gain in speed is during loading. That is hard to quantify other than through experience.
    Where a RAID shines is in file-transfer situations: when you create and save large files such as movies, read or copy large files, defrag, or make any other massive file effort. Overall the gains can be very small compared to the rest of what you do with your system. However, that is not to belittle what a RAID does. It gives a finer sheen to the computer's overall operation and will make sluggish file transfers seem much more bearable.

  • Groupwise 8 on Server 2008R2 SP1 Performance and Defrag Questions

    I am running GroupWise 8.0.2 on an HP DL360 G7 with 24 GB of RAM and dual Xeon X5650 processors under Server 2008 R2 SP1. The Post Office is using roughly 562 GB of an 819 GB disk located on an HP P2000 G3 SAS-connected enclosure comprised of 12 x 146 GB (15k rpm) SAS disks in a RAID 10 configuration. There are typically never more than 700 users connected to the Post Office at any time. I am experiencing client slowness at regular intervals, roughly every three weeks. The solution tends to be to restart the server and give it a fresh start, which holds for a while.
    Concerns:
    1. When the server restarts I see minimal memory utilization, maybe 1.5 GB. Within half an hour the used memory climbs a bit to about 2 GB, but free memory is quickly pre-allocated to standby memory. Within about 2 hours the free memory is all consumed and I have about 20+ GB of standby memory. Running RAMMap indicates that the memory is being used by Metafile and Mapped File, which tells me that the Post Office's files are being indexed and cached into RAM. Then, after a couple of weeks go by, the amount of active RAM exceeds 8 GB (still mostly Metafile, not so much Mapped File), and standby RAM still consumes the remaining RAM between Metafile and Mapped File, leaving no free memory. Typically once I reach about 8 GB of memory actively used by the system (mostly Metafile), it's time for performance to drop for the clients.
    2. I'm also seeing increases in disk queue length for the Post Office volume: typically below 2, rarely as high as 5.
    I suspect my best solution is to start a regular defrag process, as my volume is now 29% fragmented [yeah NTFS :( ].
    Question:
    I am concerned a defrag could take 10 hours, if not longer, which is too long for the agents to be down. So I was wondering if anyone has used Diskeeper or alternative 3rd-party defrag utilities that can defrag open files in the background, or if anyone has run defrag with the agents running to get the defraggable files, then shut down the agents for a second pass, which should be considerably shorter. Any advice, or other suggestions for my described issue, would be greatly appreciated.
    Thank You!

    In article <[email protected]>, Matt Karwowski wrote:
    > I am running GroupWise 8.0.2 on an HP DL360G G7 with 24 GB of RAM and Dual Xeon X5650
    > processors under Server 2008 R2 Sp1. Post Office is using roughly 562 GB of an 819
    > GB Disk ... I am experiencing client slowness at regular intervals. Roughly every
    > three weeks. The solution tends to be to restart the server and give it a fresh start,
    A) Updating to latest code may assist as this could also be a memory fragmentation type
    issue or such that has been fixed
    B) Perhaps even more RAM might help. How much space do the ofuser/*.db and ofmsg/*.db files
    take up? The more mail flows, the more of that DB file content is held in memory. A few
    versions ago you tended to need as much RAM as the total of the DB files, and while those
    days are gladly past, it is still a good number to watch out for.
    C) Explore Caching mode for at least some of the users as that significantly reduces the
    load on a server.
    > I suspect my best solution is to start a regular defrag process as my volume is now
    > 29% fragmented [yeah NTFS :( ]
    Still way better than old FAT ;)
    This is one of the reasons why GroupWise runs better on Linux with either EXT3 or NSS
    volume types, and is why SLES is provided along with your GroupWise license.
    As for running Defragments, even running it for just a few hours a week will gradually
    help, especially if you can fit in one big burst near the beginning. So if you can automate
    the process of shut down agents, run defrag for X hours and then shut it down, then restart
    the agents, you may clear this all up. Just having the agents down an hour a week might
    clear the memory usage issue for you.
    I would be very hesitant to run any defrag on open database files of any email system
    unless the defrag tool knew explicitly about that email system. But a smart defragger that
    can keep the *.db files in their own (fastest) section of the drive and the rest off to the
    side would go a long way to making the defragmentation process much more efficient.
    I haven't directly run a GroupWise system on any flavor of Windows since OS/2, so this is
    more a combination of all my platform knowledge, but I hope it gets you closer to a smoothly
    running system. And if the other GW-on-Windows admins can pipe in, all the better.
    Andy Konecny
    KonecnyConsulting.ca in Toronto
    Andy's Profile: http://forums.novell.com/member.php?userid=75037

  • Is After Effects CC disk cache taking up double the space on my RAID 0 SSD? Should I move it?

    In AE CC the available space on my RAID 0 (2 x 120GB SSD) drive drops from 100GB to around 40GB as soon as I do even a small RAM preview, even though I have set the disk cache in the preferences to 30GB. It also causes AE to warn that I may not have enough space on the drive for the amount of cache I have set in preferences, even though there appears to be 40GB left on it.
    I really value working with long fast RAM previews but other than that I'm not sure if the performance is jeopardised at all here. At one point I was enjoying 60GB of disk cache.
    RAID 0 is striped so it shouldn't take up double the space, am I right?
    I am about to install a RAID 0 HDD (2 x 3TB) for source files as I work with video a lot and have a 4K RAW project on the go so no time like the present. So should I move the disk cache from the ridiculously fast RAID 0 SSD to the new RAID HDD drive (as suggested in Harm's guide)?
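    On the striping question: yes, RAID 0 stripes rather than mirrors, so a 2 x 120 GB array presents roughly 240 GB, not half. A sketch of the usual idealized capacity arithmetic (real arrays reserve a little metadata, and filesystem overhead reduces usable space further):

```python
def raid_capacity(level: str, sizes_gb: list[float]) -> float:
    """Idealized usable capacity for common RAID levels (no overhead)."""
    n, smallest = len(sizes_gb), min(sizes_gb)
    if level == "0":    # striping: all space usable
        return n * smallest
    if level == "1":    # mirroring: one copy's worth
        return smallest
    if level == "5":    # one disk's worth of parity
        return (n - 1) * smallest
    if level == "10":   # striped mirrors: half the space
        return (n // 2) * smallest
    raise ValueError(f"unsupported level: {level}")

# 2 x 120 GB SSDs in RAID 0 -> 240 GB usable, not 120 GB
print(raid_capacity("0", [120, 120]))  # prints 240
```

    So the shrinking free space is the cache itself filling the drive, not striping doubling its footprint.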

    Thank you Mylenium. All this time I have been absolutely convinced that you don't need to defragment SSDs. Is this a complete misunderstanding? Even if you do though, I have Windows 8 (not 8.1) which is doing that once a week automatically (apparently even to SSDs).
    Are you saying that the 30GB disk cache set up works like a contiguous file?
    i7 3930k (12 virtual cores) on P9X79 Deluxe
    2 x 120GB Force GT RAID 0 SSD on SATA 6Gb/s is my boot drive as well as where I keep programs and the disk cache. I do a disk cleanup to remove temp files a couple of times a week, but that's all.
    1 x 3TB Barracuda on SATA 6Gb/s for backups and exports
    3TB Lacie on USB 3 for source video and project files
    2TB Lacie on USB 3 for archive projects
    660ti 2GB
    I'll try a manual defrag of my RAID 0 SSDs now.

  • Should I disable RAID 0?

    I need some advice on what to do with my present system. I will be using CS4 Prod Prem for editing HD, AVCHD, SD, etc. ...here's the issue -
    I bought a custom built comp with -
    Antec 902 Mid Tower ATX case
    750Watt Corsair power supply w/140mm fan
    2-120mm fans
    ASUS P6T mobo
    Core i7 920 CPU
    Kingston V-Series SSD 128GB drive (for Win 7 + apps)
    2 - WD Caviar Black 1TB
    12GB RAM (Mushkin 4GBx3)
    nVidia Quadro FX 1800 + Elemental Accelerator
    +
    I have 4 USB external drives, and one Firewire 800 drive attached as well. (Unfortunately these are 'mostly' maxed out, but I can still defrag.)
    I had originally asked for the 2 WD Caviars to be set to RAID 0, but after I realized I would have no redundancy I asked for them just to be set up as two single drives. But they kept it in RAID 0. The RAID is controlled by the ASUS mobo. I am thinking a RAID 3 for now, eventually expanded to RAID 30, will be my best option.
    I know I'll have to get more HDD's, but I need to know what would be my best option.....
    1. Should I disable the RAID 0 until I can get more HDD's so I won't lose data? If so, how do I go about that....I would need to re-install Win 7 correct?
    2. Or, should I just go ahead, get 2 more drives and go with a different RAID (x)?
    3. If I have to get a RAID controller card for RAID 3, what's my cheapest/most reliable option? I just don't think I can swing the high end Areca cards right now.
    4. If I go RAID 3, and want to expand to RAID 30, will I have to run an external RAID tower instead of internal discs?
    5. If a drive goes down, how fast do I need to get the new drive in?
    Lost, dazed and confused....please help!

    1. Should I disable the RAID 0 until I can get more HDD's so I won't
    lose data? If so, how do I go about that....I would need to re-install
    Win 7 correct?
    Win 7 and your programs are on the SSD, so there is no need to re-install Win 7. To disable the current RAID 0, you have to know how it was set up. Was this done in the BIOS using the ICH10R or the Marvell chip, or was it done in software under Windows? Look in the user guide in section 4.4 for instructions on how to break the RAID out to individual disks.
    2. Or, should I just go ahead, get 2 more drives and go with a different RAID (x)?
    Getting 2 more internals is always wise, since they are a lot faster than your current USB externals and these are already pretty full. I would make sure you get the identical model as your current WD Caviar Black. Also make sure that ACPI is disabled in the BIOS, because it can disrupt reliable operation of the Caviars in a raid. BTW, I'll explain later, but consider getting 3 Caviars instead of 2.
    3. If I have to get a RAID controller card for RAID 3, what's my cheapest/most reliable option? I just don't think I can swing the high end Areca cards right now.
    AFAIK Areca is the only controller card to offer Raid3. Also keep in mind that buying an Areca controller card is like buying a Vinten or Sachtler tripod and fluid head. Pretty expensive, but usually they last a lifetime. Now the Areca may not last a lifetime, but can certainly last a couple of PC generations.
    4. If I go RAID 3, and want to expand to RAID 30, will I have to run an external RAID tower instead of internal discs?
    Not at all, if your case is large enough. For instance in my case I currently have 2 BRD burners and 17 3.5" disks. If I want I can increase that to 2 BRD burners plus 21 3.5" disks of which 15 hot-swappable.
    5. If a drive goes down, how fast do I need to get the new drive in?
    As I said above, I suggest you get 3 WD CB disks. You can then configure them in a 4-disk RAID 5 array plus 1 hot-spare. The dilemma is that AFAIK neither the ICH10R nor the Marvell chip supports hot-spares, so you may need an Adaptec, Areca or 3ware controller to get hot-spare support. If you don't have the budget for an Areca controller, then in the future you may find that the more affordable Adaptec or 3ware card (or even HighPoint or LSI) has no further use if the time comes for a RAID 3 card.
    With a hot-spare in a RAID 5, when one disk fails you can take your time getting a new one (although with reduced security until replacement). If one disk fails, you will have reduced performance for less than an hour, maybe only for minutes, until the hot-spare kicks in. You can easily take a week or even two weeks to get a replacement disk if you can live with the reduced security of not having a hot-spare available anymore.
    With hot-swappable drive cages you gain easy access to all your disks, like for instance the SuperMicro CSE-M35T, http://www.newegg.com/product/product.aspx?Item=N82E16817121405
    Hope this helps.

  • Boot camp assistant ignores legit disks if the boot disk has a raid slice

    I don't know if this is the right place to post... sorry if not.
    Here's my setup:
    disk0
    Mac OSX HD
    OS Share
    Mirror (RAID slice)
    disk1
    Mac Fast
    Windows Fast
    Mirror (RAID slice)
    disk2
    Time Machine (one big Journaled HFS+ partition, that's it)
    I wanted to use the assistant to make a new BOOTCAMP partition on disk2, but it never let me get past the opening screen because the startup disk had been partitioned outside of the assistant (true, it was) and was not a single Journaled HFS+ partition (also true). But disk2 is perfectly ripe for the, uh, boot camping.
    I got rid of the RAID completely, and the assistant was happy to let me create the BOOTCAMP partition on disk2. After I got that working, I went back and re-created the RAID on disks 0 and 1. Everything works perfectly now.
    Was I doing something wrong, or is this just a (subtle) bug in the Boot Camp Assistant?

    Just installing OS updates and programs will:
    - write the installer;
    - uncompress the file (at, say, 7x) and write the actual installer in a free area that may be anywhere, though the far end of the partition is common;
    - write a new version or copy of the files being changed/updated (myfile.new and myfile.old);
    - delete the directory entries for the temp files and old versions.
    A utility that maps your hard drive can show where files and free space are, and show fragments.
    OS X tries to eliminate small fragmented files by rewriting them when it can find contiguous free space for the new copy. That can actually work against you, as it may fragment the free space as well.
    When a drive gets down to 20% free space there is a performance hit: you are using slower areas of the drive (inner tracks are usually 50% slower) and having more trouble finding files and places to write them.
    Cloning Mac OS X is one of the easiest and safest methods.
    Click on the source (Mac hard drive), then click on or drop the backup partition/drive onto the target icon.
    Click "Run".
    You end up with a bootable (test it to be certain) exact copy.
    Unplug it and keep it on a shelf.
    Update it before you install a new version of OS X or make changes.
    Have two: one safe on the shelf from Week #1 and a new one from Week #2.
    Disk Utility can do that.
    With huge hard drives, free space should not be an issue. The lack of a defrag feature in Boot Camp or Disk Utility is why there are iDefrag and others, but they are slow and best run from a 2nd hard drive running OS X rather than from CD, and it is never a great idea to defrag without backups or on a live system.

  • Failed Software Raid 1, trying to virtualize but getting I/O error failure

    I will try to summarize all what has happened and where I'm at now currently:
    The server is 2008 R2 (DC, Exchange, IIS, SQL, etc.), originally set up with software RAID 1. At some point years ago the software RAID failed, the second disk wasn't part of the mirror anymore, and it was never fixed in Disk Management.
    Last week the server wouldn't boot as the primary drive failed. I selected the secondary plex during startup just to see if that disk was still functioning; to my surprise it did boot up, but the data was years old.
    At that point, I inserted the Windows 2008 R2 DVD and started System Recovery from a previous backup onto the secondary plex drive. Once complete, the server was back up and running. I changed the system startup variable to the secondary plex so it would boot from that drive if restarted.
    I am now trying to virtualize the server using disk2vhd in order to move it onto new hardware; however, each time I try, it fails with an I/O error. I have re-tried multiple times, stopping all major services to minimize I/O during disk2vhd, but it still errors out about 3/4 of the way through. I have run chkdsk and it repaired a few bad sectors on the secondary plex drive, yet it still fails during disk2vhd. I am currently running a defrag on the disk, as it reported 36% fragmentation, and will try an offline disk2vhd once complete using System Center VMM 2012 (not R2, as that support has been removed - found that out the hard way!).
    I'm worried this disk has underlying errors which is why it failed years ago and Windows broke the mirror and why it won't work with online disk2vhd.
    I have attached a picture of Disk Management, which I find interesting as it shows Disk 0 as unknown (Not Initialized), Disk 1 having errors (I can "Reactivate Disk", though I am unsure if that does anything destructive?), and then a missing disk at the bottom which I can also reactivate. I'm not sure if restoring the last working backup onto Disk 0 somehow messed up Disk 1 or confused it? The BIOS reports Disk 0 as 0 MB, so I'm thinking that drive is no good anymore.
    I guess what I'm looking for with this post is any answer or suggestion as to why disk2vhd might be failing, whether an offline disk2vhd may be a more successful option, and what (if anything) I should do in Disk Management with these drives in their current state. I don't have a replacement 500GB drive for the mirror and I don't plan on getting one, as I already have a new Hyper-V 2012 Core server set up to replace this failing server. I just can't seem to get it to virtualize using online disk2vhd and wonder if it has something to do with the current drive setup/situation.
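    For what it's worth, disk2vhd can also be driven from an elevated command prompt instead of the GUI; a sketch, with the drive letter and output path as assumptions (check `disk2vhd -?` for the exact syntax on your version, and write the VHD to a different physical disk than the one being captured):

```shell
# Capture the C: volume to a VHD on another disk (run elevated)
disk2vhd C: D:\Backups\server.vhd
```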

    Hi,
    > I am now trying to virtualize the server using disk2vhd in order to move it onto new hardware, however each time I try it fails with I/O error.
    Would you please let me know the complete error message that you get?
    Please refer to the following article and check if it can help you:
    Troubleshooting Disk Management
    > I guess what I'm looking for with this post is any answer or suggestion as to why disk2vhd might be failing, whether an offline disk2vhd maybe a better successful option
    In addition, this issue seems to be more related to Disk2vhd itself. I suggest that you post the question in the Disk2vhd forum; I believe you will get better assistance there.
    If I have misunderstood anything, or if there is any update, please don't hesitate to let me know.
    Hope this helps.
    Best regards,
    Justin Gu

  • To defrag or not to defrag ...

    that is the question.
    I am new to Macs, and just got a MacBook Pro to edit in Final Cut Pro.
    Being used to editing on PCs, I have always been in the habit of defragging often, especially when editing a video project.
    However, I don't know if Mac hard drives, and the external FireWire drive that I have attached, need defragging.
    I contacted G-Tech, the manufacturer of my external FW RAID drive. They didn't say not to defrag; they told me to buy a 3rd-party utility to defrag.
    However, some Mac users have told me it is not necessary to defrag Mac drives.
    Who do I believe? Where can I get the TRUTH?
    thanks

    Both are right and wrong. While Mac OS does its best to write each file unfragmented, it has less control over where that file is placed, and with large files it cannot guarantee contiguity.
    G-Tech ultimately suggests defragmentation, as well as optimisation, to ensure the optimal performance of the files already on your drive and, subsequently, of the free space on the drive.
    So the right answer depends on your perspective and what you're trying to achieve.

  • Shuttle SN25P nforce4 raid 0 install help

    Does anybody know how I can install Arch on an nForce4 SATA RAID 0 setup? I bought a Shuttle SN25P for gaming and would like to dual boot. When I threw in the Arch base CD, it booted up and found my network card, but when I went to partition the hard drive, cfdisk gave me an error about not being able to read the partition table... I currently have Windows running on the box with a 15GB partition open for Arch. Any ideas?

    Looks like the boot doesn't pick the PCI card, but tries something else.
    Try this:
    Set the order to CD-ROM on all:
    1st boot device = cdrom
    2nd boot device = cdrom
    3rd boot device = cdrom
    Boot other device = enabled
    This way it should skip searching for boot records on the floppy and internal
    IDE controllers and go straight to the PCI card.
    And I agree with the previous reply. I have tested cluster sizes of 4k and 16k,
    and with today's fast computers you don't notice any difference.
    If you run a server, you might get some better performance with special apps
    thanks to better I/O, but for a home machine don't bother.
    I've been running NTFS with the default cluster size ever since.
    Defrag and other disk maintenance can also give you trouble with anything
    other than the default size.

  • Non-System Raid SSD recognized as Hard Drive instead of Solid State Drive unit in Windows 2012 R2

    Hello,
     We have a RAID 10 of 4 SSDs on an LSI hardware RAID card.
     In Windows 2012 R2 the disk is detected as an HDD.
     Is there any way to tell Windows to treat that disk as an SSD instead of a hard disk drive?
     Searching, I saw that Win 8.1 users solved this by installing and running "WinSAT diskformal", but that doesn't work on Windows Server because the SSD RAID is a non-system one.
    http://www.win-raid.com/t70f34-Detection-of-SSDs-by-Win-and-the-use-of-the-Optimizer-former-Defrag-Tool.html
         Thanks.
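     For context, the Win 8.1 workaround referenced above is typically run from an elevated prompt; a sketch, with the assessment name taken from the post (check `winsat -?` on your system, as the exact invocation is an assumption):

```shell
# Re-run the formal disk assessment so Windows re-evaluates the drive type
winsat diskformal
```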

    festuc,
    I'm having the same problem as you at the moment with a new server install. I have the latest drivers/firmware, but can't seem to get 2012 to recognize the RAID as an SSD. What's even more boggling, I've essentially cloned this server from another (on both the hardware and software side) and that one actually recognizes it correctly!
    To provide some key bits of information:
    1) The RAID controller in question is Marvell-based, on a SuperMicro MoBo with Intel RAID (I had problems getting their Adaptec controller working: no SSDs would be recognized at all).
    2) I installed 2012 on the RAIDed SSDs in question.
    3) I did install the correct driver prior to the OS installation.
    4) Post OS installation, TRIM is reportedly up (fsutil etc...).
    5) However, when checking disk optimization, it is clearly labeled as a hard disk and optimization does NOT trim.
    As I said before, this is working correctly on a separate server with the exact same setup. I'm absolutely stumped at the moment, since I've essentially tried everything I could come up with. Reinstalling drivers, rolling back, alternate manufacturer drivers (for the heck of it), etc... all dead ends.
    Here's hoping you or some other saint on here will offer some much-needed insight to get this sorted out.

  • RAID Slices Offline

    I have managed to answer my own original question on this but I want to know whether there was a better way.
    Since upgrading to 10.4.9 I have had problems with an Apple RAID mirror. The Disk Utility showed both slices offline. The console log showed messages like:
    hfsswapHFSPlusBTInternalNode: catalog key #24 invalid length (0)
    Any attempt at using Disk Utility to fix the disk failed.
    I did the upgrade with the RAID disks disconnected and I suspect this was the cause of the trouble. The mirror was disk5 built out of disk3s3 and disk4s3.
    In the end I managed to make the computer recognise the RAID again by doing the following:
    diskutil removeFromRAID disk3s3 disk5
    This removed disk3s3 from the mirror and the OS then mounted it on my desktop as a normal partition. After I had done this disk4s3 magically sprang to life so I ended up with two identical disks.
    I then used:
    diskutil addToRAID member disk3s3 disk5
    and waited several hours for the mirror to rebuild.
    Presumably the problem was a change in the format of some metadata. Is there a better way to fix this sort of problem?
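    For reference, here is the recovery sequence described above collected in one place. This is a sketch for macOS/OS X only; the disk identifiers (disk5, disk3s3) are from the poster's setup and will differ on other machines, so verify yours with the listing command first:

```shell
# List RAID sets and their member slices; confirm identifiers before
# doing anything destructive (these are from the post above).
diskutil listRAID

# Pull the suspect slice out of the mirror; the OS should then mount
# it as an ordinary partition.
diskutil removeFromRAID disk3s3 disk5

# Re-add it as a member and let the mirror rebuild (can take hours).
diskutil addToRAID member disk3s3 disk5

# Check the state of the rebuilt set.
diskutil checkRAID disk5
```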

    This is the same Tech Tool that comes with AppleCare?
    I bought the APP with my PowerBook and my Mac Pro, but in all these years I have never used it. I got the impression that it was just a glorified diagnostic tool and did very little to actually fix or repair problems. That is from the very little information that Apple provides with the plan.
    Goes to show what I know. LOL. It must have some kind of repair abilities to cause damage to drives when it runs, right? But I don't know if I'd have a lot of confidence in a product that causes damage when it is supposed to be fixing things, unless it was a case of PEBKAC.
    Long time ago, I used Mac Tools Deluxe and loved it. I actually enjoyed optimizing/defrag days.

  • 865 Neo2 RAID 0 Performance

    Hi guys,
    I just set up a RAID 0 last weekend and it has been going all right since then.
    But I'm worried about the performance, and I'd like some comments just to be sure it's good.
    I set it up with a stripe size of 128 KB (the default), and I'm using FAT32 (4 KB clusters).
    It's a RAID 0 with 2 x Western Digital 120 GB SE 7,200 RPM PATA drives.
    In SANDRA I'm only getting 45 MB/s (about the same as before creating the RAID).
    In HD Tach, the average is about that too; look:
    Is it correct to be going past 200 MB/s of read burst speed?!
    And I found this ATTO Disk Benchmark:
    In PCMark 2002, I got 1600 points. It was about 950 with a single drive.
    Please, any comments are welcome.
    Regards,
    Alex

    I find the Sandra benchmark frustrating to use for HDD benchmarking; it doesn't seem to work properly for this. PCMark 2002 seems like a good program to give a general idea of your performance. Your marks do seem a little low, though. Do you keep the RAID defragged? Also, the NTFS file system and a lower stripe size (like 64K or 32K) will probably give better benchmark results. That doesn't necessarily mean better performance for "real world" usage.
    My favorite disk benchmark is Aida32. Their site is down now but you can download it Here.
    You can see a screenshot of the benchmark Here. It's a comparison of my Seagate RAID 0 array and a single 36 GB WD Raptor drive.
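    If you want a quick sanity check outside of SANDRA/HD Tach, here is a crude sequential-throughput sketch using `dd` (Unix-style shells only, so not the original poster's Windows setup without Cygwin or similar; note the read numbers are cache-inflated unless the test file is larger than RAM):

```shell
# Crude sequential write/read throughput check with dd.
# dd prints a throughput summary on stderr; tail keeps just that line.
# Use a file larger than RAM (or drop caches) for honest read numbers.
dd if=/dev/zero of=testfile bs=1M count=256 2>&1 | tail -1   # write speed
sync
dd if=testfile of=/dev/null bs=1M 2>&1 | tail -1             # read speed
rm -f testfile
```

    The same idea is what HD Tach's sequential test measures; just don't compare cached reads against its raw-device numbers.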
