Fragment Size with large Raid Arrays

No, I am not a new user. I've done newfs probably hundreds
of times. But, I've never had to futz around with obscure
parameters because the device geometry was insane.
I want to get a file system made that
doesn't require 4096 byte fragments. Is that too much to ask?
I would think that there is some combination of cylinders,
heads, and sectors that should give that result, but I don't
have time to search for it.
tahoe# newfs -m 1 -o time /dev/rdsk/c2t8d0s2
newfs: construct a new file system /dev/rdsk/c2t8d0s2: (y/n)? y
With 16129 sectors per cylinder, minimum cylinders per group is 16
and the fragment size to be changed from 0 to 4096
With these parameters, it returns immediately, seemingly doing nothing. I then tried -f 4096 to specify the fragment size explicitly, like:
tahoe# newfs -f 4096 -m 1 -o time /dev/rdsk/c2t8d0s2
This worked, or seemed to.
The problem now is that I am losing a lot of space when I have
directories with lots of little files.
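The slack from 4096-byte fragments with many small files is easy to estimate. A minimal sketch of the arithmetic (illustrative only, not Solaris-specific; the 500-byte file size and file count are assumptions, not from the post):

```python
def allocated_bytes(file_sizes, frag_size):
    """On-disk bytes when each file rounds up to a whole number of fragments."""
    return sum(-(-size // frag_size) * frag_size for size in file_sizes)

sizes = [500] * 100_000                          # hypothetical: 100,000 files of 500 bytes
data = sum(sizes)
print(allocated_bytes(sizes, 4096) - data)       # 359600000 bytes of slack at 4096
print(allocated_bytes(sizes, 1024) - data)       # 52400000 bytes of slack at 1024
```

With 4096-byte fragments every sub-4K file still occupies a full 4096 bytes, so a directory tree full of tiny files wastes several times its own payload.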

The relocation you show is a relocation for a 32-bit value
relative to a linker section, in this case the .lbss section.
I would not expect such a relocation to be generated
unless the object file was produced by compiling for the
small memory model.
What type of symbol is maillage_curv.curv_rank_from_index_?
From the name, I would expect it to be a routine name, but
routine names should not be defined in the .lbss section.
Bob Corbett

Similar Messages

  • [Bug?] X-Control Memory Leak with Large Data Array

    [LV2009]
    [Cross-posted to LAVA]
    I have found that if I pass a large data array (~4MB in this example) into an X-Control, it causes massive memory allocations (1 GB+).
    Is this a known issue?
    The X-Control in the video was created, then the Data.ctl was changed to 2D Array - it has not been edited in any other way.
    I also compare the allocations to that of a native 2D Array (which is only ~4MB).
    Note: I jiggled the Windows Task Manager about so that JING would update correctly; it's a bit slow, but it essentially just keeps rolling up and doesn't stop.
    Demo code attached.
    Cheers
    -JG
    Certified LabVIEW Architect * LabVIEW Champion
    Attachments:
    X Control Bug [LV2009].zip 42 KB

    Hi Jon (cool name) 
    Thank you very much for your reply. We came to this conclusion in the cross post and it is good to have it confirmed by LabVIEW R&D. Your response is also similar to that of my AE which I got this morning as well - see below:
    Note: Your reference number is included in the Subject field of this
    message. It is very important that you do not remove or modify this
    reference number, or your message may be returned to you.
    Hi Jon,
    You probably found some information from the forum. The US engineer has gotten back; after conducting some tests he said that, unfortunately, this is expected behaviour. This is what he replied:
    "X Controls in the background use event structures. In particular, the Data Change Event is called when the value of the XControl changes (writing to the terminal, a local variable, or the Value change property). What is happening in this case is that the XControl is getting called too fast with a large set of data, so the event structure queues the events and data and a memory leak is produced. It is, unfortunately, expected behavior. The main workaround for the customer in this case is to not call the XControl as often. Another possibility is to use the Synchronous Display property to defer updates to the XControl; this might slow down the leak."
    He would also like to know if you can provide more details on how you are using the XControl; perhaps there is a better way. Please refer to the link below for synchronous display. Thank you.
    http://zone.ni.com/reference/en-XX/help/371361G-01/lvprop/control_synchronous_display/
    In my application I updated the X-Control at 1 Hz and it allocated at MBs/s up to 1+ GB before it crashed, all within a few hours. That is why I called it a leak. I am really worried that if this CAR gets killed, there will still be an issue lingering that makes using X-Controls a major problem under the above conditions. I have had to pull two sets of libraries from my code because of this - when they got replaced with native LabVIEW controls the leak went away (but I lost reuse and encapsulation etc...).
    Anyway, I really want to use X-Controls (now and in the future) as I like all other aspects of them. If you do not consider this a leak, can a different CAR be raised that may modify the existing behavior? I offered the suggestion (in the cross-post) that the data be ignored rather than queued, similar to Christian's idea, but for X-Controls. Maybe as an option?
    I look forward to discussing this with you further.
    Regards
    -Jon
    Certified LabVIEW Architect * LabVIEW Champion
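    The queuing behaviour the engineer describes can be sketched outside LabVIEW. A hedged Python analogy (not NI's implementation): an unbounded queue grows without limit when updates arrive faster than they are handled, while a keep-latest buffer, the "ignore rather than queue" option suggested in the cross-post, stays bounded:

    ```python
    from collections import deque

    unbounded = []                    # analogue of the event structure queuing every update
    latest_only = deque(maxlen=1)     # analogue of discarding stale, unhandled updates

    for update in range(1000):        # 1000 updates arrive before any are handled
        unbounded.append(update)      # memory grows with every update
        latest_only.append(update)    # older, unhandled updates are dropped

    print(len(unbounded))             # 1000
    print(len(latest_only))           # 1
    print(latest_only[0])             # 999 - only the newest value is retained
    ```

    The same pressure-release idea underlies both workarounds above: either slow the producer (call the XControl less often) or bound what waits to be consumed.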

  • Ext2 or ext3 for large RAID array

    I'm just in the process of creating a 10TB array of 5 x 2TB drives.
    I've been burned too many times by EXT4 so it's out for the foreseeable future.
    My concern is the crazy amount of time required to stabilize the file system when periodic checks are mandated.  I'm using ext3 right now on a 7.5TB file system and have tuned the auto checking down to 2 years and 100 mounts.  It's not the best situation, but when the system goes down due to overheating (the filter plugs every few months), I turn it on, and it goes into a 2-day recovery procedure during the boot process - that's outside the envelope of acceptability.
    Last edited by TomB17 (2010-06-26 02:32:55)

    TomB17 wrote:
    I appreciate the comments, gentlemen.
    graysky wrote:Not what you asked but can you describe how you have been burnt by ext4?
    I've been burned by the 0 byte file bug.  The files were all there but some of them went to 0 bytes.
    I did that on a backup array about 6~8 months ago.  Thinking, "It's just a backup array", I tried EXT4 for the first time.  It formatted up nicely, 36 hours of rsync, and I was good to go.  I didn't realize I had the 0 byte file issue until my main array had some issues.  When I went to the backup array, there were tons of 0 byte files, including fstab and mdadm.conf.  That made it more difficult to rebuild the main array.  I did manage to rebuild the main array.  Once done, I formatted the backup array EXT3, and I've been hesitant to experiment with filesystems since.
    The 0 byte file bug is well documented, and perhaps long solved, but I'm not ready to get back on that bandwagon.
    For what it's worth, I was burned by EXT3 several times early in its existence.  That was a different issue.  The whole filesystem would become corrupt after a while.  It was disastrous, but I didn't count on my PC then the way I do now.  That was back in the days when I could back up to CD-ROM.  I kept at it and eventually EXT3 stabilized.  These days, I trust EXT3 with my life.
    I encountered very similar issues, which resulted in me switching this workstation to FreeBSD and using ZFS for my raid arrays.
    The beauty of that file system far outweighs anything available on Linux at this current time.

  • Scratch disk with two RAID arrays

    I have 2 500GB disks in a RAID0 array for Windows / programs and another 1T RAID1 array for data. Where should I place the scratch disk?

    Should I create a dedicated partition on the RAID0 array for the scratch space?
    Please do yourself a big favor and do not create any partitions. They will only slow things down: the OS treats the two partitions as two physical HDDs and asks both logical drives for data at the same time. Remember, it's one physical HDD (even though it's striped, it's managed as one).
    Now, here are my thoughts:
    1.) RAID 0's will speed things up. However if one of the two drives fails, ALL is lost. I do not recommend using a RAID 0 (any flavor of 0), for the system/program disk. If you do, be sure to have a great backup scheme and use it.
    2.) Now, as of CS, PS no longer has a real limitation on how much HDD space it can use for Scratch Disks. Going back, one was limited to 4GB and had the ability to set 4 of these. Back then, having say a 32GB SCSI HDD and doing some 4GB partitions made sense. Now, it does not.
    I'd get a very fast, smaller system HDD. Then, you can use either of those RAID arrays for your media, and the other for your Scratch Disks. Having a dedicated hardware RAID controller will speed things up too.
    On my system, used almost exclusively for PS, AI, InDesign, Painter and Video editing, I have 4x 1TB SATA II's for Scratch Disks, 2x 1TB SATA II's for still & video and audio. My system drive is a 500GB SATA II. Were I building now, I'd probably go with RAID 0 for the media and also a RAID 0 for my Video Exports, with non-striped 1TB SATA II's for my Scratch Disks.
    Just some thoughts,
    Hunt

  • Help with installing Windows with a RAID array

    Hi!
    I hope you can help me.  When I try setting up Windows XP Home I can get as far as the press-F6 section of the installation. I formatted a floppy disk and installed the Nvidia drivers just as the manual describes.  However, when I'm prompted to add the Nvidia RAID drivers I get the message "The file nvraid.sys could not be found," and if I try to install the Nvidia storage control drivers I get the message "nvatabus.sys could not be found."  If I use the nforce3 SATA RAID driver floppy provided by MSI, I actually get all the way through the formatting of the hard drive, but when I'm prompted to press Enter to start loading the Nvidia drivers into Windows it stops and will not go any further.
    I have checked and double-checked the BIOS settings.  At first, I tried installing with the hard drives in SATA ports 1 and 2.  I have also tried connecting the two SATA 120GB Maxtors to SATA ports 3 and 4.
    Under Integrated Peripherals, I enabled RAID for SATA ports 3 and 4, disabling ports 1 and 2 (and vice versa for the previous setup).
    I would then save changes and restart, and at the nVidia RAID BIOS screen press F10 to check the array setup.
    I am frustrated and stumped.  Please help me...Your help is much appreciated, thank you
    System:
    K8N Neo Platinum
    Athlon 64 3400+
    1024MB Corsair XMS ProLL PC3200
    2 x 120GB Maxtor SATA hard drives
    Lite-on 52x24x52
    ASUS 9800 Pro
    Antec 480 watt PSU

    Maybe you have not put the files in the correct directories needed to install them from the floppy, or maybe your floppy is just messed up. Why not just use the floppy that is supplied to you by MSI? (Psst, it's in the box.)

  • Upgrade hard drive size with mirrored RAID

    First of all thanks for your help.
    I have a G5 XServe with (3) 250GB Drives
    (1) drive is for the OS .... great perfect no change needed.
    The other two drives are mirrored to each other and hold 240GB worth of work.
    I would like to buy (2) 500GB Drives to replace the 250GB Drives.
    What is the easiest way to do this?
    Idea 1) Will this work???
    Break the raid by ejecting one of drives.
    Put in one new 500GB Drive.
    Copy all data from 250GB to 500GB
    Eject 250GB drive
    Put in other 500GB drive.
    Re-set up the mirror without losing data?
    Idea 2)
    Copy it all to an external drive....
    Install new 500GB drives
    Set up Mirror
    Copy data back.
    Hmmm... either way I'll have to set up all of my share points?
    Thanks for helping me think this through.

    Either path will work.
    The first should be faster (the data only needs to be copied once); the second gives you a spare/backup copy of the data which might come in handy later.
    Sharepoints are referenced by path, so as long as the new 500GB mirror is mounted in the same place as the existing 250GB mirror, your sharepoints should remain intact.

  • MOVED: Upgrading system with existing RAID Array

    This topic has been moved to Vista problems.
    https://forum-en.msi.com/index.php?topic=106509.0

    The only issue I can foresee is a permissions one. Make sure it is set to "Ignore permissions…" before doing anything. If the drive has specific ownership when you reinstall and you don't have an account matching the owner, you'll need to mess with the root account to get back into the drive.
    Most people unknowingly avoid this issue by using the same username and short username on the reinstalled system.

  • Serial VISA 'Write' -why is it slow to return even with large buffer?

    Hi,
    I'm writing a serial data transfer code 'module' that will run 'in the background' on a cRIO-9014.  I'm a bit perplexed about how VISA write in particular seems to work.
    What I'm seeing is that the VISA Write takes about 177ms to 'return' from a 4096 byte write, even though my write buffer has been set to >> 4096.
    My expectation would be that the write completes near instantly as long as the VISA driver available buffer space is greater than the bytes waiting to be written, and that the write function would only 'slow down' up to the defined VISA timeout value if there was no room in the buffer.
    As such, I thought it would be possible to 'pre-load' the transmit buffer at a high rate, then, by careful selection of the time-out value relative to the baud rate, it would self-throttle once the buffer fills up?
    Based on my testing this is not the case, which leaves me wondering:
    a) If you try to set the transmit buffer to an unsupported value, will you get an error?
    b) Assuming 'yes' to a, what the heck is the purpose of the serial write buffer? I see no difference running with serial buffer size == data chunk size and serial buffer size >> data chunk size??
    QFang
    CLD LabVIEW 7.1 to 2013

    Hi, I can quickly show the low-level part as a png. It's a sub-vi for transferring file segments.  Some things, like the thin 'in-line' VI with (s) as the icon, were added to help me look at where the hold-up is.  I cropped the image to make it more readable; the cut-off left and right side is just the input and output clusters.
    In a nutshell, the VISA Write takes as much time to 'return' as it would take to transfer x bytes over y baud rate.  In other words, even though there is supposed to be a (software or hardware) write and read buffer on the com port, the VISA Write function seems to block until the message has physically left the port (OR it writes TO the buffer at the same speed the buffer writes out of the port).  This is very unexpected to me, and is what prompted me to ask what the point of the write buffer is in the first place.  The observations are on a 9014 RT target's built-in serial port; not sure if the same is observed on other targets or other OSs.  [edit: The observation holds even if transmitting block sizes of, say, 4096 with a buffer size of 4096 or 2*4096 or 10*4096 etc. I also tried smaller and larger block sizes with larger still buffers.  I was able to verify that the buffer re-size function does error out if I give it an insane input buffer size request, so I'm taking that to mean that when I assign e.g. a 4MiB buffer space with no error, the write buffer actually IS 4MiB - but I have not found a property to read back what the HW buffer is, so all I have to base that on is the lack of an error during buffer size setting. /edit]
    The rest of the code is somewhat irrelevant to this discussion; however, to better understand it: the idea is that the remote side of the connection will request various things, including a file.  The remote side can request a file as a stream of messages each of size 'Block Size (bytes)', or it can request a particular block (for handling e.g. re-transmission if the file MD5 checksum does not match).   The other main reason for doing block transfers is that VISA Write hogs a substantial amount of CPU, so if you were to attempt to write e.g. a 4MiB file out the serial port, assuming your VISA time-out is sufficiently long for that size transfer, the write would succeed, but you would see ~50% CPU from this one thread alone and (depending on baud rates) it could remain at that level for a very long time.   So, by transferring smaller segments at a time, I can arbitrarily insert delays between segments to let the CPU sleep (at the expense of longer transfer times).  The first inner case shown, which opens the file, only runs for new transfers; the open file ref is kept on a shift register in the calling VI.  The 'get file offset' function after the read was just something I was looking at during (continued) development, and is not required for the functionality that I'm describing.
    QFang
    CLD LabVIEW 7.1 to 2013
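    The ~177 ms return time reported above is consistent with the write blocking until the bytes have physically left the UART. With 8N1 framing each byte costs 10 bits on the wire; the 230400 baud figure below is an assumption (the posts never state the rate), chosen because it makes the arithmetic line up:

    ```python
    def transmit_time_ms(n_bytes, baud, bits_per_byte=10):
        """Milliseconds to clock n_bytes out of a UART at the given baud rate (8N1)."""
        return n_bytes * bits_per_byte * 1000 / baud

    # 4096 bytes at an assumed 230400 baud:
    print(round(transmit_time_ms(4096, 230400), 1))  # 177.8
    ```

    If the observed return time scales linearly with block size at a fixed baud rate, that supports the "blocks until the data is on the wire" interpretation rather than a buffered copy.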

  • How do I create a RAID array

    I want to RAID the two drives in my G4. The upper (boot disc) is 80GB and the lower is 30GB.
    How do I set up the RAID (striped) to use both discs?

    To set up my RAID I had 3 hard drives, one ATA and 2 SCSI. With the startup system on the ATA drive I used Disk Utility to set the two SCSI drives as a RAID 0 array. Then I used Carbon Copy Cloner to copy the system to the RAID array. I now boot from the RAID and use the ATA drive to back up the system. Without the 3rd drive you would need to boot from the installation DVD and use Disk Utility to set up the RAID array, which will erase all data on your drives. To be honest I'm not seeing a huge speed increase with the RAID array; you might be better off using one drive as a system disk and the second to store data, music, video, etc.

  • RAID array activation issue in CS2 & XP

    Yes, I still have CS2, which I love, as well as Production Studio. I have used CS2 for years with a RAID array, first striped and now a mirror. The mirror array lasted about 10 months until the motherboard went. eVGA replaced the motherboard with the same model, I fiddled around a bit trying to get my same array up and running, I couldn't figure it out and ended up doing a reinstall (after a couple of weeks of running single drives). Anyway, I keep having to activate my Adobe Suites. I have talked to people (thick accents, maybe in India) three times now; takes a lot of time. I know that there is an issue with CS2 and RAID (XP and CS2 have all of their available updates). I tell them I have RAID. I ask for a patch to avoid the activation issue. I have been told that I "should not be using RAID", that I will "have to reactivate every time I boot" and that I won't be able to run this "without upgrading to CS4". I KNOW that this configuration will work, because it has worked in the past. I vaguely remember a patch when I originally installed.
    I just called the Tech Support number, instead of the activation number. I also got someone with a thick accent who told me that he had no support available for CS2. He wouldn't give me a phone number to call, I asked about a patch and he "doesn't support CS2", end of discussion. He suggested the website. Ugh!
    I am not locked out at this time, but have just done another reactivation. I suspect Adobe will put a stop to repeat activations. I worry about being locked out when I am under the gun trying to finish something. My son is working on a school project for the end of year right now. What can I do? Does anyone know where I can find a patch or talk to someone who will help a CS2 user?
    I LIKE my RAID array. I know that my data is perfectly backed up at all times. I would prefer not to have to break the RAID array into single drives. I chose this MB because of the RAID capability.
    Thanks in advance!

    Thanks Bob. I asked to speak to a supervisor with calls two and three to India, or wherever. One lady said they didn't have a supervisor and then said, well, of course we have supervisors, but they will not be able to help. I guess I could have thrown a fit, but neither was willing to connect me to a supervisor even though I asked. I did find it amusing (in a sad way) that the one lady initially denied that she had a supervisor.
    I then called Customer Support (call number four) and they simply wouldn't talk to me. They don't deal with CS2, didn't want to talk to me at all.
    I guess I'll try India again, next time it locks me out.
    I'll go through old files (again) to see if I saved some kind of patch.
    Thanks again.

  • Boot problems from RAID Array

    I have recently reinstalled my system (K7N2G-ILSR).
    It was working fine with a RAID array to boot (set to boot from SCSI in Bios).
    The RAID array is 1 SATA drive with 1 IDE on IDE3.
    I have 2 additional IDE HDDs and 2 Optical Drives (On IDE1&2).
    I also have a SCSI controller with one external drive.
    While reinstalling I had problems in the windows installer - after partitioning and copying files it would try to reboot, and would then come up NTLDR missing.
    I initially thought it must be some confusion with my external SCSI drive trying to boot, but removing it made no odds.
    I ultimately had to unplug all my non-RAID HDDs, and it would then boot with no problem at all.  Unfortunately, having got it working, every time I try to plug any IDE HDD back in it goes back to NTLDR Missing on boot (this is despite boot being set to SCSI in the BIOS; IDE isn't even later in the boot sequence!)
    I need to be able to reconnect my drives, but can't find anything comparable in the forum.
    Why does it matter if I connect an IDE drive at all when it is set to boot from SCSI (RAID)?  Is my bios corrupt or something?
    Thanks for any help!
    Martyn

    Thanks Noby and Richard for the advice, and sorry I have been slow to read it - I've been away.
    I'm not sure I didn't do my last installation in exactly the way you describe anyway.  Still, it's probably worth another go.  
    I was hoping for an easy solution without reinstalling yet again, since the system is working albeit with only the RAID array connected.
    Assuming the other drives do now have an MBR written to them, even if I reinstall windows without them connected, will they not still cause problems when reconnected?  
    Am I looking at having to wipe the drives and start again with them too?  
    Even if I do this, does formatting a drive wipe any MBR on it?  
    Is there not some way to directly edit the boot record on each drive when windows installer doesn't figure it out right?
    Sorry for the multitude of questions but I have had this problem time and time again and never found a reliable solution - last time it seemed simple chance that I ever got the board working!
    Thanks again
    Martyn

  • ACL not working with RAID array

    Posted this before but it wasn't clear.
    Brand new install of Leopard Server 10.5.1. Have an HFS+ SCSI RAID array attached. I am unable to get ACLs to work using this volume.
    Using the command sudo fsaclctl -a I get the following:
    <CFDictionary 0x106580 0xa00c5174>{type = mutable, count = 9, capacity = 12, pairs = (
    0 : <CFString 0x1020f0 0xa00c5174>{contents = "device_manufacturer"} = <CFString 0x106140 0xa00c5174>{contents = "HPT"}
    1 : <CFString 0x1066b0 0xa00c5174>{contents = "spparallelscsidevicetype"} = <CFNumber 0x1066e0 0xa00c5174>{value = +0, type = kCFNumberSInt32Type}
    3 : <CFString 0x101ac0 0xa00c5174>{contents = "device_revision"} = <CFString 0x106170 0xa00c5174>{contents = "4.00"}
    4 : <CFString 0x106660 0xa00c5174>{contents = "spparallelscsidevicefeatures"} = <CFArray 0x106690 0xa00c5174>{type = immutable, count = 0, values = (
    5 : <CFString 0x106740 0xa00c5174>{contents = "spparallelscsi_target"} = <CFString 0xa00d16b4 0xa00c5174>{contents = "0"}
    6 : <CFString 0x101a60 0xa00c5174>{contents = "device_model"} = <CFString 0x106150 0xa00c5174>{contents = "DISK 1_0"}
    7 : <CFString 0x1066f0 0xa00c5174>{contents = "spparallelscsiit_nexusfeatures"} = <CFArray 0x106720 0xa00c5174>{type = immutable, count = 0, values = (
    9 : <CFString 0x1018f0 0xa00c5174>{contents = "_items"} = <CFArray 0x106030 0xa00c5174>{type = mutable-small, count = 1, values = (
    0 : <CFDictionary 0x106070 0xa00c5174>{type = mutable, count = 14, capacity = 24, pairs = (
    0 : <CFString 0x103c80 0xa00c5174>{contents = "os9_drivers"} = <CFString 0xa00d4024 0xa00c5174>{contents = "no"}
    1 : <CFString 0x106180 0xa00c5174>{contents = "partitionmaptype"} = <CFString 0x1061a0 0xa00c5174>{contents = "applepartition_maptype"}
    2 : <CFString 0x1029b0 0xa00c5174>{contents = "volumes"} = <CFArray 0x106250 0xa00c5174>{type = mutable-small, count = 1, values = (
    0 : <CFDictionary 0x106290 0xa00c5174>{type = mutable, count = 7, capacity = 12, pairs = (
    5 : <CFString 0x1026b0 0xa00c5174>{contents = "mount_point"} = <CFString 0x1063a0 0xa00c5174>{contents = "/Volumes/SUPERBACKUP"}
    9 : <CFString 0x101940 0xa00c5174>{contents = "_name"} = <CFString 0x106270 0xa00c5174>{contents = "SUPERBACKUP"}
    10 : <CFString 0x103820 0xa00c5174>{contents = "writable"} = <CFString 0xa00d4024 0xa00c5174>{contents = "no"}
    12 : <CFString 0x101f40 0xa00c5174>{contents = "bsd_name"} = <CFString 0x106350 0xa00c5174>{contents = "disk3s3"}
    13 : <CFString 0x1025d0 0xa00c5174>{contents = "free_space"} = <CFString 0x106380 0xa00c5174>{contents = "3.27 TB"}
    14 : <CFString 0x1024f0 0xa00c5174>{contents = "file_system"} = <CFString 0x106370 0xa00c5174>{contents = "HFS+"}
    15 : <CFString 0xa00d4ca4 0xa00c5174>{contents = "size"} = <CFString 0x1061d0 0xa00c5174>{contents = "4.09 TB"}
    3 : <CFString 0x101ac0 0xa00c5174>{contents = "device_revision"} = <CFString 0x106170 0xa00c5174>{contents = "4.00"}
    6 : <CFString 0x101a60 0xa00c5174>{contents = "device_model"} = <CFString 0x106150 0xa00c5174>{contents = "DISK 1_0"}
    9 : <CFString 0x101940 0xa00c5174>{contents = "_name"} = <CFString 0x106050 0xa00c5174>{contents = "SCSI Logical Unit @ 0"}
    10 : <CFString 0x103720 0xa00c5174>{contents = "volumes_anonymous"} = <CFArray 0x1060b0 0xa00c5174>{type = mutable-small, count = 1, values = (
    0 : <CFDictionary 0x1060f0 0xa00c5174>{type = mutable, count = 5, capacity = 12, pairs = (
    9 : <CFString 0x101940 0xa00c5174>{contents = "_name"} = <CFString 0x106350 0xa00c5174>{contents = "disk3s3"}
    10 : <CFString 0x103820 0xa00c5174>{contents = "writable"} = <CFString 0xa00d4024 0xa00c5174>{contents = "no"}
    12 : <CFString 0x1025d0 0xa00c5174>{contents = "free_space"} = <CFString 0x106380 0xa00c5174>{contents = "3.27 TB"}
    14 : <CFString 0x1024f0 0xa00c5174>{contents = "file_system"} = <CFString 0x106370 0xa00c5174>{contents = "HFS+"}
    15 : <CFString 0xa00d4ca4 0xa00c5174>{contents = "size"} = <CFString 0x1061d0 0xa00c5174>{contents = "4.09 TB"}
    12 : <CFString 0x101f40 0xa00c5174>{contents = "bsd_name"} = <CFString 0x106130 0xa00c5174>{contents = "disk3"}
    13 : <CFString 0x106230 0xa00c5174>{contents = "spparallelscsi_lun"} = <CFString 0xa00d16b4 0xa00c5174>{contents = "0"}
    14 : <CFString 0x101a40 0xa00c5174>{contents = "detachable_drive"} = <CFString 0xa00d4024 0xa00c5174>{contents = "no"}
    16 : <CFString 0x1020f0 0xa00c5174>{contents = "device_manufacturer"} = <CFString 0x106140 0xa00c5174>{contents = "HPT"}
    17 : <CFString 0x101ec0 0xa00c5174>{contents = "removable_media"} = <CFString 0xa00d4024 0xa00c5174>{contents = "no"}
    30 : <CFString 0x1061f0 0xa00c5174>{contents = "smart_status"} = <CFString 0x106210 0xa00c5174>{contents = "Not Supported"}
    31 : <CFString 0xa00d4ca4 0xa00c5174>{contents = "size"} = <CFString 0x1061d0 0xa00c5174>{contents = "4.09 TB"}
    10 : <CFString 0x101940 0xa00c5174>{contents = "_name"} = <CFString 0x106640 0xa00c5174>{contents = "SCSI Target Device @ 0"}
    ProcessVolume: processing /
    Access control lists are supported on /.
    ProcessVolume: processing /Volumes/Superserver Internal 750
    Access control lists are supported on /Volumes/Superserver Internal 750.
    Does this mean that the RAID volume is incompatible with ACLs despite being an HFS+ formatted array? Would a fresh install of the server fix this?
    Completely lost here...can someone help?
    Many thanks,
    Joel.

    I tried our old "unimportant" SCSI RAID and it reported:
    "Support for access control lists is unknown."
    I guess our RAID was formatted/initialized on a pre-Intel server - yours too?
    Also, your volume isn't journaled HFS+ (easily changed in Disk Utility).
    Journaling is better/faster if you get a server hang (it takes much less time to "replay" the journal at boot than to wait for fsck to finish on such a large volume).
    Ours look like this :
    <CFString 0x1024b0 [0xa016a1a0]>{contents = "file_system"} = <CFString 0x106490 [0xa016a1a0]>{contents = "Journaled HFS+"} <-------------
    It might be worthwhile trying a repartition on the new server, because ours says:
    <CFString 0x1062a0 [0xa016a1a0]>{contents = "partitionmaptype"} = <CFString 0x1062c0 [0xa016a1a0]>{contents = "applepartition_maptype"} <------ maybe not really important, but Apple Partition Map doesn't support booting on Intel systems. Should it be GUID instead? It probably doesn't matter for non-boot drives.
    You haven't got too much data on it yet : Size "4.09 TB" - "3.27 TB" free.
    In Tiger you set up the volume to use ACLs in WGM.
    In Leopard SA it's a bit bewildering (haven't look at it since before 10.5.2 update though)...

  • Problem Installing 817 on Windows 2000 with RAID Array

    I am having a problem installing Oracle under Windows 2000 on a machine with 2 x 60 GB discs configured in a RAID array. When I run the setup program the cursor goes busy for about 10 seconds, then returns to normal and nothing happens. No errors, nothing.
    However when I run the Object Manager in the ods_preinstall directory it starts but in the log file I get the following error message:-
    02/14/02 09:46:18 : (892) : Error in IO Control call for disk PhysicalDrive0 (Error 21)
    02/14/02 09:46:19 : (892) : The count = 28
    02/14/02 09:46:19 : (892) : The signature is 0x443aa035
    02/14/02 09:46:19 : (892) : Error: First Partition of the disk must be an extended disk
    02/14/02 09:46:19 : (892) : Found the partition: 7
    Any idea what's happening? Has anyone seen this before? Do I need to do something special in the RAID?
    TIA
    Martin

    Here are some possible solutions:
    1. The installation notes state that you must be logged on as Administrator to be able to install 9i. Are you logged on as Administrator? Being logged on as user01 with Administrator privileges isn't the same as Administrator.
    2. Is the disk full? If I remember my installation correctly, I needed 3 times the size of the files. The documentation will tell you how much space you need.
    Hope these two questions help
    Regards,
    Michael

  • Replace both Raid 1 hard drives with larger ones.

    This computer came set up for RAID 1.  I need to replace both hard drives with larger ones.  What is the procedure for doing this?

    Hi wweksjk! Yes, you can have 2 HDs with differing capacities. As an example, on my Xion-MSI PC, custom built in 2006, I had 2 Western Digital HDs, 160 GB and 250 GB respectively, divided into 3 partitions.
    The brand name isn't as important as the rpm: the faster the drive spins, the faster your computer will be.
    Also look at the size of the buffer on the drive, because a larger buffer will speed up boot time as well.
    For example, a 7200rpm drive is faster than a 5400rpm one, but not as fast as either a 10k or 15k model.
    However, installing a new drive as the boot drive is more complex; if you follow the instructions that come with the HD installation software it can be done.
    Good luck - hope this helps & enlightens you.
    Happy Holidays!
    spacechild

  • I would like to migrate from Aperture. What happens to my Masters which are on a RAID array and then what do I do with my Vault which is on a separate Drive please ?

    I am sorry, but I do not fully understand what happens to my RAW original Master files, which I keep offline on a RAID array, and what I then do with my Vault, which has all my edited photos. Sorry for such a simple question, but would someone please help me lift the fog?
    Thanks,
    Rob

    Dear John,
    Apologies, as I am attempting to get to the bottom of this migration for my wife (who is away on assignment) and I am not 100% certain on the technical aspects of Aperture, so excuse my ignorance.
    She has about 6TB worth of RAW Master images (several hundred thousand) which, as explained, are on an external RAID drive. She uses a separate drive as a Vault. Can I assume that this Vault contains all of her edits, file structures, metadata, etc.?
    So, step by step... She can import her Referenced Masters into Lightroom from her RAID and still keep them there? Is that correct?
    The Managed Files that are backed up by her Vault are in the Pictures folder of her MacPro, but not in a structure that looks like her Aperture library? This means Lightroom will just organize all the Managed Files by the date in the metadata? Am I correct? (Sorry for being so tech illiterate.)
    How do I ensure she imports into Lightroom in exactly the same format as she runs her workflow in Aperture? (Projects that are organized by year and shoot location, and Albums within those projects with sub-locations, or species, etc.) What exactly do I need to do in Aperture to organize Managed Files so as to create a mirror structure of Aperture on my internal hard drive?
    There are a couple of points I am unsure about in regard to Lightroom. Does it work the same way as Aperture? Meaning, can she still keep Master Files on an external RAID and Lightroom will reference them? If the answer is yes, how do you back up your Managed (edited) work in Lightroom? (Can you still use an external drive as a Vault?) Will the Vault she uses now be able to continue to back up Managed Files post-migration?
