OES11 Read Netware 6.5 NSS RAID?

I have some volumes on a NetWare 6.5 SP8 server where the disk holding the SYSTEM volume has just died. Is it possible to physically move the disks to an OES11 SP1 server and have that server read/mount the volumes? Or must I rebuild the NW6.5 server to access the disks and then do an ncpmount from the OES11 server?
TIA
-L

Originally Posted by mrosen
On 17.04.2013 17:46, LarryResch wrote:
>
> I have some volumes on a Netware 6.5 SP8 server (disk holding the SYSTEM
> volume) that has just died. Is it possible to physically move the disks
> to an OES11 SP1 server and have this server be able to read/mount the
> volumes?
Yes. If they're still intact of course.
CU,
Massimo Rosen
Novell Knowledge Partner
No emails please!
I am hoping that they are!
Once I rebuild the server with OES11 SP1 (including NSS services?), do I just need to scan for the devices to import those volumes, or what would be the best way?
-L
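Not from the thread, just a rough sketch of how the import usually goes on OES11 once NSS is installed and the moved disks are visible (pool/volume names are placeholders); the pool and volume eDirectory objects usually need updating afterwards, which NSSMU can also do:
rescan-scsi-bus.sh              # or simply reboot, so the kernel sees the moved disks
nssmu                           # Devices / Pools / Volumes menus: activate the pool, mount the volume
# or, from the NSS console:
nsscon
nss /poolactivate=POOL1
exit
ncpcon mount VOL1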

Similar Messages

  • Migration from Netware 6.x  NSS to Solaris 10 ZFS

    Hi,
    I am looking at Solaris and ZFS as being the possible future of our file store for users.
    We have around 3TB (40million files) of data to transfer to ZFS from Netware 6.5 NSS volumes.
    What is the best way to do this?
    I have tried running utilities like richcopy (son of robocopy) , teracopy, fastcopy from a Windows client mapped to the Netware server via NCP and the Solaris server via Samba.
    In all tests the copy very quickly failed, rendering the Solaris server unusable. I imagine this has to do with the utilities expecting a destination NTFS filesystem, and ZFS combined with Samba does not fit the bill.
    I have tried running the old rsync client from Netware, but this does not seem to talk to Solaris rsyncd.
    As well as NCP, Netware has the ability to export its NSS volumes as a CIFS share.
    I have tried mounting a CIFS share of the Netware volume on Solaris...but Solaris as far as I am aware does not support mount -t smbfs as this is Linux only. You can mount smb:// from the Gui (Nautilus), but this does not help a great deal. I was hoping to run maybe Midnight Commander, but I presume that I would need a valid smb share to the Netware volume from the command line?
    I really want to avoid the idea of staging on say NTFS first, then from NTFS to ZFS. A two part copy would take forever. It needs to be direct.
    BTW..I am not bothered about ACL's or quota. These can be backed up from Netware and reapplied with ZFS/chown/chmod commands.
    A wild creative thought did occur to me as follows -
    OpenSolaris, unlike Solaris, has its in-kernel CIFS client, and hence SMB mounts from the command line (I presume), but I am not happy running OpenSolaris in production. So maybe I could mount the NetWare NSS volume as a CIFS share on OpenSolaris (as a staging server), copy all the data to a ZFS pool locally, and then do a send/receive to Solaris 10...
    Maybe not...
    I suppose there is FTP, if I can get it to work on Netware.
    I really need a utility with full error checking, and that can be left unattended.
    Any ideas?

    By unusable I mean that the ZFS Samba drive mapped to the Windows workstation died and was inaccessible.
    Logging onto the Solaris box after this from the console was almost impossible. There was a massive delay. When I did log in there appeared to be no network at all. There were no errors in the smbd log file. I need to look at other logs to find out what is going on. Looking at the ZFS filesystem, some files had copied over before it died.
    After rebooting the Solaris box I then tried dragging and dropping the same files to the ZFS filesystem with the native Windows Explorer interface on the Windows client. This worked, as in the Solaris box did not die and the files were copying happily (until I manually stopped it). As we all know, Windows Explorer is not a safe unattended way to copy large amounts of files.
    This tells me that the copy utilities on Windows are the problem, not native Windows copy/paste.
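    Not something tried in the thread, but one direct, unattended variation on the staging idea: use any Linux box purely as a pass-through (nothing is stored on it), mount the NetWare CIFS export read-only, and push straight to the Solaris host over SSH with rsync so the copy is loggable and restartable. Hostnames, share and paths below are examples:
    mount -t cifs //nwserver/VOL1 /mnt/vol1 -o ro,username=admin
    rsync -rtv --partial /mnt/vol1/ solaris10:/tank/users/ 2>&1 | tee /var/log/vol1-copy.log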

  • Typical read/write speeds for Apple Raid card

    I have 3 1TB Samsung F3 Spinpoint drives that I am using in Bays 2-4, coupled with Apple's Raid card. Before striping the drives in a raid configuration, I used Disk Utility to write zeroes to each of the disks individually, which took about 3 hours each (this works out to about 100 MB/s, which makes sense to me). However, the third drive, in Bay 2, took much longer than normal to zero (over 12 hours), so I suspected a defective drive. When I striped them using the Raid Utility program, I was able to create a volume and proceeded to test it using QuickBench. Extended Read tests in the 20-100 MB range were only about 30 MB/s. Is this normal? Shortly thereafter, my Raid Utility program encountered an error with the hard drive in Bay 4, which wasn't the same disk I had problems formatting. I took out that drive, removed the volume and raid, and set up Raid Stripe 0 with 2 disks. I am still encountering Extended Reads of about 30 MB/s, no matter if I have cache enabled or not. Any help would be appreciated. Thanks!

    I suppose I could select a block size other than 32 kB when I set up the raid, but I wanted to save time to see if any posters have other suggestions.
    I know that the Windows versions of the drive makers' utilities are recommended for surface scanning and mapping.
    In the case of weak sectors, this is about all you can do.
    I may be wrong, but I think Apple has a clause in their manual so that people will buy hard drives from them instead of 3rd party manufacturers.
    I am sure that is part of it.
    There is also the onboard ROM of the hard drive.
    Problem with Apple supplied cards of any kind, is that they have a history of ROM manipulation.
    Whether it be crippling (as compared to 3rd party equivalent), or keys to limit performance or compatibility, it is always a question.
    I know I am not the only one who has bought their own hard drives for a Mac Pro RAID setup.
    I'm sure you are not, and was just "crossing t's and dotting i's" in asking....
    BTW, I have 132 MB/s Read, 121 MB/s Write for 2MB-10MB blocks, avg 128 MB/s and 110 MB/s for 10MB-100MB blocks on a 2 drive RAID with WD drives in a 100MHz FSB system.

  • Reading HDD Temperature from a RAID Array (nForce4)

    Hello all!
    does anybody know of a program that can read the HDD temperature from a RAID Array?
    I'm using Speedfan at the moment but the author doesn't seem to like nVidia at all, thus no support for us.

    I read:
    Quote
    To sum it up: the nVidia RAID controller doesn't support it.
    That's not really true, because the nVidia utilities support reading the SMART status from the array. So it can be done; it's only a matter of having the documentation or doing some research. However, Speedfan's author doesn't like nVidia, so he's not going to do anything unless a doc is handed to him.
    I wonder whether the Linux community has worked it out.
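    For what it's worth (an assumption, not something confirmed in the thread): under Linux the nVidia fakeraid members still show up as plain SATA disks, so smartmontools can usually read the temperature from them directly; device names below are examples.
    smartctl -A /dev/sda | grep -i temperature
    smartctl -A /dev/sdb | grep -i temperature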

  • Reading single volume of a RAID 0 from Windows?

    Hi all, I have some RAM that is preventing one of my computers from booting. Some of the drives on the computer were configured as a Linux RAID 0 software array. I'm waiting on replacement RAM and I was wondering if there is a safe way to read from one of the drives. Normally it's possible to just use one of the Windows ext(2/3/4) drivers to read ext(#) partitions, but the most that any of them do is see /boot and a bunch of unallocated space.
    I'm thinking of setting up a virtual machine and mounting the drive ro,noload (read only, don't load the journal) in hope that I'll be able to pull some information off of the drive with my T420 over eSATA with the drive mounted in an external eSATA / USB 3.0 enclosure.
    Anyone know if this will work or have a better idea?

    Sorry, it's a RAID 1 (mirroring). I hit the wrong # when I made the subject. Both drives are available, but I have no way to connect them both to the computer at the same time.
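    A cautious sketch for that plan, assuming the member is a Linux md mirror partition seen as /dev/sdb1 in the enclosure (names are examples): assembling the array degraded, read-only, and mounting with noload avoids touching the metadata or replaying the ext journal.
    mdadm --assemble --run --readonly /dev/md0 /dev/sdb1
    mount -o ro,noload /dev/md0 /mnt/recovery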

  • Very very slow file access iSCSI NSS on SLES11/XEN/OES11

    Hi,
    Like many Novell customers, while carrying out a hardware refresh we are moving off traditional NetWare 6.5 to OES11 and at the same time virtualising our environment.
    We have new Dell PowerEdge 620 servers attached by 10Gb iSCSI to an EqualLogic SAN.
    Installed SLES with all patches and updates plus Xen, then created OES11 SP2 virtual machines; these connect to the NSS volume by iSCSI.
    Migrated files from the traditional NetWare server to the new hardware, started testing, and ran into very very slow file access times.
    A 3.5MB PDF file takes close to 10 minutes to open from a local PC with the Novell Client installed, and the same with no client, opening via CIFS. Opening the same file off the traditional NW6.5 server takes 3-4 seconds.
    We have had a case open with Novell for almost 2 months but they are unable to resolve it.
    To test other options we installed VMware ESXi on the internal USB flash drive and booted off that, created the same OES11 VM and connected to the NSS on the SAN, and the same PDF opens in seconds.
    The current stack of SLES11/Xen/OES11 is not able to be put into production.
    Any ideas where the bottleneck might be? We think it is in Xen.
    Thanks

    Originally Posted by idgandrewg
    Waiting for support to tell me what the implications are of this finding and best way to fix
    Hi,
    As also mentioned in the SUSE forums, there is the option of using the Equallogic Hit Kit. One of the tools, next to the great autoconfigure options it has, is the eqltune tool.
    Some of the stuff that I've found important:
    - GRO (generic receive offload) is a known read performance killer. Switch it off on the iSCSI interfaces (see the ethtool sketch after this list).
    - if possible (meaning you have decent hardware), disable flowcontrol as this generally offers stability but at the cost of performance. If your hardware is decent, this form of traffic control should not be needed.
    - To have multipath work correctly over iSCSI (starting with SLES 11 SP1), make sure kernel routing and ARP handling are set correctly (not directly relevant if you only have one 10Gb link):
    net.ipv4.conf.[iSCSI interfaceX name].arp_ignore = 1
    net.ipv4.conf.[iSCSI interfaceX name].arp_announce = 2
    net.ipv4.conf.[iSCSI interfaceX name].rp_filter = 2
    Test if traffic is actively routed over both iSCSI interfaces:
    ping -I [iSCSI interfaceX name] [iSCSI Group IP EQL]
    -Make sure network buffers etc are adequately set as recommended by Dell (set in /etc/sysctl.conf):
    #NetEyes Increase network buffer sizes for iSCSI
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 8192 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.core.wmem_default = 262144
    net.core.rmem_default = 262144
    -Settings for the /etc/iscsi/iscsid.conf I'm using:
    node.startup = automatic # <--- review and set according to environment
    node.session.timeo.replacement_timeout = 60
    node.conn[0].timeo.login_timeout = 15
    node.conn[0].timeo.logout_timeout = 15
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5
    node.session.err_timeo.abort_timeout = 15
    node.session.err_timeo.lu_reset_timeout = 20 #Default is 30
    node.session.err_timeo.tgt_reset_timeout=20 #Default is 30
    node.session.initial_login_retry_max = 12 # Default is 4
    node.session.cmds_max = 1024 #< --- Default is 128
    node.session.queue_depth = 128 #< --- Default is 32
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.session.iscsi.FirstBurstLength = 262144
    node.session.iscsi.MaxBurstLength = 16776192
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 131072 #A lower value improves latency at the cost of higher IO throughput
    discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
    node.session.iscsi.FastAbort = No # < --- default is Yes
    -Have Jumbo frames configured on the iSCSI interfaces & iSCSI switch.
    If you are using multipathd instead of the dm-switch provided with the Equallogic Hit kit, make sure the /etc/multipath.conf holds the optimal settings for the Equallogic arrays.
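    As a concrete illustration of the GRO point above (interface names are examples, not from the thread):
    ethtool -k eth1 | grep generic-receive-offload    # show the current setting
    ethtool -K eth1 gro off
    ethtool -K eth2 gro off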
    Ever since Xen with SLES 11 SP1 we have been seeing strong-performing virtual servers. We still use 1Gb connections (two 1Gb connections for each server, serving up to 180~190Mb/s).
    There could be a difference with the 10Gb setup, where multipath is not really needed or used (depending on the scale of your setup). One important thing is that the iSCSI switches are doing their thing correctly. But seeing you've already found better results tuning network parameters on the Xen host, that seems to indicate they're OK.
    Cheers,
    Willem

  • Strange directory behaviour on NSS volume OES11 SP1

    Hi All,
    A couple of months ago we installed an OES11 SP1 server with an NSS volume (VOL1:) for all our installation files.
    In the root of the VOL1: volume, we created a directory named _Install (yes, underscore)
    In this directory we created a lot of subdirectories with installation files for several applications; to keep things short, let's say DIR-A, DIR-B, DIR-C, DIR-D, DIR-E etc.
    So if we opened VOL1:, we just saw the _Install directory in the root of VOL1:, and if we opened the _Install directory, we saw DIR-A, DIR-B etc.
    But today, we opened VOL1:, saw the _Install directory, but also DIR-A, DIR-B, DIR-C etc. in the following manner:
    _Install (normal)
    _Install\DIR-A
    _Install\DIR-B
    _Install\DIR-C
    etc.
    If we double-click on one of the directories in the root of VOL1:, we actually open the directory below the _Install directory. It's just like we have a kind of symbolic links, or shortcuts to the real directories in our root.
    Never seen this before.
    Anybody a clue for this?
    regards,
    Mark

    On 29/10/2013 14:46, MarkHofland wrote:
    > A couple of months ago we installed an OES11 SP1 server with an NSS
    > volume (VOL1:) for all our installation files.
    > [...]
    > Anybody a clue for this?
    Having seen your other replies, you're opening VOL1: in Windows Explorer?
    What do you see if you do a DIR from a Command Prompt window?
    HTH.
    Simon
    Novell Knowledge Partner
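    One extra check that is easy from the OES server itself (my suggestion, not from the thread): list the volume root at its NSS mount point; if the DIR-x entries do not show up there, the duplicates are being produced on the client/NCP side rather than inside NSS.
    ls -la /media/nss/VOL1/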

  • Recommended vmware settings for OES11

    Are there any recommendations for virtualizing OES11 with vmware vsphere (ESXi 4.1 or 5)?
    1. During installation (near the beginning) there is a choice for the virtualization type.
    The relevant choices are "physical Computer" or "paravirtualized Computer".
    Is "physical" really the best choice for VMware, given that's also the setting for fully virtualized guests?
    2. SLES11 SP1 supports paravirtualized drivers for hard disks, and usually that is a big performance boost on a suitable server (RAID controller, SAS drives, etc.)
    How does this apply to OES11, especially when using NSS volumes?
    thanks
    Tobi

    The bit about the removing and readding the NIC as VMXNET3 is unnecessary:
    SLES 11.1 distribution includes a VMXNET3 driver, so you can create the VM
    with a VMXNET3 NIC to begin with and have it seen during installation and
    before the tools are added.
    Can't say I bother with pvscsi. You only see benefits under very high I/O
    load. This is outweighed for me by the nuisance value of having disk
    drivers that may fail after a kernel upgrade. VMware tools are not very
    robust in this respect I find.
    regards
    Martin
    "magic31" <[email protected]> wrote in message
    news:[email protected]...
    >
    > TFe;2163217 Wrote:
    >> Are there any recommendations for virtualizing OES11 with vmware vsphere
    >> (ESXi 4.1 or 5)?
    >> [...]
    >
    > Hi Tobi,
    >
    > No official word on it yet that I've seen, but the way I'm setting up
    > and which is looking good:
    >
    > 1) Install SLES 11 using the installer's virtualization type option for
    > : Physical Computer
    >
    > 2) After having installed OES11;
    > - install the latest VMware tools
    > - remove the eth0 entry in /etc/udev/rules.d/70-persistent-net.rules
    > (assuming single nic setup)
    > - shutdown the VM and configure the VM's SCSI controller to run in
    > paravirtual mode (pvscsi), and also remove the NIC and re-add it as VMXNET3.
    >
    > Boot back up again and network and disk drivers should be running as
    > optimal as possible.
    >
    > As a note, for SLES 11 SP1 and up this is a good idea... for SLES
    > 10 / OES2 (which uses an older kernel) it is not such a hot idea.
    > Also make sure you are either running the latest vSphere 4.1 code or
    > 5.
    >
    > Cheers,
    > Willem
    >
    >
    > --
    > Novell Knowledge Partner (voluntary sysop)
    >
    > It ain't anything like Harry Potter.. but you gotta love the magic IT
    > can bring to this world
    >
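    A minimal sketch of the post-install cleanup described above, assuming a single-NIC VM (module names can be checked after the reboot):
    rm /etc/udev/rules.d/70-persistent-net.rules    # regenerated with the new MAC on next boot
    lsmod | grep -E 'vmxnet3|vmw_pvscsi'            # confirm the paravirtual drivers are loaded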

  • NSS Pool Deactivating - take 2

    I'm having a similar situation to the other thread by this name.
    Our server had a failed RAID drive and we replaced it. Since then we have had issues with some data files. When we tried to back them up or read them, the NSS pool would dismount and we'd have to restart the server. I ran "nss /poolrebuild /purge" yesterday on all the pools and the problem persists.
    We're running NW65 and have N65NSS8C installed.
    What's our next step? I'm not quite ready to throw in the towel, back up what we can, and rebuild the server. I gotta believe there's SOMEthing else we can do.

    Originally Posted by zeffan
    I'm having a similar situation to the other thread by this name.
    Our server had a failed RAID drive and we replaced it. Since then we have had issues with some data files. When we tried to back them up or read them, the NSS pool would dismount and we'd have to restart the server. I ran "nss /poolrebuild /purge" yesterday on all the pools and the problem persists.
    We're running NW65 and have N65NSS8C installed.
    What's our next step? I'm not quite ready to throw in the towel, back up what we can, and rebuild the server. I gotta believe there's SOMEthing else we can do.
    What make of server / array? What RAID level? Software or hardware RAID? If you replace a RAID drive and you lose data, or even have an interruption of service, you didn't have RAID. In many instances, if there is an array rebuild failure, the array will tell you this and then indicate that the bad data (mirror mismatches) will still exist until it's overwritten. HP does this. The problem is that NSS cannot know which blocks are bad or not, and if it reads a bad block, the read fails and NSS thinks the disk channel has failed (which it has).
    Ultimately if the RAID failed to do its job, every block is suspect. Restore from last good backup.
    More details are required to fully answer your question. But assuming you had an array failure that caused a loss of data, you should recreate the array and restore.
    -- Bob
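    For reference, the pool check/repair commands are usually run from the NetWare server console before giving up (pool name is a placeholder; a verify is read-only, a rebuild rewrites metadata, so make sure a backup exists first):
    nss /poolverify=POOLNAME
    nss /poolrebuild=POOLNAME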

  • Why no RAID options for new disks?

    Hi support group,
    "My name is Shaun, and I am a mac-aholic"
    I inserted 4 new 3TB disks into the bays of my Mac Pro (2012), and there is no RAID option for them.
    How I got here:
    Inserted the WD Red drives, and they were not initialized.
    I "erased" the drives, gave them names, and they appear in Disk Utility as greyed-out drives.
    Went to set up the RAID, and move the drives into the RAID, but got an error that they could not be used.
    It appears to me that Disk Utility is not allowing me to fully use those drives. I have created RAID 1 sets with 3TB drives on this machine before with no issue. I am perplexed; I thought this was going to be the easy and simple part of my upgrade odyssey! Did I screw up the initialization? On all FOUR drives?
    Now, if I click on the drives, the RAID option does not present itself.
    FWIW, I am using OS X 10.9.5, on Mac Pro 5,1. I boot from PCIe card. My end goal is to create a 4-disk RAID 10 using the four SATA connections on the MP.

    It was really pretty simple based on your feedback. I powered down, removed the drives from the SATA backplane, moved each drive separately to the USB enclosure, then re-"erased" in OS X Extended. I ejected each drive when finished, and moved to the next one until all four were re-"erased". Then I restarted the Mac with all four in the SATA connections, and all four were visible and had the RAID option tab. I created the three RAID schemes (two stripe and one mirror), and dragged the drives into the RAID option. Then I hit "create" or some other command to finish the RAID.
    Note that I did get an error on the RAID creation.  From memory, this was "cannot create RAID without an unmounted disk" or something like that.  I cleared the error, and then opened back up Disk Utility.  All the RAIDs were listed, and were operational with the green "On" signal.  I could read/write/transfer from the RAID.  Once I transfer the data to this RAID, I will work on a way to test it, but if any risk to the data, I might just leave well enough alone.
    For now, it seems I have a fully functional RAID 1+0 using the four HDD on the SATA connections in my Mac Pro 5,1.
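    For anyone doing this from the command line instead, a sketch of one common layout (a stripe across two mirrors, i.e. RAID 1+0); the disk identifiers below are examples, not the poster's:
    diskutil appleRAID create mirror MirrorA JHFS+ disk2 disk3
    diskutil appleRAID create mirror MirrorB JHFS+ disk4 disk5
    diskutil list    # note the new virtual disk identifiers of the two mirror sets
    diskutil appleRAID create stripe Raid10 JHFS+ disk6 disk7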

  • Software RAID Failure - my experience and solution

    I just wanted to share this information with the iCloud community.
    I searched a bit and did not find much information that was useful with regard to my software RAID issue.
    I have 27 inch Mid 2011 iMac with SSD and Hard drive which has been great.
    I added an external hard drive (I think if I mention any brand name the moderator will delete this post) which includes a nice aluminum case with two 3 TB hard drives within it, and it has a big blue light on the front and is connected via Thunderbolt. This unit is about 2 years old and I have it configured as a 3 TB mirrored RAID (RAID 1), a software RAID set up via Mac OS Disk Utility.
    I had at one point a minor glitch which was fixed using another piece of software (again if I mention a brand the moderator will delete this post) which is like a 'Harddrive Fighter' or similar type name LOL.   So otherwise that RAID has served me well as a site for my Time Machine back up and Aperture Vault, etc.  (I created a 1.5 TB Sparse bundle for Time Machine so that the backup would not use the entire 3 TBs)
    I recently purchased a second aluminum block of drives, and set that up as a 4 TB RAID 1.
    Each of the two RAIDs are set with the option of “Automatically rebuild RAID mirror sets” checked.
    I put only about 400 gb on the new RAID to let it sit for a ‘burning in period.’
    A few days ago the monitoring software from the vendor who sells the aluminum block of drives told me I had a problem.  One of the drives had “Failed.”   The monitoring software strangely enough does not distinguish the drives so you can figure out which pair had the issue, so I assumed it was the New 8 TB model.  Long story short, it was the older 6 TB model, but that does not matter for this discussion.
    I contacted the vender and this is part of their response.
    “This is an indication that the Disk Utility application in Mac had a momentary problem communicating with the drive mechanism. As a result, it marked that drive as "failed" in the header information. Unfortunately, once this designation is applied to a drive by the OS, the Disk Utility will thereafter refuse to attempt any further operations with that disk until the incorrect "failed" marker is manually cleared off the drive.”
    That did not sound very good to me…..back up killed by a SOFTWARE GLITCH?
    “The solution is to remove the corrupted volume header, and allow the generation of a new one….This command will need to be done for each disk in the array… (using Terminal)…
    diskutil zerodisk (identifier)
    …3. After everything is finished, you should be able to exit Terminal, and go back into the Disk Utility Application to re-configure the RAID array on the device.”
    Furthermore they said.
    “If the Disk Utility has placed a flag into the RAID array header (which exists on both drives) then performing this procedure on a single drive will not correct anything.”
    And…
    “When a drive actually does fail, it typically stops appearing in the Disk Utility application altogether. In that circumstance, it will never be marked "failed" by the Disk Utility, so the header erase operation is not needed.”
    This all sounded like a bad idea to me. And what does the Vendors RAID monitor software say then?  “Disk Really Really FAILED, check for a fire.”
    As I tried to figure out which drive was actually the bad RAID pair I stumbled on a solution.
    First I noted that the OS Disk Utility did NOT show a fault in the RAID. It listed both RAIDS as “Online.’ Thus no rebuilding was needed and it did not begin the rebuild process.
    The Vendors disk monitor software saw some fault, but Mac was still able to read and write to the RAID, both disks in the mirror.  I wrote a folder to the RAID and with various rebooting steps I pulled the “Bad” drive and looked at the “Good” Drive….the folder was there…I put the Bad drive back in and pulled the Good Drive and the folder was there on the “bad” drive.  So it wrote to both drives.  AND THE VENDORS MONITORING SOFTWARE SHOWED THE PREVIOUSLY LABELED ‘BAD’ DRIVE AS ‘GOOD’ AND THE MISSING DRIVE SLOT AS ‘BAD’.
    My stumbled FIX.   I moved a bunch of files off the failed RAID to the new RAID  but before I moved the sparse bundle, a folder of 500 gigs movies and some other really big folders the DISK UTILITY WINDOW (which I still had open) now showed that the RAID had a Defect and began rebuilding the mirror set itself, out of the blue!   I don't know why this happened.  But moving about 1/2 of the data off of it perhaps did something?  Any Ideas?
    This process took a few hours as best I can tell (let it run overnight) and the next day the RAID was fine and the Vendors RAID monitor did not show a fault any longer.
    So, the vendor's RAID monitoring software reported a "FAILED" drive without any specific error codes to look up. Perhaps they could give the user more info on the specific fault? The support line of the vendor said with certainty "the Volume Header is corrupted" and THE ONLY FIX is to completely ZERO THE DRIVE! This was not necessary, as it turns out.
    And the stick in the eye to me…..
    “I've also sometimes seen the drives get marked as "failed" by the disk utility due to a shaky connection. In some cases, swapping the ends of the Thunderbolt cable will help with this. Something to try, perhaps, if your problems come back. “
    Ya Right…..
    Mike

    Follow up.
    After going through the zeroing process and rebuilding the RAID set three times, with various configurations, LaCie finally agreed to repair the unit under warranty.
    I tried swapping the power supplies and Thunderbolt cables, and tried taking the drive out of the chain with its newer big brother. And it still failed after a few days.
    I just wanted to share more of what I learned with regard to rebuilding the RAID sets via the Terminal.  The commands can be typed partially and a help paragraph will come up to give VERY cryptic descriptions of the proper use of the commands.
    First, under Terminal you can use the command "diskutil appleRAID list" to list those drives which are in the RAID. This gives you the ID number for each physical drive. For example:
    AppleRAID sets (1 found)
    ===============================================================================
    Name:                 LaCie RAID 3TB
    Unique ID:            84A93ADF-A7CA-4E5A-B8AE-8B4A8A6960CA
    Type:                 Mirror
    Status:               Online
    Size:                 3.0 TB (3000248991744 Bytes)
    Rebuild:              manual
    Device Node:          disk4
    #  DevNode   UUID                                  Status     Size
    0  disk3s2   D53F6A81-89F1-4FB3-86A9-8808006683C2  Online     3000248991744
    -  disk2s2   E58CA8F5-1D2C-423A-B4BE-FBAA80F85879  Spare      3000248991744
    ===============================================================================
    In my situation with the failed RAID, I had an extra disk in this with the status of Missing/Failed. 
    The command is "diskutil appleRAID remove" and the cryptic help paragraph says:
    Usage:  diskutil appleRAID remove MemberDeviceName|MemberUUID
            RAIDSetVolumePath|RAIDSetDeviceName|RAIDSetUUID
    MemberDeviceName|MemberUUID  is the number listed in the "diskutil appleRAID List" command,  and
    RAIDSetVolumePath|RAIDSetDeviceName|RAIDSetUUID is the Device Node for the RAID which here is /dev/disk4.
    I used this command to remove the third entry (missing/failed); I did not copy the terminal window text on that one, so I cannot show the list of three disks.
    I could not remove the disk2s2 disk listed as SPARE, as it gave an error message:
    Michaels-iMac:~ mike_aronis$ diskutil appleraid remove E58CA8F5-1D2C-423A-B4BE-FBAA80F85879 /dev/disk4
    Started RAID operation on disk4 LaCie RAID 3TB
    Removing disk from RAID
    Changing the disk type
    Can't resize the file system on the disk "disk2s2"
    Error: -69827: The partition cannot be resized
    But I was able to remove it using the graphical interface Disk Utility program using the delete key.
    I then rebuilt the RAID set by dragging the second drive back into the RAID set.
    I could not get the command "diskutil appleRAID update AutoRebuild 1 /dev/disk4" to work; even though it was trying to execute, it HUNG. I put the two drives into my newer LaCie 2big as my attempt at further troubleshooting the RAID (this was not suggested by the LaCie tech), rebuilt the RAID, and now I am going to leave it set up that way for a few days before I ship it back, just to see if the old drives work fine in the new RAID box (thus proving the RAID box is the problem). I tried the AutoRebuild 1 command just now and it gave an error.
    Michaels-iMac:~ mike_aronis$ diskutil appleraid update autorebuild 1 /dev/disk4
    Error updating RAID: Couldn't modify RAID (-69848)
    Michaels-iMac:~ mike_aronis$
    In my haste to rebuild the RAID set for the third or fourth time as LaCie led me through the test-this-and-test-that phase, I forgot to click the "Auto Rebuild" option in the Disk Utility program.
    Question for the more experienced:
    As I was working on this issue, I noticed that each time I rebooted and did work in the Terminal (with and without the RAID plugged into the Thunderbolt connection) the list of drives would change and my main boot drive would not stay listed as drive 0! Sometimes it would be drive 0, sometimes the RAID would be listed as drive 0. It's strange to me... I would have thought the designations disk0 and disk1 would always be my two built-in drives (SSD and spinning drive).
    Mike
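    On that last question (general OS X behaviour, not specific to this unit): the BSD disk numbers are handed out in whatever order the devices are probed at each boot, so they are not stable identifiers; matching by media name is safer, e.g.:
    diskutil list
    diskutil info disk0 | grep 'Media Name'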

  • Gspeed ES Pro RAID keeps disappearing from Mac, have to re-initialize!

    UPDATED VERSION:
    MAC PRO KEEPS LOSING CONNECTION TO RAID DRIVE.
    After 9 months of organizing a documentary film edit on an external RAID drive (12TB, Raid 0. Yes, I have 2 layers of backup), I purchased the PNY Nvidia Quadro 4000 video card for the primary edit. Soon after, I began having boot up issues. The screen would freeze at various points during the startup process, requiring manual power-downs.
    Over the past two months, my computer has lost communication with the RAID 3 times, forcing me to rebuild it from my backup drives (10TB). One tech at Diskwarrior said my computer lost the partition, and that the 1s and 0s had been largely replaced with gibberish.
    The Raid becomes unrecognized by my computer in one of two ways:
    1-    After two days of copying data back over to the raid (rebuilding it), I get an error code -36, saying a file cannot be read or written, then the Raid partition disappears from disk utility and the next time I start up the computer says it doesn’t recognize my drive. The ATTO configuration tool (the card that attaches my raid to the computer) sees the drive, but my computer does not. I have to re-initialize it and start over again.
    2-    The Raid will go from being recognized to not being recognized when I reboot one day. Again, I have to re-initialize and rebuild.
    I removed the PNY Graphics card and put in the stock ATI Radeon HD 5770 it came with. On day 2 of rebuilding the RAID, I got the error -36 and the Raid disappeared from disk utility again. When I opened the raid folder, which seconds before had 10 TB of data on it, it read as zero items. So the PNY Nvidia card was NOT the problem.
    Apple has worked on the computer 3 times, running extensive hardware tests, but the same problem persists. At one point they said my blackmagic card was the problem, and more recently they said I probably have bad ram. I tried separate ram (the old 6GB my computer came with), and my computer seemed to begin booting up okay, but one day later I had the -36 error, the RAID couldn’t be recognized again and I lost the data during another rebuild. Apple even replaced the Ram Tray, but the same problem persists.
    G technology says my RAID’s hardware is fine, and ATTO, who makes the Raid card, says it looks fine to them too. This does not mean they are 100% not the problem though.
    Yesterday as I was attempting to rebuild the RAID again, I saw the -36 error come up, and files stopped copying. I immediately made a screenshot and noted the time.
    Today I spoke with a really cool tech at Apple named Ron who helped me investigate the Kernel log, and this is what it said (the RAID is called FadingWest):
    Dec  6 17:05:36 Jesses-Mac-Pro kernel[0]: hfs_swap_BTNode: invalid forward link (0xbb739311 >= 0x0000df00)
    Dec  6 17:05:36 Jesses-Mac-Pro kernel[0]: hfs: node=2717 fileID=4 volume=FadingWest device=/dev/disk3s2
    Dec  6 17:05:36 Jesses-Mac-Pro kernel[0]: hfs: Runtime corruption detected on FadingWest, fsck will be forced on next mount.
    Ron believes my problem is with the RAID card. He thinks it’s dropping the connection. So I’ve got an RMA going with ATTO for a new one.
    Here are the final things I can troubleshoot at this point:
    New RAID card (ATTO)
    New RAID drive (G-Technology)
    Motherboard / PCI lanes (Apple).
    NEW COMPUTER (Apple)
    Any ideas are appreciated. Here are the items in question:
    Mac Pro mid 2010 2x2.66 Ghz 6 core Intel Xeon OS X Lion 10.7.5
    G speed ES Pro (G Technology) – 12TB @ RAID 0
    ATTO R680 RAID card
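    Given the "fsck will be forced on next mount" message above, one thing worth capturing before another re-initialize is a manual check of that slice (identifier taken from the kernel log; the output can be shown to the vendors):
    diskutil unmountDisk /dev/disk3
    fsck_hfs -fy /dev/disk3s2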

    I found this on another help site, I think it's the trick:
    there are 2 halves to the Dock. With a bar in the middle.
    on the left side, there are 2 sections. the furthest left are the icons that will stay in the dock, then on the right side of the left half of the dock, are the temporary icons, the ones that just stay there till the program is done.
    You can freely move a temporary icon to the left and it will become a permanent icon.
    I am reminded of the wizard of OZ here... you could always get to firefox... you just had to click your mouse (and drag an icon)..

  • Best Raid Block Size for video editing

    I cannot seem to get my head around which RAID block size I should set my striped RAID 50 configuration to.
    There seems to be very little info about this, but what info there is seems to imply that it could seriously affect the performance of the RAID.
    I have initialized two RAID arrays to RAID 5 and was about to stripe them together using Disk Utility, when I decided to click on Options in the bottom left of the Disk Utility window. This is where you can set the RAID block size.
    The default is 32K, but it states that there could be 'performance benefits' if this setting is changed to better match my configuration.
    What exactly does this mean?
    I want to read multiple DV streams from my RAID 50 - any ideas which block size I should allocate??
    Should I just leave it as the default 32K??
    Any help will be appreciated
    Cheers
    Adam

    My main concern is really to have as many editors as possible reading DV footage from the Raid simultaneously (up to 5 at once).
    I understand that we may struggle at times, but Xsan isn't an option and I just need to get the best out of a limited budget!
    Cheers
    Adam
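    For rough sizing (assuming standard DV25 footage, which the thread does not state): one DV stream is about 25 Mbit/s, roughly 3.6 MB/s, so five simultaneous streams need on the order of 18 MB/s of sustained reads. Against that budget the block size mainly determines how efficiently those largely sequential reads are serviced; larger blocks generally favour streaming, smaller blocks favour many small random I/Os.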

  • ASM like RAID 1 between two storages

    In my production environment, our Oracle instances sit on JFS2 file systems. Soon we will have to reallocate space for these files or switch to ASM. Our preference is ASM, but first we need to run the tests we are now conducting.
    Today, in the production environment, data from storage1 is replicated via AIX / HACMP to storage2.
    Our tests with ASM have to cover the use of one set of disks in storage1 and another set in storage2.
    Below the details of the environment:
    In AIX 5.3 TL8+
    root@suorg06_BKP:/> lspv
    hdisk17 none None
    hdisk18 none None
    hdisk19 none None
    hdisk16 none None
    root@suorg06_BKP:/> fget_config -Av
    ---dar0---
    User array name = 'STCATORG01'
    dac0 ACTIVE dac1 ACTIVE
    Disk DAC LUN Logical Drive
    hdisk17 dac0 15 ASMTST_02
    hdisk16 dac0 14 ASMTST_01
    ---dar1---
    User array name = 'STCATORG02'
    dac4 ACTIVE dac5 ACTIVE
    Disk DAC LUN Logical Drive
    hdisk18 dac5 16 ASMTST_B01
    hdisk19 dac5 17 ASMTST_B02
    select
        lpad(name,15) as name,
        group_number,
        disk_number,
        mount_status,
        header_status,
        state,
        redundancy,
        lpad(path,15) as path,
        total_mb,
        free_mb,
        to_char(create_date,'dd/mm/yyyy') as create_date,
        to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
        v$asm_disk
    order by
        disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB
    0 0 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk16 30720 0
    0 1 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk17 30720 0
    0 2 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk18 30720 0
    0 3 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk19 30720 0
    select
        v$asm_diskgroup.group_number,
        lpad(v$asm_diskgroup.name,20) as name,
        v$asm_diskgroup.sector_size,
        v$asm_diskgroup.block_size,
        v$asm_diskgroup.allocation_unit_size,
        v$asm_diskgroup.state,
        v$asm_diskgroup.type,
        v$asm_diskgroup.total_mb,
        v$asm_diskgroup.free_mb,
        v$asm_diskgroup.offline_disks,
        v$asm_diskgroup.unbalanced,
        v$asm_diskgroup.usable_file_mb
    from
        v$asm_diskgroup
    order by
        v$asm_diskgroup.group_number;
    no rows selected
    SQL> CREATE DISKGROUP 'DB_DG_TESTE' NORMAL REDUNDANCY DISK '/dev/rhdisk16', '/dev/rhdisk18';
    Diskgroup created.
    select
        lpad(name,15) as name,
        group_number,
        disk_number,
        mount_status,
        header_status,
        state,
        redundancy,
        lpad(path,15) as path,
        total_mb,
        free_mb,
        to_char(create_date,'dd/mm/yyyy') as create_date,
        to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
        v$asm_disk
    order by
        disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB CREATE_DAT MOUNT_DATE
    DB_DG_TESTE_000 1 0 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk16 30720 30669 09/12/2008 09/12/2008
    0 1 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk17 30720 0 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 1 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk18 30720 30669 09/12/2008 09/12/2008
    0 3 CLOSED CANDIDATE NORMAL UNKNOWN /dev/rhdisk19 30720 0 09/12/2008 09/12/2008
    select
        v$asm_diskgroup.group_number,
        lpad(v$asm_diskgroup.name,20) as name,
        v$asm_diskgroup.sector_size,
        v$asm_diskgroup.block_size,
        v$asm_diskgroup.allocation_unit_size,
        v$asm_diskgroup.state,
        v$asm_diskgroup.type,
        v$asm_diskgroup.total_mb,
        v$asm_diskgroup.free_mb,
        v$asm_diskgroup.offline_disks,
        v$asm_diskgroup.unbalanced,
        v$asm_diskgroup.usable_file_mb
    from
        v$asm_diskgroup
    order by
        v$asm_diskgroup.group_number;
    GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE TYPE TOTAL_MB FREE_MB OFFLINE_DISKS U USABLE_FILE_MB
    1 DB_DG_TESTE 512 4096 1048576 MOUNTED NORMAL 61440 61338 0 N 30669
    SQL> ALTER DISKGROUP 'DB_DG_TESTE' ADD DISK '/dev/rhdisk17', '/dev/rhdisk19';
    select
        lpad(name,15) as name,
        group_number,
        disk_number,
        mount_status,
        header_status,
        state,
        redundancy,
        lpad(path,15) as path,
        total_mb,
        free_mb,
        to_char(create_date,'dd/mm/yyyy') as create_date,
        to_char(mount_date,'dd/mm/yyyy') as mount_date
    from
        v$asm_disk
    order by
        disk_number;
    NAME GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE REDUNDA PATH TOTAL_MB FREE_MB CREATE_DAT MOUNT_DATE
    DB_DG_TESTE_000 1 0 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk16 30720 30681 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 1 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk18 30720 30681 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 2 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk17 30720 30682 09/12/2008 09/12/2008
    DB_DG_TESTE_000 1 3 CACHED MEMBER NORMAL UNKNOWN /dev/rhdisk19 30720 30681 09/12/2008 09/12/2008
    select
        v$asm_diskgroup.group_number,
        lpad(v$asm_diskgroup.name,20) as name,
        v$asm_diskgroup.sector_size,
        v$asm_diskgroup.block_size,
        v$asm_diskgroup.allocation_unit_size,
        v$asm_diskgroup.state,
        v$asm_diskgroup.type,
        v$asm_diskgroup.total_mb,
        v$asm_diskgroup.free_mb,
        v$asm_diskgroup.offline_disks,
        v$asm_diskgroup.unbalanced,
        v$asm_diskgroup.usable_file_mb
    from
        v$asm_diskgroup
    order by
        v$asm_diskgroup.group_number;
    GROUP_NUMBER NAME SECTOR_SIZE BLOCK_SIZE ALLOCATION_UNIT_SIZE STATE TYPE TOTAL_MB FREE_MB OFFLINE_DISKS U USABLE_FILE_MB
    1 DB_DG_TESTE 512 4096 1048576 MOUNTED NORMAL 122880 122725 0 N 46002
    At the end of the diskgroup creation you can see from the query that the space available in the diskgroup is 30669 MB, but after adding the other two disks the available size is 46002 MB.
    Wasn't the expectation that the space available for use would be approximately 50% of the total disk capacity?
    How should I proceed with the creation of the diskgroup so that storage1 is mirrored with storage2, without this great loss of space?
    Edited by: scarlosantos on Dec 9, 2008 4:39 PM
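    For what it's worth (my reading of the numbers, not from the reply below): with NORMAL redundancy ASM mirrors every extent across failure groups, so usable space is roughly half the raw capacity, and USABLE_FILE_MB additionally reserves enough free space to re-mirror after the loss of one failure group: (FREE_MB - largest failgroup) / 2 = (122725 - 30720) / 2, which comes to the 46002 shown in the output. To make ASM mirror storage1 against storage2 explicitly, the diskgroup would typically be created with one failure group per storage, for example:
    CREATE DISKGROUP DB_DG_TESTE NORMAL REDUNDANCY
      FAILGROUP storage1 DISK '/dev/rhdisk16', '/dev/rhdisk17'
      FAILGROUP storage2 DISK '/dev/rhdisk18', '/dev/rhdisk19';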

    Maybe my phrasing was bad in the last post.
    You can do the RAID on IDE 3 by creating the array with either SATA 1 or 2. To install the driver, you must boot up with your Windows CD and hit F6 when prompted to install 3rd party drivers for RAID/SCSI.
    You should have the SATA driver Floppy disk with you and it is required to install the drivers.
    After installing the drivers, exit Windows installation and reboot, during reboot press Ctrl F to enter the Promise RAID array menu and you are up to do the RAID. Please read through the Serial ATA Raid manual for more info.

  • Mounting a RAID 0 array that's already been made (OS X RAID - Arch)

    Hey, I'm trying to access my two-disk RAID 0 array that I've been using in OS X (within the same desktop computer).  I've been reading the arch wiki about RAID, but it seems many of the recommended steps may erase all data on my array.  Would jumping to building the array (https://wiki.archlinux.org/index.php/RA … _the_Array) work for solving this problem?  And how would I know what specific settings to use?
    If it would just be easier to make an array from Arch that is also compatible with OS X, that would be an option as well.
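    Before experimenting either way, one precaution worth taking (my suggestion; device names are examples): mark the member disks read-only at the block layer so a wrong guess at the array parameters cannot write anything to them.
    blockdev --setro /dev/sdb
    blockdev --setro /dev/sdc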

    Agree with syar - it's not just a matter of the RAID either - the chances of being able to boot into Windows, with or without RAID, are slim on a different mobo, as the OS uses a load of device drivers and other stuff specific to your particular installation.
