Enabling a disk in a T3 Storage Array

We are currently using a T3 Storage Array where one of the drive slots is showing as disabled. We have already replaced the bad disk and power-cycled the array to try to enable the drive slot, with no success. Does anyone know of a way to enable the drive slot other than deleting the volume and re-adding it?

Actually, I was able to try it just yesterday, without success. At the moment I have all of the disks previously mentioned installed, plus the new 240 GB SSD. For some reason I am unable to add the new disk to the storage pool, and as a result I am unable to remove the old 64 GB disk.
Shaon, were you able to add a disk to a storage pool that contained a tiered-storage virtual disk consuming 100% of the storage pool space? When I try, the options are all greyed out.
I do have backups of all of this data and am capable of deleting the whole volume and starting over. However, this is new technology that I would like to get familiar with, in case I am presented with a similar problem in the future without the luxury of backups.
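When the GUI options are greyed out, it is sometimes still possible to make the change from PowerShell with the Storage module cmdlets. A rough sketch, assuming this is Windows Storage Spaces; the pool and disk friendly names are placeholders you would substitute from Get-StoragePool / Get-PhysicalDisk:

```powershell
# Show disks that are eligible to join a pool
Get-PhysicalDisk -CanPool $true

# Add the new SSD to the existing pool (names are placeholders)
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" `
    -PhysicalDisks (Get-PhysicalDisk -FriendlyName "NewSSD")

# Retire the old disk, then remove it once its data has moved
$old = Get-PhysicalDisk -FriendlyName "Old64GB"
Set-PhysicalDisk -InputObject $old -Usage Retired
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks $old
```

Note that a pool whose tiered virtual disk consumes 100% of its capacity may still refuse the removal until the new disk has been added and the virtual disk repaired (Repair-VirtualDisk).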

Similar Messages

  • iSCSI Initiator still connects to NAS after deleting OVM Storage Array

    I've got a 3.0.2 test rig with a couple of VM servers, a VM Manager and an Iomega NAS
    I have a working solution using NFS storage, so the next step was to get a working solution with iSCSI
    I started by enabling iSCSI and creating LUNs on an Iomega NAS
    I then created an iSCSI Storage Array in OVM which pointed at the NAS
    I registered a couple of iSCSI Initiators in the OVM configuration.
    I then created a server pool etc and everything looks good
    Time to cleanup
    I deleted the iSCSI-based repository contents and then the server pool.
    I deleted the LUNS from the OVM Storage Array definition
    I tried to remove the iSCSI Initiators, but it said it couldn't do that with an iSCSI generic initiator.
    I deleted the OVM Storage array
    I then went to the Iomega NAS to delete the LUNs, and it says that there are still VM Server iSCSI Initiators connected.
    I suspect I will be able to get rid of the iSCSI initiators connecting to the NAS by shutting down my two VM Servers and then rebooting the NAS: since the VM servers aren't running, there will be no connections coming from them, so I will be able to delete the LUNs and turn off iSCSI.
    Should deleting the OVM Storage Array definition have deleted the iSCSI Initiators, or did I miss a step somewhere?
    At this point, is there an easier way to clean up without rebooting everything?

    After hooking up a couple of OVM3.0.2 servers setup to an HP P4000 VSA - I noticed that when deleting iSCSI LUNs first from VM Manager, and then from the VSA - after rebooting the hosts - the OVM servers would try to connect to the LUNs even though they had been deleted.
    If the LUN was deleted from VM Manager, but not the SAN - the LUN would just re-appear.
    If the LUN was deleted from VM Manager, along with the iSCSI Disk Array registration - the LUN would end up in 'unmanaged iSCSI array'.
    For some reason I can't seem to un-assign a server initiator from the array's 'default access group'; perhaps this is the problem.
    The only way to get around this is to delete the entire array registration!
    I created an SR, and the latest news on that is that it's been filed as a bug...
    Jeff
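    On the OVM server side, a stale session like this can usually be torn down with open-iscsi's iscsiadm instead of rebooting everything. A rough sketch, with the target IQN and portal as placeholders taken from the session listing:

    ```shell
    # List active sessions to identify the stale target
    iscsiadm -m session

    # Log out of the target (IQN and portal are placeholders)
    iscsiadm -m node -T iqn.2001-04.com.example:lun1 -p 192.168.1.50:3260 -u

    # Delete the node record so the initiator stops logging back in at boot
    iscsiadm -m node -T iqn.2001-04.com.example:lun1 -p 192.168.1.50:3260 -o delete
    ```

    Once no sessions remain, the NAS should let you delete the LUNs without a reboot.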

  • Couldn't see LUNs for Compaq MSA1000 storage array from a Solaris box

    Hi All,
    I want to connect a Compaq StorageWorks SAN array to a Solaris 10 box. I can see the array as connected, but its state is unusable.
    How can I see the LUNs from the storage on the Solaris box?
    dmesg output:
    Feb  1 08:29:54 testappl        ndi_devi_online: failed for array-controller: target=11000 lun=0 ffffffff
    Feb  1 08:30:25 testappl fctl: [ID 517869 kern.warning] WARNING: fp(7)::PLOGI succeeded: no skip(2) for D_ID 11000
    Feb  1 08:30:25 testappl genunix: [ID 599346 kern.warning] WARNING: Page83 data not standards compliant COMPAQ   MSA1000          2.38
    Feb  1 08:30:25 testappl scsi: [ID 243001 kern.info] /pci@1d,700000/SUNW,emlxs@2/fp@0,0 (fcp7):
    Feb  1 08:30:25 testappl        ndi_devi_online: failed for array-controller: target=11000 lun=0 ffffffff
    Feb  1 08:47:27 testappl emlxs: [ID 349649 kern.info] [ 5.05F8]emlxs3: NOTICE: 730: Link reset.
    Feb  1 08:47:27 testappl emlxs: [ID 349649 kern.info] [ 5.0337]emlxs3: NOTICE: 710: Link down.
    Feb  1 08:47:30 testappl emlxs: [ID 349649 kern.info] [ 5.054D]emlxs3: NOTICE: 720: Link up. (2Gb, fabric, initiator)
    Feb  1 08:47:30 testappl genunix: [ID 599346 kern.warning] WARNING: Page83 data not standards compliant COMPAQ   MSA1000          2.38
    Feb  1 08:47:30 testappl scsi: [ID 243001 kern.info] /pci@1d,700000/SUNW,emlxs@2/fp@0,0 (fcp7):
    Feb  1 08:47:30 testappl        ndi_devi_online: failed for array-controller: target=11000 lun=0 ffffffff
    cfgadm -al output:
    bash-3.00# cfgadm -al
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c0                             scsi-bus     connected    configured   unknown
    c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
    c1                             scsi-bus     connected    configured   unknown
    c1::dsk/c1t0d0                 disk         connected    configured   unknown
    c1::dsk/c1t1d0                 disk         connected    configured   unknown
    c1::dsk/c1t2d0                 disk         connected    configured   unknown
    c2                             fc           connected    unconfigured unknown
    c5                             scsi-bus     connected    unconfigured unknown
    c6                             fc           connected    unconfigured unknown
    c7                             fc-fabric    connected    configured   unknown
    c7::500805f3000186d9           array-ctrl   connected    configured   unusable
    usb0/1                         usb-device   connected    configured   ok
    usb0/2                         unknown      empty        unconfigured ok
    usb1/1                         unknown      empty        unconfigured ok
    usb1/2                         unknown      empty        unconfigured ok
    Thanks in advance.

    Looks like the LUN is not configured correctly on the storage array; the kernel can't bring the LUN online.
    I have seen this on various boxes. Most of the time it was either that the LUN wasn't set online on the SAN box, or there was a SCSI reservation on the LUN.
    Can you access the LUN from another host?
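    For reference, once the LUN is presented correctly on the array side, the usual Solaris 10 sequence to pick it up (using controller c7 from the cfgadm output above) is roughly:

    ```shell
    # Configure the fabric-attached device on controller c7
    cfgadm -c configure c7

    # Rebuild /dev and /devices links for newly discovered LUNs
    devfsadm -v

    # Confirm what the FC stack and the format utility now see
    luxadm probe
    echo | format
    ```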

  • What are some of the advantages/disadvantages of using FC or FCoE with a storage array (EMC)? What is Cisco's recommendation and why?


    This is what I'm considering:
    Power:
    1050W Seasonic 80PLUS Gold Power Supply
    Motherboard:
    ASUS, Rampage IV Extreme, 2011, SATA6, True Quad SLI/XFIRE, Extreme OC Capable
    CPU:
    Intel Core i7-4960X 3.60GHz, 2133MHz DDR3, 15MB Cache, Hex Core Processor
    System Memory:
    16GB (4 x 4GB) , PC3-19200, 2400MHz (G.Skill - x79)
    Video Adapter 1:
    NVIDIA GeForce GTX 780ti 3GB GDDR5
    Video Adapter 2:
    None
    Optical 1
    16X Blu-ray Burner - 16xBD-R, 2xBD-RW/16xDVD-R, 8xDVD-RW/48xCD-R, 24xCD-RW
    Bay Accessories 1
    NZXT Aperture M Multi-media Hub
    RAID [Requires Identical Hard Drive Selections]
    RAID 0 | 2 Disk Min. Striped set, improved performance, additional storage drive highly recommended
    Hard Drive 1
    Crucial M550 1TB 2.5" SATA III 6 Gb/s Solid State Drive
    Hard Drive 2
    Crucial M550 1TB 2.5" SATA III 6 Gb/s Solid State Drive
    Hard Drive 3
    Crucial M550 1TB 2.5" SATA III 6 Gb/s Solid State Drive
    Sound Card
    Creative Labs Sound Blaster Z PCI Express

  • SPARC Storage Array 102 needs to run under Solaris 10

    I need to get a SPARC Storage Array to run under Solaris 10. The soc driver that is needed is not supported under Solaris 10. I took the packages needed off the Solaris 9 CD and installed them, but no luck. I tried modload and modinfo, but it was not there.
    The package loaded a file into /kernel/drv/sparcv9/soc. I have Solaris 9 on another disk on the same system and looked at that system, trying to copy and edit the files on 10, but no device paths show up in the system. I copied a file called soc that was in /kernel/drv/, and I edited /etc/driver_aliases and path_to_inst, but no good.
    There are no entries in the /devices/ directory that reference the soc path. I do not know if this can work, but I would really like to get it working.

    The fibre interface on that beast is NOT a Gigabit connection,
    such as is common on current systems.
    It was a 100Mbps connection through an array controller,
    running with a 40MHz microSPARC II control module.
    You'd exceed its throughput capabilities in any attempt
    to read or write data at or beyond that 100Mbps rate.
    I don't anticipate much assistance in these forums for a technology that goes back a decade-and-a-half.
    Contributors familiar with products from that era stopped posting in 2006
    when the previous supportforum.sun.com was merged into this one.
    We'll all look forward to your guidance after you succeed.
    You'll teach the rest of us.
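    As the reply says, the soc driver simply is not delivered for Solaris 10, so this is unlikely to succeed; but for completeness, the supported way to register a manually copied driver is add_drv rather than hand-editing path_to_inst. A sketch, where the 'SUNW,soc' alias is an assumption based on typical Sun device naming:

    ```shell
    # Register the driver binary previously copied to /kernel/drv/sparcv9/soc
    # (the 'SUNW,soc' alias is an assumption, not a confirmed binding)
    add_drv -i '"SUNW,soc"' soc

    # Rebuild the device tree and check whether the module actually attaches
    devfsadm -v
    modinfo | grep soc
    ```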

  • SPARC Storage Array working under Solaris 10

    I need to get a SPARC Storage Array to run under Solaris 10. The soc driver that is needed is not supported under Solaris 10. I took the packages needed off the Solaris 9 CDs and installed them, but no luck. I tried modload and modinfo, but it was not there.
    The package loaded a file into /kernel/drv/sparcv9/soc. I have Solaris 9 on another disk on the same system and looked at that system, trying to copy and edit the files on 10, but no device paths show up in the system. I copied a file called soc that was in /kernel/drv/, and I edited /etc/driver_aliases and path_to_inst, but no good.
    There are no entries in the /devices/ directory that reference the soc path. I do not know if this can work, but I would really like to get it working.

    Don't cross-post.
    You already have an active thread on this topic in the other forum.
    Cross-posting fragments any responses all over the universe.
    As time passes and the threads drift apart,
    the responses become irrelevant.
    That's why it's poor forum etiquette.
    You end up diluting the information.

  • Storage Array???

    Dear All,
    Yesterday I was given a Storage Array and I have no idea what it is.
    It is grey with 7 M2949ESP drives (9 GB Fujitsu HDDs). I have connected it to my Solaris 9 box and it is showing lots of 2 GB drives. Can anyone tell me how I can identify the unit and reconfigure the drives to be 7 x 9 GB drives?
    thanks in advance,
    Hugh

    Some of the early Sun storage arrays had 7 slots, but to be sure and to identify your item correctly we would need part numbers.
    Here are a couple of examples of 7-slot SPARCstorage arrays:
    http://sunsolve.sun.com/handbook_pub/Systems/SSA_214_7Slot/SSA_214_7Slot.html
    http://sunsolve.sun.com/handbook_pub/Systems/SSA_219_7Slot/SSA_219_7Slot.html
    I think you have the 219 model, as it supports disks similar to your Fujitsu M2949.

  • SVM - Storage array migration effort

    We have several Solaris servers which have both internal drives not under SVM control and Array based SVM devices built into filesystems.
    We need to do 2 things:
    1) Migrate internal drives to Array based luns
    2) Migrate existing SVM luns to new storage array.
    I found and read through the Admin guide for SVM, and it appears to address "migration" only in relation to creating a mirror for an existing volume.
    What I would like to do is replace (or mirror) the underlying devices of an existing filesystem, not the entire filesystem at the top level.
    I'm much more familiar with Linux LVM, where I would add new PVs to an existing VG and use the pvmove command.
    I don't see a similar capability in Solaris 10 SVM, so I wanted to check with the more experienced folks on how to proceed with this array migration effort.
    Thanks

    We are moving from a 10+ year old HDS to a new VNX.
    While EMC can physically move the data, I don't have any way (that I know of) to change the SVM config to use the new devices.
    AFAIK I would have to do the migration at the SVM layer in order for the filesystems to be properly linked to their underlying LUNs.
    As for the existing filesystems on local disk, those can be handled by unmounting the FS and moving it, but I don't see an easy way to migrate the root volume without a rebuild of the root filesystem...
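    The mirror-based approach from the SVM Admin Guide can be applied per metadevice, which is effectively SVM's equivalent of pvmove. A hedged sketch, assuming d10 is an existing mirror with submirror d11 on an old-array LUN; the new-array device name is a placeholder:

    ```shell
    # Build a submirror on the new array's LUN (device name is a placeholder)
    metainit d12 1 1 c3t600601d0s0

    # Attach it; SVM resyncs onto the new array in the background
    metattach d10 d12

    # Watch the resync, then drop and clear the old-array submirror
    metastat d10
    metadetach d10 d11
    metaclear d11
    ```

    Repeating this per mirror migrates the underlying devices without touching the filesystems on top.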

  • Enabling deduplication on existing DPM storage

    Hi,
    We have a DPM 2012 SP1 server running on Windows Server 2012. We have two locally attached disks in the server for storing backups: 3.7 TB + 6.5 TB, assigned to the DPM storage pool. Currently 1.9 TB of free space is left in the DPM storage pool, so we are thinking of enabling data deduplication on the DPM storage. Please help me with the queries below.
    1) Is it possible to enable deduplication on the DPM storage pool?
    2) If yes, is there any benefit or impact?
    Thanks,
    Umesh.S.K

    Hi Umesh,
    thanks.
    I have been looking through the TechNet articles for DPM 2012 SP1 and it only talks about the ability to backup deduplicated volumes. I know DPM 2012 and 2012 R2 can backup deduplicated volumes but don't offer deduplication themselves.
    I have also looked at Server 2012 and 2012 R2 with regards to deduplication of DPM storage pools and as outlined in this TechNet post, it was not possible with either at the time of its writing.
    https://social.technet.microsoft.com/Forums/en-US/ae2dd0b6-27a1-4a5c-a10a-b751dd8c0ff8/dpm-2012-r2-storage-pool-deduplication?forum=dpmstorage
    However, a blog post from earlier this year announces the new ability to deduplicate DPM storage pools under very specific configurations. This scenario is enabled by the combination of DPM 2012 R2 UR4 and Windows Server 2012 R2 hosts and file servers with the latest KBs on them.
    Essentially it looks to me as though allowing DPM to back up to VHDX files on a SOFS just means you can use the same deduplication technology that is used for VDI VHDX files shared on a SOFS, so it is still a fairly specific configuration.
    http://blogs.technet.com/b/dpm/archive/2015/01/06/deduplication-of-dpm-storage-reduce-dpm-storage-consumption.aspx
    I have a couple of people hoping this changes, but unless I'm wrong I think this is the case as it currently stands.
    If this is not the case, it would be good to hear about it.
    Kind regards Michael

  • Is it possible to have a cluster Disk from cloud storage

    I have a use case where I want to keep the destination array of my primary storage array in the cloud. Using cloud storage controllers, I believe it's possible to have data replication to the remote cloud storage.
    Now, using Microsoft clustering, I want to create a cluster of two nodes, Node A and Node B. Both nodes are on my org's premises.
    To Node A I connect the local storage array and present a SAN disk. This SAN disk is imported into the Microsoft failover cluster as a cluster disk.
    To Node B I want to connect a virtual disk obtained directly from the cloud. This cloud disk is the destination disk of the primary disk connected to Node A (due to the data replication between the two disks, they will always be trying to sync).
    My question is whether it's possible to have such a virtual disk carved out of cloud storage that can be included as a cluster disk in a Microsoft failover cluster, while at the backend acting as the destination disk of my local storage array disk.
    sk

    So I think you have two issues here. The first is that you want replication between your local SAN and cloud storage, for which I haven't seen any commercially available options so far.
    The other is that you want to use a disk on top of cloud storage. In Azure you have the option to use Azure Drive, which implements an NTFS drive on top of Blob Storage. However, this only works from inside Azure; you can't mount a drive from Azure Storage to a local machine.
    Another point about your setup is the cluster resource: typically you set up a cluster using a shared resource, i.e. both of your nodes access the same resource. In your outlined scenario, though, you're trying to replace the shared resource with a replicated resource, and I'm not quite sure that is a valid setup for a cluster. In any case, you won't be able to use Azure Drive as a shared resource, since both nodes would need R/W access while Azure Drive allows exclusive R/W for only one node.
    It seems there are some issues with your concept outside of the Azure part which you may need to verify.

  • Oracle 10g install using storage array

    Hello,
    If I were to install Oracle 10g DB on a RH Linux server, could I use a storage array for all my dbf, log, control files, etc.?
    Thanks.

    damorgan wrote:
    If by storage array you mean SAN or NAS yes.
    My recommendation, however, would be to put one control file on local disk.
    I hope the OP doesn't read that as a recommendation to have only one control file! ;-)
    Better:
    "My recommendation, however, would be to put one <i>copy of your multiplexed</i> control file on local disk."

  • Migrate zpools to new storage array

    I am in the process of migrating to a new EMC storage array. I used EMC SRDF to copy all tracks from the current disks to the new disks, so all the new disks look identical except for the actual device path, since the WWN is different on the new array.
    I was hoping I could simply unmask/unmap the old SAN, mask and map the new SAN, and ZFS would be able to import the pools from the new SAN (like you can with Veritas), but I didn't take into consideration that the device paths/names would be different and that ZFS wouldn't like it.
    Is there any way to get Solaris to see the ZFS pools that now exist on the new storage array, so that I don't have to mirror and then remove the disks from the pool and pretty much wipe out all of the sync work I've already done with the EMC software?
    Hopefully this makes sense, but please let me know if I need to explain further.

    No issue, works as expected, I just didn't split the synchronization between the old and new devices, so the new disks were still write disabled in EMC land. Once I split the sync, everything imported without a hitch, so it was my stupid mistake.
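    For anyone hitting the same wall: ZFS identifies pool members by the labels written on the disks themselves, not by device path, so after a clean SRDF split the standard export/import sequence should find the pool on the new paths (the pool name here is a placeholder):

    ```shell
    # Export the pool while it is still on the old array
    zpool export mypool

    # After remapping the new-array LUNs to the host, rebuild device links
    devfsadm -Cv

    # 'zpool import' with no arguments scans /dev/dsk labels and lists
    # importable pools regardless of their new c#t#d# names
    zpool import
    zpool import mypool
    ```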

  • Problem with recovering data from a BitLocker-enabled hard disk with bad sectors

    Hi,
    I have a Lenovo T430 laptop with Windows 7 and a BitLocker-enabled hard disk. While working I encountered blue screen errors multiple times. After some time, the laptop stopped booting and started showing the error 'A disk read write error has occurred. Press Ctrl+Alt+Del to restart'. I tried connecting the hard disk to a different PC as a secondary drive and checked the disk to recover the data. The 500 GB disk is showing as unallocated space and I am not sure how to recover the data from the hard disk. I would appreciate your help recovering the data from the corrupted disk.
    I used the Lenovo Diagnostics tools available in the BIOS, and they showed 48 bad sector errors on the hard disk. I also used a Windows 7 CD and tried auto repair, but it looks like it didn't do anything.
    Thanks in advance!

    Hi SenneVL,
    Since there are 48 bad sectors on your hard disk, the system can no longer boot, and the data might not be recoverable in the normal way; you'd better turn to a data recovery company for help.
    Regards
    Wade Liu
    TechNet Community Support
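    Before going straight to a recovery company, it may be worth trying Microsoft's BitLocker Repair Tool, which exists for exactly this case (a damaged BitLocker volume), provided you still have the recovery password or key. A hedged sketch from an elevated command prompt on the second PC; the drive letters and recovery password are placeholders:

    ```
    :: E: is the damaged BitLocker disk, F: an empty volume of equal or
    :: larger size that will receive the recovered contents (placeholders)
    repair-bde E: F: -RecoveryPassword 111111-222222-333333-444444-555555-666666 -Force

    :: Alternatively, if the volume still unlocks, unlock it and copy data off:
    manage-bde -unlock E: -RecoveryPassword 111111-222222-333333-444444-555555-666666
    ```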

  • Error while registering an iSCSI drive on a Storage array

    Hi,
    I've installed Oracle VM Manager 3.0.2 on Oracle Linux 6.1. When I tried to register an iSCSI drive on the Storage array, I got the following error:
    "java.lang.NullPointerException
    ADF_FACES-60097:For more information, please see the server's error log for an entry beginning with: ADF_FACES-60096:Server Exception during PPR, #4"
    Also, there are no items to be selected in the Storage Plug-in.
    I read that the Storage plug-in is built in with version 3.0 and above.
    Any help in this regard will be greatly appreciated.
    Thanks,
    Prajeesh

    Prajeesh wrote:
    Also there is no items to be selected in the Storage Plug-in.
    Have you discovered at least one Oracle VM Server yet? You can't add any storage until you do that successfully.

  • I have a small production client looking to run 1 workstation running Mac Lion 10.7.3 and two workstations running Windows 7 64-bit. They will all be talking to the same storage array through 8Gb FC. What is the most cost-effective way to do this?


    Thank you for your help.
    The client has already made the jump to 8Gb, including HBAs, switch and RAID storage.
    The other question is whether they need a separate Mac server to run the metadata, or whether they can use the current Mac they are running for this.
    The Mac is a 201073 model, they say, with 12 dual-core 2.66 GHz processors and 16 GB of memory. This system is currently doing rendering. It has the XSAN client, but I understand that for the solution to work they also need to run XSAN Server on an MDC.
