Removing a LUN/disk in Solaris 10

What is the Correct(TM) way to remove a disk/LUN on a Solaris 10/SPARC box with multipathing? And can the same method be followed if the LUN has already been removed externally (on the SAN)?
I have a server in exactly that situation - LUN already removed - which has 'phantom' entries in format, and the device still shows up in cfgadm (as unusable). I have tried "cfgadm -c unconfigure <device>". The system still seems to be running fine, but it does not seem to be an ideal situation, and I need to shuffle around more LUNs.
Thanks.
PS I would like to be able to remove the LUN/disk (properly) without a reboot, since this is a 24x7 environment.
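For reference, the sequence in Sun's FC/multipathing docs is roughly the one below. This is a sketch only: the device name and WWN are placeholders borrowed from the log further down, and the exact options vary by Solaris 10 update, so verify against cfgadm_fp(1M) and luxadm(1M) on your release. First make sure nothing uses the LUN (umount it, pull it out of vfstab, ZFS pools and SVM metadevices), then:
# luxadm -e offline /dev/rdsk/c2t600507630EFE0A42000000000000100Bd0s2   (offline the device; per-path on non-MPxIO setups)
# cfgadm -al -o show_SCSI_LUN                                           (list FC targets/LUNs per controller)
# cfgadm -c unconfigure c2::500507630e860a42                            (unconfigure the target on each controller)
# devfsadm -Cv                                                          (prune stale /dev links and format entries)
For a LUN that was already unmapped on the array and now shows as unusable, the hardware-specific option is meant to clean it up:
# cfgadm -o unusable_SCSI_LUN -c unconfigure c2::500507630e860a42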

Error? No error, the machine froze and eventually rebooted.
Here are the relevant lines from /var/adm/messages. Looks like it started to work and then bit off more than it could chew?
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@7c0/pci@0/pci@8/SUNW,qlc@0/fp@0,0 (fp2) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0 (fp0) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@780/pci@0/pci@8/SUNW,qlc@0/fp@0,0 (fp1) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@780/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0 (fp3) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker genunix: [ID 834635 kern.info] /scsi_vhci/ssd@g600507630efe0a42000000000000100b (ssd10) multipath status: failed, path /pci@7c0/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0 (fp4) to target address: w500507630e860a42,0 is offline Load balancing: round-robin
Jul 22 11:19:26 dcsnetworker unix: [ID 836849 kern.notice]
Jul 22 11:19:26 dcsnetworker panic[cpu12]/thread=2a101789cc0:
Jul 22 11:19:26 dcsnetworker unix: [ID 799565 kern.notice] BAD TRAP: type=30 rp=2a101788d00 addr=736440672000 mmu_fsr=9
Jul 22 11:19:26 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:26 dcsnetworker unix: [ID 839527 kern.notice] sched:
Jul 22 11:19:26 dcsnetworker unix: [ID 756718 kern.notice] data access exception:
Jul 22 11:19:26 dcsnetworker unix: [ID 901159 kern.notice] MMU sfsr=9:
Jul 22 11:19:26 dcsnetworker unix: [ID 820745 kern.notice] Data or instruction address out of range
Jul 22 11:19:26 dcsnetworker unix: [ID 162203 kern.notice] context 0x0
Jul 22 11:19:26 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:26 dcsnetworker unix: [ID 101969 kern.notice] pid=0, pc=0x11c2460, sp=0x2a1017885a1, tstate=0x80001605, context=0x0
Jul 22 11:19:26 dcsnetworker unix: [ID 743441 kern.notice] g1-g7: 120a800, 1, 600051bbc70, 0, 78, 0, 2a101789cc0
Jul 22 11:19:26 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:26 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788a20 unix:die+9c (30, 2a101788d00, 736440672000, 9, 2a101788ae0, ffff)
Jul 22 11:19:26 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000030 0000000000000030 0000000000000000
Jul 22 11:19:26 dcsnetworker %l4-7: 0000000000000000 0000000001062e74 000000000000000b 0000000001080c00
Jul 22 11:19:26 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788b00 unix:trap+754 (2a101788d00, 10000, 0, 9, 300015e2000, 2a101789cc0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 000000000183d180 0000000000000030 0000000000000000
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000000 0000000001062e74 000000000000000b 0000000000010200
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788c50 unix:ktl0+64 (2f73736440673630, 0, 60005385d40, 181a7c8, 0, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000300015e2000 0000000000000060 0000000080001605 000000000101dd08
Jul 22 11:19:27 dcsnetworker %l4-7: 000000000183d180 0000000001062e74 000000000000000b 000002a101788d00
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788da0 unix:mutex_vector_enter+478 (18b82b0, 2f737364406737e0, 0, 181a7c8, 2f73736440673631, 2f73736440673630)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000000 0000000000000000 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 000000000183d180 0000000001062e74 0000000001062e74 0000000000000080
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788e50 genunix:turnstile_block+1b8 (2f73736440673630, 0, 60005385d40, 181a7c8, 0, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000000000 0000000000000000 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 000000000183d180 0000000001062e74 0000000001062e74 0000000000000080
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788f00 unix:mutex_vector_enter+478 (300015e2000, 300015e2000, 60005385d40, 2f73736440673630, 2f73736440673630, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000001 0000000000000000 0000000000000000 0000000001846c20
Jul 22 11:19:27 dcsnetworker %l4-7: 00000600053725c0 2f73736440673631 fffbb833b2219ac4 000000000181a400
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101788fc0 scsi_vhci:vhci_commoncap+b0 (ffffffffffffffff, 13, 0, 1, 1, 70491520)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005372a38 0000000000000000 0000000070495c80 000000000000000a
Jul 22 11:19:27 dcsnetworker %l4-7: 00000600053725c0 0000060005385d40 0000060005385980 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789070 ssd:ssd_unit_detach+464 (60005352a10, 60005351b00, 0, 70491530, 1, 3)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005372a38 0000000000000000 0000000070495c80 000000000000000a
Jul 22 11:19:27 dcsnetworker %l4-7: 00000600053725c0 00000300008e54d8 0000000000000000 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789120 genunix:devi_detach+a4 (60005352a10, 0, 40001, 0, 7b65ef58, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000fffdffff 0000000000020000 0000000000000000 0000060005352a78
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000004 00000000fffdfc00 000000000185d400 0000000000000005
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017891f0 genunix:detach_node+64 (60005352a10, 40001, 0, 50020000, 40001, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000fffdffff 0000000000020000 0000000000000000 0000060005352a78
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000004 00000000fffdfc00 000000000185d400 0000000000000005
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017892a0 genunix:i_ndi_unconfig_node+144 (60005352a10, 12c, 40001, 10cc844, 14, 185d790)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000fffdffff 0000000000020000 0000000000000000 0000060005352a78
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000004 00000000fffdfc00 000000000185d400 0000000000000005
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789350 genunix:i_ddi_detachchild+14 (60005352a10, 40001, 0, ffffffffffffffff, 300008e54d8, 60005385d40)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000040001 0000000000000003 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000000 0000000000040011 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789400 genunix:devi_detach_node+64 (60005352a10, 40001, 40001, 2a101789578, 80000, 40001)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000040001 0000000000000003 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000000 0000000000040011 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017894c0 genunix:ndi_devi_offline+188 (60005352a10, 0, 0, ffffffffffffffff, 300008e54d8, 60005385d40)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000000000 0000000000040001 0000000000000003 000002a101789cc0
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000000 0000000000040011 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789580 genunix:___const_seg_900000101+15af4 (300008e54d8, 60005352a10, 0, 30002202bc8, 300015e2000, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005377500 00000300008f57c0 00000600064b0f68 00000300008f5820
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000020 0000000000000003 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789630 genunix:___const_seg_900000101+15f38 (300008f57c0, 60005377500, 0, 4, 0, 300008e54d8)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005377500 00000300008f57c0 00000600064b0f68 00000300008f5820
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060005352a10 0000000000000020 0000000000000003 0000000000000000
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a1017896e0 genunix:mdi_pi_free+254 (600065c3680, 18cc058, 0, 300008f5820, 60005377540, 0)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000060005377500 00000300008f57c0 00000600064b0f68 00000000000fffff
Jul 22 11:19:27 dcsnetworker %l4-7: 00000000000ffc00 0000000000000000 0000000000000040 0000000001259b24
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789790 fcp:ssfcp_offline_child+198 (600064b8718, 600065c3680, 0, 0, 1, 2a10178994c)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000000700b8c08 0000000000000000 000006000647e698 0000060005352a10
Jul 22 11:19:27 dcsnetworker %l4-7: 0000000000000001 00000000700b8c00 0000000000000000 000006000534e800
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789860 fcp:ssfcp_trigger_lun+2f4 (600064b8718, 0, 60005352a10, 2, 2, 2a10178994c)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 00000600065c3680 0000000000000001 0000000000000002 0000000000000001
Jul 22 11:19:27 dcsnetworker %l4-7: 000006000534e800 00000600053d4e00 00000000700b8c08 00000300008e54d8
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789950 fcp:ssfcp_hp_task+64 (6002b1a5758, 1, 600064b8718, 2, 2, 2)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000010000 00000600052f2f70 00000600052f2f78 00000600052f2f7a
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060001414e48 0000000000000002 0000000000000000 000006000534e800
Jul 22 11:19:27 dcsnetworker genunix: [ID 723222 kern.notice] 000002a101789a00 genunix:taskq_thread+1a4 (600052f2fa0, 600052f2f48, 10001, 447cc4dce7954, 2a101789aca, 2a101789ac8)
Jul 22 11:19:27 dcsnetworker genunix: [ID 179002 kern.notice] %l0-3: 0000000000010000 00000600052f2f70 00000600052f2f78 00000600052f2f7a
Jul 22 11:19:27 dcsnetworker %l4-7: 0000060001414e48 0000000000000002 0000000000000000 00000600052f2f68
Jul 22 11:19:27 dcsnetworker unix: [ID 100000 kern.notice]
Jul 22 11:19:27 dcsnetworker genunix: [ID 672855 kern.notice] syncing file systems...
Jul 22 11:19:27 dcsnetworker genunix: [ID 733762 kern.notice] 2
Btw, drifting a little from the original topic, but does anyone know how to reset the SC on a T2000 without power cycling the machine? Is there any way of doing a reset remotely (there are no scadm functions in the CMT architecture, that I know of)? Barring a remote reset, should I be able to connect to the terminal using the local serial mgmt port as opposed to the usual net mgmt port?

Similar Messages

  • Removal of ASM Disks

Hi, I am using Oracle 10gR2 on Solaris 10. I have 12 ASM disks in a Disk Group. I want to remove 3 ASM disks. Can I do this operation online? Will my data be affected? How should I proceed? Once I have removed the ASM disks, can I safely detach those disks physically? Please help.
    regards

    ahsen.javaid wrote:
Hi, I am using Oracle 10gR2 on Solaris 10. I have 12 ASM disks in a Disk Group. I want to remove 3 ASM disks. Can I do this operation online?
As already commented, yes. I would like to give you a practical example of how utterly neat ASM is in this regard and why it is IMO a major mistake not to use ASM.
    We had to swap storage systems. This means that we have to migrate an entire Oracle database (over 1TB in size) from one set of LUNs to a completely different set of LUNs (on a different storage system).
    The new LUNs are added to the existing diskgroup. The old LUNs are dropped (the drop is of course not immediate). A rebalance is issued. The next morning the entire database has been moved and restriped on the new set of LUNs and the old LUNs can now be removed from the system. This while the database was up and running and normal processing continued uninterrupted (this is a 24x7 database).
Then I get comments from some customers (like from a major local financial institution recently) that "No, we do not want to use ASM as we would rather use Veritas". People like that are missing the point of what ASM is. Totally.
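As a sketch of that storage swap, assuming a diskgroup named DATA and placeholder device paths (run from SQL*Plus as sysdba on the ASM instance; a single statement covers the add, the drop and the rebalance):
SQL> ALTER DISKGROUP data
  2    ADD DISK '/dev/rdsk/c3t0d1s4', '/dev/rdsk/c3t0d2s4'
  3    DROP DISK data_0001, data_0002
  4    REBALANCE POWER 8;
Watch V$ASM_OPERATION for progress; the old LUNs are safe to detach physically once V$ASM_DISK shows their HEADER_STATUS as FORMER.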

  • How to remove a hard disk file on storage

    Hello,
In my process, I would like to remove an additional disk.
To achieve that task, I have found "Remove VM device". It is a VMware activity that I can use to remove any device (including hard disks), and I have not found another activity to do that.
But the file on storage is not deleted; only the logical disk on the VM is removed.
So I was wondering if there could be another way to also remove the file on the storage?
Does a dedicated "Remove Hard Disk" VMware activity exist in Tidal versions later than 2.3.1?
    thank you.
    Best regards,
    Nicolas

    There is no specific "Remove Hard Disk" activity in 2.3.1 or the upcoming 2.3.4.
    Have you tried the PowerCLI command "Remove-HardDisk"? If that works for you, in 2.3.1 you can use a Windows PowerShell activity to run PowerCLI and execute a simple script that runs this command against the VM. Unfortunately, this requires connecting to vCenter directly (not through the adapter) and you will need to supply credentials.
Or, better, use the "Execute PowerCLI Script" activity in the upcoming 2.3.4 VMware Adapter, where you can run PowerCLI activities that use your existing VMware vCenter Server target's connection and don't need to supply credentials separately.
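A minimal sketch of that PowerCLI route (vc.example.com, MyVM and the disk name are placeholders; -DeletePermanently is what deletes the backing file from the datastore instead of just detaching the disk):
Connect-VIServer -Server vc.example.com -Credential (Get-Credential)
Get-HardDisk -VM MyVM -Name "Hard disk 2" | Remove-HardDisk -DeletePermanently -Confirm:$false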

• How do I remove the hard disk from the computer? I have an old iMac, model M5521, circa 2000. I need to wipe the disk clean to DOD specs before donating it.

    I have an iMac Model M5521, circa 2000, which I must wipe the hard drive clean to DOD specs before donating it.
    I have the DOD software on another non-apple computer.  I need to remove the hard disk drive from the iMac to do this.
    How do I get to the hard drive?  I find only two screws on the back of the monitor/case and they do not help.  What am I missing?
    Thanks

It's not too hard, but you have to take off the bottom cover of the case. These instructions should show what you need to do.
    http://www.scribd.com/doc/103447/iMac-G3-Disassembly-Guide
    I pulled the original 10G hard drive from our M5521 and replaced it with a larger one. The whole thing took under 45 minutes, including the 15 minutes it took me to recover a screw that fell into the works.

  • Download boot disk of Solaris 2.5.1

    Hi, all.
    Where can I download a boot disk of Solaris 2.5.1?
    Thanks

What do you mean? An ISO image of the boot CDROM? Or do you want to copy a boot disk from one system to another? I don't know about the former; my guess is that unless you are booting from already-existing media on a CDROM on a system in your LAN, which is doable, the answer is "no".
The latter idea, using dd to copy one disk to another, is doable with caveats. The disks must be of the same type and the system hardware configs must be the same. There are help documents on SunSolve about the details. There are many risks, and even though you get a copy, it might fail to boot.
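A sketch of that dd copy, assuming two identical disks at c0t0d0 (source) and c0t1d0 (target); slice 2 conventionally maps the whole disk on SPARC, and the bootblock has to be reinstalled separately:
# dd if=/dev/rdsk/c0t0d0s2 of=/dev/rdsk/c0t1d0s2 bs=1024k
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
Run fsck on the copy before trying to boot from it.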

• My Mac G4 running OS X will not display an icon for the zip disk that I inserted into my computer's built-in Zip Drive, therefore I cannot remove the zip disk from my computer. What can I do?


Thanks Old Comm Guy, BDAqua and Texas Mac Man for your advice and replies to my question and problem with my Zip Drive. However:
1. Depressing the mouse button on startup did not eject the zip disk.
2. To examine the front of the zip drive, I had to remove many screws and several plastic case coverings. Upon doing that I discovered that, unfortunately, there is no hole in front of my built-in zip hardware drive for me to insert a paper clip to manually eject the zip disk.
3. I went to the Utilities folder in my Mac OS X Applications folder, but I could not find the iomega zip drive in there.
Also, I did go into my "9" System folder and then to the Extensions folder within it and did find an icon of an Iomega Driver. When double-clicking on it, a window came up stating I was opening the application "ColorSync Extension" for the first time, and asking if I was sure I wanted to open this application. Upon clicking open nothing happened - nothing opened.
I also went into the "System X" folder > Library folder > Extensions folder > IomegaSAM.kext icon and double-clicked on it, and a small window opened stating "Compiling file list", however nothing opened, it just continued to compile, so I closed it.
Within my Mac OS X HD > Applications > Iomega folder > Iomega Tools.app, a small window opened up with several options (Erase, Protect, Disk Info and Drive Info). Clicking on Drive Info, a message says: "No Iomega Drives or no Iomega Driver found." Therefore, I have gone to Mac, Iomega and other websites trying to find a driver for the built-in Zip Drive in my Mac G4 OS X 10.4.11, but have not really found any that work.
CAN ANYONE TELL ME IF THERE IS A WEBSITE WHERE I CAN DOWNLOAD A NEW DRIVER FOR MY ZIP DRIVE?
Thanks, Peterfromcrystallake

  • Problem encountered installing new disk on Solaris VMware

    Hi Guys,
I'm making my first attempt at creating a new disk on a Solaris 10 filesystem, but I'm having a few issues mounting the disk. I've been following instructions from google searches but am now stuck and really need some expert advice.
    Details:
    Host OS: Windows Vista
Guest OS: Solaris 10 x64, UFS filesystem
VMware Workstation: Ver 6.0
    Steps undertook:
    1) Shut down the VM; Edit the VMware configuration: Select Virtual Machines -> Settings; Added new hard disk device of 20GB (SCSI:0:0)
    2) Booted Solaris VM
    3)
    *# format*
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
    1. c2t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
    /pci@0,0/pci1000,30@10/sd@0,0
    Specify disk (enter its number): 1
    selecting c2t0d0
    [disk formatted]
    4)
    format> p
    PARTITION MENU:
    0 - change `0' partition
    1 - change `1' partition
    2 - change `2' partition
    3 - change `3' partition
    4 - change `4' partition
    5 - change `5' partition
    6 - change `6' partition
    7 - change `7' partition
    select - select a predefined table
    modify - modify a predefined partition table
    name - name the current table
    print - display the current table
    label - write partition map and label to the disk
    !<cmd> - execute <cmd>, then return
    quit
    partition> p
    Current partition table (original):
    Total disk cylinders available: 2607 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 0 0 (0/0/0) 0
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    5)
    partition> 0
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 0 0 (0/0/0) 0
    Enter partition id tag[unassigned]:
    Enter partition permission flags[wm]:
    Enter new starting cyl[0]: 3
    Enter partition size[0b, 0c, 3e, 0.00mb, 0.00gb]: 2604c
    partition> p
    Current partition table (unnamed):
    Total disk cylinders available: 2607 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 unassigned wm 3 - 2606 19.95GB (2604/0/0) 41833260
    1 unassigned wm 0 0 (0/0/0) 0
    2 backup wu 0 - 2606 19.97GB (2607/0/0) 41881455
    3 unassigned wm 0 0 (0/0/0) 0
    4 unassigned wm 0 0 (0/0/0) 0
    5 unassigned wm 0 0 (0/0/0) 0
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 0 0 (0/0/0) 0
    8 boot wu 0 - 0 7.84MB (1/0/0) 16065
    9 unassigned wm 0 0 (0/0/0) 0
    partition> label
    Ready to label disk, continue? y
    partition> q
    format> q
    6)
    *# newfs /dev/dsk/c0d0s2*
    newfs: construct a new file system /dev/rdsk/c0d0s2: (y/n)? y
    Warning: inode blocks/cyl group (431) >= data blocks (246) in last
    cylinder group. This implies 3950 sector(s) cannot be allocated.
    /dev/rdsk/c0d0s2: 41877504 sectors in 6816 cylinders of 48 tracks, 128 sectors
    20448.0MB in 426 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
    super-block backups (for fsck -F ufs -o b=#) at:
    32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
    Initializing cylinder groups:
    super-block backups for last 10 cylinder groups at:
    40898592, 40997024, 41095456, 41193888, 41292320, 41390752, 41489184,
    41587616, 41686048, 41784480
    7)
    *# mountall*
    mount: /tmp is already mounted or swap is busy
    mount: /dev/dsk/c0d0s7 is already mounted or /export/home is busy
    8)
    *# df -h*
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c0d0s0 6.9G 5.6G 1.2G 83% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 1.1G 968K 1.1G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    sharefs 0K 0K 0K 0% /etc/dfs/sharetab
    fd 0K 0K 0K 0% /dev/fd
    swap 1.1G 40K 1.1G 1% /tmp
    swap 1.1G 28K 1.1G 1% /var/run
    /dev/dsk/c0d0s7 12G 1.5G 11G 13% /export/home
Where am I going wrong? I don't see a new mount for the 20GB disk that I created.
Please advise/help
    Thanks!

    Thanks for your response but still no luck??
    # vi vfstab
    "vfstab" 13 lines, 457 characters
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c0d0s1 - - swap - no -
    /dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
    */dev/dsk/c0d0s2 /dev/rdsk/c0d0s2 /u01 ufs 1 yes -*
    /dev/dsk/c0d0s7 /dev/rdsk/c0d0s7 /export/home ufs 2 yes
    /devices - /devices devfs - no -
    sharefs - /etc/dfs/sharetab sharefs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes -
    # mountall
    /dev/rdsk/c0d0s2 is clean
    mount: Nonexistent mount point: /u01
    mount: /tmp is already mounted or swap is busy
    mount: /dev/dsk/c0d0s7 is already mounted or /export/home is busy
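Two things stand out in the transcript above. The disk that was just labeled is c2t0d0, while both the newfs and the new vfstab line reference c0d0s2, i.e. slice 2 of the existing boot disk; and mountall fails simply because the mount point was never created. Assuming the intent is to put slice 0 of the new 20GB disk at /u01, a corrective sketch:
# newfs /dev/rdsk/c2t0d0s0
# mkdir /u01
# mount /dev/dsk/c2t0d0s0 /u01
with the vfstab line changed to match:
/dev/dsk/c2t0d0s0 /dev/rdsk/c2t0d0s0 /u01 ufs 2 yes -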

  • Unable to remove virtual D-Disk after removing replication

I am replicating servers from one cluster in one datacenter to a cluster in a second datacenter. One of the servers that I was replicating no longer needs to be replicated. What I did was stop the replication of that server. Then I removed the server from the target cluster. Then I removed the server from Hyper-V. I thought that the only thing I needed to do after these steps was to remove the files that are still on the SAN. I was able to remove the C-disk (vhdx file) but I was unable to remove the D-disk.
I don't have permission to delete that file. I am using Windows 2012 R2.

    Hi Edwin,
Could you please post the ACL of that VHD file, with the following command?
icacls D:\test.vhdx
We still need the user name that you are using to delete the file.
    Best Regards
    Elton Ji
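If the ACL output shows your account has no rights on the file, a typical recovery (assuming local Administrator rights on the host; D:\test.vhdx is the placeholder path from above) is to take ownership and re-grant access:
takeown /F D:\test.vhdx /A
icacls D:\test.vhdx /grant Administrators:F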

• If I upgrade Snow Leopard to Lion, will all information be removed from the hard disk?


    John Hammer said it best -- backup first and always. That's why Time Machine and periodic clones of your boot drive to an external disk are so valuable.
That said – the Lion upgrade will retain all user data, files, settings, mail, and preferences. Mind you, a few settings and their locations have changed.
    Before you actually use Lion, may I recommend spending the first evening looking at every single preference panel in System Preferences.  Many of the user interface choices can be reverted back to mimic Snow Leopard if you choose.
    Relax and enjoy Lion – it is amazing.  Please holler if we can help further.

• How many LUNs can a Solaris host support?

Hi,
Can anyone share how many storage LUNs a Solaris host can support? Is there any limit on this?
    Regards
    Siva

    It probably depends at least some on the type of HBAs, the driver, topology, and most likely other things as well. Instead of dealing with thousands of LUNs on the OS side, I would highly recommend you have your SAN team start provisioning meta devices instead, to cut down on the number of devices (if you are lucky enough to have a SAN team that is). My company allows me to be the EMC and Solaris engineer, and in return all they ask is that I only collect pay for one of the two....

• How to find the LUN for a SAN disk on Solaris 9 using EVA5000 storage

    Hi,
I have presented many disks of the same size from an EVA5000 using a Sun Emulex card (single path). I want to identify the "World Wide LUN Name" for each disk shown in format. Your help is required in this case.
In format I have output like this:
    14. c4t50001FE15003769Dd1 <COMPAQ-HSV110(C)COMPAQ-3020 cyl 51198 alt 2 hd 128 sec 128>
    /ssm@0,0/pci@18,700000/SUNW,emlxs@2/fp@0,0/ssd@w50001fe15003769d,1
    15. c4t50001FE15003769Dd2 <COMPAQ-HSV110(C)COMPAQ-3020 cyl 51198 alt 2 hd 128 sec 128>
    /ssm@0,0/pci@18,700000/SUNW,emlxs@2/fp@0,0/ssd@w50001fe15003769d,2
    16. c4t50001FE15003769Dd3 <COMPAQ-HSV110(C)COMPAQ-3020 cyl 63998 alt 2 hd 128 sec 128>
    /ssm@0,0/pci@18,700000/SUNW,emlxs@2/fp@0,0/ssd@w50001fe15003769d,3
    17. c4t50001FE15003769Dd4 <COMPAQ-HSV110(C)COMPAQ-3020 cyl 63998 alt 2 hd 128 sec 128>
    /ssm@0,0/pci@18,700000/SUNW,emlxs@2/fp@0,0/ssd@w50001fe15003769d,4
    luxadm -v display /dev/rdsk/c2t50001FE15003769Cd1s0
    Displaying information for: /dev/rdsk/c2t50001FE15003769Cd1s0
    DEVICE PROPERTIES for disk: /dev/rdsk/c2t50001FE15003769Cd1s0
    Vendor: COMPAQ
    Product ID: HSV110 (C)COMPAQ
    Revision: 3110
    Serial Num: Unavailable
    Unformatted capacity: 8192.000 MBytes
    Read Cache: Enabled
    Minimum prefetch: 0x0
    Maximum prefetch: 0x0
    Device Type: Disk device
    Path(s):
    /dev/rdsk/c2t50001FE15003769Cd1s2
    /devices/pci@8,600000/SUNW,emlxs@1/fp@0,0/ssd@w50001fe15003769c,1:c,raw
    LUN path port WWN: 50001fe15003769c
    Host controller port WWN: 10000000c95598e6
    Path status: O.K.
    /dev/rdsk/c2t50001FE150037699d1s2
    /devices/pci@8,600000/SUNW,emlxs@1/fp@0,0/ssd@w50001fe150037699,1:c,raw
    LUN path port WWN: 50001fe150037699
    Host controller port WWN: 10000000c95598e6
    Path status: O.K.
    bash-2.05# luxadm -e dump_map /devices/pci@8,600000/SUNW,emlxs@1/fp@0,0/ssd@w50001fe15003769c,1:c,raw
    Pos Port_ID Hard_Addr Port WWN Node WWN Type
    0 20000 0 50001fe150029129 50001fe150029120 0xc (Array controller device)
    1 20001 0 50001fe15002912c 50001fe150029120 0xc (Array controller device)
    2 40001 0 50001fe15003769c 50001fe150037690 0xc (Array controller device)
    3 40002 0 50001fe150037699 50001fe150037690 0xc (Array controller device)
    4 4000a 0 10000000c95598e6 20000000c95598e6 0x1f (Unknown Type,Host Bus Adapter)
    bash-2.05#
But on the EVA5000 I have the following identity:
    World Wide LUN Name:
    6005-08b4-0001-0525-0002-8000-05a5-0000
    UUID:
    6005-08b4-0001-0525-0002-8000-05a5-0000
Is there any way I can identify the LUN name on Sun Solaris 9 for each storage disk?
    Thanks,
    Mazhar

hi onedbaguru,
thanks for the reply.
my questions are still there.
As per my plans: I shall use 200GB as ASM DG1 with external redundancy, 80GB as ASM DG2 with external redundancy, 100GB for ASM, and 2GB each for voting and OCR with external redundancy.
1. My question is: how do I do it command by command? What I need is what to do from the current state, and how to do it so that my disks are ready for OCR, VD and ASM, assuming I have no support from a system/storage admin.
2. Do I need to leave the first MB free on all disks, including the OCR, VD and ASM disks?
3. If this thread needs to be moved to some other group, then please suggest how to do it.
Point #3 is not important at all; I would love to see some expert DBA replying to me here.
Waiting for expert guidance.
Regards,
Adnan

• Need to format the old ASM disks on Solaris 10

    Hello Gurus,
we uninstalled ASM on Solaris, but while installing ASM again it says that the mount point is already used by another instance, although there is no DB or ASM running (this is a new server). So we need to use the dd command, or reformat the raw devices that already exist and were used by the old ASM instance. Here is the confusion...
there are 6 LUNs presented to this host for ASM; they are not used by anyone...
    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0
    1. c0t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848> solaris
    /pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0
    2. c2t60050768018E82BE98000000000007B2d0 <IBM-2145-0000-150.00GB>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b2
    3. c2t60050768018E82BE98000000000007B3d0 <IBM-2145-0000 cyl 44798 alt 2 hd 64 sec 256>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b3
    4. c2t60050768018E82BE98000000000007B4d0 <IBM-2145-0000 cyl 19198 alt 2 hd 64 sec 256>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b4
    5. c2t60050768018E82BE98000000000007B5d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b5
    6. c2t60050768018E82BE98000000000007B6d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b6
    7. c2t60050768018E82BE98000000000007B7d0 <IBM-2145-0000 cyl 5118 alt 2 hd 32 sec 64>
    /scsi_vhci/ssd@g60050768018e82be98000000000007b7
but the thing is, when we try to list the raw devices with ls -ltr in the /dev/rdsk location, all the disks are owned by root and others, not oracle:dba / oinstall.
    root@b2dslbmom3dbb3301 [dev/rdsk]
    # ls -ltr
    total 144
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t0d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@0,0:h,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t1d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@1,0:h,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s0 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:a,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s1 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:b,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s2 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:c,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s3 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:d,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s4 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:e,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s5 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:f,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s6 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:g,raw
    lrwxrwxrwx 1 root root 64 Jun 10 13:24 c0t3d0s7 -> ../../devices/pci@0,600000/pci@0/pci@8/pci@0/scsi@1/sd@3,0:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B7d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B6d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B5d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B4d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B3d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:g,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s1 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:b,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s2 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:c,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s3 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:d,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s4 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:e,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s5 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:f,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:13 c2t60050768018E82BE98000000000007B2d0s6 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:g,raw
    lrwxrwxrwx 1 root root 68 Jun 13 15:34 c2t60050768018E82BE98000000000007B2d0 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:wd,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:47 c2t60050768018E82BE98000000000007B3d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:48 c2t60050768018E82BE98000000000007B4d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:49 c2t60050768018E82BE98000000000007B5d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:51 c2t60050768018E82BE98000000000007B6d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:h,raw
    lrwxrwxrwx 1 root root 67 Jun 13 15:53 c2t60050768018E82BE98000000000007B7d0s7 -> ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:h,raw
so we need to know where the raw devices are located, so that we can run the dd command as oracle to remove the old ASM header on the raw devices, in order to start the fresh installation.
but when we use the command that was given by the unix person, who no longer works here, we are able to see the following information:
    root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
    crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
we also have the mknod information, with the major and minor numbers we used for making the device nodes for the raw devices used by ASM:
cd /dev/oraasm/
    /usr/sbin/mknod asm_disk_03 c 118 232
    /usr/sbin/mknod asm_disk_02 c 118 224
    /usr/sbin/mknod asm_disk_01 c 118 216
    /usr/sbin/mknod asm_ocrvote_03 c 118 208
    /usr/sbin/mknod asm_ocrvote_02 c 118 200
    /usr/sbin/mknod asm_ocrvote_01 c 118 192
But the final thing is, we need to find out where the above configuration is located on the host; I think this way of presenting raw devices is different from the normal method on Solaris??
please help me to proceed with my installation .... thanks in advance....
I am really confused by the following command and where the oracle raw device information is coming from, since there is no info in the /dev/rdsk location (OS is Solaris 10):
    root@b2dslbmom3dbb3301 [dev/rdsk] # ls -l c2t600*d0s0|awk '{print $11}' |xargs ls -l
    crwxr-x--- 1 oracle oinstall 118, 232 Jun 14 13:29 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b2:a,raw
    crwxr-x--- 1 oracle oinstall 118, 224 Jun 14 13:31 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b3:a,raw
    crwxr-x--- 1 oracle oinstall 118, 216 Jun 14 13:32 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b4:a,raw
    crw-r----- 1 root sys 118, 208 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b5:a,raw
    crw-r----- 1 root sys 118, 200 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b6:a,raw
    crw-r----- 1 root sys 118, 192 Jul 18 13:19 ../../devices/scsi_vhci/ssd@g60050768018e82be98000000000007b7:a,raw
    please help....

Hi Winner;
For your issue I suggest closing your thread here (change the thread status to answered) and moving it to Forum Home » Grid Computing » Automatic Storage Management, where you can get a quicker response.
Regards
Helios
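On the dd question itself: the usual way to clear an old ASM header is to zero the first megabytes of each raw device, e.g. through the /dev/oraasm nodes listed above. A sketch only - this is destructive, so triple-check the device names first; count=100 (100MB) is a common choice:
# dd if=/dev/zero of=/dev/oraasm/asm_disk_01 bs=1024k count=100
# dd if=/dev/zero of=/dev/oraasm/asm_ocrvote_01 bs=1024k count=100
Repeat for each of the six devices, then retry the installation.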

  • Disk Suite/ Solaris 8 Upgrade Problems

    Hello,
I am trying to upgrade from Solaris SPARC 7 to 8, and I have Sun Disk Suite mirroring the boot device. When I try to upgrade, the installation fails. Is there a way to upgrade from 7 to 8 without breaking the mirror, and if not, is there a utility that can remove the metadb info and rebuild it after the upgrade?
    Thanks,
    Bill Bradley

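For reference, the usual DiskSuite approach is to unmirror root before the upgrade and re-mirror afterwards. A sketch assuming the root mirror is d0 with submirrors d10 (on c0t0d0s0) and d20 (on c0t1d0s0), and state database replicas on slice 7 of both disks:
# metadetach d0 d20                  (detach the second submirror)
# metaroot /dev/dsk/c0t0d0s0         (point /etc/vfstab and /etc/system back at the raw slice)
(reboot so root is mounted on the plain slice)
# metaclear -r d0                    (clear the mirror and remaining submirror)
# metadb -d -f c0t0d0s7 c0t1d0s7     (delete the state database replicas)
Then run the upgrade, and rebuild the replicas and mirror afterwards with metadb -a -f, metainit and metattach.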

• What is the limit on the size of a disk under Solaris 8?

    Hello,
I have a problem when I try to run the format command to label a 1TB disk under Solaris 8.
    # format.......
    114. c9t14d3 <DGC-RAID3-0322 cyl 32766 alt 2 hd 1 sec 0>
    /pci@8,700000/lpfc@4/sd@e,3
    Specify disk (enter its number)[115]: 114
    selecting c9t14d3
    [disk formatted]
    Disk not labeled. Label it now? y
    Warning: error writing VTOC.
    Warning: no backup labels
    Write label failed
    format> print
    PARTITION MENU:
    0 - change `0' partition
    1 - change `1' partition
    2 - change `2' partition
    3 - change `3' partition
    4 - change `4' partition
    5 - change `5' partition
    6 - change `6' partition
    7 - change `7' partition
    select - select a predefined table
    modify - modify a predefined partition table
    name - name the current table
    print - display the current table
    label - write partition map and label to the disk
    !<cmd> - execute <cmd>, then return
    quit
    partition> p
    Current partition table (default):
    Total disk cylinders available: 32766 + 2 (reserved cylinders)
    Arithmetic Exception - core dumped

I think maybe if you split it into two LUNs, you can stitch them back together with SVM. But even that's not certain; it depends on what you're going to do with the device at that point. UFS on Solaris also does not support volumes 1TB or larger, so you'd have to use it as raw slice space or with a filesystem that does support larger sizes.
You need later versions of Solaris 9 to get multi-terabyte UFS support (a separate issue from multi-terabyte LUN support).
Darren
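A sketch of that split-and-stitch idea, assuming the array is reconfigured to present two sub-1TB LUNs that show up as c9t14d3 and c9t14d4, with SVM state database replicas already in place:
# metainit d10 2 1 c9t14d3s0 1 c9t14d4s0     (concatenate the two LUNs into one metadevice)
# newfs /dev/md/rdsk/d10                     (only if the result stays under the 1TB UFS limit)
Darren's caveats still apply: whether this helps depends on what sits on top, since on Solaris 8 UFS cannot go to 1TB or larger.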

• 7420 & iSCSI or FC LUN disk space?

I noticed an issue with iSCSI and FC LUNs.
We have many configured iSCSI and FC LUNs, mostly used with Windows servers. When removing files or directories in Windows, no disk space gets freed up.
I tried googling this issue and it appears that blocks are freed but the storage doesn't know about it, so the free blocks (disk space) never come back. I also found one of the forums talking about something called "iSCSI unmap" which apparently fixes this issue.
Question: Is this a known issue? What can we do to not run into this issue? Or will this issue be addressed in a later update?
    Thanks,
    -Eli

    Hi.
It's not a bug, it's normal behavior.
File systems generally do not tell the device that a block is now free and can be cleared, so the LUN (array) must keep all information that was written to it.
Some filesystems can do it, using SCSI UNMAP commands. Examples of such filesystems: ext4 and NTFS (from WIN2008 R2).
Also the latest revisions of VMware.
COMSTAR has this feature under ZFS:
http://gdamore.blogspot.com/2011/03/comstar-and-scsi-unmap.html
I can't find information that Oracle has introduced this feature.
Regards.
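On the Windows side you can at least verify that the OS is issuing the delete notifications at all (Windows Server 2008 R2 and later; the output line is illustrative):
C:\> fsutil behavior query DisableDeleteNotify
DisableDeleteNotify = 0
0 means delete/TRIM notifications are sent to the storage; 1 means they are disabled and can be turned back on with "fsutil behavior set DisableDeleteNotify 0". Whether the 7420 honors them is the separate question discussed above.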
