StorEdge 3300 RAID

To be honest, I'm not very experienced with RAID servers. Here are the errors I found on a StorEdge 3300 RAID:
DEVICE       TYPE      DISK         GROUP        STATUS
c1t0d0s2     sliced    rootdisk_1   rootdg       online
c1t1d0s2     sliced    mirrdisk_1   rootdg       online
c2t0d0s2     sliced    -            -            error
c2t0d1s2     sliced    -            -            online
c3t0d0s2     sliced    -            -            error
c3t0d1s2     sliced    igwbdg02     igwbdg       online
-            -         igwbdg01     igwbdg       failed was:c2t0d1s2
Disk group: rootdg
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg rootdg       rootdg       -        -        -        -        -       -
dm mirrdisk_1   c1t1d0s2     -        143339136 -       -        -       -
dm rootdisk_1   c1t0d0s2     -        143339136 -       -        -       -
v  rootdisk_15vol gen        ENABLED  1027776  -        ACTIVE   -       -
pl rootdisk_15vol-01 rootdisk_15vol ENABLED 1027776 -   ACTIVE   -       -
sd rootdisk_1-04 rootdisk_15vol-01 ENABLED 1027776 0    -        -       -
pl rootdisk_15vol-02 rootdisk_15vol ENABLED 1027776 -   ACTIVE   -       -
sd mirrdisk_1-04 rootdisk_15vol-02 ENABLED 1027776 0    -        -       -
v  rootvol      root         ENABLED  114825984 -       ACTIVE   -       -
pl rootvol-01   rootvol      ENABLED  114825984 -       ACTIVE   -       -
sd rootdisk_1-B0 rootvol-01  ENABLED  1        0        -        -       Block0
sd rootdisk_1-02 rootvol-01  ENABLED  114825983 1       -        -       -
pl rootvol-02   rootvol      ENABLED  114825984 -       ACTIVE   -       -
sd mirrdisk_1-01 rootvol-02  ENABLED  114825984 0       -        -       -
v  swapvol      swap         ENABLED  8395200  -        ACTIVE   -       -
pl swapvol-01   swapvol      ENABLED  8395200  -        ACTIVE   -       -
sd rootdisk_1-01 swapvol-01  ENABLED  8395200  0        -        -       -
pl swapvol-02   swapvol      ENABLED  8395200  -        ACTIVE   -       -
sd mirrdisk_1-02 swapvol-02  ENABLED  8395200  0        -        -       -
v  var          gen          ENABLED  16780224 -        ACTIVE   -       -
pl var-01       var          ENABLED  16780224 -        ACTIVE   -       -
sd rootdisk_1-03 var-01      ENABLED  16780224 0        -        -       -
pl var-02       var          ENABLED  16780224 -        ACTIVE   -       -
sd mirrdisk_1-03 var-02      ENABLED  16780224 0        -        -       -
Disk group: igwbdg
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg igwbdg       igwbdg       -        -        -        -        -       -
dm igwbdg01     -            -        -        -        NODEVICE -       -
dm igwbdg02     c3t0d1s2     -        211394560 -       -        -       -
v  back         fsgen        ENABLED  98304000 -        ACTIVE   -       -
pl back-01      back         DISABLED 98304000 -        NODEVICE -       -
sd igwbdg01-04  back-01      DISABLED 98304000 0        NODEVICE -       -
pl back-02      back         ENABLED  98304000 -        ACTIVE   -       -
sd igwbdg02-04  back-02      ENABLED  98304000 0        -        -       -
v  front        fsgen        ENABLED  98304000 -        ACTIVE   -       -
pl front-01     front        DISABLED 98304000 -        NODEVICE -       -
sd igwbdg01-03  front-01     DISABLED 98304000 0        NODEVICE -       -
pl front-02     front        ENABLED  98304000 -        ACTIVE   -       -
sd igwbdg02-03  front-02     ENABLED  98304000 0        -        -       -
v  igwbdg-stat  fsgen        ENABLED  1048576  -        ACTIVE   -       -
pl igwbdg-stat-01 igwbdg-stat DISABLED 1048576 -        NODEVICE -       -
sd igwbdg01-01  igwbdg-stat-01 DISABLED 1048576 0       NODEVICE -       -
pl igwbdg-stat-02 igwbdg-stat ENABLED 1048576  -        ACTIVE   -       -
sd igwbdg02-01  igwbdg-stat-02 ENABLED 1048576 0        -        -       -
v  log-alarm    fsgen        ENABLED  8388608  -        ACTIVE   -       -
pl log-alarm-01 log-alarm    DISABLED 8388608  -        NODEVICE -       -
sd igwbdg01-02  log-alarm-01 DISABLED 8388608  0        NODEVICE -       -
pl log-alarm-02 log-alarm    ENABLED  8388608  -        ACTIVE   -       -
sd igwbdg02-02  log-alarm-02 ENABLED  8388608  0        -        -       -
Jan 21 14:04:14 igwb1 lomv: [ID 702911 kern.notice] 1/21/2008 14:4:14 GMT LOM time reference
Jan 22 05:53:10 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:10 igwb1   disk not responding to selection
Jan 22 05:53:11 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:11 igwb1   disk not responding to selection
Jan 22 05:53:24 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:24 igwb1   disk not responding to selection
Jan 22 05:53:30 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:30 igwb1   disk not responding to selection
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:31 igwb1   Error for Command: write(10)               Error Level: Fatal
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] Requested Block: 108158864 Error Block: 108158864
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] Vendor: SUN Serial Number: 276710C4-01
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] Sense Key: Not Ready
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] ASC: 0x4 (<vendor unique code 0x4>), ASCQ: 0x1, FRU: 0x0
Jan 22 05:53:31 igwb1 vxdmp: [ID 619769 kern.notice] NOTICE: vxdmp: Path failure on 32/0x1ec
Jan 22 05:53:31 igwb1 vxdmp: [ID 997040 kern.notice] NOTICE: vxvm:vxdmp: disabled path 32/0x1e8 belonging to the dmpnode 274/0x28
Jan 22 05:53:31 igwb1 vxdmp: [ID 148046 kern.notice] NOTICE: vxvm:vxdmp: disabled dmpnode 274/0x28
Jan 22 05:53:31 igwb1 vxio: [ID 238951 kern.warning] WARNING: vxvm:vxio: error on Plex log-alarm-01 while writing volume log-alarm offset 1605776 length 16
Jan 22 05:53:31 igwb1 vxio: [ID 751920 kern.warning] WARNING: vxvm:vxio: Plex log-alarm-01 detached from volume log-alarm
Jan 22 05:53:31 igwb1 vxio: [ID 838567 kern.warning] WARNING: vxvm:vxio: igwbdg01-02 Subdisk failed in plex log-alarm-01 in vol log-alarm
Jan 22 05:53:31 igwb1 vxio: [ID 238951 kern.warning] WARNING: vxvm:vxio: error on Plex back-01 while writing volume back offset 409488 length 16
Jan 22 05:53:31 igwb1 vxio: [ID 751920 kern.warning] WARNING: vxvm:vxio: Plex back-01 detached from volume back
Jan 22 05:53:31 igwb1 vxio: [ID 838567 kern.warning] WARNING: vxvm:vxio: igwbdg01-04 Subdisk failed in plex back-01 in vol back
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:31 igwb1   Error for Command: write(10)               Error Level: Fatal
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] Requested Block: 9650304 Error Block: 9650304
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] Vendor: SUN Serial Number: 276710C4-01
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] Sense Key: Not Ready
Jan 22 05:53:31 igwb1 scsi: [ID 107833 kern.notice] ASC: 0x4 (<vendor unique code 0x4>), ASCQ: 0x1, FRU: 0x0
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:41 igwb1   Error for Command: write(10)               Error Level: Fatal
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] Requested Block: 9649406 Error Block: 9649406
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] Vendor: SUN Serial Number: 276710C4-01
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] Sense Key: Not Ready
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] ASC: 0x4 (<vendor unique code 0x4>), ASCQ: 0x1, FRU: 0x0
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.warning] WARNING: /pci@8,700000/pci@2/scsi@4/sd@0,1 (sd61):
Jan 22 05:53:41 igwb1   Error for Command: write(10)               Error Level: Fatal
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] Requested Block: 9649284 Error Block: 9649284
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] Vendor: SUN Serial Number: 276710C4-01
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] Sense Key: Not Ready
Jan 22 05:53:41 igwb1 scsi: [ID 107833 kern.notice] ASC: 0x4 (<vendor unique code 0x4>), ASCQ: 0x1, FRU: 0x0
Jan 22 05:53:41 igwb1 vxio: [ID 238951 kern.warning] WARNING: vxvm:vxio: error on Plex front-01 while writing volume front offset 203908 length 2
Jan 22 05:53:41 igwb1 vxio: [ID 751920 kern.warning] WARNING: vxvm:vxio: Plex front-01 detached from volume front
Jan 22 05:53:41 igwb1 vxio: [ID 838567 kern.warning] WARNING: vxvm:vxio: igwbdg01-03 Subdisk failed in plex front-01 in vol front
Jan 22 05:55:47 igwb1 vxdmp: [ID 912507 kern.notice] NOTICE: vxvm:vxdmp: enabled path 32/0x1e8 belonging to the dmpnode 274/0x28
Jan 22 05:55:47 igwb1 vxdmp: [ID 205910 kern.notice] NOTICE: vxvm:vxdmp: enabled dmpnode 274/0x28
Finally, I tried to fsck the bad slice, but it asked for a superblock, and it would not accept any of the alternate superblocks reported by the newfs -N command. I also tried to repair the bad block and it seemed OK, but the problem came back. Any help would be appreciated, thanks.

If this information is not enough and you need more, please tell me how to gather it and I will post it.
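In case it helps later readers, here is a minimal recovery sketch for a VxVM disk that shows up as "failed was:" with NODEVICE plexes, assuming the underlying hardware fault (disk, cable, or controller) has already been fixed or the disk replaced; the device and group names are taken from the output above and may differ on your system:

# devfsadm -C                    (rebuild the Solaris device links)
# vxdctl enable                  (make VxVM rescan its disks)
# vxdisk list                    (check that the device is back online)
# vxreattach -r c2t0d1s2         (reattach the media record igwbdg01 and start recovery)
# vxrecover -g igwbdg -sb        (start volumes and resync detached plexes in the background)
# vxprint -g igwbdg -ht          (confirm the plexes return to ACTIVE)

If the disk itself was replaced rather than just reconnected, the usual path is vxdiskadm and its "Replace a failed or removed disk" option instead of vxreattach. As for fsck asking for a superblock: the normal recipe is to list the backup superblocks with "newfs -N /dev/rdsk/<slice>" and pass one of them to "fsck -F ufs -o b=<block> /dev/rdsk/<slice>", but that only makes sense once the underlying device is reachable again.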

Similar Messages

  • Problem in configuring Storedge 3300

    Hi,
    We are facing a problem configuring a StorEdge 3300 (JBOD) on a
    SunFire 280R. The array contains 5 x 36GB disks and has no RAID
    controller. We would like to make these disks accessible at the OS
    level using the format utility. When we execute probe-scsi-all at
    the boot prompt we get the following output:
    /pci@8,600000/SUNW,qlc@4
    LiD HA LUN Port Disk
    0 0 0 some number Fujitsu
    1 1 0 some number Fujitsu
    /pci@8,700000/scsi@6,1
    Target 4
    Unit 0 Removable Tape Drive
    /pci@8,700000/scsi@6
    Unit 0 Removable Read-only Drive
    /pci@8,700000/pci@3/scsi@5
    /pci@8,700000/pci@3/scsi@4
    Target 8
    Unit 0 Disk Seagate
    Target 9
    Unit 0 Disk Seagate
    Target a
    Unit 0 Disk Seagate
    Target b
    Unit 0 Disk Seagate
    Target c
    Unit 0 Disk Seagate
    Target f
    Unit 0 Processor SUN Storedge 3310 AD000
    Please help, Thanks
    Regards
    Krishna

    I too have the same problem. I have a V440 and a StorEdge 3310. The first time I used the server it was all OK, but after a week I booted it up again, and when I use the sccli command I get the following error:
    [root@server1 /]# sccli
    sccli: no manageable devices found
    sccli: Type "sccli help" for valid commands.
    On the first day I used the server, I remember I could see the array in the format command; since I reconnected everything, I can no longer see it from the OS.
    I did the following sequence of commands:
    stop-a
    setenv auto-boot? false (I don't want it to boot again)
    setenv boot-device false (I want to do a clean probe-scsi-all)
    reset-all
    I then used the probe-scsi-all command, which had the following output:
    {1} ok probe-scsi-all
    /pci@1f,700000/scsi@2,1
    /pci@1f,700000/scsi@2
    Target 0
    Unit 0 Disk HITACHI DK32EJ36NSUN36G PQ0B 71132959 Blocks, 34732 MB
    Target 1
    Unit 0 Disk HITACHI DK32EJ36NSUN36G PQ0B 71132959 Blocks, 34732 MB
    Target 2
    Unit 0 Disk HITACHI DK32EJ36NSUN36G PQ0B 71132959 Blocks, 34732 MB
    Target 3
    Unit 0 Disk HITACHI DK32EJ36NSUN36G PQ0B 71132959 Blocks, 34732 MB
    /pci@1d,700000/pci@2/scsi@5
    /pci@1d,700000/pci@2/scsi@4
    Target 0
    Unit 0 Disk SUN StorEdge 3310 0325
    Before I did that, I checked that the cables were in the configuration I wanted; it's a dual-bus configuration that is going to be used for a RAID 1+0 setup.
    After that I did:
    setenv boot-device disk0
    boot -r (to reconfigure)
    I also downloaded the PCI SCSI LVD controller driver and patches, but nothing seems to work. I'm trying to recreate my first working day with the server, so far with no success.
    The last thing I installed on that host was DiskSuite. Might there be any kind of incompatibility?
    Orlando
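    A rough checklist for the "no manageable devices" case, with placeholder device names and without assuming anything about either poster's cabling: first confirm the OS sees the array's LUN at all, then point sccli at it explicitly.
    # devfsadm -C                      (rebuild device links after recabling)
    # format </dev/null                (does a StorEdge 33x0 LUN appear at all?)
    # sccli /dev/rdsk/c2t0d0s2         (hand sccli the raw device of the RAID LUN directly)
    If format shows nothing on the expected controller, the problem is cabling, termination, or the sd driver configuration rather than sccli; if format shows the LUN but sccli still finds nothing, remember that sccli manages the RAID controller, so a unit running as a plain JBOD may simply not be manageable that way.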

  • StorEdge 3300 + SUN Fire V440 Hardware connection

    1. How is the hardware connection made between a single-node Sun Fire V440 and a StorEdge 3300 disk array? The equipment is used for a CDMA product, and the Sun Fire has SCSI ports on PCI4 and PCI5.
    2. During the first login it shows me the following:
    AAAserver sendmail[322]: My unqualified host name (AAAserver) unknown; sleeping for retry
    Oct 31 13:50:52 AAAserver sendmail[321]: unable to qualify my own domain name (AAAserver) -- using short name
    Oct 31 13:50:52 AAAserver sendmail[322]: [ID 702911 mail.alert] unable to qualify my own domain name (AAAserver) -- using short name
    Oct 31 13:50:52 AAAserver sendmail[321]: [ID 702911 mail.alert] unable to qualify my own domain name (AAAserver) -- using short name
    I would appreciate your help.
    Edited by: anan on Oct 31, 2007 9:33 AM

    I think the answer for #2 is to add your fully qualified host name to /etc/hosts
    e.g. "AAAserver.<your domain>.com" or just "AAAserver."
    The second example is just the hostname followed by a period "."
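    For example (the address and domain below are made up; substitute your own), the /etc/hosts entry would look like:
    192.168.1.10   AAAserver.example.com   AAAserver   loghost
    With the fully qualified name listed against the host's address, sendmail can qualify its own domain name and the "unqualified host name" messages stop.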

  • Converting Storedge 3310 RAID unit to act as JBOD?

    Hello, I have been struggling with this configuration, any advice is appreciated.
    I have 4 x StorEdge 3310 RAID units, each with its own controller. I have 1 x Linux host and 1 x HBA in the host, with ch0 and ch1. I want to be able to access all 4 StorEdge 3310s. Normally, if this were 2 x StorEdge 3310 RAID + 2 x JBOD, I would just connect each RAID unit to a JBOD and then each RAID to ch0 and ch1 respectively on the host. Instead, what I have is 4 x StorEdge 3310 RAID units; is there a way to convert two of the StorEdge 3310 units to make them act exactly like JBODs, so that I can daisy-chain them and make them all accessible to the host?
    Or is adding another HBA card to the host the only option?
    Thank you.

    What is the difference between the host and drive channels, besides the fact that they are there for connecting to disk drives or to a host? Because, let's say I connect the first RAID controller to the second one, host-to-host, from one RAID to another: will the host see this transparently?
    By the way, I tried the JBOD method you mention, basically just connecting one of the RAID units to the other as a JBOD. The RAID controller can see the disks on a different channel, but only half of the disks, because the RAID I/O module actually has an internal connection to half of the disks while the other half is jumpered.

  • StorEdge 3310 RAID hung.

    During the billing run on a Sun Fire V440, the attached 3310 hung. Reboots of the server did nothing to improve the situation, but once the RAID was powercycled everything was alright again.
    sccli version 2.1.1
    built 2005.09.16.23.10
    build 1 for solaris-sparc
    Vendor: SUN
    Product: StorEdge 3310
    Revision: 0325
    Peripheral Device Type: 0x0
    NVRAM Defaults: 325S 3310 v1.37
    Bootrecord version: 1.31G
    Serial Number: 07CB61
    There are no "event" entries for this trouble listed in sscli, and everything comes up green. The only errors we received at the time were that of SCSI errors:
    Aug 10 10:49:30 InfranetDB01 scsi: [ID 107833 kern.warning] WARNING: /pci@1f,700000/scsi@2,1/sd@0,0 (sd16):
    Aug 10 10:49:30 InfranetDB01   Error for Command: write(10)               Error Level: Retryable
    Aug 10 10:49:30 InfranetDB01 scsi: [ID 107833 kern.notice] Requested Block: 289557696 Error Block: 289557696
    Aug 10 10:49:30 InfranetDB01 scsi: [ID 107833 kern.notice] Vendor: SUN Serial Number: 29C8306F-00
    Aug 10 10:49:30 InfranetDB01 scsi: [ID 107833 kern.notice] Sense Key: Unit Attention
    Aug 10 10:49:30 InfranetDB01 scsi: [ID 107833 kern.notice] ASC: 0x29 (power on, reset, or bus reset occurred), ASCQ: 0x0, FRU: 0x0
    Aug 10 11:01:12 InfranetDB01 scsi: [ID 107833 kern.notice] /pci@1f,700000/scsi@2,1 (mpt1):
    Aug 10 11:01:12 InfranetDB01   got external SCSI bus reset.
    Aug 10 11:01:12 InfranetDB01 scsi: [ID 365881 kern.info] /pci@1f,700000/scsi@2,1 (mpt1):
    Aug 10 11:01:12 InfranetDB01   Log info 11030000 received for target 0.
    Aug 10 11:01:12 InfranetDB01   scsi_status=0, ioc_status=804b, scsi_state=8
    Aug 10 11:01:12 InfranetDB01 scsi: [ID 107833 kern.notice] /pci@1f,700000/scsi@2,1 (mpt1):
    Aug 10 11:01:12 InfranetDB01   got external SCSI bus reset.
    [repeat]
    The question is whether or not to upgrade the firmware. Since the current firmware is so old (version 3.x), there appear to be special instructions to bring it to 4.x and eventually 4.15G, which would include OS patches as well (117171-12).
    Or should we just carry on, since this has only happened once (during the last billing run) in the last 9 runs/months?
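    Before deciding, it may be worth capturing what the array itself reports around the hang; two read-only sccli checks (syntax as in the other sccli examples in this thread, so treat them as illustrative) are harmless:
    sccli> show inquiry-data
    sccli> show events
    If the event log really is empty at the time of the hang and the controller is still on the 3.x code shown above, the conservative reading is that the staged 3.x-to-4.x firmware upgrade (with the accompanying OS patch) is the long-term fix, but it can wait for a planned maintenance window given one incident in nine runs.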


  • Need to Install 2 new HDD in Storedge 3320 (Raid 0+1)

    Hello,
    I have an ST3320 connected to a V440.
    Actual configuration of the ST3320 = 3 x 3 HDD / RAID 0+1
    I have to add 2 new disks and keep the RAID 0+1.
    How should I proceed?
    PS.
    * Network interface: YES
    ip-address: 192.168.1.1
    netmask: 255.255.255.0
    gateway: 0.0.0.0
    mode: static
    * SCCLI software: PRESENT
    * Old volume configuration:
    LD LD-ID Size Assigned Type Disks Spare Failed Status
    ld0 0B4672BB 409.43GB Primary RAID1 6 0 0 Good
    Write-Policy Default StripeSize 128KB
    *Old disks configuration
    [15:57:49] aliidb: Ch Id Size Speed LD Status IDs Rev
    0 8 136.73GB 320MB ld0 ONLINE SEAGATE ST314670LSUN146G 045A
    S/N 3445AYS5
    0 9 136.73GB 320MB ld0 ONLINE FUJITSU MAW3147NCSUN146G 1703
    S/N 000724C0C391
    0 10 136.73GB 320MB ld0 ONLINE SEAGATE ST314670LSUN146G 045A
    S/N 3445AXXD
    0 11 136.73GB 320MB ld0 ONLINE SEAGATE ST314670LSUN146G 045A
    S/N 3445BEX6
    0 12 136.73GB 320MB ld0 ONLINE SEAGATE ST314670LSUN146G 045A
    S/N 3445AX7K
    0 13 136.73GB 320MB ld0 ONLINE FUJITSU MAW3147NCSUN146G 1703
    S/N 000641C09VJ9
    * File system usage
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d0 20G 12G 8.2G 59% /
    /dev/dsk/c1t3d0s0 67G 44G 23G 67% /Oracle_recovery
    /dev/dsk/c1t2d0s0 20G 9.9G 9.6G 51% /oracle
    /dev/md/dsk/d3 20G 13G 6.4G 67% /opt
    /dev/dsk/c3t0d0s3 134G 55G 78G 42% /data01
    /dev/dsk/c3t0d0s4 134G 128G 4.7G 97% /data02
    /dev/dsk/c3t0d0s5 134G 70G 63G 53% /data03
    /dev/dsk/c1t2d0s1 48G 34G 13G 74% /Rating
    /dev/md/dsk/d2 20G 9.4G 10G 49% /Oracle_backup
    * OS: Solaris 10
    * Data: Oracle database.

    Do you want a fresh copy of the OS or do you want to move from another drive? If it's the former, then you just run the installation from the disc. If it's the latter, it depends on what you have on the old drive (e.g. Time Machine or a regular OS X install).

  • StorEdge A1000 Raid Level Question

    I'm a bit confused about which RAID levels the A1000 supports. On:
    http://www.sun.com/storage/workgroup/a1000/specs.html
    it says that it supports 1+0. In the raidutil manpage it says 1 is
    equivalent to 0+1, and there is no mention of 1+0. So what gives?
    I used RAID Manager 6.22.
    Thanks!

    Which RAID levels a given model supports has to be checked against its specifications.
    RAID levels:
    0 --> striping
    1 --> mirroring
    2 --> Hamming-code ECC
    3 --> byte-level striping with dedicated parity
    4 --> block-level striping with dedicated parity
    5 --> block-level striping with distributed parity
    6 --> striping with two independent distributed parity blocks

  • Disks not visible on StorEdge 3300

    Hello everyone,
    I have a problem with new disks added to our StorEdge 3300.
    "sccli> show enclosure-status" lists my new disks (DiskSlot 3, DiskSlot 4),
    but "sccli> show disks" does not,
    and of course I cannot use them. Do you have any ideas?
    sccli> show enclosure-status
    Ch Id Chassis Vendor Product ID Rev Package Status
    0 14 0941D2 SUN StorEdge 3310 A 1180 1180 OK
    Enclosure Component Status:
    Type Unit Status FRU P/N FRU S/N Add'l Data
    cut
    DiskSlot 0 Absent 370-5524 0941D2 addr=0,led=off
    DiskSlot 1 Absent 370-5524 0941D2 addr=1,led=off
    DiskSlot 2 Absent 370-5524 0941D2 addr=2,led=off
    DiskSlot 3    OK       370-5524  0941D2     addr=3,led=off
    DiskSlot 4    OK       370-5524  0941D2     addr=4,led=off
    DiskSlot 5 Absent 370-5524 0941D2 addr=5,led=off
    DiskSlot 6 OK 370-5524 0941D2 addr=8,led=off
    DiskSlot 7 OK 370-5524 0941D2 addr=9,led=off
    DiskSlot 8 OK 370-5524 0941D2 addr=10,led=off
    DiskSlot 9 OK 370-5524 0941D2 addr=11,led=off
    DiskSlot 10 OK 370-5524 0941D2 addr=12,led=off
    DiskSlot 11 Absent 370-5524 0941D2 addr=13,led=off
    Enclosure SCSI Channel Type: single-bus
    sccli> show disks
    Ch Id Size Speed LD Status IDs Rev
    0 8 68.37GB 160MB ld0 ONLINE M SEAGATE ST373453LSUN72G 0449
    S/N 3HW36GSS00007548
    0 9 68.37GB 160MB ld0 ONLINE M SEAGATE ST373453LSUN72G 0449
    S/N 3HW36FM000007548
    0 10 68.37GB 160MB ld0 ONLINE M SEAGATE ST373453LSUN72G 0449
    S/N 3HW36AY200007548
    0 11 68.37GB 160MB ld0 ONLINE M SEAGATE ST373453LSUN72G 0449
    S/N 3HW36B0D00007548
    0 12 68.37GB 160MB ld0 STAND-BY M SEAGATE ST373453LSUN72G 0449
    S/N 3HW36NM600007548
    Here Id 3 and 4 are missing :(
    Thanks and regards..

    Check the settings in the General tab of the Finder's preferences.

  • Raid Manager Unable to Scan Module

    I was expanding a Sun StorEdge A1000 RAID from 9 drives to 11 drives. After that I wanted to create one more LUN, but now RM (RAID Manager) is not able to scan the module.
    I think this may be because RAID 5 requires a minimum of 3 drives, while only 2 new drives were added. I can still see the RAID controller.
    So how can I recreate a new drive group and start over from scratch, or how can I delete the new LUN I intended to create?
    Your help is greatly appreciated,
    Wei

    A common problem is that you may have added a LUN number that isn't in /kernel/drv/sd.conf. The software relies on being able to contact all LUNs on the controller, so if you haven't configured the driver to build a path to the new LUN, you'll run into this problem.
    Try running /usr/lib/osa/bin/lad.
    The LUN numbers will tell you which ones need to be defined for the
    target IDs in sd.conf. The target IDs are in the cNtNdN output from lad, after the "t".
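    As an illustration (the target and LUN numbers below are placeholders, not taken from the poster's system), the sd.conf entries for extra LUNs look like this, followed by a reconfiguration boot:
    # /kernel/drv/sd.conf -- one line per target/LUN the driver should probe
    name="sd" class="scsi" target=4 lun=1;
    name="sd" class="scsi" target=4 lun=2;
    # then: reboot -- -r      (or: touch /reconfigure ; init 6)
    After the reboot, lad and the RAID Manager utilities should be able to reach every LUN on the controller again.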

  • Disk Utility will not format/erase/partition an unformatted scsi disk

    Well, I'm back with a RAID problem again. This time the powers that be sent me a Sun Microsystems StorEDGE A5200 RAID box (22 33gig disks). It's a fibre channel system, and is full of Seagate FC disks. The box only hooks up to my G5 Quad via the Apple Fibre channel card that I have hooked up. It's a dual channel card and both are plugged into the back of the RAID array.
    Now, here's what I have.....
    The Drives show up in both Disk Utility and Anubis
    There is only one drive formatted, and it works fine.
    All the rest of the drives are unformatted (not just blank, but without any operating system on them, just like they were fresh out of the box)
    Attempting to Erase a disk results in the message
    Disk erase failed
    Disk erase failed with the error:
    Invalid Argument
    Attempting to Partition a disk results in the message
    Partition failed with the error:
    Invalid Argument
    Attempting to create a RAID set results in the message
    Creating RAID set failed with the error:
    Could not add a RAID disk to a RAID. (note there was no other RAID set)
    Only one disk in the array is formatted and it will respond to all of the functions in Disk Utility.
    I was told that I needed to do something using the command line, but that was pretty vague as to what exactly. Apparently I need some sort of superuser authority to get these drives formatted, and it's not documented anywhere.
    So folks, I leave it up to you again, what do I do?

    OK, I have found out much in the last few weeks.
    1 - Apple no longer supports SCSI, you may not format disks in any way using OS X. OS 9 also does not work on any of the G5 machines at least.
    2 - You can use Linux to do most of the work with SCSI, it is still supported, although there is little clear documentation on how to get it done. Linux is free and works very well with my G5 Quad. But it has a steep learning curve and it's not worth it for just formatting SCSI drives.
    What I found out about my SCSI drives turned out to be the core of all the problems I have had, and it was Linux that found the problem. It seems all of the drives I have but one were formatted in such a way that only the proprietary operating system that formatted them could read them: 520-byte data blocks. I reformatted several of the drives to standard 512-byte blocks and they work fine now.
    Before I go out and start going though this for a few hundred more of these drives, my academic supplier found just as many drives that were specifically formatted for use in Sun RAIDs and are properly recognized by OS X, and with twice the drive space (current is 36gig, new ones are 73gig). He is sending me 44 of them (and a second RAID box)
    I'm hoping that this finally solves the problem and hope all will be working soon.
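    For anyone hitting the same thing under Linux: the reformat from 520-byte to standard 512-byte sectors can be done with sg_format from the sg3_utils package (a generic example; the device name is a placeholder, and the operation destroys everything on the drive and can take hours):
    # sg_format --format --size=512 /dev/sg3
    Once the low-level format completes, the drive presents ordinary 512-byte sectors and normal partitioning tools work on it.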

  • Raid 1 error message with ASC: 0x11, ASCQ: 0x0, FRU: 0xf

    Dear All,
    There is a error message with ASC: 0x11 (unrecovered read error), ASCQ: 0x0, FRU: 0xf
    I referenced the documentation at http://docs.oracle.com/cd/E19105-01/storedge.6320/817-5918-10/817-5918-10.pdf. The error message means:
    "The RAID controller firmware, during a normal read or volumes verification operation, corrects the bad sector on the drive by reconstructing the
    data (assuming a RAID 1+0 or RAID 5 configuration) and writing it back to the drive. The drive, in turn, writes the data to a spare sector.
    Ensure that the volume is scrubbed on a regular basis. If the volumes in the RAID device are configured as RAID 0,
    then the data is lost and drive replacement is required."
    But I don't understand what happens if the hard disk is in RAID 1. Will the disk with the problem write the data to a spare sector while the other, mirrored
    disk does not (because of mirroring), or do they both relocate the data from the bad block to a spare sector?
    If the data in the bad block is moved to a spare sector on both mirrored hard disks,
    will the whole mirrored group reuse the bad sector again after replacement?
    If the mirror is built by Veritas rather than Solaris Volume Manager, is there any difference?
    Thank You!
    Edited by: 960831 on 2012/9/23 9:03 PM
    Edited by: 960831 on 2012/9/23 9:04 PM

    It appears that SCSI disk #178, in an external array, has a bad data block
    and that the specific data block is #7564160. That data block
    additionally translates into sector #7560064 of LUN#1.
    Your computer system is finding it difficult to read whatever is in that part of the drive.
    Repair that disk by marking the block as unusable,
    or you can just replace the disk and rebuild the LUN.
    I've never had to do that, personally.
    That's what service contracts are for. The vendor does it for me.
    It seems that it is time for you to open a service case with the array vendor.
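    If you do want to map out a single known-bad block yourself on a Solaris-visible disk, the format utility's repair command does it (illustrative only; the block number is the one quoted above, and on a hardware RAID LUN this is usually better left to the array firmware or the vendor):
    # format
    (select the affected disk)
    format> repair
    Enter absolute block number of defect: 7564160
    On a mirrored VxVM or SVM volume the read is normally satisfied from the good plex or submirror and the bad region resynchronized, so manual repair is rarely needed there.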

  • Sun StorEdge 3510

    Dears,
    Good day,
    I need your support in configuring a Sun StorEdge 3510. I have 24 disks and a single controller, connected to one host. I want to configure it for the largest possible storage: RAID 5 across 11 disks in the first enclosure, and the same in the second enclosure, with two spare disks, one in each enclosure.
    Can you guide me through this?
    Thanks a lot.

    Wow, it took me about 5 hours to set mine up from scratch and I'm a pretty strong storage guy. Here's an overview of what I did. Most of the info is in the 3510 documentation.
    Step 1 is connectivity. Get your host to see the 3510 (FC).
    Step 2 - use the CLI utilities to set up the IP address on the controller (in-band).
    Step 3 - ssh in, set up the RAID, and present the LUNs.
    Step 4 - rescan on the host to see the new LUNs.
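    A small addition on steps 1 and 4, which are plain Solaris rather than 3510-specific (generic commands; the RAID layout in step 3 is best taken from the 3510 documentation or firmware menus):
    # cfgadm -al              (step 1: is the FC link to the 3510 visible on the HBA?)
    # devfsadm -C             (step 4: rebuild device links after mapping the new LUNs)
    # format </dev/null       (step 4: the new LUNs should now appear as disks)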

  • Storedge 3510 failover configuration with 1 host

    Hi people,
    I'm new to StorEdge configuration.
    I have a Sun StorEdge 3510 with 2 controllers, each with 2 x FC host ports and 1 x drive port.
    I want to do a simple configuration: connect 1 host to the StorEdge with failover.
    Is it correct to connect the 2 controllers using the drive ports, since I want failover?
    Is it possible to use only 1 single-port FC PCI adapter in the machine?
    I will connect the machine to the StorEdge using 1 fibre cable,
    on host port FC1, and for failover I will connect the 2 controllers
    using drive ports FC3 and FC2. Is this correct?
    My problem is how to connect the cables and how to configure the StorEdge. I'm
    already connected to the COM port.
    Another thing: I have an amber light on the first controller; is this a hardware problem?
    And what is the best configuration to use with a 3510 StorEdge, one host, and failover?
    Thank you. I need this help now, please.

    Isn't it wonderful when people respond?
    I, too, am running into the same scenario. We have a single 3510FC that is connected to a single host through two controller cards. The drives are configured as a single logical drive under RAID-5. We want to have this configuration multi-pathed for redundancy, not throughput, even though controller cards NEVER fail. [sarcasm] We will be using Veritas VxVM DMP for redundancy.
    Unfortunately, I can only ever see the logical drive/LUN on one controller. The main connection is channel 0 of the primary controller. Whenever I try to configure it to simultaneously be on channel 5 of the secondary controller, the 3510 won't let me do it. I can't figure out how to get the LUN to be assigned to two host channels when one is on the primary controller and one is on the secondary controller.
    I find this to be absurd. Controllers fail. That's all that there is to it. Yet the design of the 3510 (and the 3310 as well) seem to fight like hell whenever you want to spread the logical drives across physical controllers.
    What's the solution to this one, guys?
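    Once a logical drive is mapped to a host channel on each controller, the VxVM side of the verification is simple (generic commands; the dmpnode name below is a placeholder):
    # vxdmpadm listctlr all                        (both controllers should show ENABLED)
    # vxdmpadm getsubpaths dmpnodename=c2t0d0s2    (both paths to the LUN should show ENABLED)
    Whether the 3510 will allow one logical drive to be presented on host channels of both controllers depends on the channel modes and mapping setup, so the 3510 documentation is the authority there; the commands above at least confirm what DMP actually sees.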

  • Raid manager and format utility inconsistency

    Hi,
    I have two A1000 module attached to Sun Fire 280 R with X6541A through both channels. Here is raidutil info for (c3t4d0 and c4t8d0)
    # raidutil -c c3t4d0 -i
    LUNs found on c3t4d0.
    LUN 0 RAID 3 381620 MB
    Vendor ID Symbios
    ProductID StorEDGE A1000
    Product Revision 0301
    Boot Level 03.01.02.33
    Boot Level Date 06/16/99
    Firmware Level 03.01.02.35
    Firmware Date 08/12/99
    raidutil succeeded
    # raidutil -c c4t8d0 -i
    LUNs found on c4t8d0.
    LUN 0 RAID 5 172345 MB
    Vendor ID Symbios
    ProductID StorEDGE A1000
    Product Revision 0205
    Boot Level 03.01.04.00
    Boot Level Date 04/05/01
    Firmware Level 03.01.04.68
    Firmware Date 06/22/01
    raidutil succeeded!
    # lad
    c3t4d0 1T04469376 LUNS: 0
    c4t8d0 1T03843281 LUNS: 0
    That output actually reflects the settings,
    but what the format command gave me is inconsistent with RAID Manager:
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf092afe,0
    1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037cd9c32,0
    2. c3t4d0 <Symbios-StorEDGEA1000-0205 cyl 43084 alt 2 hd 128 sec 64>
    /pseudo/rdnexus@3/rdriver@4,0
    Select 2 -->partition-->print
    Part Tag Flag Cylinders Size Blocks
    0 root wm 0 - 43083 168.30GB (43084/0/0) 352944128
    1 swap wu 0 0 (0/0/0) 0
    2 backup wu 0 - 43083 168.30GB (43084/0/0) 352944128
    So in format, c3t4d0 actually reflects c4t8d0.
    Can anyone give me a hint?
    Thanks,
    Wei

    Yeah, you probably want to go through rebuilding the device paths. This amounts to removing everything in /dev/osa/dev/dsk, /dev/osa/dev/rdsk, and /devices/pseudo/rdnexus, then performing a reconfiguration boot.
    If you're uncomfortable with this, open a service call, or look for
    the string hot_add, under Infodocs and Symptoms and Resolutions in the Sunsolve search engine.
    Another string is "Syncing lad and format".
    Good luck!
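    A sketch of that rebuild, based on the paths named above (treat it as destructive and take a backup of /dev/osa first):
    # rm -rf /dev/osa/dev/dsk/* /dev/osa/dev/rdsk/*
    # rm -rf /devices/pseudo/rdnexus*
    # touch /reconfigure ; init 6          (reconfiguration boot)
    # /usr/lib/osa/bin/lad                 (afterwards, lad and format should agree again)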

  • StorEdge A1000 + PC (with Mylex DAC960 RAID controller)

    PC = PIII / 800EB / 256MB / 20GB
    Raid card = Mylex DAC960
    StorEdge = A1000 with 3 x 18Gb drives loaded.
    When the Mylex card scans for new SCSI devices, the StorEdge A1000 can't be traced/probed. I tried this with an HP SCSI box and was able to probe the external devices, but for the Sun StorEdge A1000 it was not successful.
    Any advice on how I can probe/trace this external device (StorEdge A1000)?
    I am also planning to install Solaris 10 (x86), and hopefully Solaris will detect the external storage (the set of drives) at the application level (fingers crossed).
    Your advice please, gurus.
    Petalio

    Can you describe the features and specifications of that SCSI card ?
    The A1000 array has a High-Voltage-Differential interface.
    (See its link in the Sun System Handbook)
    HVD is not common in the PeeCee universe.
    The array already has a RAID controller in its chassis,
    and will not work with a RAID controller SCSI card.
    Any attempt to use an LVD card or a single-ended (SE) card will just not work, either.
    It would be invisible to the SCSI chain.
    ... then, additionally, you're going to need some sort of RAID control software
    to administer the A1000 and its internal RAID controller.
    If you do eventually get a compatible HBA, you also need to be aware
    that functional support for the array was specifically dropped from Solaris 10.
    You'd need to run Sol8 or Sol9 with RM6 software, and I cannot remember
    whether RM6 was ever ported to x86 Solaris.
    I fear you're just going to be out of luck,
    and may need to get rid of the array (e.g. Ebay ?).
