X4200 + RHEL 4 32-bit + SG-XPCI1FC-QLC + StorEdge 3510

Hi,
this is my configuration:
- Sun X4200
+ SG-XPCI1FC-QLC
+ StorEdge 3510
- RHEL 4 32-bit (CentOS)
This is an application server that has to run COBOL applications connected to Oracle, and for that I must use a 32-bit system (rtsora for COBOL doesn't build on 64-bit).
But I have a problem with the StorEdge: I can't see any partition mapped to this host!
If I use the 64-bit OS version with kernel 2.6.9-22.ELsmp everything works fine; on the same machine at 32-bit the qla6312 kernel driver loads correctly, but I can't see any attached SCSI device.
Any ideas?

Hi, thanks for the response.
No, I haven't compiled a new kernel; I installed the 32-bit release of RHEL 4 (because I have a COBOL application that must run at 32-bit).
I have two identical servers, both X4200s: the first with RHEL 4 64-bit, the second with RHEL 4 32-bit. The second doesn't work, and if I apply the newer kernel update to the RHEL 4 64-bit machine, the controller stops working there too!
I have no idea.
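For what it's worth, a few generic checks on the 32-bit host might narrow down where the devices disappear. This is only a sketch; the host number, and which qla module the card actually uses, will vary:
# lspci | grep -i qlogic    confirm the HBA is visible on the PCI bus
# lsmod | grep qla    confirm the driver module is loaded
# dmesg | grep -i qla    look for loop/link-up and LUN discovery messages
# cat /proc/scsi/scsi    list the SCSI devices the kernel actually attached
# echo "- - -" > /sys/class/scsi_host/host1/scan    force a rescan on a 2.6 kernel
If the driver loads but dmesg never shows the loop coming online, the problem sits below the SCSI layer (cabling, LUN mapping on the array, or HBA firmware) rather than in the 32-bit userland.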

Similar Messages

  • StorEdge 3510 FC Array Transport/Hard Errors

    We have a StorEdge 3510 connected to a V440 running Solaris 9. /var/adm/messages shows no errors, but iostat -e shows hard and transport errors that increment continuously. If I access the controller on the 3510, I can see transport errors when I look at the Fibre Channel errors on the drive side. The system contains two controllers in redundant mode, although we only have one FC cable path to the HBA on the V440. I have replaced the fibre cable to the HBA and the GBIC, and swapped the controllers, none of which resolved the issue. I have also had a case open with Sun for two months and they have no idea.
    What could be causing these errors?
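    For reference, a hedged way to pin down which device those counters belong to, using standard Solaris iostat options:
    # iostat -En    per-device soft/hard/transport error totals, with vendor and serial info
    # iostat -xne 5    watch the error columns increment alongside live I/O
    Since the cable, GBIC and controllers have all been swapped, counters that keep climbing would point at the HBA itself or at a marginal drive on the loop.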


  • Sun StorEdge 3510

    Dears,
    Good day,
    I need your support to guide me in configuring a Sun StorEdge 3510. I have 24 disks and a single controller, connected to one host. I want to configure it for the largest possible storage: RAID 5 across 11 disks in the first enclosure and the same in the second enclosure, with the two remaining disks as spares, one in each enclosure.
    Can you guide me through this?
    Thanks a lot.

    Wow, it took me about 5 hours to set mine up from scratch, and I'm a pretty strong storage guy. Here's an overview of what I did; most of the info is in the 3510 documentation.
    Step 1 is connectivity: get your host to see the 3510 (FC).
    Step 2 - use the CLI utilities to set up the IP address on the controller (in-band).
    Step 3 - ssh in, set up the RAID and the presentation of the LUNs.
    Step 4 - rescan on the hosts to see the new LUNs; a Solaris sketch follows below.
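    For Step 4 on a Solaris host, a minimal sketch, assuming the LUNs are already mapped on the array:
    # devfsadm -C    rebuild the /dev device links, pruning stale entries
    # cfgadm -al    check that the FC targets show up as configured
    # format    the new LUNs should now be listed and can be labeled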

  • StorEdge 3510 failover configuration with 1 host

    Hi people.
    I'm new to StorEdge configuration.
    I have a Sun StorEdge 3510: two controllers, each with 2 host FC ports and 1 drive port.
    I want to do a simple configuration: connect one host to the StorEdge with failover.
    Is it correct to connect the two controllers using the drive ports, since I want failover?
    Is it possible to use only one single-port PCI FC adapter in the machine?
    I will connect the machine to the StorEdge using one fibre cable on host port FC1, and to do the failover I will connect the two controllers using drive ports FC3 and FC2. IS THIS CORRECT?
    My problem is how to connect the cables and how to configure the StorEdge. I'm already connected to the COM port.
    Another thing: I have an amber light on the first controller. Is this a hardware problem?
    And what is the best configuration to use with the 3510: one host, with failover?
    Thank you. I need this help now, please.

    Isn't it wonderful when people respond?
    I, too, am running into the same scenario. We have a single 3510FC connected to a single host through two controller cards. The drives are configured as a single logical drive under RAID-5. We want this configuration multi-pathed for redundancy, not throughput, even though controller cards NEVER fail. [sarcasm] We will be using Veritas VxVM DMP for redundancy.
    Unfortunately, I can only ever see the logical drive/LUN on one controller. The main connection is channel 0 of the primary controller. Whenever I try to configure it to simultaneously be on channel 5 of the secondary controller, the 3510 won't let me do it. I can't figure out how to get the LUN assigned to two host channels when one is on the primary controller and one is on the secondary controller.
    I find this absurd. Controllers fail; that's all there is to it. Yet the design of the 3510 (and the 3310 as well) seems to fight like hell whenever you want to spread the logical drives across physical controllers.
    What's the solution to this one, guys?
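    Since the plan is Veritas DMP, a quick hedged check that both paths are actually visible once the mapping is sorted, using standard VxVM commands (the controller name is a placeholder):
    # vxdmpadm listctlr all    list the controllers DMP has discovered
    # vxdmpadm getsubpaths ctlr=<controller>    show the subpaths behind one controller
    DMP can only build a failover pair if the same LUN is presented on both paths, which is exactly the mapping the 3510 is refusing above.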

  • StorEdge 3510 changed logical path after installation

    Hi everyone!
    I have an E450 with 2 HBAs connected to a StorEdge 3510, and I installed Solaris 10, but after that my StorEdge 3510's LUNs changed their original logical paths. For example:
    Before installation: c4t600601608ED80800C109FE8C4652DB11d0s2
    After installation: c4t45d0
    Does anybody know why I now see this differently???
    I need the old name!!!
    THANKS!!

    Hi.
    fcinfo hba-port    speed information etc.
    luxadm -e port    show the available FC ports and their status.
    luxadm -e dump_map <controller address from the previous command>
    cfgadm -al
    cfgadm -al -o show_FCP_dev
    Try reconnecting the array.
    Are you using direct connections or an FC switch?
    I'm not sure, but sometimes /kernel/drv/fp.conf or /kernel/drv/scsi_vhci.conf can be damaged.
    Check them:
    # grep -v "#" fp.conf
    name="fp" class="fibre-channel" port=0;
    name="fp" class="fibre-channel" port=1;
    scsi-binding-set="fcp";
    load-ulp-list="1","fcp";
    ddi-forceattach=1;
    mpxio-disable="yes";
    # grep -v "#" scsi_vhci.conf
    name="scsi_vhci" class="root";
    load-balance="round-robin";
    auto-failback="enable";
    Regards.
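    A note on the fp.conf above: the long c4t600601608...d0s2 name looks like an MPxIO (scsi_vhci) multipath name, and mpxio-disable="yes" turns MPxIO off, which would explain the short per-port name after the reinstall. On Solaris 10, assuming you want the multipath names back, the supported route is stmsboot rather than hand-editing fp.conf:
    # stmsboot -e    enable MPxIO on the FC ports (updates fp.conf and prompts for a reboot)
    # stmsboot -L    after the reboot, list the non-STMS to STMS device name mappings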

  • How to configure StorEdge 3510 as a SAN?

    Hi all, just joined!
    We've got a StorEdge 3510 set up as DAS (private loop mode) connected to three servers. We've gone through hell and high water getting things configured (firmware upgrades were particularly troublesome), but now that we've got everything running, we're starting to think about future expansion. However, despite my experimentation, I can't work out how to set things up in a switched fabric.
    We have a test 3510 set up as point-to-point, connected to an Emulex 355 FC switch. In loop mode, the device shows up as a SCSI device, but I can't get Windows to see anything on the end of the HBA when it's in PTP. Half the time, the HBA won't even acknowledge a link.
    I can't work out where to start with this. The switch, admittedly, came from Apple with an Xserve RAID, but we've had the 3510 plugged into it in loop mode and visible, so I don't think there's any problem with Apple ROM-ing the switch.
    I've tried zoning the switch and Smart-Setting the ports to be switched fabrics, but nothing seems to make the StorEdge visible to the host. I initially thought the test box might be at fault, since it has a Dell-EMC FC HBA and it wouldn't surprise me if it's ROM'd to EMC boxes only, but it seems that another box with a generic QLogic card won't see the StorEdge either.
    Full tech specs of the system:
    -3510 dual-controller, firmware 4.23A, 12 x 146 GB 10K disks
    -Emulex 355 12-port FC SAN Switch
    -Dell PowerEdge 1850 with Emulex LP1050Ex-E PCI-E HBA
    -Dell PowerEdge 2850 with QLogic QLA210 PCI HBA
    -Windows Server 2003 x64
    Any advice or starting points would be massively appreciated!
    Thanks,
    Rob

    You are right to change the 3510 to point-to-point.
    From:
    sccli> sho host
    max-luns-per-id: 32
    queue-depth: 1024
    fibre-connection-mode: loop
    inband-management: enabled
    To:
    sccli> sho host
    max-luns-per-id: 32
    queue-depth: 1024
    fibre-connection-mode: point-to-point
    inband-management: enabled
    The 3510 is now configured to work in a SAN.
    In loop mode you will see the enclosure as an ses device. In point-to-point you won't; you will just see the available LUNs.
    In P2P you should see all your configured logical volumes or disks, with their WWN names, on the host side.
    From:
    sccli> show logical
    LD LD-ID Size Assigned Type Disks Spare Failed Status
    ld0 5E714957 545.91GB Primary RAID5 5 2 0 Good
    Write-Policy Default StripeSize 128KB
    ld1 1F4ABC1D 545.91GB Secondary RAID5 5 2 0 Good
    Write-Policy Default StripeSize 128KB
    with
    Serial Number: 07FE18
    You should see something like (these names are taken from a Solaris luxadm probe)
    Node WWN:206000c0ff07fe18 Device Type:Disk device
    Logical Path:/dev/rdsk/c7t600C0FF00000000007FE181F4ABC1D00d0s2
    Node WWN:206000c0ff07fe18 Device Type:Disk device
    Logical Path:/dev/rdsk/c7t600C0FF00000000007FE185E71495700d0s2
    I don't have any clue how this would show up under Windows. Putting a fabric on your switch should make things tidier, but shouldn't fundamentally change anything.
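    To verify the point-to-point result from a Solaris host, the probe the names above came from:
    # luxadm probe    list FC disks and enclosures, with node WWNs and logical paths
    # luxadm display /dev/rdsk/c7t600C0FF00000000007FE181F4ABC1D00d0s2    per-LUN detail
    In loop mode the same probe would also list the ses enclosure entries mentioned above.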

  • Performance to expect from a StorEdge 3510

    Hi all,
    Really need some advice regarding a 3510FC array. We currently have a dual-controller setup with a RAID array + JBOD expansion. Each array has 10 drives, which we've set up as two RAID 5 LDs of 8 data + 1 parity, plus 1 hot spare. The array firmware is 4.15.
    Performance isn't good. We can't seem to get anything better than 65 MB/s (based upon "mkfile 2g testfile"). Whilst I appreciate this is far from a scientific test, our HDS AMS200 performs this little task in 18 seconds, as opposed to 32 seconds on the 3510.
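    (For scale: mkfile 2g writes 2 GiB, so roughly 2048 MB / 32 s ≈ 64 MB/s on the 3510 versus 2048 MB / 18 s ≈ 114 MB/s on the AMS200, consistent with the 65 MB/s figure.)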
    In the real world, Oracle is running like a dog on it :-(
    I'm hoping that it's just our RAID 5 setup that is at fault, but before I take on the pain of backing up data, reconfiguring LDs and restoring data, I want to know whether the 3510 really should be performing better.
    If I re-jig the array with RAID 1 LUNs, perhaps with sequential optimisation for the redo logs, should I be able to get some decent throughput from this array?
    All help appreciated!

    Here's our config ...
    Sun StorEdge 3000 Family CLI
    Copyright 2002-2005 Dot Hill Systems Corporation.
    All rights reserved. Use is subject to license terms.
    sccli version 2.3.0
    built 2006.03.15.09.49
    build 12 for solaris-sparc
    * inquiry-data
    Vendor: SUN
    Product: StorEdge 3510
    Revision: 415F
    Peripheral Device Type: 0x0
    NVRAM Defaults: 415F 3510 S470F
    Bootrecord version: 1.31H
    Serial Number: 0A7805
    Page 80 Serial Number: 0A78055F4B554305
    Page 83 Logical Unit Device ID: 600C0FF0000000000A78055F4B554305
    Page 83 Target Device ID: 206000C0FF0A7805
    IP Address:
    Page D0 Fibre Channel Address: 05 (id 255)
    Page D0 Node Name: 206000C0FF0A7805
    Page D0 Port Name: 256000C0FFCA7805
    Ethernet Address: 00:C0:FF:0A:78:05
    Device Type: Primary
    unique-identifier: A7805
    controller-name: "R2 3510"
    * network-parameters
    ip-address:
    netmask: 255.255.255.224
    gateway:
    mode: static
    * host-parameters
    max-luns-per-id: 32
    queue-depth: 1024
    fibre-connection-mode: point-to-point
    inband-management: enabled
    * drive-parameters
    spin-up: disabled
    disk-access-delay: 15s
    scsi-io-timeout: 30s
    queue-depth: 32
    polling-interval: 30s
    enclosure-polling-interval: 30s
    auto-detect-swap-interval: disabled
    smart: detect-clone-replace
    auto-global-spare: disabled
    * redundant-controller-configuration
    Redundant Controller Configuration: primary
    Cache Synchronization: enabled
    Host Channel Failover Mode: shared
    Local/Remote Redundant Mode: local
    Write-Through Data Synchronization: disabled
    Secondary RS-232 Port Status: disabled
    Communication Channel Type: Fibre
    * redundancy-mode
    Primary controller serial number: 8104488
    Primary controller location: Lower
    Redundancy mode: Active-Active
    Redundancy status: Enabled
    Secondary controller serial number: 8103724
    * cache-parameters
    mode: write-back
    optimization: random
    sync-period: disabled
    current-global-write-policy: write-back
    * RS232-configuration
    COM1 speed: 9600bps
    * channels
    Ch Type Media Speed Width PID / SID
    0 Host FC(P) 2G Serial 40 / N/A
    1 Host FC(P) N/A Serial N/A / 42
    2 DRV+RCC FC(L) 2G Serial 14 / 15
    3 DRV+RCC FC(L) 2G Serial 14 / 15
    4 Host FC(P) 2G Serial 44 / N/A
    5 Host FC(P) N/A Serial N/A / 46
    6 Host LAN N/A Serial N/A / N/A
    * disks
    Ch Id Size Speed LD Status IDs Rev
    2(3) 0 279.40GB 200MB ld1 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 38528TGW
    WWNN 2000001862363F9C
    2(3) 1 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 0751KPTT
    WWNN 20000014C38511B6
    2(3) 2 279.40GB 200MB GLOBAL STAND-BY SEAGATE ST330000FSUN300G 055A
    S/N 38528FH8
    WWNN 2000001862364997
    2(3) 3 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 385263B5
    WWNN 20000018623649A3
    2(3) 4 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 38528TS7
    WWNN 20000018623647DD
    2(3) 5 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 3852GAAM
    WWNN 20000018623644A8
    2(3) 6 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 38525B67
    WWNN 200000186236463E
    2(3) 7 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 385293LK
    WWNN 20000018623647D4
    2(3) 8 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 38528E2E
    WWNN 2000001862364A4A
    2(3) 9 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 3852FPF2
    WWNN 2000001862364137
    2(3) 10 279.40GB 200MB ld0 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 38525BZE
    WWNN 200000186236438E
    2(3) 11 279.40GB 200MB ld1 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 3852F9C1
    WWNN 2000001862364519
    2(3) 12 279.40GB 200MB ld1 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 22526XFP
    WWNN 20000014C3D8B5CD
    2(3) 13 279.40GB 200MB ld1 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 225280ZR
    WWNN 20000014C3D8B9F4
    2(3) 14 279.40GB 200MB ld1 ONLINE SEAGATE ST330000FSUN300G 055A
    S/N 38527GP1
    WWNN 2000001862364001
    2(3) 15 279.40GB 200MB ld1 ONLINE FUJITSU MAW3300FCSUN300G 1303
    S/N 000629D01NGJ
    WWNN 500000E0126502D0
    2(3) 16 279.40GB 200MB NONE FRMT FUJITSU MAW3300FCSUN300G 1303
    S/N 000629D01NG9
    WWNN 500000E0126501A0
    2(3) 17 279.40GB 200MB ld1 ONLINE FUJITSU MAW3300FCSUN300G 1303
    S/N 000629D01NG0
    WWNN 500000E0126500F0
    2(3) 18 279.40GB 200MB ld1 ONLINE FUJITSU MAW3300FCSUN300G 1303
    S/N 000629D01NGH
    WWNN 500000E0126502B0
    2(3) 19 279.40GB 200MB ld1 ONLINE FUJITSU MAW3300FCSUN300G 1303
    S/N 000629D01NGE
    WWNN 500000E012650210
    * logical-drives
    LD LD-ID Size Assigned Type Disks Spare Failed Status
    ld0 5F4B5543 2.18TB Primary RAID5 9 1 0 Good
    Write-Policy Default StripeSize 32KB
    ld1 07190DF2 2.18TB Secondary RAID5 9 1 0 Good
    Write-Policy Default StripeSize 32KB
    * logical-volumes
    * partitions
    LD/LV ID-Partition Size
    ld0-00 5F4B5543-00 5.00GB
    ld0-01 5F4B5543-01 5.00GB
    ld0-02 5F4B5543-02 5.00GB
    ld0-03 5F4B5543-03 45.00GB
    ld0-04 5F4B5543-04 45.00GB
    ld0-05 5F4B5543-05 300.00GB
    ld0-06 5F4B5543-06 1.79TB
    ld1-00 07190DF2-00 5.00GB
    ld1-01 07190DF2-01 5.00GB
    ld1-02 07190DF2-02 5.00GB
    ld1-03 07190DF2-03 515.00GB
    ld1-04 07190DF2-04 500.00GB
    ld1-05 07190DF2-05 180.00GB
    ld1-06 07190DF2-06 1023.17GB
    * lun-maps
    Ch Tgt LUN ld/lv ID-Partition Assigned Filter Map
    0 40 0 ld0 5F4B5543-00 Primary 210000E08B929600 {tvlp-node-n01p01}
    0 40 0 ld0 5F4B5543-00 Primary 210000E08B92B831 {tvlp-node-n02p01}
    0 40 1 ld0 5F4B5543-01 Primary 210000E08B927C31 {tvlp-node-n03p01}
    0 40 1 ld0 5F4B5543-01 Primary 210000E08B925A02 {tvlp-node-n04p01}
    0 40 2 ld0 5F4B5543-02 Primary 210000E08B927C31 {tvlp-node-n03p01}
    0 40 2 ld0 5F4B5543-02 Primary 210000E08B925A02 {tvlp-node-n04p01}
    0 40 3 ld0 5F4B5543-03 Primary 210000E08B927C31 {tvlp-node-n03p01}
    0 40 3 ld0 5F4B5543-03 Primary 210000E08B925A02 {tvlp-node-n04p01}
    0 40 4 ld0 5F4B5543-04 Primary 210000E08B927C31 {tvlp-node-n03p01}
    0 40 4 ld0 5F4B5543-04 Primary 210000E08B925A02 {tvlp-node-n04p01}
    0 40 5 ld0 5F4B5543-05 Primary 210000E08B927C31 {tvlp-node-n03p01}
    0 40 5 ld0 5F4B5543-05 Primary 210000E08B925A02 {tvlp-node-n04p01}
    1 42 0 ld1 07190DF2-00 Secondary 210000E08B920100 {tvlp-node-n05p01}
    1 42 1 ld1 07190DF2-01 Secondary 210000E08B920100 {tvlp-node-n05p01}
    1 42 2 ld1 07190DF2-02 Secondary 210000E08B920100 {tvlp-node-n05p01}
    1 42 3 ld1 07190DF2-03 Secondary 210000E08B920100 {tvlp-node-n05p01}
    1 42 4 ld1 07190DF2-04 Secondary 210000E08B920100 {tvlp-node-n05p01}
    1 42 5 ld1 07190DF2-05 Secondary 210000E08B920100 {tvlp-node-n05p01}
    4 44 0 ld0 5F4B5543-00 Primary 210000E08B92BC01 {tvlp-node-n01p02}
    4 44 0 ld0 5F4B5543-00 Primary 210000E08B92DB02 {tvlp-node-n02p02}
    4 44 1 ld0 5F4B5543-01 Primary 210000E08B91FDFF {tvlp-node-n04p02}
    4 44 1 ld0 5F4B5543-01 Primary 210000E08B924A03 {tvlp-node-n03p02}
    4 44 2 ld0 5F4B5543-02 Primary 210000E08B924A03 {tvlp-node-n03p02}
    4 44 2 ld0 5F4B5543-02 Primary 210000E08B91FDFF {tvlp-node-n04p02}
    4 44 3 ld0 5F4B5543-03 Primary 210000E08B924A03 {tvlp-node-n03p02}
    4 44 3 ld0 5F4B5543-03 Primary 210000E08B91FDFF {tvlp-node-n04p02}
    4 44 4 ld0 5F4B5543-04 Primary 210000E08B924A03 {tvlp-node-n03p02}
    4 44 4 ld0 5F4B5543-04 Primary 210000E08B91FDFF {tvlp-node-n04p02}
    4 44 5 ld0 5F4B5543-05 Primary 210000E08B924A03 {tvlp-node-n03p02}
    4 44 5 ld0 5F4B5543-05 Primary 210000E08B91FDFF {tvlp-node-n04p02}
    5 46 0 ld1 07190DF2-00 Secondary 210000E08B91EDFF {tvlp-node-n05p02}
    5 46 1 ld1 07190DF2-01 Secondary 210000E08B91EDFF {tvlp-node-n05p02}
    5 46 2 ld1 07190DF2-02 Secondary 210000E08B91EDFF {tvlp-node-n05p02}
    5 46 3 ld1 07190DF2-03 Secondary 210000E08B91EDFF {tvlp-node-n05p02}
    5 46 4 ld1 07190DF2-04 Secondary 210000E08B91EDFF {tvlp-node-n05p02}
    5 46 5 ld1 07190DF2-05 Secondary 210000E08B91EDFF {tvlp-node-n05p02}
    * protocol
    Identifier Status     Port Parameters
    telnet enabled 23 inactivity-timeout=disabled
    http enabled 80 n/a
    https disabled n/a n/a
    ftp enabled 21 n/a
    ssh enabled 22 n/a
    priagent enabled 58632 n/a
    snmp disabled n/a n/a
    dhcp enabled 68 n/a
    ping enabled n/a n/a
    * auto-write-through-trigger
    controller-failure: enabled
    battery-backup-failure: enabled
    ups-ac-power-loss: disabled
    power-supply-failure: enabled
    fan-failure: enabled
    temperature-exceeded-delay: 30min
    * peripheral-device-status
    Item Value status
    CPU Temp Sensor(primary) 52.50C within safety range
    Board1 Temp Sensor(primary) 55.50C within safety range
    Board2 Temp Sensor(primary) 64.00C within safety range
    +3.3V Value(primary) 3.352V within safety range
    +5V Value(primary) 5.019V within safety range
    +12V Value(primary) 12.199V within safety range
    Battery-Backup Battery(primary) 00 Hardware:OK
    CPU Temp Sensor(secondary) 52.50C within safety range
    Board1 Temp Sensor(secondary) 57.50C within safety range
    Board2 Temp Sensor(secondary) 62.00C within safety range
    +3.3V Value(secondary) 3.384V within safety range
    +5V Value(secondary) 5.099V within safety range
    +12V Value(secondary) 12.381V within safety range
    Battery-Backup Battery(secondary) 00 Hardware:OK
    * enclosure-status
    Ch Id Chassis Vendor/Product ID Rev PLD WWNN WWPN
    2 124 0A7805 SUN StorEdge 3510F A 1080 1000 204000C0FF0A7805 214000C0FF0A7805
    Topology: loop(a) Status: OK
    3 124 0A7805 SUN StorEdge 3510F A 1080 1000 204000C0FF0A7805 224000C0FF0A7805
    Topology: loop(b) Status: OK
    Enclosure Component Status:
    Type Unit Status FRU P/N FRU S/N Add'l Data
    Fan 0 OK 371-0108 GK0XC2 --
    Fan 1 OK 371-0108 GK0XC2 --
    Fan 2 OK 371-0108 GK0XC5 --
    Fan 3 OK 371-0108 GK0XC5 --
    PS 0 OK 371-0108 GK0XC2 --
    PS 1 OK 371-0108 GK0XC5 --
    Temp 0 OK 371-0531 0A7805 temp=30
    Temp 1 OK 371-0531 0A7805 temp=28
    Temp 2 OK 371-0531 0A7805 temp=31
    Temp 3 OK 371-0531 0A7805 temp=30
    Temp 4 OK 371-0531 0A7805 temp=31
    Temp 5 OK 371-0531 0A7805 temp=30
    Temp 6 OK 371-0532 HL12LM temp=37
    Temp 7 OK 371-0532 HL12LM temp=41
    Temp 8 OK 371-0532 HL12QD temp=37
    Temp 9 OK 371-0532 HL12QD temp=38
    Temp 10 OK 371-0108 GK0XC2 temp=30
    Temp 11 OK 371-0108 GK0XC5 temp=25
    Voltage 0 OK 371-0108 GK0XC2 voltage=5.110V
    Voltage 1 OK 371-0108 GK0XC2 voltage=11.750V
    Voltage 2 OK 371-0108 GK0XC5 voltage=5.020V
    Voltage 3 OK 371-0108 GK0XC5 voltage=11.520V
    Voltage 4 OK 371-0532 HL12LM voltage=2.480V
    Voltage 5 OK 371-0532 HL12LM voltage=3.250V
    Voltage 6 OK 371-0532 HL12LM voltage=5.000V
    Voltage 7 OK 371-0532 HL12LM voltage=12.120V
    Voltage 8 OK 371-0532 HL12QD voltage=2.500V
    Voltage 9 OK 371-0532 HL12QD voltage=3.300V
    Voltage 10 OK 371-0532 HL12QD voltage=5.050V
    Voltage 11 OK 371-0532 HL12QD voltage=12.240V
    DiskSlot 0 OK 371-0531 0A7805 addr=0,led=off
    DiskSlot 1 Absent 371-0531 0A7805 addr=1,led=off
    DiskSlot 2 Absent 371-0531 0A7805 addr=2,led=off
    DiskSlot 3 OK 371-0531 0A7805 addr=3,led=off
    DiskSlot 4 OK 371-0531 0A7805 addr=4,led=off
    DiskSlot 5 OK 371-0531 0A7805 addr=5,led=off
    DiskSlot 6 OK 371-0531 0A7805 addr=6,led=off
    DiskSlot 7 OK 371-0531 0A7805 addr=7,led=off
    DiskSlot 8 OK 371-0531 0A7805 addr=8,led=off
    DiskSlot 9 OK 371-0531 0A7805 addr=9,led=off
    DiskSlot 10 OK 371-0531 0A7805 addr=10,led=off
    DiskSlot 11 OK 371-0531 0A7805 addr=11,led=off
    * SES
    Ch Id Chassis Vendor/Product ID Rev PLD WWNN WWPN
    2 124 0A7805 SUN StorEdge 3510F A 1080 1000 204000C0FF0A7805 214000C0FF0A7805
    Topology: loop(a)
    3 124 0A7805 SUN StorEdge 3510F A 1080 1000 204000C0FF0A7805 224000C0FF0A7805
    Topology: loop(b)
    * port-WWNs
    Ch Id WWPN
    0 40 216000C0FF8A7805
    1 42 226000C0FFAA7805
    4 44 256000C0FFCA7805
    5 46 266000C0FFEA7805
    * inter-controller-link
    inter-controller-link upper channel 0: connected
    inter-controller-link lower channel 0: connected
    inter-controller-link upper channel 1: connected
    inter-controller-link lower channel 1: connected
    inter-controller-link upper channel 4: connected
    inter-controller-link lower channel 4: connected
    inter-controller-link upper channel 5: connected
    inter-controller-link lower channel 5: connected
    * battery-status
    Upper Battery Type: 1
    Upper Battery Manufacturing Date: Fri Jun 16 00:00:00 2006
    Upper Battery Placed In Service: Tue Aug 8 09:27:53 2006
    Upper Battery Expiration Date: Thu Aug 7 09:27:53 2008
    Upper Battery Expiration Status: OK
    Lower Battery Type: 1
    Lower Battery Manufacturing Date: Fri Jun 16 00:00:00 2006
    Lower Battery Placed In Service: Tue Aug 8 09:27:52 2006
    Lower Battery Expiration Date: Thu Aug 7 09:27:52 2008
    Lower Battery Expiration Status: OK
    Upper Battery Hardware Status: OK
    Lower Battery Hardware Status: OK
    * sata-router
    no sata routers found
    * sata-mux
    0 mux boards found
    * host-wwn-names
    Host-ID/WWN Name
    210000E08B91EDFF tvlp-node-n05p02
    210000E08B91FDFF tvlp-node-n04p02
    210000E08B925A02 tvlp-node-n04p01
    210000E08B927C31 tvlp-node-n03p01
    210000E08B924A03 tvlp-node-n03p02
    210000E08B92DB02 tvlp-node-n02p02
    210000E08B92BC01 tvlp-node-n01p02
    210000E08B92B831 tvlp-node-n02p01
    210000E08B920100 tvlp-node-n05p01
    210000E08B929600 tvlp-node-n01p01
    * FRUs
    7 FRUs found in chassis SN#0A7805 at ch 2 id 124
    Name: FC_CHASSIS_BKPLN
    Description: SE3510 FC Chassis/backplane
    Part Number: 371-0531
    Serial Number: 0A7805
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Sun Jul 16 05:25:36 2006
    Manufacturing Location: Suzhou,China
    Manufacturer JEDEC ID: 0x0301
    FRU Location: FC MIDPLANE SLOT
    Chassis Serial Number: 0A7805
    FRU Status: OK
    Name: FC_RAID_IOM
    Description: SE3510 I/O w/SES RAID FC 2U
    Part Number: 371-0532
    Serial Number: HL12LM
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Wed Jul 5 12:11:49 2006
    Manufacturing Location: Suzhou,China
    Manufacturer JEDEC ID: 0x0301
    FRU Location: UPPER FC RAID IOM SLOT
    Chassis Serial Number: 0A7805
    FRU Status: OK
    Name: BATTERY_BOARD
    Description: SE351X Hot Swap Battery Module
    Part Number: 371-0539
    Serial Number: GP15BJ
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Thu Jul 6 03:25:30 2006
    Manufacturing Location: Suzhou,China
    Manufacturer JEDEC ID: 0x0301
    FRU Location: UPPER BATTERY BOARD SLOT
    Chassis Serial Number: 0A7805
    FRU Status: OK
    Name: AC_POWER_SUPPLY
    Description: SE3XXX AC PWR SUPPLY/FAN, 2U
    Part Number: 371-0108
    Serial Number: GK0XC5
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Mon May 22 08:45:16 2006
    Manufacturing Location: Irvine California, USA
    Manufacturer JEDEC ID: 0x048F
    FRU Location: RIGHT AC PSU SLOT #1 (RIGHT)
    Chassis Serial Number: 0A7805
    FRU Status: OK
    Name: AC_POWER_SUPPLY
    Description: SE3XXX AC PWR SUPPLY/FAN, 2U
    Part Number: 371-0108
    Serial Number: GK0XC2
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Mon May 22 08:53:42 2006
    Manufacturing Location: Irvine California, USA
    Manufacturer JEDEC ID: 0x048F
    FRU Location: AC PSU SLOT #0 (LEFT)
    Chassis Serial Number: 0A7805
    FRU Status: OK
    Name: FC_RAID_IOM
    Description: SE3510 I/O w/SES RAID FC 2U
    Part Number: 371-0532
    Serial Number: HL12QD
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Wed Jul 5 15:16:42 2006
    Manufacturing Location: Suzhou,China
    Manufacturer JEDEC ID: 0x0301
    FRU Location: LOWER FC RAID IOM SLOT
    Chassis Serial Number: 0A7805
    FRU Status: OK
    Name: BATTERY_BOARD
    Description: SE351X Hot Swap Battery Module
    Part Number: 371-0539
    Serial Number: GP15GW
    Revision: 01
    Initial Hardware Dash Level: 01
    FRU Shortname:
    Manufacturing Date: Thu Jul 6 05:26:03 2006
    Manufacturing Location: Suzhou,China
    Manufacturer JEDEC ID: 0x0301
    FRU Location: LOWER BATTERY BOARD SLOT
    Chassis Serial Number: 0A7805
    FRU Status: OK
    * access-mode
    access-mode: inband
    * controller-date
    Boot time : Fri Jan 5 10:20:15 2007
    Current time : Thu Feb 8 10:25:36 2007
    Time Zone : GMT
    * disk-array
    init-verify: disabled
    rebuild-verify: disabled
    normal-verify: disabled
    rebuild-priority: normal

  • Sun StorEdge 3510 ADD new disks

    I have a working StorEdge 3510 array connected to a SunFire V490.
    What I need to do is ADD another 4 disks and split them into a RAID 5.
    For that reason I'm going to connect directly to the array.
    Can anyone tell me what steps I should take to attach these disks and make them visible under Solaris 10?
    Let me say again: the array is working with 7 other disks and it's fine. All of them are visible in Solaris. I NEED TO ADD another 4.
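    A hedged sketch of the array-side check, assuming the in-band sccli tools are installed on the V490 (the device path is illustrative):
    # sccli /dev/rdsk/c4t40d0s2    attach to the array in-band
    sccli> show disks    the four new drives should appear with no LD assigned
    sccli> show logical-drives    the existing LDs stay untouched; the new RAID 5 is created here (see the 3000 Family CLI manual for the create logical-drive syntax)
    After creating and mapping the new logical drive, rescan on the host as in the setup overview quoted earlier (devfsadm -C, then format).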


  • SUN StorEdge 3510 MIBs

    Hi all,
    Does anybody know where to get MIB files for Sun storage devices (in particular, the StorEdge 3510)? My company has a few 3510s, and since I'm the developer and maintainer of a "self-made" monitoring system for all our network, power, storage and other devices and servers, I want to include support for monitoring Sun storage devices in it.
    Thanks, Oleg.

    Hello,
    a few days ago a subscriber to the Sunmanagers list lost his "patience" and posted a checklist of the resources to consult before posting on the Sunmanagers list. This was due to the increasing number of inappropriate posts.
    The Sunmanagers list is for time-critical problems.
    These forums are not official Sun support; all contributors are volunteers.
    I did a Google search (substitute your favourite search engine) with the query "MIB +StorEdge"; the third hit was a site that offered MIBs for free.
    It's really recommended to do some research before posting on these forums.
    Michael

  • StorEdge 3510 FC Array Configuration

    Hello,
    I'm running Solaris 8 on a Netra 20. I have 3 StorEdge 3510 FC arrays attached, with one controller in each array. Each has 12 x 146 GB drives, for a total of about 2 TB per array. As this is my first time dealing with this array, I need some advice. I configured my first array with one logical drive and 4 partitions, mapped the LUNs, and reset. (Oh, and I changed the optimisation from sequential to random.)
    I am now trying to configure the 2nd array, but it will only let me make a logical drive of 512 GB max. I've tried everything I could think of to overcome this. Any ideas? I was talking to someone today and they mentioned that Solaris will only recognize 1 TB. Is this true? If so, does anyone have any ideas on a better way to configure the arrays? They will be used for large Oracle databases.
    Thanks
    [email protected]

    I presume that by now you have sorted this problem, but in case not: 512 GB is the maximum logical drive size when the I/O optimisation is random.
    On the max disk size, I read somewhere that VxFS might help.
    I found your query when looking for help on directing StorEdge events to the syslogd on our central loghost, which I still haven't found out how to do!?
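    On the 1 TB question: the traditional SMI/VTOC disk label that Solaris 8 uses tops out at 1 TB per LUN, so several smaller logical drives per array is the practical layout here in any case.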

  • StorEdge 3510 disk array & SunMC

    Hi
    I have a question regarding the Sun StorEdge 3510 and SunMC. I want notifications about HDD problems and cache or controller faults. I know that I can install StorADE and send messages to SunMC. Is there any other way to get 3510 notifications into SunMC without using StorADE?
    Regards
    Paul

    Hello,
    You can use the following method to monitor the 3510 RAID array.
    Install these packages on the server connected to the 3510:
    system SUNWsccli Sun StorEdge(tm) 3000 Family CLI
    system SUNWscsa Sun StorEdge(tm) Diagnostic Reporter daemon
    system SUNWscsd Sun StorEdge(tm) Configuration Service Agent
    application SUNWscsu Sun StorEdge(tm) Configuration Service Console
    system SUNWscui Sun StorEdge(tm) Diagnostic Reporter console
    This provides the ssconsole binary. Start it and configure your 3510 RAID controller to send alarms to one Solaris host.
    The exact procedure (account creation, etc.) should be documented somewhere on docs.sun.com.
    Now, each time an event occurs on the 3510, a log entry containing SUNWscsd will appear in /var/adm/messages:
    /var/adm/messages.0:Jun 26 15:48:11 PNMSSS082 SUNWscsdMonitor[522]: [ID 163373 daemon.error] [SUNWscsd 0x030B2007:0x00FFFF00 Informational] <rctrl2041> Logical Drive 0(ID=6327904A, RAID 5), Rebuild Completed. Informational message. Replace defective drive with new drive. (Mon Jun 26 15:22:48 2006) {Unique ID#: 07e60a}
    Use the SunMC agent's log-file scanning to trap entries matching the string "SUNWscsd".
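    A hedged install sketch; the media path is illustrative, and the package names are the ones listed above:
    # pkgadd -d /cdrom/cdrom0/product/solaris SUNWsccli SUNWscsa SUNWscsd SUNWscsu SUNWscui
    # grep SUNWscsd /var/adm/messages    confirm events are arriving before wiring up SunMC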
    Hope this will help.
    Regards,
    Damien

  • StorEdge 3510: can't newfs on Solaris 10

    We have Solaris 10 with ssconsole and all relevant drivers and software installed. Using ssconsole I created three logical drives. format sees them and I can successfully repartition all three disks:
    AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e010d77c01,0
    1. c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w500000e010d828d1,0
    2. c2t42d0 <SUN-StorEdge3510-327R cyl 4391 alt 2 hd 255 sec 255>
    /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w226000c0ffa7fe9d,0
    3. c4t40d0 <SUN-StorEdge3510-327R cyl 6588 alt 2 hd 255 sec 255>
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w216000c0ff87fe9d,0
    4. c4t40d1 <SUN-StorEdge3510-327R cyl 6588 alt 2 hd 255 sec 255>
    /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w216000c0ff87fe9d,1
    When I do newfs on any of the new disk partitions I get this error:
    bash-3.00# newfs /dev/rdsk/c2t42d0s0
    newfs: construct a new file system /dev/rdsk/c2t42d0s0: (y/n)? y
    There is no block size that can support this disk
    Any help?

    Upgrading the 3510 firmware to 4.x may help with this issue. I can't see what your logical drive size is, but I suspect that it is too big for that drive geometry. The 4.x firmware reduces the geometry so that newfs doesn't complain.
    With Solaris 9, newfs complains but gives you some hints as to how to get around the limitation. When I switched to 4.11 (and 4.13), newfs stopped complaining.

  • 2nd StorEdge 3510 not visible to OS (Solaris 9)

    Added a 2nd 3510 array via daisy chain to an existing 3510. The first one has been in operation; now, after adding the second one, the OS does not see the new LUNs. One logical drive of 4.2 TB with 5 partitions, each under 900 GB. Thought that maybe a reboot -- -r might fix it, but no. I mapped the LUNs to each partition. I read about changing the geometry for logical drives larger than 253 GB; won't that change the geometry on the existing 3510 disks? Any advice is much appreciated.

    You said you installed the necessary "san packages, san patches". Are these the packages found in "Sun StorEdge [tm] 3000 Family Storage Products--Related Software V2.x"? I have worked with 3310s, and these are not SAN but direct-attached, and they need the above drivers to work.
    Cheers,
    Arty

  • Sun StorEdge 3510 FC - NVRAM CRC ERROR

    Hi,
    I installed the latest patch for this issue on the 3510 FC StorEdge: 113723-20. Still the problem persists. How can I resolve this issue?
    Is there any additional patch required for this issue?
    While rebooting the StorEdge I keep getting an error message like "NVRAM CRC ERROR - nvram must be reinitialized - replace controller", followed by a prompt to continue.
    Regards,
    Senthilkumar

    Hi,
    The patch README seems to suggest this is a firmware update; have you applied the new firmware to the array? Have you checked whether the host FC ports also have a firmware update available?
    Stuart.
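    To confirm what the controllers are actually running after patching, a quick hedged check with the 3000 Family CLI (sccli):
    sccli> show inquiry-data    the Revision field is the controller firmware level
    sccli> show redundancy-mode    verify both controllers are up before re-testing the reboot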

  • Error in StorEdge 3510 connected to V880 Server

    Hi Friends
    My V880 server's /var/adm/messages file is showing the errors below:
    Sep 21 08:33:41 mum1pp1-a-fixed SUNW,UltraSPARC-III: [ID 332531 kern.info] [AFT2] errID 0x00000709.621043d0 PA=0x000000c0.f9d6a9c0
    Sep 21 08:33:41 mum1pp1-a-fixed E$tag 0x00000181.f3400080 E$state_7 Exclusive
    Sep 21 08:33:41 mum1pp1-a-fixed SUNW,UltraSPARC-III: [ID 819380 kern.info] [AFT2] E$Data (0x00) 0x40000000.00000000 0x09000000.00000000 ECC 0x1c9 Bad Esynd=0x0cc
    Sep 21 08:33:41 mum1pp1-a-fixed SUNW,UltraSPARC-III: [ID 895151 kern.info] [AFT2] E$Data (0x10) 0x40000000.00000000 0x00000000.00000000 ECC 0x1c9
    Sep 21 08:33:41 mum1pp1-a-fixed SUNW,UltraSPARC-III: [ID 895151 kern.info] [AFT2] E$Data (0x20) 0x40000000.00000000 0x00000000.00000000 ECC 0x1c9
    Sep 21 08:33:41 mum1pp1-a-fixed SUNW,UltraSPARC-III: [ID 895151 kern.info] [AFT2] E$Data (0x30) 0x40000000.00000000 0x00000000.00000000 ECC 0x1c9
    Sep 21 08:33:41 mum1pp1-a-fixed SUNW,UltraSPARC-III: [ID 929717 kern.info] [AFT2] D$ data not available
    Sep 21 08:33:41 mum1pp1-a-fixed unix: [ID 836849 kern.notice]
    Sep 21 08:33:41 mum1pp1-a-fixed ^Mpanic[cpu4]/thread=30010c5d440:
    Sep 21 08:33:41 mum1pp1-a-fixed unix: [ID 144365 kern.notice] [AFT1] errID 0x00000709.621043d0 UE Error(s)
    Sep 21 08:33:41 mum1pp1-a-fixed See previous message(s) for details
    Sep 21 08:33:42 mum1pp1-a-fixed unix: [ID 100000 kern.notice]
    Sep 21 08:33:42 mum1pp1-a-fixed genunix: [ID 723222 kern.notice] 000002a100eb1510 SUNW,UltraSPARC-III:cpu_aflt_log+55c (2a100eb15ce, 101578a8, 10157880, 0, 2a100eb1758, 2a100eb161b)
    Sep 21 08:33:42 mum1pp1-a-fixed genunix: [ID 179002 kern.notice] %l0-3: 0000000010339b38 000002a100eb1818 0000000000000003 0000000000000010
    Sep 21 08:33:42 mum1pp1-a-fixed %l4-7: 0000000000010009 0000030010c59580 0000000000000000 0000000000000003
    Sep 21 08:33:42 mum1pp1-a-fixed genunix: [ID 723222 kern.notice] 000002a100eb1760 SUNW,UltraSPARC-III:cpu_deferred_error+4cc (400000000, 980c00000000, 1, 40100004232000cc, 2a100eb1ba0, 40100004232000cc)
    Sep 21 08:33:42 mum1pp1-a-fixed genunix: [ID 179002 kern.notice] %l0-3: 0000000000000001 000002a100eb1818 0000000000000000 0000000000000001
    Sep 21 08:33:42 mum1pp1-a-fixed %l4-7: 0000000010030208 000000c0f9d6a9c0 0000000000000000 0000000000000003
    Sep 21 08:33:42 mum1pp1-a-fixed unix: [ID 100000 kern.notice]
    Sep 21 08:33:42 mum1pp1-a-fixed genunix: [ID 672855 kern.notice] syncing file systems...
    Sep 21 08:33:43 mum1pp1-a-fixed genunix: [ID 733762 kern.notice] 482
    Sep 21 08:34:15 mum1pp1-a-fixed last message repeated 20 times
    Sep 21 08:34:16 mum1pp1-a-fixed genunix: [ID 622722 kern.notice] done (not all i/o completed)
    Sep 21 08:34:17 mum1pp1-a-fixed genunix: [ID 353387 kern.notice] dumping to /dev/dsk/c1t0d0s1, offset 65536
    Sep 21 08:34:57 mum1pp1-a-fixed genunix: [ID 409368 kern.notice] ^M100% done: 104068 pages dumped, compression ratio 5.36,
    Sep 21 08:34:57 mum1pp1-a-fixed genunix: [ID 851671 kern.notice] dump succeeded
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 540533 kern.notice] ^MSunOS Release 5.8 Version Generic_117350-26 64-bit
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 913632 kern.notice] Copyright 1983-2003 Sun Microsystems, Inc. All rights reserved.
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 678236 kern.info] Ethernet address = 0:3:ba:b:37:99
    Sep 21 08:35:51 mum1pp1-a-fixed swapgeneric: [ID 370176 kern.warning] WARNING: forceload of drv/SUNW,qlc failed
    Sep 21 08:35:51 mum1pp1-a-fixed unix: [ID 389951 kern.info] mem = 12582912K (0x300000000)
    Sep 21 08:35:51 mum1pp1-a-fixed unix: [ID 930857 kern.info] avail mem = 12348809216
    Sep 21 08:35:51 mum1pp1-a-fixed rootnex: [ID 466748 kern.info] root nexus = Sun Fire 880
    Sep 21 08:35:51 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] pcisch0 at root: SAFARI 0x8 0x700000
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 936769 kern.info] pcisch0 is /pci@8,700000
    Sep 21 08:35:51 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] pcisch1 at root: SAFARI 0x8 0x600000
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 936769 kern.info] pcisch1 is /pci@8,600000
    Sep 21 08:35:51 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] pcisch2 at root: SAFARI 0x9 0x700000
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 936769 kern.info] pcisch2 is /pci@9,700000
    Sep 21 08:35:51 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] pcisch3 at root: SAFARI 0x9 0x600000
    Sep 21 08:35:51 mum1pp1-a-fixed genunix: [ID 936769 kern.info] pcisch3 is /pci@9,600000
    Sep 21 08:35:51 mum1pp1-a-fixed qlc: [ID 171021 kern.info] Qlogic FCA Driver v20050209-1.40 (0)
    Sep 21 08:35:51 mum1pp1-a-fixed qlc: [ID 637753 kern.info] NOTICE: qlc(0): Firmware version 2.1.140
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 686697 kern.info] NOTICE: Qlogic qlc(0): Loop OFFLINE
    Sep 21 08:35:52 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: SUNW,qlc@2, qlc0
    Sep 21 08:35:52 mum1pp1-a-fixed genunix: [ID 936769 kern.info] qlc0 is /pci@8,600000/SUNW,qlc@2
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 171021 kern.info] Qlogic FCA Driver v20050209-1.40 (1)
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 637753 kern.info] NOTICE: qlc(1): Firmware version 3.2.110
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 686697 kern.info] NOTICE: Qlogic qlc(1): Loop OFFLINE
    Sep 21 08:35:52 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: SUNW,qlc@3, qlc1
    Sep 21 08:35:52 mum1pp1-a-fixed genunix: [ID 936769 kern.info] qlc1 is /pci@9,700000/SUNW,qlc@3
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 171021 kern.info] Qlogic FCA Driver v20050209-1.40 (2)
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 637753 kern.info] NOTICE: qlc(2): Firmware version 3.2.110
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 686697 kern.info] NOTICE: Qlogic qlc(2): Loop OFFLINE
    Sep 21 08:35:52 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: SUNW,qlc@4, qlc2
    Sep 21 08:35:52 mum1pp1-a-fixed genunix: [ID 936769 kern.info] qlc2 is /pci@9,700000/SUNW,qlc@4
    Sep 21 08:35:52 mum1pp1-a-fixed genunix: [ID 936769 kern.info] fp0 is /pci@8,600000/SUNW,qlc@2/fp@0,0
    Sep 21 08:35:52 mum1pp1-a-fixed genunix: [ID 936769 kern.info] fp1 is /pci@9,700000/SUNW,qlc@3/fp@0,0
    Sep 21 08:35:52 mum1pp1-a-fixed genunix: [ID 936769 kern.info] fp2 is /pci@9,700000/SUNW,qlc@4/fp@0,0
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 686697 kern.info] NOTICE: Qlogic qlc(1): Loop ONLINE
    Sep 21 08:35:52 mum1pp1-a-fixed qlc: [ID 686697 kern.info] NOTICE: Qlogic qlc(2): Loop ONLINE
    Sep 21 08:35:54 mum1pp1-a-fixed qlc: [ID 686697 kern.info] NOTICE: Qlogic qlc(0): Loop ONLINE
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd13 at fp2: name w256000c0ffd05e18,2, bus address 9e
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd13 is /pci@9,700000/SUNW,qlc@4/fp@0,0/ssd@w256000c0ffd05e18,2
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN-StorEdge3510-327P cyl 53138 alt 2 hd 127 sec 127>
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@9,700000/SUNW,qlc@4/fp@0,0/ssd@w256000c0ffd05e18,2 (ssd13) online
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd12 at fp1: name w226000c0ffa05e18,2, bus address a5
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd12 is /pci@9,700000/SUNW,qlc@3/fp@0,0/ssd@w226000c0ffa05e18,2
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN-StorEdge3510-327P cyl 53138 alt 2 hd 127 sec 127>
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@9,700000/SUNW,qlc@3/fp@0,0/ssd@w226000c0ffa05e18,2 (ssd12) online
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd2 at fp0: name w21000004cf99cee6,0, bus address e0
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd2 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99cee6,0
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99cee6,0 (ssd2) online
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd5 at fp0: name w21000004cf99d007,0, bus address ef
    Sep 21 08:35:54 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd5 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99d007,0
    Sep 21 08:35:54 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99d007,0 (ssd5) online
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd3 at fp0: name w21000004cf99cf71,0, bus address e8
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd3 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99cf71,0
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99cf71,0 (ssd3) online
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd4 at fp0: name w21000004cf99cdf5,0, bus address e1
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd4 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99cdf5,0
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99cdf5,0 (ssd4) online
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd1 at fp0: name w21000004cf99d05a,0, bus address e4
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd1 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99d05a,0
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99d05a,0 (ssd1) online
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ssd0 at fp0: name w21000004cf99d140,0, bus address e2
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ssd0 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99d140,0
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 365881 kern.info] <SUN36G cyl 24620 alt 2 hd 27 sec 107>
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 408114 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cf99d140,0 (ssd0) online
    Sep 21 08:35:55 mum1pp1-a-fixed scsi: [ID 365881 kern.info] /pci@8,700000/scsi@1 (glm0):
    Sep 21 08:35:55 mum1pp1-a-fixed Rev. 4 Symbios 53c875 found.
    Sep 21 08:35:55 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: scsi@1, glm0
    Sep 21 08:35:55 mum1pp1-a-fixed genunix: [ID 936769 kern.info] glm0 is /pci@8,700000/scsi@1
    Sep 21 08:35:56 mum1pp1-a-fixed scsi: [ID 243001 kern.info] /pci@9,700000/SUNW,qlc@4/fp@0,0 (fcp2):
    Sep 21 08:35:56 mum1pp1-a-fixed ndi_devi_online: failed for scsiclass,0d: target=9e lun=0 ffffffff
    Sep 21 08:35:56 mum1pp1-a-fixed scsi: [ID 243001 kern.info] /pci@9,700000/SUNW,qlc@4/fp@0,0 (fcp2):
    Sep 21 08:35:56 mum1pp1-a-fixed ndi_devi_online: failed for scsiclass,0d: target=9f lun=0 ffffffff
    Sep 21 08:35:56 mum1pp1-a-fixed scsi: [ID 243001 kern.info] /pci@8,600000/SUNW,qlc@2/fp@0,0 (fcp0):
    Sep 21 08:35:56 mum1pp1-a-fixed ndi_devi_online: failed for scsiclass,0d: target=dc lun=0 ffffffff
    Sep 21 08:35:56 mum1pp1-a-fixed scsi: [ID 243001 kern.info] /pci@9,700000/SUNW,qlc@3/fp@0,0 (fcp1):
    Sep 21 08:35:56 mum1pp1-a-fixed ndi_devi_online: failed for scsiclass,0d: target=a5 lun=0 ffffffff
    Sep 21 08:35:56 mum1pp1-a-fixed scsi: [ID 243001 kern.info] /pci@9,700000/SUNW,qlc@3/fp@0,0 (fcp1):
    Sep 21 08:35:56 mum1pp1-a-fixed ndi_devi_online: failed for scsiclass,0d: target=a3 lun=0 ffffffff
    Sep 21 08:35:58 mum1pp1-a-fixed scsi: [ID 193665 kern.info] sd6 at glm0: target 6 lun 0
    Sep 21 08:35:58 mum1pp1-a-fixed genunix: [ID 936769 kern.info] sd6 is /pci@8,700000/scsi@1/sd@6,0
    Sep 21 08:36:01 mum1pp1-a-fixed swapgeneric: [ID 308332 kern.info] root on /pseudo/vxio@0:0 fstype ufs
    Sep 21 08:36:01 mum1pp1-a-fixed genunix: [ID 370176 kern.warning] WARNING: forceload of drv/SUNW,qlc failed
    Sep 21 08:36:01 mum1pp1-a-fixed genunix: [ID 370176 kern.warning] WARNING: forceload of drv/pci failed
    Sep 21 08:36:01 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: ebus@1, ebus0
    Sep 21 08:36:09 mum1pp1-a-fixed ebus: [ID 521012 kern.info] todds12870 at ebus0: offset 1,300070
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] todds12870 is /pci@9,700000/ebus@1/rtc@1,300070
    Sep 21 08:36:09 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] mc-us30 at root: SAFARI 0x0 0x400000 ...
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] mc-us30 is /memory-controller@0,400000
    Sep 21 08:36:09 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] mc-us31 at root: SAFARI 0x1 0x400000 ...
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] mc-us31 is /memory-controller@1,400000
    Sep 21 08:36:09 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] mc-us32 at root: SAFARI 0x2 0x400000 ...
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] mc-us32 is /memory-controller@2,400000
    Sep 21 08:36:09 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] mc-us33 at root: SAFARI 0x3 0x400000 ...
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] mc-us33 is /memory-controller@3,400000
    Sep 21 08:36:09 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] mc-us34 at root: SAFARI 0x4 0x400000 ...
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] mc-us34 is /memory-controller@4,400000
    Sep 21 08:36:09 mum1pp1-a-fixed rootnex: [ID 349649 kern.info] mc-us35 at root: SAFARI 0x6 0x400000 ...
    Sep 21 08:36:09 mum1pp1-a-fixed genunix: [ID 936769 kern.info] mc-us35 is /memory-controller@6,400000
    Sep 21 08:36:10 mum1pp1-a-fixed ebus: [ID 521012 kern.info] se0 at ebus0: offset 1,400000
    Sep 21 08:36:10 mum1pp1-a-fixed genunix: [ID 936769 kern.info] se0 is /pci@9,700000/ebus@1/serial@1,400000
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 270833 kern.info] cpu6: UltraSPARC-III (portid 6 impl 0x14 ver 0x34 clock 750 MHz)
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 270833 kern.info] cpu0: UltraSPARC-III (portid 0 impl 0x14 ver 0x34 clock 750 MHz)
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 721127 kern.info] cpu 0 initialization complete - online
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 270833 kern.info] cpu1: UltraSPARC-III (portid 1 impl 0x14 ver 0x34 clock 750 MHz)
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 721127 kern.info] cpu 1 initialization complete - online
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 270833 kern.info] cpu2: UltraSPARC-III (portid 2 impl 0x14 ver 0x34 clock 750 MHz)
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 721127 kern.info] cpu 2 initialization complete - online
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 270833 kern.info] cpu3: UltraSPARC-III (portid 3 impl 0x14 ver 0x34 clock 750 MHz)
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 721127 kern.info] cpu 3 initialization complete - online
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 270833 kern.info] cpu4: UltraSPARC-III (portid 4 impl 0x14 ver 0x34 clock 750 MHz)
    Sep 21 08:36:10 mum1pp1-a-fixed unix: [ID 721127 kern.info] cpu 4 initialization complete - online
    Sep 21 08:36:11 mum1pp1-a-fixed ebus: [ID 521012 kern.info] su0 at ebus0: offset 1,3062f8
    Sep 21 08:36:11 mum1pp1-a-fixed genunix: [ID 936769 kern.info] su0 is /pci@9,700000/ebus@1/rsc-control@1,3062f8
    Sep 21 08:36:11 mum1pp1-a-fixed ebus: [ID 521012 kern.info] su1 at ebus0: offset 1,3083f8
    Sep 21 08:36:11 mum1pp1-a-fixed genunix: [ID 936769 kern.info] su1 is /pci@9,700000/ebus@1/rsc-console@1,3083f8
    Sep 21 08:36:13 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: fcp0
    Sep 21 08:36:13 mum1pp1-a-fixed genunix: [ID 936769 kern.info] fcp0 is /pseudo/fcp@0
    Sep 21 08:36:17 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ses85 at fp2: name w256000c0ffd05e18,0, bus address 9e
    Sep 21 08:36:17 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ses85 is /pci@9,700000/SUNW,qlc@4/fp@0,0/ses@w256000c0ffd05e18,0
    Sep 21 08:36:17 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ses86 at fp2: name w256000c0ffc05e18,0, bus address 9f
    Sep 21 08:36:17 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ses86 is /pci@9,700000/SUNW,qlc@4/fp@0,0/ses@w256000c0ffc05e18,0
    Sep 21 08:36:17 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ses48 at fp0: name w5080020000195699,0, bus address dc
    Sep 21 08:36:17 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ses48 is /pci@8,600000/SUNW,qlc@2/fp@0,0/ses@w5080020000195699,0
    Sep 21 08:36:17 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ses88 at fp1: name w226000c0ffa05e18,0, bus address a5
    Sep 21 08:36:17 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ses88 is /pci@9,700000/SUNW,qlc@3/fp@0,0/ses@w226000c0ffa05e18,0
    Sep 21 08:36:17 mum1pp1-a-fixed scsi: [ID 799468 kern.info] ses87 at fp1: name w226000c0ffb05e18,0, bus address a3
    Sep 21 08:36:17 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ses87 is /pci@9,700000/SUNW,qlc@3/fp@0,0/ses@w226000c0ffb05e18,0
    Sep 21 08:36:20 mum1pp1-a-fixed scsi: [ID 365881 kern.info] /pci@8,700000/scsi@1/st@5,0 (st5):
    Sep 21 08:36:20 mum1pp1-a-fixed <HP DDS-4 DAT (Sun)>
    Sep 21 08:36:20 mum1pp1-a-fixed scsi: [ID 193665 kern.info] st5 at glm0: target 5 lun 0
    Sep 21 08:36:20 mum1pp1-a-fixed genunix: [ID 936769 kern.info] st5 is /pci@8,700000/scsi@1/st@5,0
    Sep 21 08:36:20 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: devinfo0
    Sep 21 08:36:20 mum1pp1-a-fixed genunix: [ID 936769 kern.info] devinfo0 is /pseudo/devinfo@0
    Sep 21 08:36:21 mum1pp1-a-fixed vxdmp: [ID 220011 kern.notice] NOTICE: vxvm:vxdmp: added disk array 005E18, datype = SUN3510
    Sep 21 08:36:22 mum1pp1-a-fixed eri: [ID 517527 kern.info] SUNW,eri0 : Local Ethernet address = 0:3:ba:b:37:99
    Sep 21 08:36:22 mum1pp1-a-fixed eri: [ID 517527 kern.info] SUNW,eri0 : Using local MAC address
    Sep 21 08:36:22 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: network@1,1, eri0
    Sep 21 08:36:22 mum1pp1-a-fixed genunix: [ID 936769 kern.info] eri0 is /pci@9,700000/network@1,1
    Sep 21 08:36:22 mum1pp1-a-fixed hme: [ID 517527 kern.info] SUNW,hme0 : PCI IO 2.0 (Rev Id = c1) Found
    Sep 21 08:36:22 mum1pp1-a-fixed hme: [ID 517527 kern.info] SUNW,hme0 : Local Ethernet address = 0:3:ba:1c:e4:c8
    Sep 21 08:36:22 mum1pp1-a-fixed hme: [ID 517527 kern.info] SUNW,hme0 : Using local MAC address
    Sep 21 08:36:22 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: SUNW,hme@1,1, hme0
    Sep 21 08:36:22 mum1pp1-a-fixed genunix: [ID 936769 kern.info] hme0 is /pci@9,600000/SUNW,hme@1,1
    Sep 21 08:36:27 mum1pp1-a-fixed hme: [ID 517527 kern.info] SUNW,hme0 : Internal Transceiver Selected.
    Sep 21 08:36:27 mum1pp1-a-fixed hme: [ID 517527 kern.info] SUNW,hme0 : 100 Mbps Full-Duplex Link Up
    Sep 21 08:36:28 mum1pp1-a-fixed eri: [ID 517527 kern.info] SUNW,eri0 : 100 Mbps full duplex link up
    Sep 21 08:39:04 mum1pp1-a-fixed pcisch: [ID 370704 kern.info] PCI-device: usb@1,3, ohci0
    Sep 21 08:39:04 mum1pp1-a-fixed genunix: [ID 936769 kern.info] ohci0 is /pci@9,700000/usb@1,3
    Sep 21 08:39:15 mum1pp1-a-fixed genunix: [ID 454863 kern.info] dump on /dev/dsk/c1t0d0s1 size 16000 MB
    Sep 21 08:39:32 mum1pp1-a-fixed ipf: [ID 920137 kern.notice] IP Filter: attach to [eri0,0] - IPv4
    Sep 21 08:39:32 mum1pp1-a-fixed ipf: [ID 920137 kern.notice] IP Filter: attach to [hme0,0] - IPv4
    Sep 21 08:39:32 mum1pp1-a-fixed ipf: [ID 989912 kern.notice] IP Filter: v3.4.28, attaching complete.
    Sep 21 08:39:38 mum1pp1-a-fixed savecore: [ID 570001 auth.error] reboot after panic: [AFT1] errID 0x00000709.621043d0 UE Error(s)
    Sep 21 08:39:38 mum1pp1-a-fixed See previous message(s) for details
    Sep 21 08:39:38 mum1pp1-a-fixed savecore: [ID 662545 auth.error] not enough space in /var/crash/mum1pp1-a-fixed (40 MB avail, 818 MB needed)
    Sep 21 08:39:38 mum1pp1-a-fixed savecore: [ID 570001 auth.error] reboot after panic: [AFT1] errID 0x00000709.621043d0 UE Error(s)
    Sep 21 08:39:38 mum1pp1-a-fixed See previous message(s) for details
    Sep 21 08:39:38 mum1pp1-a-fixed savecore: [ID 662545 auth.error] not enough space in /var/crash/mum1pp1-a-fixed (40 MB avail, 818 MB needed)
    Sep 21 08:39:41 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: tod0
    Sep 21 08:39:41 mum1pp1-a-fixed genunix: [ID 936769 kern.info] tod0 is /pseudo/tod@0
    Sep 21 08:39:41 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: pm0
    Sep 21 08:39:41 mum1pp1-a-fixed genunix: [ID 936769 kern.info] pm0 is /pseudo/pm@0
    Sep 21 08:39:42 mum1pp1-a-fixed There are no devices (controllers) in the system; nvutil terminated.
    Sep 21 08:39:43 mum1pp1-a-fixed Array Monitor initiated
    Sep 21 08:39:43 mum1pp1-a-fixed /usr/lib/osa/bin/arraymon: [ID 712306 user.error] No RAID devices found to check.
    Sep 21 08:39:43 mum1pp1-a-fixed /usr/lib/osa/bin/arraymon: [ID 712306 user.error] No RAID devices found to check.
    Sep 21 08:39:43 mum1pp1-a-fixed RDAC daemons initiated
    Sep 21 08:39:43 mum1pp1-a-fixed rdriver: [ID 400281 kern.notice] ID[RAIDarray.rdaemon.1001] RDAC Resolution Daemon locked in memory
    Sep 21 08:39:44 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: vol0
    Sep 21 08:39:44 mum1pp1-a-fixed genunix: [ID 936769 kern.info] vol0 is /pseudo/vol@0
    Sep 21 08:39:44 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: fcode0
    Sep 21 08:39:44 mum1pp1-a-fixed genunix: [ID 936769 kern.info] fcode0 is /pseudo/fcode@0
    Sep 21 08:39:50 mum1pp1-a-fixed ntpdate[612]: [ID 558275 daemon.notice] adjust time server 61.1.128.45 offset -0.002149 sec
    Sep 21 08:39:53 mum1pp1-a-fixed xntpd[982]: [ID 702911 daemon.notice] xntpd 3-5.93e Mon Sep 20 15:47:11 PDT 1999 (1)
    Sep 21 08:39:53 mum1pp1-a-fixed xntpd[982]: [ID 301315 daemon.notice] tickadj = 5, tick = 10000, tvu_maxslew = 495, est. hz = 100
    Sep 21 08:39:53 mum1pp1-a-fixed xntpd[982]: [ID 798731 daemon.notice] using kernel phase-lock loop 0041
    Sep 21 08:39:54 mum1pp1-a-fixed last message repeated 1 time
    Sep 21 08:45:35 mum1pp1-a-fixed pseudo: [ID 129642 kern.info] pseudo-device: devinfo0
    Sep 21 08:45:35 mum1pp1-a-fixed genunix: [ID 936769 kern.info] devinfo0 is /pseudo/devinfo@0
    I suspect there was a problem with memory.
    Any help?
    Thanks in advance,
    Jai

    All the log entries that you seem to be concerned about occurred as the system crashed and then booted back up; your Solaris was trying to turn on all the devices.
    "forceload" entries usually result from having a line entry in your /etc/system file. All they mean is that a driver was told to load before it was actually needed. They're harmless; ignore such things. The driver will engage when it's needed.
    You had a CPU ecache error that your Solaris could not overcome. Rather than leave the computer in an unstable state, and rather than risk data corruption, the computer did exactly what it's designed to do: crash the OS, attempt to save the core files, and clear out whatever was causing the unstable, bad data in the ecache data registers.
    The crash occurred at Sep 21 08:33:41.
    The reboot began at Sep 21 08:35:51.
    If this is the only recent crash, then ignore the event. However, since /var was full, I fear you already have many core file sets. Another possibility is that the full filesystem left you with no swap space.
    The only way to get an accurate analysis is to open a Sun service case. You may need to replace some hardware, or you may need to do some extensive Solaris patching, or both. Get proper advice from Sun's tech support. This is just a generic user-to-user discussion forum.
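    On the /var point, a hedged way to check and to give savecore somewhere larger to write (the directory is illustrative):
    # df -k /var/crash    see how much room savecore has (it wanted 818 MB above)
    # dumpadm    show the current dump device and savecore directory
    # dumpadm -s /export/crash    point savecore at a filesystem with enough space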
