Solaris 10 and Intel RAID Controllers SRCSAS144E

Where can I download the drivers?

That answer was trollish.
Did you look into this card? Did you even try to see whether Solaris could support a PCI Express MegaRAID SAS controller? This is the heart of the SSR212MC2 - http://www.intel.com/design/servers/storage/ssr212mc2/index.htm - a pure Intel reference design. It fits more drives into a 2U chassis than anything else, **EVER**. This is the best storage product in existence today. Being snarky and not supporting it will only hurt Sun. The SRCSAS144E is a MegaRAID SAS controller. Need I remind you that the Sun V60x and V65x were pure Intel reference servers that worked GREAT? It works in Linux - want to see?
cat /etc/redhat-release
CentOS release 5 (Final)
uname -a
Linux localhost.localdomain 2.6.18-8.1.15.el5 #1 SMP Mon Oct 22 08:32:28 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
megasas: 00.00.03.05 Mon Oct 02 11:21:32 PDT 2006
megasas: 0x1000:0x0411:0x8086:0x1003: bus 9:slot 14:func 0
ACPI: PCI Interrupt 0000:09:0e.0[A] -> GSI 18 (level, low) -> IRQ 185
megasas: FW now in Ready state
scsi0 : LSI Logic SAS based MegaRAID driver
Vendor: Intel Model: SSR212MC Rev: 01A
Type: Enclosure ANSI SCSI revision: 05
Vendor: INTEL Model: SRCSAS144E Rev: 1.03
Type: Direct-Access ANSI SCSI revision: 05
SCSI device sda: 2919915520 512-byte hdwr sectors (1494997 MB)
sda: Write Protect is off
sda: Mode Sense: 1f 00 00 08
SCSI device sda: drive cache: write back
SCSI device sda: 2919915520 512-byte hdwr sectors (1494997 MB)
sda: Write Protect is off
sda: Mode Sense: 1f 00 00 08
SCSI device sda: drive cache: write back
sda: sda1 sda2 sda3
sd 0:2:0:0: Attached scsi disk sda
Fusion MPT base driver 3.04.02
Copyright (c) 1999-2005 LSI Logic Corporation
Fusion MPT SAS Host driver 3.04.02
ACPI: PCI Interrupt 0000:04:00.0[A] -> GSI 17 (level, low) -> IRQ 177
mptbase: Initiating ioc0 bringup
ioc0: SAS1064E: Capabilities={Initiator}
PCI: Setting latency timer of device 0000:04:00.0 to 64
scsi1 : ioc0: LSISAS1064E, FwRev=01100000h, Ports=1, MaxQ=511, IRQ=177
09:0e.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS
Subsystem: Intel Corporation SRCSAS144E RAID Controller
09:0e.0 0104: 1000:0411
Subsystem: 8086:1003
We usually see this problem with PERCs here: Solaris drivers, unlike those for **all** other operating systems, strangely don't work with modern LSI cards (far more modern than any of the storage cards Sun sells).
Strange.
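For what it's worth, the lspci output above identifies the card as a plain LSI MegaRAID SAS device (1000:0411, subsystem 8086:1003). A hedged sketch of what I would try on a Solaris 10 update that ships the mega_sas driver (or with LSI's downloadable Solaris MegaRAID SAS driver installed) - the alias spelling below is my guess from that PCI ID, so treat it as an assumption:
# check whether the driver and an alias for this device are already known
grep mega_sas /etc/driver_aliases
# if the driver is present but the alias is missing, add it and re-probe
update_drv -a -i '"pci1000,411"' mega_sas
devfsadm -i mega_sas
format    # the RAID volume should now show up as an ordinary disk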

Similar Messages

  • Tiffsplit crashes on Solaris (sparc and intel), okay on Linux

    Hello experts,
    I have a tiff file that causes a crash with tiffsplit when run
    on Solaris (both Sparc and Intel) but not on Linux.
    It works successfully on
    Red Hat Linux 2.4.18 -- libtiff.so.3.5
    but crashes on
    Solaris 9 on Sparc -- libtiff.so.3.7.1
    Solaris 9 on Sparc -- libtiff.so.3.9.4
    Solaris 10 on Intel -- libtiff.so.3.8.2
    It's crashing at a call to
    TIFFGetField(in, TIFFTAG_GROUP3OPTIONS, &v)
    and pstack shows stack trace like
    $ pstack /var/core/core.tiffsplit.15965
    core '/var/core/core.tiffsplit.15965' of 15965: /usr/local/bin/tiffsplit file.tiff
    ff32a4e8 _TIFFVGetField (26280, 124, ffbff260, 67, 27e48, 2a) + 1a0
    ff3498f0 PredictorVGetField (26280, 124, ffbff25c, ff3213c4, 3, 26288) + 30
    ff32acd0 TIFFVGetField (26280, 124, ffbff25c, ff34d838, ff34d848, ff34d858) + 70
    ff32ac50 TIFFGetField (26280, 124, ffbff2a4, 0, 0, 26000) + 1c
    00011754 tiffcp (26280, 285f8, ffbff321, ffbff318, ffbfffdd, 404) + 1f8
    0001135c main (fffffffe, 285f8, ffbff7a0, 26130, 0, 0) + 98
    0001115c _start   (0, 0, 0, 0, 0, 0) + 5c
    Running tiffinfo produces the following:
    $ tiffinfo file.tiff
    file.tiff: Warning, unknown field with tag 292 (0x124) ignored.
    file.tiff: Warning, unknown field with tag 326 (0x146) ignored.
    file.tiff: Warning, unknown field with tag 327 (0x147) ignored.
    file.tiff: Warning, unknown field with tag 328 (0x148) ignored.
    file.tiff: Warning, unknown field with tag 37680 (0x9330) ignored.
    TIFF Directory at offset 0xb156
    Subfile Type: (0 = 0x0)
    Image Width: 1728 Image Length: 2162
    Resolution: 204, 196 pixels/inch
    Bits/Sample: 1
    Compression Scheme: LZW
    Photometric Interpretation: min-is-black
    Date & Time: "2010/07/28 14:19:00"
    Software: "....."
    Orientation: row 0 top, col 0 lhs
    Samples/Pixel: 1
    Rows/Strip: 56
    Planar Configuration: single image plane
    Page Number: 1-1
    Predictor: none 1 (0x1)
    file.tiff: Warning, unknown field with tag 292 (0x124) ignored.
    file.tiff: Warning, unknown field with tag 326 (0x146) ignored.
    file.tiff: Warning, unknown field with tag 327 (0x147) ignored.
    file.tiff: Warning, unknown field with tag 328 (0x148) ignored.
    TIFF Directory at offset 0x14512
    Subfile Type: (0 = 0x0)
    Image Width: 1728 Image Length: 2162
    Resolution: 204, 196 pixels/inch
    Bits/Sample: 1
    Compression Scheme: LZW
    Photometric Interpretation: min-is-black
    Date & Time: "2010/07/28 14:19:00"
    Software: "......."
    Orientation: row 0 top, col 0 lhs
    Samples/Pixel: 1
    Rows/Strip: 56
    Planar Configuration: single image plane
    Page Number: 1-1
    Predictor: none 1 (0x1)
    What could I do to make it work properly on Solaris 9 on SPARC?
    Thanks for your help.

    You should have no problems if both are using Sun's Java Runtime environment. I believe there are some problems with Microsoft's implementation of Java, but I don't know exactly what.
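    Given that the stack trace dies in TIFFGetField(TIFFTAG_GROUP3OPTIONS) on a file whose compression is actually LZW, one workaround worth trying before touching the Solaris libtiff builds (a hedged sketch using the file name from the post; not verified against this exact file) is to rewrite the file on the Linux box where libtiff handles it, which normally drops the fax-specific and unknown tags, and then split the cleaned copy on Solaris:
    tiffcp -c lzw file.tiff cleaned.tiff
    tiffsplit cleaned.tiff page_
    If the cleaned copy still crashes tiffsplit on SPARC, that points at the libtiff build there rather than at the file.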

  • Dual HBA and dual RAID controllers

    PCI Dual Ultra3 SCSI Host adapter, single host (V210), StorEdge 3320 with dual RAID controllers.
    Looking for best speed and reliability - can't find any documentation explaining how best to utilize dual controllers in SE-3320 RAID array.
    Only reference is in "Sun StorEdge 3000 Family Best Practices" document, that says:
    Use dual-controller arrays to avoid a single point of failure. A dual-controller SCSI array features a default active-to-active controller configuration. This configuration improves application availability because, in the unlikely event of a controller failure, the array automatically fails over to a second controller, resulting in no interruption of data flow.
    Does the HBA provide the same active-active configuration?
    Can BOTH ports of the HBA be connected to the dual controllers (one-to-one) to provide redundancy/failover at both ends?
    Does this provide better throughput?

    The problem is starting to look like hardware or firmware...
    installed patch 124750-02 T2000 firmware update
    installed patch 123305-02 Qlogic firmware update
    after updates: port 1 works fine, port 2 still flashing lights
    as long as the fiber port lights flash, there is no communication occurring,
    so there are no LDs or LUNs to report, even at OBP.
    fiber cable is proven good, controller port is proven good...
    guessing it's a bad HBA GBIC port, can't explain it any other way.
    turned on extended-logging=1; in qlc.conf
    /var/adm/messages reports a problem on port 2 during boot...
    qlc(0): ql_check_isp_firmware: Load RISC code
    qlc(0): ql_fw_ready: mailbox_reg[1] = 4h
    ql_async_event, 8013h LIP received
    etc...
    NOTICE: Qlogic qlc(0): Loop ONLINE
    NOTICE: qlc(0): Firmware version 4.0.23
    qlc(1): ql_check_isp_firmware: Load RISC code
    qlc(1): ql_fw_ready: mailbox_reg[1] = 4h
    NOTICE: Qlogic qlc(1): Loop OFFLINE
    NOTICE: qlc(1): Firmware version 4.0.23
    qlc(1): ql_fw_ready: failed = 100h
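    A couple of quick checks that might separate a bad HBA port from a bad link partner (a hedged sketch; the devctl path is a placeholder for whatever luxadm prints):
    luxadm -e port                            # each FC port shows as CONNECTED / NOT CONNECTED
    cfgadm -al -o show_FCP_dev                # lists LUNs behind any port that is connected
    luxadm -e dump_map /devices/<path-from-luxadm>:devctl    # loop map for the suspect port
    If port 2 stays NOT CONNECTED with a proven cable and a proven array port, a bad GBIC/SFP or HBA port is indeed about all that is left, which matches your guess.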

  • Solaris 9 and Intel 845

    Hi
    I'm trying to get this Intel 845 graphics chip to work with Solaris 9.
    I've installed the XFree86 porting kit and tried the VESA driver, but it still doesn't work.
    I had a look at this page, but it didn't help me:
    http://www.xfree86.org/~dawes/845driver.html
    Any ideas, anyone?

    After two days of hard work and digging through the Sun documentation site, I found the following:
    Execute the command below to rescan the SCSI bus and pick up any changes on the storage side, such as added or removed virtual disks.
    cfgadm -al -o show_FCP_dev c#    (where c# is the controller number allocated for your SAN)
    bash-3.2# cfgadm -a
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c5                             fc-fabric    connected    configured   unknown
    ...Cheers,
    Andreas
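    If new LUNs appear behind c5 after the rescan, a hedged follow-up (the controller name is taken from the listing above) is to configure them and refresh the device tree:
    cfgadm -c configure c5
    devfsadm -Cv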

  • Solaris 9 and Intel 8086,109a nics

    Due to MySQL Cluster hanging Solaris 10, I have attempted to run Solaris 9 on these Supermicro 6014 Intel servers. The hardware is a little too new, but it mostly works. I cannot boot directly off the HDD ("prom_panic: Could not mount filesystem"), so I am currently using a boot floppy. This will do temporarily.
    However, I had to use an old 3Com NIC in the server, and it would be much better if I could get the on-board NICs to work. I have been attempting to get them to plumb for quite a while without success.
    The meaty details:
    SunOS mysqldata02 5.9 Generic_122301-17 i86pc i386 i86pc
    Installed "115320-12" in the hopes of it helping.
    mysqldata02:~# prtconf -vp | grep 109a
    compatible: 'pci8086,109a.15d9.109a.0' + 'pci8086,109a.15d9.109a' + 'pci15d9,109a' + 'pci8086,109a.0' + 'pci8086,109a' + 'pciclass,020000' + 'pciclass,0200'
    device-id: 0000109a
    model: 'PCI: 15d9,109a - Intel(R) PRO/1000 Server Adapter Driver'
    name: 'pci15d9,109a'
    subsystem-id: 0000109a
    mysqldata02:~# grep 109a /etc/driver_aliases
    e1000g "pci8086,109a"
    e1000g "pciex8086,109a"
    mysqldata02:~# mount -F pcfs /dev/diskette /mnt
    mysqldata02:~# grep 109a /mnt/solaris/devicedb/master
    pci8086,109a pci8086,109a net pci none "Intel(R) PRO/1000 Server Adapter Driver"
    mysqldata02:~# ifconfig e1000g0 plumb
    ifconfig: plumb: e1000g0: No such file or directory
    mysqldata02:~# ls -l /dev/e1*
    /dev/e1*: No such file or directory
    mysqldata02:~# modinfo | grep e100
    mysqldata02:~#
    mysqldata02:~# devfsadm -v
    mysqldata02:~# devfsadm -i e1000g
    devfsadm: driver failed to attach: e1000g
    mysqldata02:~# dmesg | egrep -i 'e1000g|109a'
    mysqldata02:~#
    mysqldata02:~# egrep -i 'e1000g|109a' /var/adm/messages
    mysqldata02:~#
    -rwxr-xr-x 1 root sys 133632 Jul 20 2007 /kernel/drv/e1000g
    mysqldata02:/kernel/drv# modload /kernel/drv/e1000g
    mysqldata02:/kernel/drv# modinfo | grep e1000g
    194 d2ceb000 15a26 110 1 e1000g (Intel(R) PRO/1000 Server Adapte)
    Repeat all commands above, no difference. It just does not want to see it. I must admit I do not know if it is supposed to be pci8086 or pciex8086. Perhaps Solaris 9 can not do pciex?
    I have tried disabling ACPI, and Plug'n'Pray, and all combinations of those two.
    Any help would be appreciated.

    Anyway, if you use a Solaris 8 DCA it works the same, since there have not been many changes in x86 hardware support; the important thing is then to use a Solaris 9 CD to install.
    By the way, these IBM x86 boxes (the 300 and NetVista series from a couple of years ago) are a pain; they don't support El Torito booting, even with the most recent BIOS.
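    Back on the original on-board NIC question: since prtconf names the node pci15d9,109a while /etc/driver_aliases only carries the pci8086,109a forms, one cheap thing to try is adding the subsystem alias and re-probing (a hedged sketch; whether the Solaris 9 e1000g will then attach to this newer device ID is another matter):
    update_drv -a -i '"pci15d9,109a"' e1000g
    devfsadm -i e1000g
    ifconfig e1000g0 plumb
    If the driver still refuses to attach, the e1000g revision in that patch may simply not recognize this device, and a newer e1000g package would be the only fix.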

  • Questions on Marvell vs. intel Raid setup for new build

    I’m getting ready to do my first build this weekend, and had some questions on the Asus p9x79pro motherboard’s raid controllers.  They are:
    What is the difference between the Marvell controller and the intel software controller?
    Why would I use one over the other?
    Do the Marvell raid controllers need to be pre-installed using the F6 method, same as the intel raid controllers?
    Here is my planned drive arrangement, feedback appreciated.
    (1) Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD) - OS (Win7 64-bit), pagefile, & programs, 6Gb SATA on SSD caching port
    (2) Western Digital WD Black WD1002FAEX 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive - Bare Drive - RAID 0 media and projects, 6Gb SATA port
    (1) Western Digital WD Black WD1002FAEX 1TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive - Bare Drive - media cache & renders
    (1) WD Black 32MB cache SATA 3.0 - exports and non-video file storage
    Asus BD burner on 3Gb port
      Thanks

    What is the difference between the Marvell controller and the intel software controller?
    I haven't tested it, but rumors are that Intel is marginally faster than Marvell.
    Why would I use one over the other?
    It depends on the number of ports in your system. That can be motherboard dependent.
    Do the Marvell raid controllers need to be pre-installed using the F6 method, same as the intel raid controllers?
    No. You can define a RAID array after you have installed your OS by simply going into the BIOS.
    On my motherboard, I have 2 Intel SATA 6 G ports and 2 Marvell SATA 6 G ports. The rest are SATA 3 G.
    If that is the same in your case, I would do the following:
    Crucial M4 on Intel SATA 6G
    WD Black on Intel SATA 6G
    2 x WD Black/64 on Marvell SATA 6G in raid0
    WD Black/32 on Intel SATA 3G
    DVD/BDR on Intel SATA 3G
    one SATA 3G for eSATA connection.

  • How can I use the "gcc compiler and emacs editor" in Solaris 8 for Intel?

    I installed Solaris 8 for Intel to my desktop computer.
    After the installation, I found I couldn't use the companion software (the software CD included in the Multilingual Media Kit for Solaris 8) - especially the emacs editor and the gcc compiler.
    I did install the companion software, and I can verify that it is really on my system: I went to the "System Administrator" and opened the "Solaris Product Registry", which shows that all the files in the companion software were successfully installed on my computer.
    However, when I open a terminal or console and try to use that software, the system responds that there is no emacs or gcc.
    Why can't I use them?
    Did I install the companion software wrong?
    I really don't understand why I can't use them.
    For your reference, I also looked in the directories /usr/bin and /usr/ccs/bin.
    In /usr/bin I typed
    #as
    and got "no such command". In /usr/ccs/bin
    #as
    gave the same response.
    If I installed the companion software CD correctly and it is installed somewhere on my computer, where can I find it and how can I use it?
    Lastly, whenever I try to connect to my ISP via telnet mode, I see this
    message.
    "Try to connect ...
    connected to ***.***.***.*** (IP address of my ISP)
    Closed by foreign hosts."
    Is there any one who knows why this happens?
    Thanks for reading my question.

    gcc and emacs install in /opt/sfw. You should change your path statement to include /opt/sfw/bin and should obtain the FAQ from sunfreeware.com. The FAQ answers all your questions.
    I have installed gcc and gtk and am using both at this time.
    [email protected]
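    A minimal sketch of the check-and-PATH step (package names vary between companion CD releases, so the grep pattern is only a guess):
    pkginfo | grep -i sfw                      # confirm the companion (SFW*) packages really are installed
    ls /opt/sfw/bin/gcc /opt/sfw/bin/emacs     # the binaries live under /opt/sfw, not /usr/bin
    PATH=$PATH:/opt/sfw/bin:/usr/ccs/bin
    export PATH
    which gcc emacs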

  • Cheap KVM switch for Solaris 10 on US and Intel boxes?

    Hi. Has anyone got an idea of which kind of KVM switch is supported by Solaris 10 on UltraSPARC as well as on Intel boxes? I can only find the expensive ones aimed at enterprise users, but I am just a home user and can't afford that. Thanks for your suggestions!

    From what I can tell, containers (zones) are supported just fine on the x86/64 systems with the exception of Solaris 8 and Solaris 9 branded zones. Oh, and Linux-branded zones only work on x86/64 systems. Solaris 10 though as a non-global zone works on any S10 host OS.

  • TM and Migrating from G5 to Intel RAID Mac.

    The good news: I have a new Intel Mac Pro and all my previous G5's data is backed up by Time Machine to a fast Newer Tech FW RAID.
    The funky news:
    1. I'd read a User Tip Contribution by Kappy that suggested how to migrate from a Motorola-based system to the new Intel. It seemed that Migration Assistant is not useful here, much less via Time Machine. True? (While I can no longer find this tip, I have a Web Archive of it.)
    2. Also, I've read here that I cannot continue using my old Time Machine backup account on the new Intel Mac. I'll have to wait while TM backs up my new Mac entirely to a new TM account. I only have 300 GB left on my TM backup drive. Will I lose my G5's data when I add my Time Machine/FW disk to the new Intel? How, then, can I access my old data? If I can access my old data, must I retain my Username from my previous G5's accounts to be able to access TM backup data from my old Mac?
    3. Another variable: I got three ATA disks and an Apple RAID card for the new Intel Mac. I assume that setting up the array will be the first task when booting my Intel Mac for the first time, yes? I'm also guessing that I would want my OS/Apps on one disk and a RAID array from the remaining two for files, as in OS 9 days when one couldn't install an OS on a RAID disk, yes? Does this suggest that all my Home directory's files, such as Documents, Movies, Music, etc., should be aliased to the RAID array disks, to keep the Apps on non-RAID and files on RAID discrete?
    4. I gather I'll have to re-install all my 3rd party software (Adobe CS3, Intel-compatible games, Final Cut Pro, Toast, etc.) instead of using TM. Wow, no wonder I've been putting off setting up my new Mac.
    All advice is welcome.

    John Dwight wrote:
    The good news: I have a new Intel Mac Pro and all my previous G5's data is backed up by Time Machine to a fast Newer Tech FW RAID.
    The funky news:
    1. I'd read a User Tip Contribution by Kappy that suggested how to migrate from a Motorola-based system to the new Intel. It seemed that Migration Assistant is not useful here, much less via Time Machine. True? (While I can no longer find this tip, I have a Web Archive of it.)
    The fact that you'd be migrating from a TM backup is irrelevant here; the fact that you'd be migrating from PPC to Intel is relevant. You can try doing that, but there may be problems. Definitely do not migrate applications.
    Here is one of Kappy's posts on the issue:
    http://discussions.apple.com/thread.jspa?threadID=1571689&tstart=0
    2. Also, I've read here that I cannot continue using my old Time Machine backup account on the new Intel Mac. I'll have to wait while TM backs up my new Mac entirely to a new TM account.
    yes
    I only have 300 GB left on my TM backup drive. Will I lose my G5's data when I add my Time Machine/FW disk to the new Intel?
    I would suggest you use a different disk or reformat the existing one. You should format the TM disk with a GUID partition map for Intel TM backups; yours is most likely formatted with the Apple Partition Map.
    http://support.apple.com/kb/TS1550?viewlocale=en_US
    You can try using the old TM disk but you might run into problems later. If you do use the old drive, TM should give you the option of keeping the stuff already on the drive so that you can have your old TM backups there too.
    How, then, can I access my old data? If I can access my old data, must I retain my Username from my previous G5's accounts to be able to access TM backup data from my old Mac?
    Not the username but the user ID. It's most likely 501 in both cases, so that should not be a problem.
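    For the reformat step, a hedged sketch (the disk identifier is a placeholder - check it with diskutil list first, and note that this erases the disk):
    diskutil list
    diskutil partitionDisk /dev/disk2 GPT JHFS+ "TM Backup" 100%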

  • Compare Solaris based processor and Intel processor for EBS

    We are planning to migrate our 9i database and Application(11.5.10.2) from Solaris to Linux.
    Also planning for some major hardware changes.
    We want to compare SPARC V9 processors and Intel processors.
    Please let us know of any documentation which compares the two:
    - Benchmarks for Solaris and Intel processors
    - Any calculation like 1 Intel processor = n Solaris-based processors
    - Any other related topics.
    Thanks
    TT

    Hi,
    Please see the links referenced in the following threads.
    Hardware for Oracle E-Biz R12(12.0.4)
    Hardware for Oracle E-Biz R12(12.0.4)
    Hardware requirements for Oracle APPS
    Hardware requirements for Oracle APPS
    Question Regarding R12
    http://forums.tangosol.com/forums/thread.jspa?threadID=861380
    Sizing..
    Re: Sizing..
    Regards,
    Hussein

  • Oracle 8.1.5 and Solaris v7 for Intel

    Does anyone know if Oracle 8i (8.1.5) will work on Solaris 7 for Intel? Don't laugh... I know these are old versions, but if they will work I would like to know.


  • Solaris 10 x86 - software RAID

    I am trying to set up software RAID on an x86 server with Solaris 10,
    with 2 ATA/IDE disks on an Intel i865 board.
    I have mirrored all the data partitions - that works fine.
    Now I want to make the 2nd disk bootable.
    What should I do with the x86 BOOT partition? Mirror it? Format it as pcfs and copy the data from the 1st disk? Copy it with dd?

    You can use SMC for other purposes but it won't help you with RAID.
    Sol 10 1/06 has raidctl, which handles LSI1030 and LSI1064 RAID-enabled controllers (from raidctl(1M)).
    Some of the PERCs (most?) are LSI but I don't know if those are the chipsets used by your PowerEdge (I doubt it).
    Generally you can break it down like this for x86:
    If you are using hardware RAID with Solaris 10 x86 you have to use pre-Solaris (i.e., on-the-RAID-controller) management or hope that the manufacturer of the device has a Solaris management agent/interface (good luck).
    The only exception to this that I know of is the RAID that comes with the V20z, V40z, X4100, and X4200.
    Otherwise you will want to go with SVM or VxVM and manage RAID within Solaris (software RAID).
    SMC etc. are only going to show you stuff if SVM is involved, and VxVM has its own interface; otherwise the disks are controlled by the PERC and just hanging out as far as Solaris is concerned.
    Hope this helps.
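    On the boot-partition question above, a hedged SVM sketch for a two-disk IDE mirror (device names are placeholders, and it assumes a GRUB-based Solaris 10 release, 1/06 or later):
    # state database replicas on a small slice of each disk
    metadb -a -f -c 2 c0d0s7 c0d1s7
    # mirror the root slice: submirrors d11/d12, mirror d10
    metainit -f d11 1 1 c0d0s0
    metainit d12 1 1 c0d1s0
    metainit d10 -m d11
    metaroot d10      # rewrites /etc/vfstab and /etc/system for the mirrored root
    # reboot, then attach the second half
    metattach d10 d12
    # make the second disk bootable
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d1s0
    The small x86 boot (fdisk) partition itself is not something SVM mirrors; on GRUB releases you only need the fdisk layout plus installgrub on the second disk, while on the older pre-GRUB layout people typically recreated (or dd-copied) the little boot partition onto the second disk by hand.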

  • Intel raid, find the failing (but not failed) drive?

    One of my two Seagate drives is failing, I get intermittent system 'hangs', drive clicking, and the following error in event viewer:
    Quote
    "The device, \Device\Ide\iaStor0, did not respond within the timeout period.
    When I built the system I created two volumes from my two hard drives. The first volume is a RAID 1 mirror set for my root drive. The second volume is a RAID 0 stripe for my non-important stuff. The two volumes are named "Root_Mirror" and "Data_Stripe".
    Here's the problem: how do I know which drive is on its way out? I believe the event viewer error is complaining about the mirrored volume set (iaStor0 = "Root_Mirror" volume?), but how do I prove this? (Am I correct in thinking that WinXP talks to the Intel RAID controller and the RAID controller talks to the hard drives? Consequently WinXP can only report errors about the RAID volume, not the underlying physical hardware.)
    I have a strong background in Unix (Sun Solaris) disk and volume management. If this was a work machine, I'd run "iostat -En", look at the error count for each device, and determine which LUN was having problems. Once I knew which LUN was in trouble, I'd run a health check on it via the array management software (RM6 or whatever). I don't see these tools in WinXP or the Intel Matrix driver...
    Here is a system report from the Intel Matrix storage console if it helps:
    Quote
    System Information
    Kit Installed: 6.0.0.1022
    Kit Install History: 6.0.0.1022
    Shell Version: 6.0.0.1022
    OS Name: Microsoft Windows XP Professional
    OS Version: 5.1.2600 Service Pack 2 Build 2600
    System Name: C2D6600
    System Manufacturer: MICRO-STAR INT'L
    System Model: MS-7238
    Processor: Intel(R) Core(TM)2 CPU          6600  @ 2.40GHz
    BIOS Version/Date: American Megatrends Inc. V1.2, 11/08/2006
    Language: ENU
    Intel(R) RAID Technology
    Intel RAID Controller: Intel(R) ICH8R/DO/DH SATA RAID Controller
    Number of Serial ATA ports: 6
    RAID Option ROM Version: 6.1.0.1002
    Driver Version: 6.0.0.1022
    RAID Plug-In Version: 6.0.0.1022
    Language Resource Version of the RAID Plug-In: 6.0.0.1022
    Create Volume Wizard Version: 6.0.0.1022
    Language Resource Version of the Create Volume Wizard: 6.0.0.1022
    Create Volume from Existing Hard Drive Wizard Version: 6.0.0.1022
    Language Resource Version of the Create Volume from Existing Hard Drive Wizard: 6.0.0.1022
    Modify Volume Wizard Version: 6.0.0.1022
    Language Resource Version of the Modify Volume Wizard: 6.0.0.1022
    Delete Volume Wizard Version: 6.0.0.1022
    Language Resource Version of the Delete Volume Wizard: 6.0.0.1022
    ISDI Library Version: 6.0.0.1022
    Event Monitor User Notification Tool Version: 6.0.0.1022
    Language Resource Version of the Event Monitor User Notification Tool: 6.0.0.1022
    Event Monitor Version: 6.0.0.1022
    Array_0000
    Status: No active migration(s)
    Hard Drive Write Cache Enabled: Yes
    Size: 596.1 GB
    Free Space: 0 GB
    Number of Hard Drives: 2
    Hard Drive Member 1: ST3320620AS
    Hard Drive Member 2: ST3320620AS
    Number of Volumes: 2
    Volume Member 1: Root_Mirror
    Volume Member 2: Data_Stripe
    Root_Mirror
    Status: Normal
    System Volume: Yes
    Volume Write-Back Cache Enabled: Yes
    RAID Level: RAID 1 (mirroring)
    Size: 100 GB
    Number of Hard Drives: 2
    Hard Drive Member 1: ST3320620AS
    Hard Drive Member 2: ST3320620AS
    Parent Array: Array_0000
    Data_Stripe
    Status: Normal
    System Volume: No
    Volume Write-Back Cache Enabled: Yes
    RAID Level: RAID 0 (striping)
    Strip Size: 128 KB
    Size: 396.1 GB
    Number of Hard Drives: 2
    Hard Drive Member 1: ST3320620AS
    Hard Drive Member 2: ST3320620AS
    Parent Array: Array_0000
    Hard Drive 0
    Usage: Array member
    Status: Normal
    Device Port: 0
    Device Port Location: Internal
    Current Serial ATA Transfer Mode: Generation 1
    Model: ST3320620AS
    Serial Number: 5QF1FGRZ
    Firmware: 3.AAE
    Native Command Queuing Support: Yes
    Hard Drive Write Cache Enabled: Yes
    Size: 298 GB
    Number of Volumes: 2
    Volume Member 1: Root_Mirror
    Volume Member 2: Data_Stripe
    Parent Array: Array_0000
    Hard Drive 1
    Usage: Array member
    Status: Normal
    Device Port: 1
    Device Port Location: Internal
    Current Serial ATA Transfer Mode: Generation 1
    Model: ST3320620AS
    Serial Number: 5QF1G7GE
    Firmware: 3.AAE
    Native Command Queuing Support: Yes
    Hard Drive Write Cache Enabled: Yes
    Size: 298 GB
    Number of Volumes: 2
    Volume Member 1: Root_Mirror
    Volume Member 2: Data_Stripe
    Parent Array: Array_0000
    Unused Port 0
    Device Port: 2
    Device Port Location: Internal
    Unused Port 1
    Device Port: 3
    Device Port Location: Internal
    Unused Port 2
    Device Port: 4
    Device Port Location: Internal
    Unused Port 3
    Device Port: 5
    Device Port Location: Internal

    Well, I found a way to identify a failing drive, but it is not pretty..
    download Seagate's "Seatools"
    burn Seatools to bootable cd
    go into bios / Integrated peripherals / on-chip ATA devices / change from "raid" to "ide"
    F11 boot to cdrom
    run Seatools quick check to almost instantly identify the failing drive.
    run Seatools extended test to find a whole SLEW of failed sectors   :shocking:
    reboot back into bios, Integrated peripherals / on-chip ATA devices / change back to "raid" (pray this won't blow away your data - it didn't, but you don't know that until you do it once)
    setup RMA refund thru Newegg, order replacement drive, and hope new drive makes it before old drive goes belly up.
    while patiently waiting for new drive, sit in amazement that the Intel matrix driver ignores all the errors that Seatools found in a matter of seconds..
    Here's a question.  Should I be pissed that the Intel raid controller isn't reporting a bunch of errors, or should I be excited that the Intel raid controller can keep a raid 0 stripe functioning with a clearly failing disk drive? (is it a bug or a feature?)
    {sigh}

  • Solaris 10 X86 - Hardware RAID - SMC/SVM question...

    I have gotten back into Sun Solaris system administration after a five-year hiatus... My skills are a little rusty and some of the tools have changed, so here are my questions...
    I have installed Solaris 10 release 1/06 on a Dell 1850 with an attached PowerVault 220V connected to a PERC 4/Di controller. The RAID is configured via the BIOS interface to my liking; Solaris is installed and sees all the partitions which I created during install.
    For testing purposes, the server's internal disk is used for the OS, and the PowerVault is split into two RAIDs - one a mirror, one a stripe...
    The question is: do I manage the RAID using Sun Management Console and its tools, OR do I use SMC?
    When I launch SMC and go into Enhanced Storage... I do not see any RAIDs... If I select "Disks" I do see them, but when I select them, it wants to run "FDISK" on them... now this is OK since they are blank, but I want to ensure I am not doing something I should be concerned about...
    If the PERC controller is controlling the RAID, what do I need SMC for?

    You can use SMC for other purposes but it won't help you with RAID.
    Sol 10 1/06 has raidctl, which handles LSI1030 and LSI1064 RAID-enabled controllers (from raidctl(1M)).
    Some of the PERCs (most?) are LSI but I don't know if those are the chipsets used by your PowerEdge (I doubt it).
    Generally you can break it down like this for x86:
    If you are using hardware RAID with Solaris 10 x86 you have to use pre-Solaris (i.e., on-the-RAID-controller) management or hope that the manufacturer of the device has a Solaris management agent/interface (good luck).
    The only exception to this that I know of is the RAID that comes with the V20z, V40z, X4100, and X4200.
    Otherwise you will want to go with SVM or VxVM and manage RAID within Solaris (software RAID).
    SMC etc. are only going to show you stuff if SVM is involved, and VxVM has its own interface; otherwise the disks are controlled by the PERC and just hanging out as far as Solaris is concerned.
    Hope this helps.
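    For completeness, the raidctl check is quick (a hedged sketch; the disk names are placeholders, and on a PERC it will most likely just tell you nothing is there):
    raidctl                       # show RAID volumes on supported (LSI1030/1064) controllers
    raidctl -c c1t0d0 c1t1d0      # create a RAID 1 volume from two disks (destroys their data)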

  • Fail to install solaris 8 for intel

    Hello. I downloaded the three files and made a CD for each one. But during the install on a Dell PowerEdge 6450 (4 x Pentium III 550), which has two AIC-7899 Ultra160/m SCSI adapters and a Dell PERC RAID controller (I note they are not included in the hardware compatibility list of Solaris 8 for Intel), the install process stopped at "I2O Nexus: initializing IO processor 0" and didn't go any further. When I attempted to install it on another machine (2 x PIII 1000), which has an AIC-7892 Ultra160/m SCSI adapter, the process stopped saying "no disks that meet the criteria in the solaris installer documentation found" after Solaris Web Start began to run. Before this, it reported a resource conflict between PNP0C01 and ISY0050. And on a PIII 800 PC the install stopped at "using RPC bootparams for network configuration information". What's wrong with my install? Is it due to hardware incompatibility or something else? How can I resolve it, if possible? Help is needed very much. Besides, what does DCA mean? Sorry, I'm a newbie in this field.
    thanks
    xoing

    Hi,
    Problem with PERC RAID controller card.
    You are trying to install drivers for the AIC 7899 and the Dell PERC RAID controller card. Dell has a PERC RAID controller driver for Solaris 7, but they don't have one for Solaris 8.
    You can download driver for SCSI AIC 7899 from this URL:
    http://www.adaptec.com/worldwide/support/driverdetail.html?cat=/Product/ASC-39160&filekey=u160_slrs_v121_dd.img.Z
    Then check with your local Dell support provider for a PERC RAID controller driver for Solaris 7 and Solaris 8.
    Thanks.
    revert back.
    regards,
    Senthilkumar
    SUN - DTS
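    (DCA, by the way, is the Device Configuration Assistant - the Solaris x86 boot diskette/program that probes the hardware before the installer starts.) To get that Adaptec driver onto the machine, the usual route is a driver update diskette, roughly like this (a hedged sketch; the exact "add driver" prompt in the DCA is from memory):
    uncompress u160_slrs_v121_dd.img.Z
    dd if=u160_slrs_v121_dd.img of=/dev/rdiskette bs=1440k
    Then boot with the DCA diskette, choose the option to add/scan for additional drivers, and feed it this diskette when prompted.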
