Attaching 2540 array to Solaris 8

Hello,
We are in a pickle. Currently, we have a 3511 connected to a V880 running Solaris 8.
(SunOS agatlsun01 5.8 Generic_117350-43 sun4u sparc SUNW,Sun-Fire-880).
And this array is not worth fixing any more.
We are purchasing a 2540 as a replacement.
We run Oracle 9i and 10g databases on this server.
I've been told that the 2540 is not supported by Solaris 8.
We would like to have both arrays connected to the same box to allow us to migrate the 5 databases off the 3511 and on to the 2540.
Is anyone aware of patches or workarounds that would allow us to attach the 2540 to the V880 running Solaris 8?
If this is not possible, we will have to attach the 2540 to another server, migrate the databases to this temporary server, then attach the 2540 back to the original server. Seems like a lot of unnecessary work.
Thanks.

You can find the supported components here:
http://sunsolve.sun.com/handbook_private/validateUser.do?target=Systems/2540/components
The only other problems I can think of would be with:
1. The QLC driver
2. The CAM software (verified this is supported under Solaris 8 SPARC 4/01)
So that leaves the drivers.

Similar Messages

  • 2540 array performance

I have a 2540 array and I'm monitoring it using CAM. I suspect the write speed to my volumes is slow. I've looked at the performance statistics in CAM, but it doesn't seem to give me the write speed. Below is a sample of the statistics I'm getting:
    Timestamp: Fri Mar 06 12:33:33 GMT 2009
    Total IOPS: 416.83
    Average IOPS: 157.08
    Read %: 90.74
    Write %: 9.25
    Total Data Transferred: 14359.20 KBps
    Read: 14065.17 KBps
    Average Read: 4376.78 KBps
    Peak Read: 16570.53 KBps
    Written: 294.03 KBps
    Average Written: 126.59 KBps
    Peak Written: 294.03 KBps
    Average Read Size: 271.64 KB
    Average Write Size: 26.05 KB
    Cache Hit %: 24.08
Does anyone know how I can determine whether my write speed is slow or not?
    Thanks

To determine if the array is running slowly, you really need to check on the server side.
Run 'iostat -zxn 60' (assuming Solaris)
The things you're looking for are a high busy percentage, a high service time, and a large number of requests in the queue. Any one on its own is not really sufficient to point at a performance issue, but all of them together would show that all is not well.
There's usually a lot more to it than that, but that can give you a fair indication of potential problems.
    You're showing quite high transfer sizes (writing avg 26K and reading an avg of 272K) - what profile did you use to create the volumes you're accessing here?
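The iostat check above can be scripted. A minimal sketch, run here against a made-up sample of `iostat -xn` output (the device names and numbers are illustrative, and the thresholds are rules of thumb, not official limits); on a live host you would pipe the real iostat output instead:

```shell
# Columns in Solaris `iostat -xn` output:
#   r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
cat > /tmp/iostat_sample.txt <<'EOF'
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   12.0    3.0  512.0   96.0  0.0  0.2    0.0    4.1   0   5 c1t0d0
  240.0   80.0 8192.0 2048.0  9.5  6.8   60.2   55.0  80  98 c2t1d0
EOF
# Flag devices where BOTH the busy percentage (%b, field 10) and the active
# service time (asvc_t, field 8) are high -- either alone is inconclusive,
# both together suggest the device is struggling.
awk 'NR > 1 && $10 > 70 && $8 > 30 {
    print $11, "looks saturated (%b=" $10 ", asvc_t=" $8 ")"
}' /tmp/iostat_sample.txt
```

With the sample above, only c2t1d0 is flagged; c1t0d0 is nearly idle and passes.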

  • 2530/2540 array cache size vs. 6140

    Greetings,
Needing some new storage, we are considering the new 2540 array. Everything looks great except that the cache size only goes up to 1GB (512MB per controller). We are currently using 3510 storage arrays with dual controllers and 1GB cache per controller.
    So at this point I'm thinking we should opt to go with a 6140 instead.
    We've ruled out the 3510 just because we'd like this storage to take us 3-4 years down the road.
    Any thoughts on this?
    Thanks.

IMHO the 2540 is positioned as a "low cost entry" storage system; midrange is the 6140 with 2GB or 4GB. Everything higher: 6540/9990 ;)
I personally prefer the ST6140/2GB as "entry" (better scalability, nice IOPS)
    -- randy

Couldn't see LUNs for Compaq MSA1000 storage array on Solaris box

    Hi All,
I want to connect a Compaq StorageWorks SAN array to a Solaris 10 box. I can see the array as connected, but its state is unusable.
How can I see the LUNs from the storage on the Solaris box?
    dmesg output:
    Feb  1 08:29:54 testappl        ndi_devi_online: failed for array-controller: target=11000 lun=0 ffffffff
    Feb  1 08:30:25 testappl fctl: [ID 517869 kern.warning] WARNING: fp(7)::PLOGI succeeded: no skip(2) for D_ID 11000
    Feb  1 08:30:25 testappl genunix: [ID 599346 kern.warning] WARNING: Page83 data not standards compliant COMPAQ   MSA1000          2.38
    Feb  1 08:30:25 testappl scsi: [ID 243001 kern.info] /pci@1d,700000/SUNW,emlxs@2/fp@0,0 (fcp7):
    Feb  1 08:30:25 testappl        ndi_devi_online: failed for array-controller: target=11000 lun=0 ffffffff
    Feb  1 08:47:27 testappl emlxs: [ID 349649 kern.info] [ 5.05F8]emlxs3: NOTICE: 730: Link reset.
    Feb  1 08:47:27 testappl emlxs: [ID 349649 kern.info] [ 5.0337]emlxs3: NOTICE: 710: Link down.
    Feb  1 08:47:30 testappl emlxs: [ID 349649 kern.info] [ 5.054D]emlxs3: NOTICE: 720: Link up. (2Gb, fabric, initiator)
    Feb  1 08:47:30 testappl genunix: [ID 599346 kern.warning] WARNING: Page83 data not standards compliant COMPAQ   MSA1000          2.38
    Feb  1 08:47:30 testappl scsi: [ID 243001 kern.info] /pci@1d,700000/SUNW,emlxs@2/fp@0,0 (fcp7):
    Feb  1 08:47:30 testappl        ndi_devi_online: failed for array-controller: target=11000 lun=0 ffffffff
    cfgadm -al output:
    bash-3.00# cfgadm -al
    Ap_Id                          Type         Receptacle   Occupant     Condition
    c0                             scsi-bus     connected    configured   unknown
    c0::dsk/c0t0d0                 CD-ROM       connected    configured   unknown
    c1                             scsi-bus     connected    configured   unknown
    c1::dsk/c1t0d0                 disk         connected    configured   unknown
    c1::dsk/c1t1d0                 disk         connected    configured   unknown
    c1::dsk/c1t2d0                 disk         connected    configured   unknown
    c2                             fc           connected    unconfigured unknown
    c5                             scsi-bus     connected    unconfigured unknown
    c6                             fc           connected    unconfigured unknown
    c7                             fc-fabric    connected    configured   unknown
    c7::500805f3000186d9           array-ctrl   connected    configured   unusable
    usb0/1                         usb-device   connected    configured   ok
    usb0/2                         unknown      empty        unconfigured ok
    usb1/1                         unknown      empty        unconfigured ok
usb1/2                         unknown      empty        unconfigured ok
Thanks in advance.

    Looks like the LUN is not configured correctly on the storage array. The kernel can't take the LUN online.
I have seen this on various boxes; most of the time it was either that the LUN wasn't set online on the SAN box, or there was a SCSI reservation on the LUN.
    Can you access the LUN from another host?

  • Attaching binary arrays to a menu ring

I have been playing around with the menu rings and can't seem to get them to do the function that I want. I am looking to make a pull-down menu for some user options for different resolution modes for an instrument. So, for example, one option will be "2 eV", and with this option on I need to send 011 to the instrument. Should I store it as a binary string or an array, and how can I do that?
    Thanks in advance

    Do you always need to send a number to the instrument? If so do it this way:
For the menu ring, go to its Properties >> Edit Items. In this dialog you can enter the displayed text and the associated number, so you can pair 2eV with 11. The terminal will give you the number. From your example I assume that the instrument will see the number as text with three digits. Use the "Format into string" function and set the format string to "%03d", which will give a number with three digits, padded with 0 on the left. In the example, choosing 2eV will give 11 from the terminal and "011" as the result from the Format function.
    Waldemar
    Using 7.1.1, 8.5.1, 8.6.1, 2009 on XP and RT
    Don't forget to give Kudos to good answers and/or questions
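The "%03d" format string described above behaves like C printf zero-padding, which you can check from any shell:

```shell
# "%03d" pads the number to three digits with leading zeros, matching the
# LabVIEW "Format into string" behaviour described above.
printf '%03d\n' 11   # ring value for "2 eV" -> 011
printf '%03d\n' 2    # -> 002
printf '%03d\n' 123  # already three digits -> 123
```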

  • Mirroring two ESX Servers with direct attached arrays

Hi, I am new to the forum, so please forgive me if I commit any faux pas.
We have two HP DL585 servers (AMD Opteron) with direct attached storage arrays (attached via SCSI cable). We would like to create a VM environment using ESX 3.5 (this is what's available to us, packaged). We also have all the VMware tools available to us, but no other budget to spend on NAS/SANs.
Is there a way to create a mirror between the two machines (and arrays, so to speak) so that failover is possible if one goes down, or something similar? I think HP StorageWorks Storage Mirroring does this, but does VMware have anything built in to do the job?
We will be installing the ESX build on the local hard disks (mirrored) and the VM slices on the array (RAID 5). Also, the arrays are seen as another local disk on the server. Would VMotion help, or what could we put in place here? Thanks
    Alternatively, what is a good design to follow with the hardware we have? I'm finding it hard to get examples of VMware environments on the web. Thanks in advance

    Hi Kelly,
    I've put this on a different thread but said I would ask you as you have been of such help in the past
    Converter errors
I'm running VMware Converter Standalone on a W2k3 server (importing an online physical server, i.e. itself). I am trying to P2V itself and send it out to our ESX 3.5 server/datastore (destination).
Sorry, I can't explain it any better than this.
I'm getting this error:
Unknown error returned by VMware Converter Agent
    and in the logs its saying
    2009-10-29 15:18:11.119 'ClientConnection' 5028 info Making sure that UFAD interface has version vmware-converter-4.0.2
    2009-10-29 15:18:11.135 'ClientConnection' 5028 info UFAD interface version is vmware-converter-4.0.2
    2009-10-29 15:18:11.150 'P2V' 5028 info task,277 Task execution completed
    2009-10-29 15:18:11.181 'P2V' 5028 info ufaSession,129 DoImport called
    2009-10-29 15:18:11.181 'P2V' 4256 info task,275 Starting execution of a Task
    2009-10-29 15:18:11.181 'P2V' 4256 info ufaTask,207 Successfully connected to VMImporter
    2009-10-29 15:19:20.149 'P2V' 4256 error task,296 Task failed: P2VError UNKNOWN_METHOD_FAULT(sysimage.fault.ImageProcessingTaskFault)
Is there no way I can P2V it to itself and then copy it to the datastore?
(When I try this, it just doesn't see the file even though it's there.)
What is the best P2V tool out there?
    Thanks for any help provided

  • StorageTek 2540 and 2501 performance

    I'm doing some work with a dual controller 2540 with 2501 expansion trays attached and would like to identify what the maximum throughput of the arrays should be.
    This is how I understand the architecture:
Each of the four 4Gbit FC host ports can be active concurrently, providing up to 400MB/sec per port, for a total of 1600MB/sec between the hosts and the array.
    Each controller has a PCI-X bus between the host board interface and the SAS I/O controller, capable of 1GB/sec per controller.
The "SAS I/O controller" in each controller connects to its local "SAS Expander" using a 2x-wide SAS channel = 6Gbit/sec, and also connects to the other controller's SAS expander using (another?) 2x-wide SAS channel.
    The SAS expander in each controller has a separate single lane, 3Gbit connection to each of the 12 internal disks.
    Each disk has a connection to the two SAS expanders, but only one is active (?). Total connection per disk to the rest of the array is 3Gbit?
    There is a 4x wide connector from each SAS expander to the SAS drive port (to connect to a 2501). This has a throughput of 12Gbit/sec?
    If the above is correct (and please feel free to put me right if I've got something wrong!), what happens in the 2501? From reading the documentation on the 2501, it would appear that the connection between the 2501 and the 2540 is 3Gbit (per controller). Is this a significant bottleneck if two 2501 expansion trays are added?
    I got most of the above from http://shop.tisource.ch/pdf.php?id=1619793 and trying to match it with jmiller's comment in https://opensolaris.org/jive/thread.jspa?messageID=258953
    Any insights appreciated.
    Thanks
    JR
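As a sanity check on the link-rate arithmetic in the question above: FC and SAS at these speeds use 8b/10b line coding, so every data byte costs 10 bits on the wire, and usable bandwidth in MB/s is roughly the line rate in Gbit/s times 1000 divided by 10. A small sketch (the figures are theoretical ceilings, not measured throughput):

```shell
# Gbit/s line rate -> approximate usable MB/s, assuming 8b/10b coding
gbit_to_mbs() { awk -v g="$1" 'BEGIN { printf "%d", g * 1000 / 10 }'; }

echo "4Gb FC host port:      $(gbit_to_mbs 4) MB/s"           # 400
echo "3Gb SAS lane:          $(gbit_to_mbs 3) MB/s"           # 300
echo "2x-wide 3Gb SAS link:  $(( $(gbit_to_mbs 3) * 2 )) MB/s"
echo "4 FC host ports total: $(( $(gbit_to_mbs 4) * 4 )) MB/s"
```

This reproduces the 400MB/sec per FC port and 1600MB/sec aggregate figures, and shows why a single 3Gbit link to a 2501 tray would cap that tray at roughly 300MB/sec per controller.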

I see the same result here for a 2540 attached to a Sun M4000 (with QLogic HBAs) running the latest Solaris 10 8/07 with the latest patches. There is also a 3510 attached, and the mpxio devices for the LUNs of the 3510 are working well.
The command "mpathadm show mpath-support libmpscsi_vhci.so" shows that the 2540 array (LCSM100_F) is supported. Nevertheless, the mpxio devices for the 2540 do not show up.
I will open a Sun support call now.
    Christian

  • New to Solaris administration - Need some help with some issues

    Hello all,
I am new to Solaris administration and need some assistance with a few things. I was going to make separate posts but decided it would be easier to keep track of in one. I really do not know much about the OS, but I do have a little Linux background, so that might help me out. I am going to number my problems to keep them sorted, so here we go.
    The machine:
    Sunfire V880
    4x 73GB HDs
    PCI dual fiber channel host adapter
    Attached RAID array:
    Sun StorEdge T3 Array with 9x 73GB HDs
    Sun DDS4 Tape Drive in a Unipack
    OS: Solaris 5.10
    Updates: Updated everything except 2 patches (Updating is a real pain isn't it? At least it seems that way to me.)
    1. So I might as well start with the update issues! These 2 updates will not install:
    -PostgreSQL 8.2 source code, (137004-02)
    Utility used to install the update failed with exit code {0}.
    -Patch for mediaLib in solaris, (121620-03)
    Install of update failed. Utility used to install the update is not able to save files. Utility used to install the update failed with exit code 4.
No idea why the PostgreSQL update is not working, but the mediaLib patch seems to not have enough hard drive space.
2. Where are all the drives? I don't know how to find the RAID box or the other 3 internal hard drives. When I installed the OS, I think I installed it on only one hard drive, and that might be part of the reason why the mediaLib update above says that I don't have enough space.
    3. I probably need more space for the OS and updates, is there a way to "add" space onto the hard drive that currently is running the OS?
    3. Once I see the other hard drives I wish to combine them to make a RAID 0 and RAID 5 array, how do I go about doing that?
    4. How can I find/see the tape drive?
    5. Does my swap space really need to be 64GB? I know the book I have read suggests it, but I only made it 5GB because it didn't seem to make sense to make it 64GB.
    Thank you in advance for the help. I know these are a lot of questions to ask but please go easy on me :)
    rjbanker
    Edited by: rjbanker on Mar 7, 2008 8:21 AM

    SolarisSAinPA*
    1.
    -PostgreSQL 8.2 source code, (137004-02)
    Utility used to install the update failed with exit code {0}.
Exit code 0 means there were no errors. When you run showrev -p | grep 137004-02, does your system show that the patch is installed? You can check the log for a particular patch add attempt in /var/sadm/patch/<patch_num-rev>. A bunch of stuff shows up; here is a portion (I am not entirely sure what it means; there must be at least a page of stuff like this):
Patch: 121081-08 Obsoletes: Requires: 121453-02 Incompatibles: Packages: SUNWccccrr, SUNWccccr, SUNWccfw, SUNWccsign, SUNWcctpx, SUNWccinv, SUNWccccfg, SUNWccfwctrl
Patch: 122231-01 Obsoletes: Requires: 121453-02 Incompatibles: Packages: SUNWcctpx
Patch: 120932-01 Obsoletes: Requires: Incompatibles: Packages: SUNWcctpx
Patch: 123123-02 Obsoletes: Requires: Incompatibles: Packages: SUNWccinv
Patch: 121118-12 Obsoletes: Requires: 121453-02 Incompatibles: Packages: SUNWcsmauth, SUNWppror, SUNWpprou, SUNWupdatemgru, SUNWupdatemgrr, SUNWppro-plugin-sunos-base
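The patch check suggested above is easy to script. A minimal sketch over a captured fragment of showrev -p output (the here-doc stands in for the real command, since this was run off-box; on a live Solaris host, pipe `showrev -p` directly):

```shell
# Sample fragment of `showrev -p` output (captured; not a live query)
cat > /tmp/showrev_sample.txt <<'EOF'
Patch: 120932-01 Obsoletes: Requires: Incompatibles: Packages: SUNWcctpx
Patch: 123123-02 Obsoletes: Requires: Incompatibles: Packages: SUNWccinv
EOF

# Is a given patch revision installed?
patch_installed() { grep -q "^Patch: $1 " /tmp/showrev_sample.txt; }

patch_installed 123123-02 && echo "123123-02 is installed"
patch_installed 137004-02 || echo "137004-02 is NOT installed"
```

If the patch is missing from the listing despite the installer's exit code 0, the per-patch log under /var/sadm/patch/ is the next place to look.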
    2.
Where are all the drives? I don't know how to find the RAID box or the other 3 internal hard drives. When I installed the OS, I think I installed it on only one hard drive and that might be part of the reason why the mediaLib update above says that I don't have enough space.
When you run the format command, how many drives are listed? Identify your root drive (compare with the output of the df command you ran earlier) and please post here.
2. Output of df -hk; looks like I ran out of room. Should I just go ahead and reinstall the OS?
    Filesystem size used avail capacity Mounted on
    /dev/dsk/c1t0d0s0 5.9G 5.4G 378M 94% /
    /devices 0K 0K 0K 0% /devices
    ctfs 0K 0K 0K 0% /system/contract
    proc 0K 0K 0K 0% /proc
    mnttab 0K 0K 0K 0% /etc/mnttab
    swap 42G 1.3M 42G 1% /etc/svc/volatile
    objfs 0K 0K 0K 0% /system/object
    /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
    5.9G 5.4G 378M 94% /platform/sun4u-us3/lib/libc_psr.so.1
    /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
    5.9G 5.4G 378M 94% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
    fd 0K 0K 0K 0% /dev/fd
    swap 42G 1.1M 42G 1% /tmp
    swap 42G 32K 42G 1% /var/run
    /dev/dsk/c1t0d0s7 46G 47M 46G 1% /export/home
    3. So I guess the general consensus is to reinstall the OS, is that correct?
4. There is nothing in /dev/rmt, and unfortunately I don't have a tape to test it with!
    5. I guess 5GB will be ok for what we do.
    Alan.pae*
    1. I think the above text might explain why it failed, although I don't know how to correct it.
2. Output of mount:
    # mount
    / on /dev/dsk/c1t0d0s0 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1d80008 on Mon Mar 10 10:56:51 2008
    /devices on /devices read/write/setuid/devices/dev=4dc0000 on Mon Mar 10 10:56:19 2008
    /system/contract on ctfs read/write/setuid/devices/dev=4e00001 on Mon Mar 10 10:56:19 2008
    /proc on proc read/write/setuid/devices/dev=4e40000 on Mon Mar 10 10:56:19 2008
    /etc/mnttab on mnttab read/write/setuid/devices/dev=4e80001 on Mon Mar 10 10:56:19 2008
    /etc/svc/volatile on swap read/write/setuid/devices/xattr/dev=4ec0001 on Mon Mar 10 10:56:19 2008
    /system/object on objfs read/write/setuid/devices/dev=4f00001 on Mon Mar 10 10:56:19 2008
    /platform/sun4u-us3/lib/libc_psr.so.1 on /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1 read/write/setuid/devices/dev=1d80008 on Mon Mar 10 10:56:50 2008
    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1 on /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1 read/write/setuid/devices/dev=1d80008 on Mon Mar 10 10:56:50 2008
    /dev/fd on fd read/write/setuid/devices/dev=50c0001 on Mon Mar 10 10:56:51 2008
    /tmp on swap read/write/setuid/devices/xattr/dev=4ec0002 on Mon Mar 10 10:56:52 2008
    /var/run on swap read/write/setuid/devices/xattr/dev=4ec0003 on Mon Mar 10 10:56:52 2008
    /export/home on /dev/dsk/c1t0d0s7 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1d8000f on Mon Mar 10 10:56:57 2008
    3. Judging by the above text I will be doing a reinstall huh?
4. Actually, I am not familiar with tape backups, let alone Solaris backup apps! Any suggestions? (Preferably free; we have to cut down on costs.)
    5. No comment
    Thanks for the help, hope to hear from you again!
    rjbanker

  • How to enable multipathing on Solaris 10

    I have a Sun SPARC T2000 connected to a 2540 array.
    Originally I only installed a single channel FC HBA and connected it to tray 1 of the array. Today I've installed another FC HBA and connected it to the 2nd tray of the array. When I run format on my Solaris 10 data host I can see that there are now 2 entries for the same LUN.
    I've enabled multipathing by running the following command
    # stmsboot -D fp -e
    <reboot>
    After the reboot I still see two entries for the same LUN when I run format
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@0,0
    1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@1,0
    2. c2t0d31 <SUN-UniversalXport-0670 cyl 8 alt 2 hd 64 sec 64>
    /pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w202400a0b85a1793,1f
    3. c3t2d31 <SUN-UniversalXport-0670 cyl 8 alt 2 hd 64 sec 64>
    /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w202500a0b85a1793,1f
    Specify disk (enter its number): ^D
    I was under the impression that I should only see one entry after enabling multipathing.
    Am I wrong?
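One way to see what MPxIO actually did after `stmsboot -D fp -e` is `stmsboot -L`, which lists the non-STMS to STMS device-name mappings. A minimal sketch over illustrative sample output (the controller numbers and WWN below are made up; on a live host, pipe the real `stmsboot -L` output):

```shell
# Hypothetical `stmsboot -L` output: two physical paths collapsing into
# one scsi_vhci (c4tWWNd0) device.
cat > /tmp/stmsboot_sample.txt <<'EOF'
non-STMS device name                    STMS device name
------------------------------------------------------------------
/dev/rdsk/c2t0d0        /dev/rdsk/c4t600A0B80005A0C870000027C4905FD67d0
/dev/rdsk/c3t2d0        /dev/rdsk/c4t600A0B80005A0C870000027C4905FD67d0
EOF
# Count how many physical paths map onto each multipathed device;
# a properly multipathed LUN on two HBAs should show 2 paths.
awk 'NR > 2 { count[$2]++ } END { for (d in count) print d, count[d], "paths" }' \
    /tmp/stmsboot_sample.txt
```

Note that the two entries still visible in format here are the `t_d31` UniversalXport access LUNs, which is a separate question from whether the data LUNs were collapsed.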

    Further to my original posting.
    I've now created a few volumes and when I run format I'm presented with the following:
    AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@0,0
    1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
    /pci@780/pci@0/pci@9/scsi@0/sd@1,0
    2. c2t0d31 <SUN-UniversalXport-0670 cyl 8 alt 2 hd 64 sec 64>
    /pci@7c0/pci@0/pci@1/pci@0,2/SUNW,qlc@1/fp@0,0/ssd@w202400a0b85a1793,1f
    3. c3t2d31 <SUN-UniversalXport-0670 cyl 8 alt 2 hd 64 sec 64>
    /pci@7c0/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w202500a0b85a1793,1f
    4. c4t600A0B80005A0C870000027C4905FD67d0 <SUN-LCSM100_F-0670 cyl 2558 alt 2 hd 64 sec 64>
    /scsi_vhci/ssd@g600a0b80005a0c870000027c4905fd67
    5. c4t600A0B80005A0C870000027D4905FED3d0 <SUN-LCSM100_F-0670 cyl 51198 alt 2 hd 64 sec 64>
    /scsi_vhci/ssd@g600a0b80005a0c870000027d4905fed3
    6. c4t600A0B80005A0C870000027E4905FFCBd0 <SUN-LCSM100_F-0670 cyl 51198 alt 2 hd 64 sec 64>
    /scsi_vhci/ssd@g600a0b80005a0c870000027e4905ffcb
    7. c4t600A0B80005A1793000002A14905FF3Cd0 <SUN-LCSM100_F-0670 cyl 33278 alt 2 hd 256 sec 64>
    /scsi_vhci/ssd@g600a0b80005a1793000002a14905ff3c
    8. c4t600A0B80005A1793000002A34905FFEEd0 <SUN-LCSM100_F-0670 cyl 51198 alt 2 hd 64 sec 64>
    /scsi_vhci/ssd@g600a0b80005a1793000002a34905ffee
    9. c4t600A0B80005A17930000029E4905FCDAd0 <SUN-LCSM100_F-0670 cyl 43518 alt 2 hd 256 sec 64>
    /scsi_vhci/ssd@g600a0b80005a17930000029e4905fcda
I assume the c4 entries are my multipath volumes, but how on earth am I going to create slices and mount them with this sort of disk alias?
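Despite the length, the c4tWWNd0 alias works exactly like a short cNtNdN name: slices are addressed by appending s0..s7, and the usual tools accept it. A minimal sketch (the device name is taken from the format listing above; the commented commands are Solaris-only, shown here as comments so the name-construction part stays runnable):

```shell
# Build the slice-0 device paths from the long MPxIO alias
dev=c4t600A0B80005A0C870000027C4905FD67d0
blk="/dev/dsk/${dev}s0"    # block device for mount
raw="/dev/rdsk/${dev}s0"   # raw device for newfs/fsck
echo "$blk"
echo "$raw"
# format -d "$dev"     # label and partition as usual (Solaris only)
# newfs "$raw"         # build a UFS filesystem on slice 0
# mount "$blk" /mnt    # mount the block device
```

Shell variables (or /etc/vfstab entries) keep the long names manageable in practice.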

  • Installing Sun firmware on Seagate disks for StorageTek 2540

    Hello,
    I recently purchased a StorageTek 2540 array that has several open drive slots. As luck would have it, I also have several Seagate ST3146356SS SAS drives sitting on the shelf that according to the System Handbook are one of the models of drives supported in the STK2540 (Sun part #390-0422).
    So I popped them in the chassis, and they show up immediately and are available for use, which is good.
    The problem is, these drives have standard Seagate firmware on them (version 0006) rather than Sun firmware. As such, they show up with a different capacity than the Sun 146GB drives that are already in the chassis (which happen to be Hitachi, rather than Seagate drives).
    I would like to update the firmware on these drives to the official Sun version of the firmware, but I cannot see how to do that. I tried to use the Install Array Firmware Baseline wizard, but when it analyzes the drives, it tells me for the Seagate drive:
    Current Firmware: Tray.85.Drive.06: 0006
    Baseline: Tray.85.Drive.06: Not Applicable
    In the past I owned a bunch of Sun T3 arrays, and I used to replace standard manufacturer firmware with Sun firmware all the time, and it was very useful. I'd like to be able to do the same thing here. I also have a SAS controller in my PC, and could use Seagate's Seatools or DriveDetect tools (Windoze) or Seagate Enterprise CLI (Linux) to update the firmware, I believe.
    It looks like I have the Sun firmware on my Solaris server (the /opt/SUNWstkcam/share/fw/images/disk/D_ST314656SSUN146G_0A1C.dlp file included in CAM 6.7).
    Is it safe to grab that file and use it to update the drive firmware? Or is there a better way?
    I realize that burning firmware to a non-Sun drive is not supported, and may render the drive(s) useless, but as I am not using the drives for anything else today, anyway, I'm willing to take that risk.
    Any suggestions would be appreciated.
    Thanks,
    Bill

    FYI, after some digging around, I discovered the "sscs modify firmware" command and tried that, but it did not work:
    61# sscs modify -a ss2540 -c Tray.85.Drive.06 -f -o -w \
    -p /opt/SUNWstkcam/share/fw/images/disk/D_ST314656SSUN146G_0A1C.dlp \
    firmware
    Analyzing Firmware
    Incompatible firmware image. Skipping Tray.85 Slot.6
Firmware install failed.
I'm guessing that since the non-Sun drives show up as model # ST3146356SS instead of ST314656SSUN146G, the firmware is deemed incompatible by the CAM CLI.
    So I'm still looking...
    Thanks,
    Bill

  • Hardware recommendations for learning Solaris Cluster on Sparc (at home)

    On a low budget, I'd like to put together a Solaris Cluster on Sparc (at home). At "work" in the next year we will be implementing a Solaris Cluster to run Tomcat and a custom CORBA server. (These apps will be migrated from very old hardware and VCS) The CORBA server is a Sparc binary, hence the need for Sparc. I'd like my home-office cluster to be similar in function to what I have at work. At work we have (2) T5120 Servers and a 2540 (2500-M2) Array waiting. From looking at the Solaris Cluster docs, it looks like you use a 2540 in a Direct-Connect configuration. We will be going to Solaris Cluster training eventually, but not soon. In the meantime, I'd like to keep/gain some skills/experience.
    Potential (cheap) Home Cluster:
    (2) SunFire V245 or (2) T1000 or (2) something_cheap
    connected to
    (1) Storedge D2 or (1) Storedge S1
    My main desire, is for the interconnects and failover on this Home Cluster to behave the same way as the T5120s with the 2540 Array. Example, if I yank a HD (or replace) then I'd like it to give very similar messages to what I will face at work in the future. I'd like the creation of ZFS pools etc to work similarly. I'd like SCSI cards (HBAs or whatever) and cabling to be cheap.
Any recommendations on hardware? Servers? Arrays? SCSI cards/cabling?
    Thanks,
    Scott

    I settled on:
    (2) Sunfire V210
    Storedge 3120
    Connected by VHDCI
    All used equipment at a cheap price. Should be a great little testbed.

  • Replacement for 2540?

    We've built a number of Sun (SPARC) servers for customers with 2540 arrays fibre attached.
Now that the 2540 is end-of-life, what's the replacement (if any)?
The 6000 series comes in at over twice the price. Is the Open Storage range a suitable alternative (does it have fibre channel support)?
    Thanks.

There may be one in the pipeline, I hear, sometime around the June time frame. It will probably be called a 2600 and will be the same as the IBM DS3512, which, if the 2600 doesn't materialize by then, would be a good bet to pursue. It will probably be cheaper as well.
I got caught out by exactly this problem after buying a 2540 full of 2TB disks in October 2010; I went to add more trays and disks in February and was told it was no longer available. I was extraordinarily pissed off by this, and it may well end up costing Oracle a lot of business. This is not how Sun used to do it, with last-ship dates etc. Larry's policy of selling only what's on the truck may well come back to haunt him.
    Edited by: Storage Guy on Apr 4, 2011 7:28 PM
    Edited by: Storage Guy on Apr 4, 2011 7:29 PM

  • Clustering Solaris 10 (SPARC)  with QFS 4.3

I have searched to no avail for a solution to my error. The error is bolded and italicized in the information below. I would appreciate any assistance!!
    System
    - Dual Sun-Fire-280R with external dual ported SCSI-3 disk arrays.
    - Solaris 10 Update 1 with the latest patch set (as of 5/2/06)
- Clustering from Java Enterprise System 2005Q4 - SPARC
    - StorEdge_QFS_4.3
The root/boot disk is not mirrored - I don't want to introduce another level
of complication at this point.
I followed the example "HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager" in one of the docs for setting up an HA QFS file system.
The following is additional information:
hosts file for PREFERRED - NOTE: the secondary has the same entries, but the PREF and SEC loghosts are switched.
    # Internet host table
    127.0.0.1 localhost
    XXX.xxx.xxx.11 PREFFERED loghost
    XXX.xxx.xxx.10 SECONDARY
    XXX.xxx.xxx.205 SECONDARY-test
    XXX.xxx.xxx.206 PREFERRED-test
    XXX.xxx.xxx.207 VIRTUAL
    Please NOTE I only have one NIC port to the public net.
    ifconfig results from the PREFERRED for the interconnects only
    eri0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
    inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
    ether 0:3:ba:18:70:15
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
    ether 8:0:20:9b:bc:f9
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
    inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
    ether 0:0:0:0:0:1
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
    inet6 ::1/128
    eri0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 3
    inet6 fe80::203:baff:fe18:7015/10
    ether 0:3:ba:18:70:15
    hme0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 4
    inet6 fe80::a00:20ff:fe9b:bcf9/10
    ether 8:0:20:9b:bc:f9
    PLEASE NOTE!! I did disable ipv6 during Solaris installation and I have modified the defaults to implement NFS - 3
    ifconfig results from the SECONDARY for the interconnects only
    eri0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
    inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
    ether 0:3:ba:18:86:fe
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 172.16.1.2 netmask ffffff80 broadcast 172.16.1.127
    ether 8:0:20:ac:97:9f
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
    inet 172.16.193.2 netmask ffffff00 broadcast 172.16.193.255
    ether 0:0:0:0:0:2
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
    inet6 ::1/128
    eri0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 3
    inet6 fe80::203:baff:fe18:86fe/10
    ether 0:3:ba:18:86:fe
    hme0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 4
    inet6 fe80::a00:20ff:feac:979f/10
    ether 8:0:20:ac:97:9f
Again - I disabled ipv6 at install time.
    I followed all instructions and below are the final scrgadm command sequences:
    scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"
    scrgadm -a -t SUNW.HAStoragePlus
    scrgadm -a -t SUNW.nfs
    scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs
    scrgadm -a -L -g nfs-rg -l VIRTUAL_HOSTNAME
    scrgadm -c -g nfs-rg -h PREFERRED_HOST,SECONDARY_HOST
scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/qfsnfs1 -x FilesystemCheckCommand=/bin/true
    scswitch -Z -g nfs-rg
    scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y Resource_dependencies=qfsnfs1-res
    PREFERRED_HOST - Some shared paths in file /global/nfs/SUNW.nfs/dfstab.nfs1-res are invalid.
    VALIDATE on resource nfs1-res, resource group nfs-rg, exited with non-zero exit status.
    Validation of resource nfs1-res in resource group nfs-rg on node PREFERRED_HOST failed.
    Below are the contents of /global/nfs/SUNW.nfs/dfstab.nfs1-res:
    share -F nfs -o rw /global/qfsnfs1
AND finally, the results of the scstat command - the same for both hosts:
(root)[503]# scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: PREF Online
    Cluster node: SEC Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: PREF:hme0 SEC:hme0 Path online
    Transport path: PREF:eri0 SEC:eri0 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: PREF 1 1 Online
    Node votes: SEC 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: nfs1dg PREF SEC
    Device group servers: nfsdg PREF SEC
    -- Device Group Status --
    Device Group Status
    Device group status: nfs1dg Online
    Device group status: nfsdg Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: nfs-rg VIRTUAL qfsnfs1-res
    -- Resource Groups --
    Group Name Node Name State
    Group: nfs-rg PREF Online
    Group: nfs-rg SEC Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: VIRTUAL PREF Online Online - LogicalHostname online.
    Resource: VIRTUAL SEC Offline Offline - LogicalHostname offline.
    Resource: qfsnfs1-res PREF Online Online
    Resource: qfsnfs1-res SEC Offline Offline
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: PREF ipmp1 Online ce0 Online
    IPMP Group: SEC ipmp1 Online ce0 Online
    Also, the system will not fail over.

    Good Morning Tim:
    Below are the contents of /global/nfs/SUNW.nfs/dfstab.nfs1-res:
    share -F nfs -o rw /global/qfsnfs1
    Below are the contents of vfstab for the Preferred host:
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c1t1d0s1 - - swap - no -
    /dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -
    #/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes size=1024M
    /dev/did/dsk/d2s3 /dev/did/rdsk/d2s3 /global/.devices/node@1 ufs 2 no global
    qfsnfs1 - /global/qfsnfs1 samfs 2 no sync_meta=1
    Below are the contents of vfstab for the Secondary host:
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c1t1d0s1 - - swap - no -
    /dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -
    #/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes size=1024M
    /dev/did/dsk/d20s3 /dev/did/rdsk/d20s3 /global/.devices/node@2 ufs 2 no global
    qfsnfs1 - /global/qfsnfs1 samfs 2 no sync_meta=1
    Below are contents of /var/adm/messages from scswitch -Z -g nfs-rg through the offending scrgadm command:
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource qfsnfs1-res status on node PREFFERED_HOST change to R_FM_ONLINE
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource qfsnfs1-res status msg on node PREFFERED_HOST change to <>
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource qfsnfs1-res state on node PREFFERED_HOST change to R_MON_STARTING
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node PREFFERED_HOST change to RG_PENDING_ON_STARTED
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_monitor_start> for resource <qfsnfs1-res>, resource group <nfs-rg>, timeout <90> seconds
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_monitor_start> completed successfully for resource <qfsnfs1-res>, resource group <nfs-rg>, time used: 0% of timeout <90 seconds>
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource qfsnfs1-res state on node PREFFERED_HOST change to R_ONLINE
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <merater>, resource group <nfs-rg>, time used: 0% of timeout <300 seconds>
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource merater state on node PREFFERED_HOST change to R_ONLINE
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node PREFFERED_HOST change to RG_ONLINE
    May 15 14:42:47 PREFFERED_HOST Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_validate> for resource <nfs1-res>, resource group <nfs-rg>, timeout <300> seconds
    May 15 14:42:47 PREFFERED_HOST SC[SUNW.nfs:3.1,nfs-rg,nfs1-res,nfs_validate]: [ID 638868 daemon.error] /global/qfsnfs1 does not exist or is not mounted.
    May 15 14:42:47 PREFFERED_HOST SC[SUNW.nfs:3.1,nfs-rg,nfs1-res,nfs_validate]: [ID 792295 daemon.error] Some shared paths in file /global/nfs/admin/SUNW.nfs/dfstab.nfs1-res are invalid.
    May 15 14:42:47 PREFFERED_HOST Cluster.RGM.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <nfs1-res>, resource group <nfs-rg>, time used: 0% of timeout <300, seconds>
    If there is anything else that might help, please let me know. I am currently considering tearing the cluster down and rebuilding it to test with a UFS filesystem, to see if the problem might be with QFS.

  • Object Array Data Provider Refresh Possible bug

    Hello
    I am having a problem with Object Array Data Provider: the table's data is not refreshed after one request, as expected, but only after two requests.
    Steps to reproduce the bug:
    0. Create a new Visual Web project, call it 'test'.
    Set 'Bundled Tomcat ' or 'Sun 9' as deploy target server.
    Edit 'Page1' of the project.
    1. Create an Entity class, a simple class that has only getters and setters for a few fields (let's say 'id' and 'name').
    2. In the SessionBean1 that is generated by the framework create an array of Entity class named 'entityArr', and getters and setters for this field.
    3. Add a new Array Object Data Provider on the page and set in its properties as array the array created in the previous step 'entityArr (SessionBean1)'.
    4. Add a new table component, and set as data provider the provider created in step 3.
    In the Table's Layout, map the fields from the Entity class, and set whatever components you desire for each field's type, or leave the default ones (Static Text).
    5. Add a 'property change trigger' component on the page. I call it that because I tried the following:
    5.1 A text field and a button to submit the text value
    5.2 A Calendar component with autosubmit
    5.3 A DropDown with autosubmit.
    6. On the property change trigger component created in step 5, set a value-changed method that changes the array that should be displayed by the table.
    For example, for a DropDown component, you will have a method like this:
    public void dropDown1_processValueChange(ValueChangeEvent event) {
        String idStr = (String) event.getNewValue();
        System.out.println("Entity id: " + idStr);
        if (idStr.equals("item1")) {
            fillSessionBean1(true);
        } else {
            fillSessionBean1(false);
        }
        getSessionBean1().setItemName(idStr);
    }
    private void fillSessionBean1(boolean fillValues) {
        Entity[] values;
        if (fillValues) {
            values = new Entity[4];
            for (int i = 0; i < values.length; i++) {
                Entity entity = new Entity();
                entity.setDescription("Description " + i);
                entity.setName("Name " + i);
                entity.setId(i);
                values[i] = entity;
            }
        } else {
            //values = new Entity[0];
            values = null;
        }
        getSessionBean1().setEntityArr(values);
    }
    7. When running the program, if the selected item is 'item1', the table does not show the array set on that branch.
    I am using :
    Netbeans 5.5 build 20061017100,
    Visual Web Pack 070104_2,
    Ent.Pack 20061212
    jdk 1.6.0
    Operating Systems : Both Linux Suse10 and Windows.
    If anyone has a solution for this please let me know.

    OK
    While no one responded, I had to think for myself.
    Either there is a bug in the code generated by NetBeans, or it is documented nowhere that if you attach an array to a data provider, you have to notify the data provider by hand when the array changes.
    You have to copy this line, which is generated in the init function, into your value-changed handlers:
    objectArrayDataProvider1.setArray((java.lang.Object[])getValue("#{SessionBean1.entityArr}"));
    I think this approach is a little bit wrong, even though it works.
    I believe the data provider should be notified when the array has been changed.
    There could be much simpler approaches:
    1. In the code generated by NetBeans, if you attach an array to a data provider, the data provider gets notified after any set(Object[]).
    2. The data provider could expose a function so you would be able to attach an Object (the session bean, in my case) and the name of the function that retrieves the array (in my case, 'getEntityArr').
    The code generated by NetBeans could add this function easily.
    Maybe there are other, better approaches, and I might be wrong.
    It's good that it works.
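    The manual-refresh workaround above can be avoided if the provider always reads the array through a function instead of holding on to a possibly stale reference — essentially "approach 2" from the post. Below is a minimal, self-contained Java sketch of that idea; note that ArraySource, rowCount, and valueAt are invented names for illustration, not the real ObjectArrayDataProvider API:

    ```java
    import java.util.function.Supplier;

    // Hypothetical sketch of "approach 2": instead of handing the provider
    // a fixed array reference, give it a supplier that fetches the current
    // array from the backing bean on every read, so no manual setArray()
    // refresh is needed after the bean's array is replaced.
    class ArraySource<T> {
        private final Supplier<T[]> supplier;

        ArraySource(Supplier<T[]> supplier) {
            this.supplier = supplier;
        }

        // Always reads through the supplier, so the latest array is seen.
        int rowCount() {
            T[] a = supplier.get();
            return a == null ? 0 : a.length;
        }

        T valueAt(int row) {
            return supplier.get()[row];
        }
    }
    ```

    With this design, replacing the array in the session bean is immediately visible on the next read, with no explicit refresh call in the value-changed handler.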

  • How to calculate the mean of a section of an array?

    hello
    i have an array of 26 elements. some sections of it are all zeros and the other sections contain numbers. i want to find the mean of each numbered section and replace its values with that mean. an example: 0,0,0,0,0,2,3,4,6,5,7,8,0,0,0,9,8,7,4,0,0,0... and i want to get 0,0,0,0,0,5,5,5,5,5,5,5,0,0,0,7,7,7,7,0,0,0...
    i know you can do it "kinda" manually with the array subset vi, but i want to find a more correct way to do this. can anybody help me with this?
    i have attached the array with the actual numbers if it will save you 2 minutes of your time by generating another one
    cheers
    Attachments:
    array.vi ‏9 KB

    Here's an alternative solution (LV 8.0) that uses fewer shift registers and has only one case structure. See if it makes more sense to you.
    Notice that Dr.Ivel's solution is incorrect if the array starts with a nonzero segment (e.g. 4,5,0,0,0,0,...) or ends with one (e.g. ...,0,0,0,7,8). It would need to be tweaked a bit to operate correctly under these conditions.
    Please verify correct operation; I haven't fully tested my solution with pathological data either, but it seems to work correctly in the above cases.
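    The segment-averaging logic discussed in this thread — replace each maximal run of nonzero elements with the mean of that run, leaving the zeros alone — can also be pinned down outside LabVIEW. Here is a small Java sketch; the names SegmentAverages and segmentMeans are invented for illustration:

    ```java
    // Replace each maximal run of nonzero elements with the mean of that
    // run; zero elements are left untouched. Handles runs at the start or
    // end of the array (the edge cases mentioned above).
    class SegmentAverages {
        static double[] segmentMeans(double[] in) {
            double[] out = in.clone();
            int i = 0;
            while (i < in.length) {
                if (in[i] == 0.0) { i++; continue; }
                // Scan to the end of the nonzero run starting at i.
                int j = i;
                double sum = 0.0;
                while (j < in.length && in[j] != 0.0) { sum += in[j]; j++; }
                double mean = sum / (j - i);
                for (int k = i; k < j; k++) out[k] = mean;
                i = j;  // continue after the run
            }
            return out;
        }
    }
    ```

    On the example from the question, the run 2,3,4,6,5,7,8 (sum 35, length 7) becomes seven 5s, and 9,8,7,4 (sum 28, length 4) becomes four 7s.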
    Message Edited by altenbach on 02-14-2007 04:10 PM
    Attachments:
    SegmentAverages.vi ‏16 KB
    SegmentAverages.png ‏18 KB

Maybe you are looking for

  • Additional results doesn't work with multi-line string dat

    I observed an erratic behavior with "Additional Results" in TestStand 2010. The problem statement is as follows: 1. I have a variable "Locals.FirstName" 2. I have another variable "Locals.LastName" 3. Concatenate these two strings with a line feed or

  • Can't get switch ports to work

    Okay so I have a basic home lab, 2600 router x2 and 2900 XL switch x 2. I've connected each router together (they "see" each other in cdp), and each router to one switch. My problem is that the interfaces that the router connects to the switch won't

  • *** Sequence settings for NTSC DV 16:9?

    I have 4:3 footage I am going to put inside a 16:9 sequence and then pull out a bit in order to fill the frame. What are the sequence settings I should use to get a 16:9 NTSC DV sequence? Do I click anamorphic in this situation? Thanks in advance!

  • How do I have my newest messages appear in apple mail on my iPad without...

    How do I have my newest messages appear in apple mail on my iPad without having to scroll up each time

  • Can't clean history in MF 23 on Win 7; too small pop-up window

    I use MF 23 on Win 7 at work. I use MF 23 on Win 8 at home. I just noticed I can't clear recent history at work due to the very small pop-up window. I changed the theme but it happened again. I'm sure it works normally at home on Win 8. The pop-up wi