StorageTek 6140 / CAM 6.1 newbie setup Qs

Just got an M4000 with a 6140 connected via dual Fibre HBAs. I'm confused about what S/W gets installed where. I installed the CAM S/W on the M4000 (the Data Host?), but I just re-read a StorageTek Array Admin Gde that says the management S/W (CAM?) must be installed on a Sun workstation, i.e. not on my M4000. Is this true? Does a 2nd Sun system need to be used to run the CAM application?
Lastly, if the M4000 is running Solaris 10, is there any Data Host S/W (drivers, etc.) that I have to install, or is it already included in the pre-installed Solaris 10 O/S on my M4000? Or do I have to find a Solaris 10 O/S CD with extra array drivers to install? I did enable Multipathing on Solaris 10 via stmsboot.
Any guidance is greatly appreciated.
Dean

FYI - I answered my own Qs with a bunch of RTFM and trial-and-error.
I originally put the CAM S/W on the M4000 (the data host). I obtained a static IP for the network connection to 6140 Controller A and used a laptop connected to the 6140 via serial cable (with the proper serial-to-CAT-5 adaptor) to configure the controller's IP. Then, from my desktop, I could go to the URL the CAM S/W Install Gde said to use and register the 6140 by supplying its IP instead of searching the network. I was able to register the 6140, but the network guys complained of "flapping" between the M4000 and the 6140, so I uninstalled CAM on the M4000 and installed it on a spare Ultra 25 and re-registered the 6140; the network guys say I'm still "flapping".
As for S/W on the M4000 to drive the 6140 array: Solaris 10 includes the necessary drivers, with no need to download SAN S/W from Sun. I did issue the Solaris command to enable Multipathing.
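For anyone following along, the multipathing step amounts to one command; a minimal sketch, assuming Solaris 10 with the bundled FC (Leadville) stack:

```shell
# Enable STMS (MPxIO) multipathing for fibre-channel devices.
# stmsboot updates the driver config and rebuilds device paths;
# it prompts for the reboot it needs.
stmsboot -e

# After the reboot, verify the new multipathed device names and paths:
stmsboot -L          # lists non-STMS to STMS device name mappings
mpathadm list lu     # shows logical units and their operational path counts
```

This is a procedure sketch for a live host, not something to run blindly; check the stmsboot(1M) man page on your release first.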
As for configuring the 6140 with six 146GB, 15K RPM FC-AL disks: I don't have any Premium Licenses, as this M4000 will be the only data host using the 6140. I have an Oracle 10 D/B to house on the 6140, so I started by adding a storage pool, which asks for a storage profile; I chose ORACLE_OLTP_HA. Then I created a Volume referencing the added storage pool, so when I selected all 6 disks I got RAID 1+0 and half the total disk space of the 6 disks. I finally created an initiator. The only unknown was the Unique Identifier (WWN): the Volume create screens listed a 32-char WWN (the WWNs of the 1st and 6th disks in my volume), but the Create Initiator screen wouldn't take this, so I just gave it the WWN of the 1st disk of the Volume and then used the hostname of the M4000 as the only host associated with this initiator.
After a quick reboot of the M4000, format finally saw a big 400GB+ disk! The format tool showed the same 8 slices to configure as with individual disks. I zeroed out all slices but slice 0 and mapped the entire Volume to that slice.
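The "half the total space" result is just RAID 1+0 mirroring overhead; a quick sanity check (using the marketing 146 GB figure, so real usable space is slightly less):

```shell
# RAID 1+0 usable capacity: striping adds capacity, mirroring halves it.
disks=6
size_gb=146
usable=$(( disks * size_gb / 2 ))
echo "${usable} GB usable"   # 438 GB usable -- the "400GB+" disk format reports
```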
Now for a follow-up Q:
I got this big disk on the array created, but I need more than 7 slices to carve it up. What's the best way to do it? A guy here who has older disk arrays has always used Volume Mgr S/W: he creates 1 big concat over the entire array volume and chops it up via soft partitions. Is this a good approach that will achieve good performance? Or is there a way to chop up the disks within CAM so the M4000 sees multiple logical disks, letting me use 7 slices per logical disk for my mount points? I want to keep RAID 1+0 if possible.
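The concat-plus-soft-partitions approach described above can be sketched with Solaris Volume Manager; the metadevice names, device name, and sizes below are made up for illustration:

```shell
# One-way concat (1 stripe, 1 slice) over the whole array LUN.
# The c3t...d0s0 device name is hypothetical -- use your own from format.
metainit d100 1 1 c3t600A0B...d0s0

# Carve soft partitions out of the concat -- the 7-slice VTOC limit
# does not apply to soft partitions:
metainit d101 -p d100 100g    # e.g. Oracle datafiles
metainit d102 -p d100 50g     # e.g. redo logs
metainit d103 -p d100 20g     # e.g. archive logs

# Then newfs and mount each soft partition as usual:
newfs /dev/md/rdsk/d101
mount /dev/md/dsk/d101 /u01
```

Since the RAID 1+0 is done in the array, a simple concat on the host adds no redundancy overhead; the soft partitions are just a carving layer.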
Dean U.

Similar Messages

  • StorageTek 6140: How to upgrade from 06.60.11.11 to 07.15.11.12?

    Hi,
    How can I upgrade to 07.15.11.12?
    CAM shows me only 06.60.11.11 as last firmware for this storage.
Another StorageTek 6140 is running with 07.15.11.12 and is in an OK state, so CAM is up to date.
    More info:
    System/NVSRAM: N399X-660843-003
    Tray.85.Controller.A: 06.60.11.11
    Tray.85.Controller.B: 06.60.11.11
    Tray.85.Drive.01: 0605
    Tray.85.Drive.02: 0605
    Tray.85.Drive.03: 0605
    Tray.85.Drive.04: 0605
    Tray.85.Drive.05: 0605
    Tray.85.Drive.06: 0605
    Tray.85.Drive.07: 0605
    Tray.85.Drive.08: 0605
    Tray.85.Drive.09: 0605
    Tray.85.Drive.10: 0605
    Tray.85.Drive.11: 0605
    Tray.85.Drive.12: 0605
    Tray.85.Drive.13: 0605
    Tray.85.Drive.14: 0605
    Tray.85.Drive.15: 0605
    Tray.85.Drive.16: 0605

Ok, let's try to clarify things. The FW 06.xx and 07.xx streams are incompatible. The configuration database (DB) on the drives differs between 06.xx and 07.xx; that is why an upgrade from 06.xx to 07.10.xx.xx requires a special utility, why it should be done by Sun support engineers, and why it can only be done offline. Today we cannot upgrade directly from 06.xx to 07.15: we need to upgrade to 07.10 with the special utility, after which CAM can be used to upgrade to 07.15. Any further upgrades within 07.xx can be done online with CAM.
If you move a drive from an array with 06.xx into an array with 07.xx, there is a special procedure in CAM Service Advisor. However, moving drives from 07.xx to 06.xx isn't supported and won't work; it can even lead to issues.
The FW 07.xx includes bug fixes as well as new features (e.g. volumes >2TB, RAID 6, etc.), whereas FW 06.xx is now a sustaining FW, which means it receives only bug fixes.
    The latest FW for both streams are:
06.60.11.10 (for the 6130), 06.60.11.11 (6140, 6540)
    07.15.11.12 (6140, 6540)
    07.30.xx.xx (6580/6780)
The latest version of CAM is now CAM 6.2.0_14, which brings 06.60.11.10/11, 07.15.11.12, and 07.30.xx.xx (I don't remember the exact numbers for xx.xx now).
If you install CAM 6.2.0_14 and have a 6140 running 06.19.25.xx, CAM will suggest upgrading to 06.60.11.11; it won't suggest upgrading to 07.xx because it knows that requires the special utility and procedure.
If the array is running 07.10, CAM will suggest upgrading it to 07.15.11.12, and that can be done online with CAM.
    Hope this helps

  • Storagetek 6140 & serial port

I'm trying to access the StorageTek 6140's VT100 console from a PC using the serial port. I connected the PS/2-6-pin-to-RJ45 cable (530-3544-01) + RJ45-DB9 DTE adapter (530-3100-01) to Controller A and to COM1 of my Windows PC. I downloaded TTerm (so I can send the ctrl-break), and I configured the COM port as in the 6140's manual, but with no luck. When I send ctrl-break nothing happens. I changed the PC and the COM port and checked the cable, but it all seems OK. I think I made some mistake, but where?
    Thanks

    praxis22 wrote:
    from there I was able to hit alt-break and it gave me the options of S for service menu or space for baud rate
I hit S (capital S) and it asked me for a password, but since I wanted to reset the Array password that wasn't immediately useful. But I figure I can google or ask Sun for that. We are a little further on at least.
I think you are mixing up the different passwords. The password which is requested when you type "S" is kra16wen and it cannot be reset. This password is documented in the customer documentation for the 6140.
The array password is a different one; it can be reset by accessing the Service Interface Menu, and it can be set and/or modified using CAM. Every time you proceed with modifications on the array, CAM reads this password from the array and compares it with the one CAM has in its internal DB. If the passwords match, CAM proceeds with the modification; if they differ, CAM reports an error. If you do not remember this password, you can use the Service Interface Menu to reset it. If you remember it and want to change it, or want to get CAM in sync with the real password in the array, you can use CAM to do that.
    Hope this clears things.

  • ZFS and StorageTek 6140 performance

    We have a Sun StorageTek 6140 Disk array and currently two Solaris 10 x86 hosts connected to it via Fibre channel through a Qlogic 5602 FC Switch.
    One system is our production E-mail system (Running Sun Messaging) the other is a backup server.
The backup server is running the CAM software and periodically issues a snapshot to be done on the 6140. I have noticed that copying or tarring up files on either the production volume or the snapshot volume has very poor performance.
    Basically between 2-4MB/s
We have patched the kernel (5.10 Generic_127128-11 i86pc i386 i86pc) and tried various settings in /etc/system:
    set zfs:zfs_prefetch_disable=1
    set zfs:zfs_nocacheflush=1
But still the performance is not improving. The array itself seems to function properly (if I use "dd" directly, the array performs quickly), so I have to believe it has something to do with ZFS.
    Has anyone else had issues with ZFS performance on a 6140 array or similar? What kind of speeds are you seeing with actual file system usage?
I should also add that if I use a UFS-formatted filesystem on the array, I see cp/tar speeds around 10-12MB/s.
    thanks,
    -Tomas
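A side note on the /etc/system tunables above: they only take effect at boot, and the live kernel values can be confirmed with mdb. A sketch, assuming a Solaris 10 kernel and root access:

```shell
# Print the current values of the ZFS tunables (1 = set, 0 = default):
echo 'zfs_prefetch_disable/D' | mdb -k
echo 'zfs_nocacheflush/D' | mdb -k

# For quick testing, they can also be flipped on a running kernel
# (not persistent across reboots):
echo 'zfs_prefetch_disable/W 1' | mdb -kw
```

Note that zfs_nocacheflush is only safe on arrays with battery-backed write cache, which the 6140 has.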

    Hello Nik,
    Fortunately I have generated supportdata package before upgrading and CAM version is 6.7.0.12. In addition to your reply I found an article at http://www.tune-it.ru/web/bender/blogs/-/blogs/восстановление-томов-на-массивах-6000-и-2000-серии , for not-russian speakers: the article provides the similar solution with /opt/SUNWsefms/bin/service utility, but the author made a note about an offset=<blocks> field, he multiplies the value from profile by 512, I had some volumes being stored on a single Vdisk, and I'm not sure now, because in your's and author's service utulity template it was clearly marked that the value of an argument is in blocks, and the stored in profile value is also in blocks (not in bytes, the piece of my profile - "+...GB (598923542528 bytes) Offset (blocks): ...+"), is he right by multiplying the value from profile?
    - Second question - does a service utility provide a functionality to change wwn's on volumes and Storage array identifier (SAID) at whole device? I found out that the previous license files are not accepted, because of another Feature Enable Identifier (I think it is calculated from a changed value of Storage array identifier, am I right?), and why I want to change the wwn's and mappings (mappings will correct from the bui) on recreated volumes as per profile is is that I want to avoid problems by possible misrecognition them by vxvm at a server side (target numbering change) and further recorrecting/reimporting vxvm disks and disk group ownership.
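On the offset question: multiplying by 512 converts 512-byte blocks to bytes, so it is only correct if the utility expects a byte offset; if both the profile and the template are in blocks, the value should be usable as-is. A quick sanity check with the size from the quoted profile:

```shell
# 512-byte blocks <-> bytes, using the volume size from the quoted profile:
bytes=598923542528
blocks=$(( bytes / 512 ))
echo "${blocks} blocks"      # 1169772544

# Going the other way (what the article's author does to a blocks value):
echo "$(( blocks * 512 )) bytes"   # back to 598923542528
```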

  • Oracle recommend configuring a Sun StorageTek 6140 with 15 hard disks

    Hi,
    How would Oracle recommend configuring a Sun StorageTek 6140 with 15 hard disks for optimal use with multiple Solaris servers running Oracle instances connected via a SAN? Should the storage array be configured as a single RAID 5 device and then LUNs created for the different servers? Or should each Oracle instance have its own dedicated hard disks?
    Also might we see better performance if we used Solid State Devices for the ZFS Intent Log (ZIL) and/or L2ARC on ZFS, instead of using the UFS file system straight to the SAN?
    Regards
    NM

I don't think Oracle would recommend any particular way. I would consider the following, but your mileage may vary, so testing is very important.
RAID 1 (1+0) for logs
RAID 5 for tables
But if you have many servers accessing the same LUNs/vdisks, then contention may be a problem. Maybe consider consolidating all your servers to a single Oracle server (for protection, a second server with Oracle Solaris Cluster).
ASM is the best method for managing disks and data placement. But if you like to see files/directories, then go for UFS with direct I/O enabled. ZFS is fantastic if you want to use snapshots/clones/compression, etc., but I think UFS is faster.
As for cache, Oracle 11gR2 has support for storing objects specifically in flash, so look at using the Sun F20 card with 96GB of flash.
    HTH
    Andy

  • SUN StorageTek 6140 w/ Linux Automatic Volume Transfer (AVT)

    Hi2all
I'm connecting a SUN StorageTek 6140 to Linux RHEL with QLogic cards/drivers,
using the latest drivers (8.02.23).
I had several I/O errors at fdisk -l (it took ~4 minutes), and boot took 15 minutes.
Now, after a change in the StorageTek 6140 settings enabling Linux
Automatic Volume Transfer (AVT), everything is OK:
fdisk -l takes < 1 sec and boot is normal.
Can you explain this? Is this the correct way to set it up?
    Much Thanks
    Jose

It really depends which failover driver you're using / wanting to use.
I'd suggest downloading and using the RDAC driver from [http://www.sun.com/download/index.jsp?cat=Hardware%20Drivers&tab=3&subcat=Storage]
It sounds like the RHEL box was trying to probe volumes at boot and during fdisk, but since the initiator type wasn't correct, it would eventually time out on each volume on one of the paths.

  • Using the isight camera in the bootcamp setup with windows 8.1

    Hi
I am running OS X Mavericks in a Boot Camp setup with Windows 8.1 and have played around with the iSight camera, but I would like to know what folder the photos/videos are sent to.
Also, are there any settings or a user guide for the iSight camera in Windows 8.1?
    Thanks

I don't use Boot Camp, but does anything here help you? Boot Camp - Apple Support.
My look there did not show anything about the new Windows 8.1. The latest I saw was Windows 8.
Brave01Heart wrote: ... what folder the photos/videos are sent to?...
The answer is: wherever the app you are using for the capture stores them, unless you change the default location. This answer is the same for either Windows or Apple operating systems.

  • 6140 CAM Software Connection Problems

We recently purchased a 6140 array and are currently performing evaluations before putting it into production. The array firmware version is 06.19.25.10. We've loaded the CAM software (version 5.1) from the CD that came with the array on 3 management servers: two running Solaris 10 and one running Windows. The array has been registered with the CAM interface on each server.
Occasionally we are unable to connect to the array via the web-based interface or the CLI. We get messages such as "Connection Refused" or "The Operation Failed". First impressions led us to think that the array (controller) may have an issue. We are able to "ping" the IP addresses of each controller, verifying a good connection on the LAN. Since we have more than one management server set up, we decided to try connecting with another server. This usually works.
Thinking that the array (controllers) may have some residual information or that an improper logout had occurred, we connected via one of the other servers and issued a reset to each of the controllers. This does not seem to help. The last time we encountered this problem it was necessary to completely uninstall the CAM software and re-install it. This has happened on both platforms. As you can imagine, we're not looking forward to doing this every time the CAM software has an "issue".
It appears from our informal testing that the problem lies on the server rather than the array when this happens. A little guidance, or access to some documentation that deals with our problem, would be greatly appreciated.

    For those who may be following this thread.
    Just prior to upgrading the CAM software on the Sun server that has the connection problems, I verified that we could access the array via the CAM software on the other Sun server and through our Windows machine. This leads me to believe that the array controllers and array firmware were functioning properly.
We upgraded the CAM software (which performs an uninstall first) on the Sun server with the connection issues and were then able to communicate with the array. We have not upgraded firmware on the array or reset the array controllers. Again, this has me thinking that the problem we're experiencing is related to something happening locally on the server. I can't help thinking there's a file somewhere that can simply be cleared out when this happens. If I come across any more information on this I'll post it to this thread.

  • Re: Lorex Security Camera with Linksys WRT320n Setup Help

I set up a Lorex security camera system that allows viewing the cameras via remote access. I am able to view them through the local network but not via an outside network connection. I believe the issue lies within the network settings that I have in place. I have tried to apply the port forwarding settings but am still not able to connect. If someone has this type of system set up, or knows where to point me, I would appreciate the help. The default web port is 80 and the TCP port is 6100.

    1)  I assume that you have set your security camera to a fixed (static) LAN IP address  -- is this correct?
    2)  And that you are forwarding the appropriate port to that address   --  is that correct?
    Rules for using fixed LAN IP addresses on Linksys routers:
With Linksys routers, a fixed (static) LAN IP address must be assigned in the device that is using the address. So you need to enter the fixed address in the computer, printer, or camera, not in the router. This is the best way to assign a fixed LAN IP address. In your WRT320N, there is an alternative method: you could use the "DHCP reservation" method to assign your camera the same fixed LAN IP address each time you boot up your network.
    When using a Linksys router, normally, any fixed LAN IP address must be outside the DHCP server range (typically 192.168.1.100 thru 192.168.1.149), and it cannot end in 0, 1, or 255.  However, if you are using "DHCP reservation" your camera's address should be within the DHCP server range.
    Therefore any fixed LAN IP address would normally need to be in the range of
    192.168.1.2 thru 192.168.1.99 or
    192.168.1.150 thru 192.168.1.254
    assuming you are still using the default DHCP server range.
    Also, in the computer, when you set up a static LAN IP address, you would need to set the "Subnet mask" to 255.255.255.0 and the "Default Gateway" to 192.168.1.1 and "DNS server" to 192.168.1.1
    It is also important that no two devices on your network be set to the same static LAN IP address.
Also note that many cameras cannot use the proxy DNS server located at 192.168.1.1. In this case, enter your true Internet DNS server address (found in the router) into your camera.
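The address-range rules above can be checked mechanically; a small sketch (the octet boundaries are the default DHCP range quoted above):

```shell
# Returns success if the last octet of 192.168.1.x is a valid fixed-LAN-IP
# host part: 2-254, and outside the default DHCP range (100-149).
valid_static_octet() {
  o=$1
  [ "$o" -ge 2 ] && [ "$o" -le 254 ] && { [ "$o" -lt 100 ] || [ "$o" -gt 149 ]; }
}

valid_static_octet 50  && echo "192.168.1.50 is usable as a static address"
valid_static_octet 120 || echo "192.168.1.120 is inside the DHCP range"
```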
    Message Edited by toomanydonuts on 10-11-2009 03:27 AM

HF R500 as camera for closed circuit setup. Will it show what is showing in the viewfinder?

I need a camcorder to use as a camera for a closed-circuit system in a church. Will the menus in the viewfinder show through the HDMI output, or will it look like it does when playing back a recording? I have used cameras before which showed the REC light in the picture; I need a camera which doesn't do that. Will the VIXIA HF R500 work?
    Solved!
    Go to Solution.

    Hi Rzrbk!
    Thanks for the post.
What you see on the screen will appear on the live output over HDMI. To turn off the onscreen displays, please do the following:
    Open the menu.
    Select the [Display Setup] tab.
    Select [Output Onscreen Displays].
    Select [Off].
    Did this answer your question? Please click the Accept as Solution button so that others may find the answer as well.

  • Questionable performance of Sun Storagetek 6140

    Hi
I've got a StorageTek 6140 with 16 FC 15K 68GB disks connected to a Sun M4000 (SPARC) with two 4Gbps QLogic HBAs (2460). OS is Solaris 10 5/09.
I tried everything: creating RAID1, RAID5, and RAID10 volumes with 256 KB and 512 KB block sizes, ZFS or UFS. I'm getting a maximum average write performance of 80-160MB/s.
Tested with dd and iostat. In another thread someone claims that it should be at least 260MB/s. What's wrong? Maybe dd is not good for testing? What parameters can I tune?
    TIA
    Edited by: wdmadm on Aug 26, 2009 8:52 AM

Following example is a RAID5 volume of 8 disks and 512 KB block size, but I'm getting similar results with RAID10 volumes:
I'm starting dd:
ex.
dd if=/dev/zero of=test2 bs=512k
and using iostat on another terminal.
iostat command:
iostat -xnpmY c3t600A0B8000267DD0000046934A94BB79d0 1
example iostat output:
                       extended device statistics
        r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
        0.0  710.0    0.0 86874.3  0.0 33.2    0.0   46.8   0 100 c3t600A0B8000267DD0000046934A94BB79d0
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t600A0B8000267DD0000046934A94BB79d0.t200500a0b8267d75
        0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3t600A0B8000267DD0000046934A94BB79d0.t200500a0b8267d75.fp2
        0.0  710.0    0.0 86874.3  0.0  0.0    0.0    0.0   0   0 c3t600A0B8000267DD0000046934A94BB79d0.t200400a0b8267d75
    0.0  710.0    0.0 86874.3  0.0  0.0    0.0    0.0   0   0 c3t600A0B8000267DD0000046934A94BB79d0.t200400a0b8267d75.fp0
Edited by: wdmadm on Aug 27, 2009 7:57 AM
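The aggregate write rate can be read straight off the iostat sample above; a sketch of the arithmetic, using the values from the quoted line (the per-write size suggests the 512 KB dd writes are being split into smaller transfers by the driver, which is normal):

```shell
# From the iostat line: 710 writes/s moving 86874.3 KB/s.
kw_per_sec=86874          # KB/s, truncated to an integer
writes_per_sec=710

echo "$(( kw_per_sec / 1024 )) MB/s aggregate"        # ~84 MB/s
echo "$(( kw_per_sec / writes_per_sec )) KB/write"    # ~122 KB, not the 512 KB dd issues
```

So the observed 80-160 MB/s range matches this sample; whether the bottleneck is dd's single stream, the block size, or the array config is what varying those parameters should reveal.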

  • Storagetek 6140 - chunk size? - veritas and sun cluster tuning?

    hi, we've just got a 6140 and i did some raw write and read tests -> very nice box!
current config: 16 FC disks (300 GB / 2 Gbit/sec): 1x hotspare, 15x RAID5 (512 KiB chunk)
3 logical volumes: vol1: 1.7 TB, vol2: 1.7 TB, vol3: the rest (about 450 GB)
on 2x T2000 CoolThreads servers (32 GiB mem each)
it seems the max write perf (from my tests) is:
512 KiB chunk / 1 MiB blocksize / 32 threads
-> 230 MiB/sec (write) transfer rate
my tests:
* chunk size: 16 KiB / 512 KiB
* threads: 1/2/4/8/16/32
* blocksize (KiB): .5/1/2/4/8/16/32/64/128/256/512/1024/2048/4096/8192/16384
did anyone out there run tests with other chunk sizes?
how about tuning Veritas FS and Sun Cluster?
Veritas FS: I've read so far about write_pref_io, write_nstream...
I guess setting them to write_pref_io=1048576, write_nstream=32 would be the best in this scenario, right?
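If you go the VxFS route, those two tunables are set per mount point with vxtunefs; a sketch, assuming a hypothetical mount at /vol1 in disk group dg01 (whether 1 MiB / 32 streams is optimal is exactly what your benchmark should confirm):

```shell
# Inspect the current VxFS I/O parameters for the mount:
vxtunefs /vol1

# Set preferred write size to 1 MiB and 32 parallel write streams:
vxtunefs -o write_pref_io=1048576 /vol1
vxtunefs -o write_nstream=32 /vol1

# Make the values persistent across remounts via /etc/vx/tunefstab:
echo "/dev/vx/dsk/dg01/vol1 write_pref_io=1048576,write_nstream=32" >> /etc/vx/tunefstab
```

A common starting point is write_pref_io = array chunk size and write_nstream = number of data columns in the stripe, then tune from measurements.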

    I've responded to your question in the following thread you started:
    https://communities.oracle.com/portal/server.pt?open=514&objID=224&mode=2&threadid=570778&aggregatorResults=T578058T570778T568581T574494T565745T572292T569622T568974T568554T564860&sourceCommunityId=465&sourcePortletId=268&doPagination=true&pagedAggregatorPageNo=1&returnUrl=https%3A%2F%2Fcommunities.oracle.com%2Fportal%2Fserver.pt%3Fopen%3Dspace%26name%3DCommunityPage%26id%3D8%26cached%3Dtrue%26in_hi_userid%3D132629%26control%3DSetCommunity%26PageID%3D0%26CommunityID%3D465%26&Portlet=All%20Community%20Discussions&PrevPage=Communities-CommunityHome
    Regards
    Nicolas

  • In-Band Management Software for StorageTek 2530

I'm trying to install the software needed for in-band management of a StorageTek 2530 SAS Array. Page 83 of the Sun StorageTek Common Array Manager Software Installation Guide, 820-2934-10, lists the following as needed for Linux x86_64:
    Linux:
■ SMagent-LINUX-90.00.A0.06-1.ia64.rpm
■ SMruntime-LINUX-90.10.A0.02-1.ia64.rpm
I have downloaded SMagent-LINUX-90.00.A0.06-1.ppc64.rpm from the Sun website, but I cannot find SMruntime-LINUX-90.10.A0.02-1.ia64.rpm.
    Does anyone know where I can download it from?
    Thanks

    Download Sun StorageTek Common Array Manager In-Band Proxy Agent Packages 6.0.0 for Red Hat Enterprise Linux 4, English
    Download Information and Files
    Sun StorageTek Common Array Manager 6.0 Documentation
    Sun StorageTek 2500 Arrays Documentation
    Sun StorageTek 6140 and 6540 Arrays Documentation
Required Files:
- CAM 6.0 In-Band Management Installation Document: InBand_Installation.html (25.95 KB)
- CAM 6.0 In-Band Management Release Notes: RELEASE_NOTES.txt (7.81 KB)
Optional Files:
- Agent for Linux running on ia64 (Intel Itanium) processors: SMagent-LINUX-90.00.A0.06-1.ia64.rpm (0.45 MB)
- Agent for Solaris SPARC platforms: SMagent-SOL-90.00.00.06.pkg (1.77 MB)
- Runtime package for Solaris SPARC platforms: SMruntime-SOL-10.10.02.00.pkg (95.59 MB)
- Agent for Linux running on Power PC 64-bit processors: SMagent-LINUX-90.00.A0.06-1.ppc64.rpm (0.45 MB)
- Agent for Linux running on i386 platforms (AMD & Intel): SMagent-LINUX-90.00.A0.06-1.i386.rpm (0.45 MB)

  • CAM Version 6.2.0.13 and OOB

    Hi,
I'm having issues with our StorageTek 6140 and an error "Lost OOB communication with 6140".
I was reading previous threads about this subject; only 1 such issue had come up before, and the fix was to upgrade to CAM Version 6.1. Since we are well above 6.1, is this still a bug or something new?
    Thanks
    -Guycamero

This event "Lost OOB communication" isn't specific to one bug. It is a general event indicating that CAM lost communication with the array; the root cause may differ from issue to issue. Yes, there was a bug in a previous release, but if you still encounter this with the latest version of CAM, it means you are hitting another issue, which isn't necessarily a bug; it may be a problem with your array or your network topology.
Such an event requires investigation: collecting the logs from the array and from CAM, and analyzing the Ethernet network topology.
    Regards
    Nicolas

  • Traffic only going through controller A on 6140

We have two separate StorageTek 6140 arrays, both with dual FC controllers and acting in a similar way, so I'm not sure if this is normal or whether it's our site config.
Controller B on each array doesn't seem to be getting utilised at all when looking at the performance monitoring. It says the controller is up, and our disk volumes are distributed across both controllers as the preferred controllers. We have multipathing on Solaris 10 using Traffic Manager (assumed configured and working correctly).
Is it normal to only see traffic on controller A and not on controller B? I believe Traffic Manager is configured for round-robin load balancing, but as far as I can tell that isn't what I'm seeing take place. Where would I look to change it so the traffic load is balanced across controllers?
Forgive me, I'm new to these devices, so I'm still learning how they are configured. Thanks!

    Hi Nik
    Thanks for replying. Here is the output of a disk:
    root@myhost# luxadm display /dev/rdsk/c5t600A0B800026AD9000000D3F4CA11187d0s2
    DEVICE PROPERTIES for disk: /dev/rdsk/c5t600A0B800026AD9000000D3F4CA11187d0s2
    Vendor: SUN
    Product ID: CSM200_R
    Revision: 0660
    Serial Num: +(removed for privacy)+
    Unformatted capacity: 102400.000 MBytes
    Write Cache: Enabled
    Read Cache: Enabled
    Minimum prefetch: 0x3
    Maximum prefetch: 0x3
    Device Type: Disk device
    Path(s):
    /dev/rdsk/c5t600A0B800026AD9000000D3F4CA11187d0s2
    /devices/scsi_vhci/ssd@g600a0b800026ad9000000d3f4ca11187:c,raw
    Controller /devices/pci@1e,600000/SUNW,qlc@3/fp@0,0
    Device Address 200600a0b8290184,bb
    Host controller port WWN 210000e08b921406
    Class secondary
    State STANDBY
    Controller /devices/pci@1e,600000/SUNW,qlc@2/fp@0,0
    Device Address 200700a0b8290184,bb
    Host controller port WWN 210000e08b924106
    Class primary
    State ONLINE
After learning some more about how the 6140 works (I believe it is called an asymmetric array: at any one time a single volume is owned by one controller, and all I/O to that volume goes via the path to that controller, not via round-robin), I believe this to be a software/firmware bug.
We have no problems communicating with volumes preferred on controller B and there are no alarms, so I can only conclude traffic must be going through that controller. But a volume on controller B will have no performance stats showing in CAM unless it is moved to controller A (stats subsequently go away again when moved back to controller B). It's not affecting the operation of the array; we just can't get volume performance stats on anything using controller B :(
    CAM is 6.6 and IOM is 98D0 (latest is 98D3).
