A1000 on Solaris 10?

While I've found evidence that people have gotten RM6 (Raid Manager 6.22.1) to work on Solaris 10 that was upgraded from an RM6-supported version of Solaris, I have not found any evidence of a fresh RM6 install on Solaris 10.
Yes, I know this array was EOLed in 2004, but the thing is still running, so I'd like to use it. I just need to reset the battery age. Does anyone know how to install RM6 on Solaris 10, or another way of resetting the battery age on the A1000?
-- M

To answer my own question: the RM6 software does work on Solaris 9 and 10, even though the install of the main package fails due to "Solaris 10 being unsupported". After the install you need to remove 2 of the 3 forceloads in /etc/system (leave the sd forceload and remove the other 2) to get rid of some warnings at bootup. But despite the removed drivers and the incomplete install, everything works fine.
-- M
Edited by: MikeVarney on Mar 9, 2011 4:43 AM
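The /etc/system cleanup M describes can be sketched like this (a minimal sketch: the three forceload names are the ones RM6 adds, per the thread; shown against a sample file so you can preview the edit before touching the real /etc/system as root):

```shell
# Sample of the three forceload lines RM6 adds to /etc/system (per the thread).
cat > /tmp/system.sample <<'EOF'
forceload: drv/sd
forceload: drv/rdnexus
forceload: drv/rdriver
EOF

# Keep the sd forceload; comment out the other two ('*' starts a comment
# in /etc/system). On the real box: back up /etc/system first, then apply
# the same sed to it as root.
sed -e '/drv\/rdnexus/s/^/* /' \
    -e '/drv\/rdriver/s/^/* /' /tmp/system.sample
```

For the battery age itself, RM6's raidutil is the usual tool; as far as I recall, `raidutil -c <controller-device> -B` resets the battery date on 6.22, but check `man raidutil` on your install before running it, since that flag is from memory, not from this thread.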

Similar Messages

  • A1000 on solaris 10 as a JBOD

I want to upgrade a system from Solaris 9 to Solaris 10, but Raid Manager is not supported on Solaris 10.
Is there any way I can use an A1000 as a JBOD on Solaris 10?

So if the A1000 cannot be used with Solaris 10, I'll stick with Solaris 9 then, as I have no hardware budget (unless of course there is a way of changing the controller on the A1000 :-) ).

  • Raid storage

    Hello!
Is Solaris 10 supported by RAID Manager 6.22?
    Thanks

    Since that post, I've learned that others have had success using RM6 with an A1000 on Solaris 10. I don't have one to test myself, and I've seen some other posts from users reporting problems, but I hope that it does in fact work.
    See also this usenet thread:
    http://groups.google.com/group/comp.unix.solaris/browse_thread/thread/78cc3db9a19d1fac/43ac969ddb4894e1
    Darren

  • Solaris 10 on 280R with A1000

We have loaded Solaris 10 (3/05) on our server, a Sun Fire 280R, with A1000 storage connected to it through a SCSI cable. We have an add-on SCSI card installed in the server. Our A1000 has only one controller.
After that, we loaded the Sun StorEdge RAID Manager 6.22 software to configure the A1000, and we made slices using RAID 5 with the RM6 utility. While rebooting the server we get the following two errors, which keep scrolling on screen for about 10 minutes, though we are able to access the A1000:
1. Warning: mod_load: cannot load module 'rdriver'
2. /kernel/drv/sparcv9/rdriver: undefined symbol 'dev_get_dev_info'
Is there any solution to these errors? Is there a patch, upgrade, or firmware fix for them?
Would you recommend upgrading to Solaris 10, or continuing with Solaris 9? We are using this as a database server with Oracle 10g.

FYI, I think Sun discontinued support for the A1000 hardware in Solaris 10... it should be documented.
I only mention this in case you want Sun support to help you... if it works fine, I generally wouldn't worry. But since it is a production system, I might have second thoughts about using Solaris 10 with unsupported hardware.
    My $.02, YMMV.
    David Strom

  • A1000 on E250 solaris 8

    hello all
I'm trying to install an A1000 on an E250 running Solaris 8.
Yes, I have read some threads related to the A1000 on this forum.
Some messages say that the A1000 is recognized, but nothing works.
    Here are the messages:
    iostat -En
    sd147 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
    Vendor: Symbios Product: StorEDGE A1000 Revision: 0301 Serial No: 1T02596846
    Size: 180.72GB <180718141440 bytes>
    Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
    Illegal Request: 0 Predictive Failure Analysis: 0
but format doesn't show the devices, and neither does lad
probe-scsi-all doesn't show anything related to the A1000
and when I try boot -r, the kernel crashes:
    panic[cpu1]/thread=3000730e020: BAD TRAP: type=31 rp=2a100324ff0 addr=300078c5b48 mmu_fsr=0
    devfsadm: trap type = 0x31
    addr=0x300078c5b48
    pid=59, pc=0x1027b914, sp=0x2a100324891, tstate=0x4480001606, context=0x1ff3
I'm on Solaris 8 with kernel patch 117350-43
    There are no devices (controllers) in the system; nvutil terminated.
    There are no devices (controllers) in the system.
    fwutil failed!
    Array Monitor initiated
    RDAC daemons initiated
    Dec 19 15:54:11 serengheti /usr/lib/osa/bin/arraymon: No RAID devices found to check.
    Dec 19 15:54:11 serengheti rdriver: ID[RAIDarray.rdaemon.1001] RDAC Resolution Daemon locked in memory
What can I do?
Thanks in advance for your help,

    Hello,
    iostat -En
    sd147 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
    Vendor: Symbios Product: StorEDGE A1000 Revision: 0301 Serial No: 1T02596846
    Size: 180.72GB <180718141440 bytes>
    Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
    Illegal Request: 0 Predictive Failure Analysis: 0
    ...probe-scsi-all don't show something related to A1000
iostat works at the operating system level, therefore the A1000 was indeed detected at the ok-prompt; otherwise the device tree wouldn't have been built properly.
Even an A1000 without disks should be detected at the ok-prompt: the integrated RAID controller is itself a device. The installed disks are only detected with probe-scsi-all after being configured with RaidManager (the configured LUNs are displayed, not the individual disks).
    Please check the cable and the HVD-SCSI terminator at the rear of the A1000.
    After the A1000 is detected at the ok-prompt advance to the next step.
    Check if the 4 SUNWosa... packages that ssteimann mentioned are installed.
    The version of RaidManager must match the installed A1000 firmware. If you can login to SunSolve there is a firmware matrix (InfoDoc 43483).
    Maybe you should remove these packages and re-install them.
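The package check Michael suggests can be scripted (a sketch; the package names are the ones listed in the thread, and pkginfo is the standard Solaris package tool, so on a non-Solaris host the loop just reports that it cannot check):

```shell
# Verify the four RM6 packages are installed (names per the thread).
for p in SUNWosafw SUNWosar SUNWosau SUNWosamn; do
  if pkginfo -q "$p" 2>/dev/null; then
    echo "$p installed"
  else
    echo "$p missing (or pkginfo not available on this host)"
  fi
done
# Reinstalling is roughly: pkgrm <pkg> ; pkgadd -d <RM6 install media> <pkg>
```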
    Merry Christmas !
    Michael
    Any updates ?
    Message was edited by:
    MAALATFT

  • Performance in Sun Java Communication Suite 5 between Solaris 9 and 10

Does anybody know which is the best operating system, in performance terms, on which to deploy the Sun Java Communication Suite?
I have an old Sun Fire 280R with two 750 MHz processors, 3 GB RAM, and an A1000 storage array.
    Thanks a lot,
    Andres.

    AYacopino wrote:
Somebody knows which is the best Operation System to deploy the Sun Java Communication Suite? in performance terms?
Solaris 10 by far, for several reasons:
    -> improved overall performance (kernel/networking level)
    -> ZFS for storage
    -> dtrace for debugging
    -> zones for the various components of the communication suite (http://www.sun.com/blueprints/0806/819-7663.html)
I have and old Sun Fire 280R with two 750 Mhz Processors, 3 GB Ram, and an A1000 Storage.
I'm not sure how many users you are planning to provide access to, but your RAM is going to be the bottleneck.
    Regards,
    Shane.

  • Sun Ultra 10 with SunStorEDGE A1000/D1000 unable to setup

I realise this covers old ground but I have not been able to find a suitable resolution and would be grateful for assistance.
I have a Sun Ultra 10 with 256MB RAM and a 440MHz CPU. It has a PCI Adaptec 2944UW HVD SCSI interface card. The storage system is a tray of eight 72GB SCSI disks housed in a D1000 case. NetBSD reported it as a SunStorEDGE A1000 at sd0, having 16 targets and 8 LUNs. However, NetBSD did not have a readily available controller driver to set up the RAID system, so I installed an alternative. I presume the case is a reused old case housing an A1000 unit.
I tried to format the drives directly but they are not listed as devices such as ct1sd1 etc., so this is not possible.
I installed Sun Solaris 10 but it could not communicate, reporting an inability to use rdriver and rdnexus; after reading a number of reports that the driver hooks had been removed from the kernel in Solaris 10, I installed Solaris 9.
On both occasions I installed a number of RAID Manager 6.1.1 packages, namely SUNWosafw, osar, osau, osamn and vtsse, directly from my CD here.
I ran dr_hotadd.sh but it failed to communicate.
I added a number of Rdac amendments to rparams in /etc/osa but this did not resolve the issue.
I tried probe-scsi-all after doing a reset-all from the OBP, but this fails to report anything. I presume this is because the unit is connected via the interface card rather than as a direct SCSI disk.
dmesg | grep scsi reports:
unknown scsi sd0 at uata0 target 2 lun 0
I would be very grateful for guidance on resolving this connection so that I can create a suitable filestore.
    Thanks
    c

    blackfoot wrote:
    I realise this covers old ground but I have not been able to trace a uitable resolvement and would be grateful for assistance.
I have a Sun Ultra 10 with 256MB RAM and 440GHz cpu.. It has a pci Adaptec 2944UW HVD SCSI interface card. The storage system is a tray of eight scsi 72GB disks housed in a D1000 case. NetBSD reported it as an A1000 SunStorEDGE at sd0 having 16 targets and 8 luns. However NetBSD did not have a readily available controller to setup the raid system so I installed an alternative. I presume the case is a reused old case housing an A1000 unit.
If it's really an A1000, you won't have access to the drives. You'll only see exposed LUNs from the RAID controller. Do you have a dial on the rear with a SCSI address? That will be the address that the A1000 responds on. In addition, the A1000 has only one pair of SCSI ports. The D1000 has two pairs and doesn't have the SCSI address dial (because each of the disks in the chassis responds on its own address). Instead you have some DIP switches to change the addressing behavior.
    I tried to format the drives directly but they are not lists as devices such as ct1sd1 etc. and so this is not possible.
I have installed Sun Solaris 10 but could not communicate and reported inability to use rdriver and rdnexus and after reading a number of reports that driver hooks had been removed from the kernel in Solaris 10 I have installed Solaris 9.
That's fine. You can ignore them. Neither the A1000 nor the D1000 requires those drivers. You should still be able to use 'rm6' or the CLI tools to interact with an A1000.
    On both occasions I installed a number of RAID Manager 6.1.1 control packs, namely SUNWosafw, osar, osau, osamn and vtsse direct from my CD here.
    I commanded dr_hotadd.sh to take up the challenge but it failed to communicate.
    I added a number of Rdac amendments to rparams in /etc/osa but this did not resolve the issue.
I have tried to call probe-scsi-all after doing a reset-all from the OBP but this fails to report anything. I presume this is because the unit is connected by the interface card rather than a direct scsi disk.
If the controller doesn't support the OBP environment, then yes, you won't see anything there.
    dmesg pipe grep scsi reports:
unknown scsi sd0 at uata0 target 2 lun 0
So that might be a single LUN from an A1000 controller. If the selector switch on the back is set to '2', then it's even more likely.
    Darren

  • Solaris 10 booting issue

    Hello,
Just live-upgraded Solaris 8 to the latest Solaris 10 on a V440/SPARC. After luactivate, I tried to boot into Solaris 10, but the system throws the messages below on the console:
    Configuring devices.
    WARNING: /pci@1f,700000/scsi@2,1 (mpt1):
    hard reset failed
    WARNING: /pci@1f,700000/scsi@2,1 (mpt1):
    mpt_restart_ioc failed
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    hard reset failed
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    mpt restart ioc failed
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    mpt restart ioc failed
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    mpt restart ioc failed
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    mpt restart ioc failed
    WARNING: /pci@1f,700000/scsi@2 (mpt0):
    firmware image bad or mpt ARM disabled. Cannot attempt to recover via firmware download because driver's stored firmware is incompatible with this adapter.
    And, it continues to do so. Any idea on how this can be fixed?

OK, as I was booting into Solaris 8, that also gave me mpt errors, and I suddenly remembered one change that I had made recently without rebooting Solaris 8 afterwards: connecting the A1000 to a SCSI port on the host (not an external SCSI adapter). I removed the A1000 cable from the SCSI port and, there you go, Solaris 8 came up.
I thought I'd try booting Solaris 10 again; now the earlier errors don't come up, but I see the following warnings:
    Loading smf(5) service descriptions: 1/187
    WARNING: svccfg import /var/svc/manifest/application/management/wbem.xml failed
    2/187
    WARNING: svccfg import /var/svc/manifest/system/metainit.xml failed
    172/187
    WARNING: svccfg import /var/svc/manifest/system/power.xml failed
    173/187
    WARNING: svccfg import /var/svc/manifest/system/postrun.xml failed
    174/187
    WARNING: svccfg import /var/svc/manifest/system/resource-mgmt.xml failed
    175/187
    WARNING: svccfg import /var/svc/manifest/system/zones.xml failed
    176/187
    WARNING: svccfg import /var/svc/manifest/system/poold.xml failed
    177/187
    WARNING: svccfg import /var/svc/manifest/system/pools.xml failed
    178/187
    WARNING: svccfg import /var/svc/manifest/system/picl.xml failed
    179/187
    WARNING: svccfg import /var/svc/manifest/system/installupdates.xml failed
    180/187
    WARNING: svccfg import /var/svc/manifest/system/labeld.xml failed
    181/187
    WARNING: svccfg import /var/svc/manifest/system/tsol-zones.xml failed
    182/187
    WARNING: svccfg import /var/svc/manifest/system/iscsi_target.xml failed
    183/187
    WARNING: svccfg import /var/svc/manifest/system/cvc.xml failed
    184/187
    WARNING: svccfg import /var/svc/manifest/system/rcap.xml failed
    185/187
    WARNING: svccfg import /var/svc/manifest/system/fpsd.xml failed
    186/187
    WARNING: svccfg import /var/svc/manifest/system/br.xml failed
    187/187
    WARNING: svccfg import /var/svc/manifest/system/sar.xml failed
    svccfg import warnings. See /var/svc/log/system-manifest-import:default.log .
    WARNING: svccfg apply /var/svc/profile/generic.xml failed
    WARNING: svccfg apply /var/svc/profile/platform.xml failed
    Requesting System Maintenance Mode
    (See /lib/svc/share/README for more information.)
    Console login service(s) cannot run
    Reading ZFS config: *
    Root password for system maintenance (control-d to bypass):done.
    Login incorrect
    Root password for system maintenance (control-d to bypass):
I can connect the A1000 to another machine or an external card and deal with it later; however, how do I get the system out of this state?
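A sketch of the usual way out of that maintenance-mode state (assumptions: the failing service is system/manifest-import:default, as the console output suggests, and the standard Solaris 10 SMF tools are available; read the log before clearing anything):

```shell
# Read why the imports failed (log path as printed on the console).
LOG=/var/svc/log/system-manifest-import:default.log
if [ -r "$LOG" ]; then
  tail -20 "$LOG"
else
  echo "no SMF log at $LOG (not a Solaris 10 host?)"
fi

# Then, from the maintenance shell, retry the import service:
# svcadm clear system/manifest-import:default
```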

  • Solaris 9 and WebSphere Commerce Suite 5.6

    We're upgrading our servers from Solaris 8 to 9. Does anyone know of any issues that may pop up with our WebSphere Commerce Suite 5.6, service pack 5 during or after the upgrade?


  • StorEge A1000 + PC (with Mylex960 Raid Controller)

    PC = PIII / 800EB / 256MB / 20GB
    Raid card = Mylex DAC960
    StorEdge = A1000 with 3 x 18Gb drives loaded.
When Mylex scans for a new SCSI device, the StorEdge A1000 can't be traced/probed. I tried this with an HP SCSI box and was able to probe the external devices, but for the Sun StorEdge A1000 it was not successful.
Any advice on how I can probe/trace this external device (StorEdge A1000)?
I am also planning to install Solaris 10 (x86), and hopefully Solaris can detect the external storage (series of drives) at the application level. (fingers crossed)
Your advice, gurus?
    Petalio

    Can you describe the features and specifications of that SCSI card ?
    The A1000 array has a High-Voltage-Differential interface.
    (See its link in the Sun System Handbook)
    HVD is not common in the PeeCee universe.
The array already has a RAID controller in its chassis, and will not work behind a RAID-controller SCSI card.
Any attempt to use an LVD card or a single-ended card will just not work either; the array would be invisible on the SCSI chain.
... then, additionally, you're going to need some sort of RAID control software to administer the A1000 and its internal RAID controller.
If you do eventually get a compatible HBA, you also need to be aware that functional support for the array was specifically dropped from Solaris 10. You'd need to run Sol8 or Sol9 with the RM6 software, and I cannot remember whether RM6 was ever ported to x86 Solaris.
I fear you're just going to be out of luck, and may need to get rid of the array (e.g. eBay?).

  • Raid Manager for Solaris 9

I have an A1000 and have been all over the Sun and SunSolve sites looking for Raid Manager 6.2 or higher, but the download link redirects me to a new storage hardware page. Does anyone have any idea where I can get this software for Solaris 9?

    http://javashoplm.sun.com/ECom/docs/Welcome.jsp?StoreId=8&PartDetailId=RaidMgr-6.22-SP-G-F&TransactionId=Try

  • SUN Cluster 2.2 behaviour when removing SCSI from A1000

We're running a Sun Cluster 2.2, 2-node cluster on a Sun E5500 running Solaris 8 and Veritas VM.
A1000 boxes are cross-connected over SCSI to each node, and a D1000 is dual-attached per node.
What cluster behaviour would one expect if one node crashes, the crashed node is powered off, and the SCSI cable from the A1000 to the surviving node is removed? Would the surviving node continue to run properly?
    Thanks

    There is potential that the surviving node could panic when termination is lost when the cable is disconnected.

  • SunFire V120 with A1000

    OK, we've got a SunFire V120 server with an X6541A PCI SCSI card installed. The card is connected to an A1000 RAID Array with a good quality HVD SCSI cable. The other port on the A1000 has a HVD terminator installed. There are eight 73GB drives in the array, and there are two internal drives running on the server's built-in SCSI. Running Solaris 8 on the server.
    So far, I have not been able to get the server to recognize the array at all. If I run a "probe-scsi-all" I get this:
ok probe-scsi-all
/pci@1f,0/pci@1/scsi@5,1
/pci@1f,0/pci@1/scsi@5
Fatal SCSI error at script address 10 Unexpected disconnect
/pci@1f,0/pci@1/scsi@8,1
/pci@1f,0/pci@1/scsi@8
Target 0
Unit 0 Disk FUJITSU MAP3367N SUN36G 0301
Target 1
Unit 0 Disk SEAGATE ST336704LSUN36G 0326
    As you can see, it recognizes the internal drives on SCSI address 8 and the X6541 card at SCSI address 5, but reports an error on the port attached to the A1000.
    I also get the following email to root occasionally:
To: root
Subject: raid Event
Content-Length: 183
An array event has been detected on Controller Unknown
Device Unknown at Host <<our domain name here>> - Time 08/23/2005 21:14:27
    And I get errors on boot up:
Sun Fire V120 (UltraSPARC-IIe 648MHz), No Keyboard
    OpenBoot 4.0, 1536 MB memory installed, Serial #53835884.
    Ethernet address 0:3:ba:35:78:6c, Host ID: 8335786c.
    last command: boot
    Boot device: disk File and args:
    SunOS Release 5.8 Version Generic_108528-17 64-bit
    Copyright 1983-2001 Sun Microsystems, Inc. All rights reserved.
WARNING: /pci@1f,0/pci@1/scsi@5 (glm2):
    unexpected SCSI interrupt while idle
    configuring IPv4 interfaces: eri0.
    Hostname: <<our domain name here>>
    The system is coming up. Please wait.
    checking ufs filesystems
    /dev/rdsk/c0t1d0s6: is stable.
    /dev/rdsk/c0t0d0s7: 52762 files, 11375733 used, 10986210 free
    /dev/rdsk/c0t0d0s7: (39522 frags, 1368336 blocks, 0.1% fragmentation)
    /dev/rdsk/c0t0d0s4: is stable.
    8/24/2005 1:14:21 GMT LOM time reference
    starting rpc services: rpcbind done.
    Setting netmask of eri0 to 255.255.255.240
    Setting default IPv4 interface for multicast: add net 224.0/4: gateway <<our domain name here>>
    syslog service starting.
    Print services started.
    There are no devices (controllers) in the system; nvutil terminated.
    There are no devices (controllers) in the system.
    fwutil failed!
    Array Monitor initiated
    Aug 23 21:14:27 /usr/lib/osa/bin/arraymon: No RAID devices found to check.
    RDAC daemons initiated
    volume management starting.
    Wnn6: Key License Server started....
    Nihongo Multi Client Server (Wnn6 R2.34)
    Finished Reading Files
    httpd starting.
    Starting nrpe: Starting mysqld daemon with databases from /usr/local/mysql/data started
The system is ready.
    The first SCSI error you see in the boot sequence:
WARNING: /pci@1f,0/pci@1/scsi@5 (glm2):
unexpected SCSI interrupt while idle
    is sometimes replaced with:
WARNING: invalid vector intr: number 0x7df, pil 0x0
Any ideas at all? Is it the A1000, its RAID controller, the X6541A, PROM settings? Anything???
    Thanks!
    - Matt

    If it doesn't pass at this level you have zero chance in Solaris.
    You're probably right.
I have tried swapping the ports on both the A1000 and the SCSI card, but I can't recall the exact effects -- the problem is, I am in New Hampshire and the server is in New York, so "hands-on" debugging is a bit difficult.
I do have auto-boot set to false, so there's no problem running the SCSI probe. It looks like I will have to make one more trip down there. Your suggestion of trying the SCSI probe with the A1000 disconnected is a good one - at least that will isolate the problem to the A1000 or the SCSI card.
I have a hunch there is a hardware problem on the RAID controller card in the A1000, but no proof yet.
Installed 300 or so patches last night; 50 or so to go. This part needs to get done anyway, even if it doesn't fix the RAID array!

  • Enterprise 420R with A1000 problem

    Dear all
I'm getting the following errors, and I am able to "see" the system only from the console.
    Sep 29 12:54:28 cohealths1 unix: WARNING: vxvm:vxio: Subdisk rootdisk-03 block 110672: Uncorrectable write error
    Sep 29 12:54:28 cohealths1 unix: WARNING: vxvm:vxio: Subdisk rootdisk-02 block 1039519: Uncorrectable write error
    I went to "ok" prompt and i gave the "boot" command, but i got the following errors
    {2} ok boot
    Resetting ...
    screen not found.
    Can't open input device.
    Keyboard not present. Using ttya for input and output.
    Sun Enterprise 420R (2 X UltraSPARC-II 450MHz), No Keyboard
    OpenBoot 3.23, 4096 MB memory installed, Serial #15281095.
    Ethernet address 8:0:20:e9:2b:c7, Host ID: 80e92bc7.
    Rebooting with command: boot
    Boot device: net File and args:
    Timeout waiting for ARP/RARP packet
    Timeout waiting for ARP/RARP packet
    Timeout waiting for ARP/RARP packet
    Timeout waiting for ARP/RARP packet
    {1} ok boot disk0
    Boot device: /pci@1f,4000/scsi@3/disk@0,0 File and args:
    Can't open boot device
    {1} ok boot disk1
    Boot device: /pci@1f,4000/scsi@3/disk@1,0 File and args:
    Can't open boot device
It seems that the problem is with the two internal disks (but the LEDs are green!!)...
Can anybody help me?

    Hello,
    probe the disks from the ok-prompt (either after reset or power-up)
    setenv auto-boot? false
    reset
    probe-scsi
    (if you use probe-scsi-all the external disks and the A1000 will be probed as well, which doesn't make sense at the moment).
If the internal disks are visible, boot into single-user mode from CD-ROM (Software 1 of Solaris 8/9, CD1 of Solaris 10) and invoke format to check the disks (-> analyze, non-destructive test).
boot cdrom -sv
Otherwise, try reseating the disks to see if that solves the problem.
    Is the (internal) SCSI controller visible with show-devs ?
    Running diagnostics might be a good idea.
    setenv diag-switch? true
    reset-all
    Michael

  • I/O issues with Oracle Financials on Solaris (Is this normal ?)

Oracle Financials: constant and high "read" I/O.
Problem description:
Most of the I/O on all 3 of our Oracle Financials 11.5.9 servers is read-intensive, all going against 2 "HOT" applsys database files and causing constant high I/O even when the application is not in use.
    Example:
                  extended device statistics               ---- errors ----
   r/s    w/s   kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  s/w  h/w  trn  tot  device
 274.7    8.6 2197.2  19.2   0.0   0.4     0.0     1.6   0   44    0    0    0    0  c4t0d0s0
 674.7    5.6 5397.7  11.6   0.0   2.3     0.0     3.4   0  100    0    0    0    0  c6t0d0s0
                  extended device statistics               ---- errors ----
   r/s    w/s   kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  s/w  h/w  trn  tot  device
 434.6    6.0 3476.5  16.6   0.0   0.8     0.0     1.8   0   58    0    0    0    0  c4t0d0s0
 838.1    3.4 6704.6   7.5   0.0   2.0     0.0     2.4   0   98    0    0    0    0  c6t0d0s0
    Hardware: Single Node Sun V880 – 2 Instances - SUN DAS Storage
    The A1000 DAS Array is configured for random I/O
    Operating System Modifications:
I have toggled the filesystem mount options noatime, forcedirectio, and logging, but it did not make a difference.
The Solaris Directory Name Lookup Cache hit rate is 98% - OK
The buffer_cache_lookups and buffer_cache_hits numbers are very close - OK
Question:
I am pretty sure distributing the applsys tablespace across six file systems will help - am I right?
    What is the best way to do this in Oracle Financials?
If you are running Solaris, please run the following command and share your output:
    # iostat -xPne 20 | nawk '( /r\/s/ || $1 > 200 ) && ! /:/'
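To make that filter concrete, here is a demo on captured sample lines rather than live iostat (plain `awk` is used since `nawk` is Solaris-specific; the pattern keeps the header line plus any device doing more than 200 reads/sec, and `! /:/` drops lines containing a colon, such as NFS mount paths):

```shell
# Sample iostat -xPne lines (numbers taken from the post, plus one idle disk).
cat > /tmp/iostat.sample <<'EOF'
    r/s    w/s   kr/s  kw/s wait actv wsvc_t asvc_t  %w  %b device
  274.7    8.6 2197.2  19.2  0.0  0.4    0.0    1.6   0  44 c4t0d0s0
  674.7    5.6 5397.7  11.6  0.0  2.3    0.0    3.4   0 100 c6t0d0s0
   12.0    1.0   96.0   2.0  0.0  0.0    0.0    0.5   0   1 c0t0d0s0
EOF

# Same pattern as the nawk one-liner: header, plus busy devices only.
awk '( /r\/s/ || $1 > 200 ) && ! /:/' /tmp/iostat.sample
```

The idle c0t0d0s0 line is filtered out, so only the header and the two hot devices print.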

Please do not post duplicate threads - Oracle XE is not running on startup
