Solstice DiskSuite in Solaris 8

Hi all,
I was told that DiskSuite is included in Solaris 8. Does anybody know where I can find the installation files?
Is there a wizard or something like that?
Do I need a special CD?
I have installed it from the Easy Access Server CD, but I don't know how to do it from the files included in the Solaris 8 media.
Thanks in advance...
Marissa

Had the same problem myself...
cd /cdrom/cdrom0/Solaris_8/EA/products/DiskSuite_4.2.1/sparc
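For what it's worth, a minimal install sketch from that directory (run as root; the package names below match the pkginfo/pkgadd output quoted further down this page, but the exact path and subdirectory layout can differ between Solaris 8 media revisions, and the metadb slice is only an example):
# cd /cdrom/cdrom0/Solaris_8/EA/products/DiskSuite_4.2.1/sparc
# ls                                            # packages may sit here or under a Packages/ subdirectory
# pkgadd -d . SUNWmdr SUNWmdx SUNWmdu SUNWmdg   # drivers, 64-bit drivers, commands, GUI tool
# init 6                                        # reboot so the md driver is loaded
# metadb -a -f -c 3 c0t0d0s7                    # then create state database replicas before building metadevices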

Similar Messages

  • Upgrade from Solaris 8 to Solaris 9 with Sun Solstice DiskSuite

    Hi,
    I have to upgrade a Solaris 8 system with Solstice DiskSuite to Solaris 9. Please let me know the steps for the upgrade.
    Regards
    chesun

    Yep!
    See
    http://docs.sun.com/db/doc/806-5205/6je7vd5rf?a=view
    Lee
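    In outline (a hedged sketch only; the metadevice names d10/d11/d12 and slice names are illustrative, so follow the doc above for your actual layout): if root is mirrored under DiskSuite 4.2.1, the usual approach is to unmirror root before the OS upgrade and re-mirror afterwards.
    # metadetach d10 d12             # detach the second root submirror
    # metaroot c0t0d0s0              # point /etc/vfstab and /etc/system back at the physical root slice
    # lockfs -fa ; init 6            # reboot onto the plain slice, then metaclear the now-unused root metadevices
    (run the Solaris 9 upgrade; Solaris 9 bundles the successor, Solaris Volume Manager)
    # metainit -f d11 1 1 c0t0d0s0   # afterwards, rebuild the root mirror the usual way:
    # metainit d12 1 1 c1t0d0s0
    # metainit d10 -m d11 ; metaroot d10 ; init 6 ; metattach d10 d12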

  • How to back out a Recommended Patch Cluster deployment with Solstice DiskSuite

    Hello Admins,
    I am planning to use the following plan of action for deploying the latest Solaris 8 Recommended Patch Cluster on the production servers I support. My
    concern is what happens if the patching activity fails, or the applications and Oracle databases don't come up after deploying the patch cluster. How do I revert the system to its original state using the submirrors I detached prior to patching?
    1) Shut down the applications and the databases on the server.
    2) Capture the output of the following commands:
    df -k
    ifconfig -a
    contents of the files /etc/passwd /etc/shadow /etc/vfstab /etc/system
    metastat -p
    netstat -rn
    prtvtoc /dev/rdsk/c1t0d0s0
    prtvtoc /dev/rdsk/c1t1d0s0
    3) Bring the system to the ok prompt.
    4) Try to boot the system from both disks that are part of the d10 metadevice for the root filesystem.
    =======================================================================================
    user1@myserver>pwd ; df -k / ; ls -lt | egrep '(c1t0d0s0|c1t1d0s0)' ; prtconf -vp | grep bootpath ; metastat d10
    /dev/dsk
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435895 4740117 43% /
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t0d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:a
    lrwxrwxrwx 1 root root 43 Jul 28 2003 c1t1d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a
    bootpath: '/pci@1c,600000/scsi@2/disk@0,0:a'
    d10: Mirror
    Submirror 0: d11
    State: Okay
    Submirror 1: d12
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16779312 blocks
    d11: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s0 0 No Okay
    d12: Submirror of d10
    State: Okay
    Size: 16779312 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s0 0 No Okay
    user1@myserver>
    ===================================================================================
    ok nvalias backup_root <disk path>
    Redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want to access them. For example:
    ok printenv boot-device
    boot-device= disk net
    ok setenv boot-device disk backup_root net
    boot-device= disk backup_root net
    In the event of primary root disk failure, the system automatically boots from the secondary submirror. To test the secondary submirror, boot the system manually, as follows:
    ok boot backup_root
    user1@myserver>metadb -i
    flags first blk block count
    a m p luo 16 1034 /dev/dsk/c1t0d0s7
    a p luo 1050 1034 /dev/dsk/c1t0d0s7
    a p luo 2084 1034 /dev/dsk/c1t0d0s7
    a p luo 16 1034 /dev/dsk/c1t1d0s7
    a p luo 1050 1034 /dev/dsk/c1t1d0s7
    a p luo 2084 1034 /dev/dsk/c1t1d0s7
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
    user1@myserver>df -k
    Filesystem kbytes used avail capacity Mounted on
    /dev/md/dsk/d10 8258597 3435896 4740116 43% /
    /dev/md/dsk/d40 2053605 929873 1062124 47% /usr
    /proc 0 0 0 0% /proc
    fd 0 0 0 0% /dev/fd
    mnttab 0 0 0 0% /etc/mnttab
    /dev/md/dsk/d30 2053605 937231 1054766 48% /var
    swap 2606008 24 2605984 1% /var/run
    swap 6102504 3496520 2605984 58% /tmp
    /dev/md/dsk/d60 13318206 8936244 4248780 68% /u01
    /dev/md/dsk/d50 5161437 2916925 2192898 58% /opt
    user1@myserver>metastat d40
    d40: Mirror
    Submirror 0: d41
    State: Okay
    Submirror 1: d42
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d41: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s4 0 No Okay
    d42: Submirror of d40
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s4 0 No Okay
    user1@myserver>metastat d30
    d30: Mirror
    Submirror 0: d31
    State: Okay
    Submirror 1: d32
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 4194828 blocks
    d31: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s3 0 No Okay
    d32: Submirror of d30
    State: Okay
    Size: 4194828 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s3 0 No Okay
    user1@myserver>metastat d50
    d50: Mirror
    Submirror 0: d51
    State: Okay
    Submirror 1: d52
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 10487070 blocks
    d51: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t0d0s5 0 No Okay
    d52: Submirror of d50
    State: Okay
    Size: 10487070 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t1d0s5 0 No Okay
    user1@myserver>metastat -p
    d10 -m d11 d12 1
    d11 1 1 c1t0d0s0
    d12 1 1 c1t1d0s0
    d20 -m d21 d22 1
    d21 1 1 c1t0d0s1
    d22 1 1 c1t1d0s1
    d30 -m d31 d32 1
    d31 1 1 c1t0d0s3
    d32 1 1 c1t1d0s3
    d40 -m d41 d42 1
    d41 1 1 c1t0d0s4
    d42 1 1 c1t1d0s4
    d50 -m d51 d52 1
    d51 1 1 c1t0d0s5
    d52 1 1 c1t1d0s5
    d60 -m d61 d62 1
    d61 1 1 c1t0d0s6
    d62 1 1 c1t1d0s6
    user1@myserver>pkginfo -l SUNWmdg
    PKGINST: SUNWmdg
    NAME: Solstice DiskSuite Tool
    CATEGORY: system
    ARCH: sparc
    VERSION: 4.2.1,REV=1999.11.04.18.29
    BASEDIR: /
    VENDOR: Sun Microsystems, Inc.
    DESC: Solstice DiskSuite Tool
    PSTAMP: 11/04/99-18:32:06
    INSTDATE: Apr 16 2004 11:10
    VSTOCK: 258-6252-11
    HOTLINE: Please contact your local service provider
    STATUS: completely installed
    FILES: 150 installed pathnames
    6 shared pathnames
    19 directories
    1 executables
    7327 blocks used (approx)
    user1@myserver>
    =======================================================================================
    5) After successfully testing the above, bring the system to single-user mode:
    # reboot -- -s
    6) Detach the following submirrors:
    # metadetach -f d10 d12
    # metadetach -f d30 d32
    # metadetach -f d40 d42
    # metadetach -f d50 d52
    # metastat (to check that the submirrors are successfully detached)
    7) Apply the patch cluster on the server.
    After patch installation is complete, reboot the server to single-user mode:
    # reboot -- -s
    Confirm that the patch installation was successful (uname -a).
    8) Boot the server to multi-user mode (init 3) and confirm with the database and application teams that the
    applications/databases are working fine. Once that is confirmed, reattach the submirrors (a condensed sketch of the whole detach/patch/reattach cycle follows after the output below).
    # metattach d10 d12
    # metattach d30 d32
    # metattach d40 d42
    # metattach d50 d52
    # metastat d10 (to check that the submirror is successfully reattached)
    user1@myserver>uname -a ; cat /etc/release ; date
    SunOS myserver 5.8 Generic_117350-04 sun4u sparc SUNW,Sun-Fire-V210
    Solaris 8 HW 12/02 s28s_hw1wos_06a SPARC
    Copyright 2002 Sun Microsystems, Inc. All Rights Reserved.
    Assembled 12 December 2002
    Mon Apr 14 17:10:09 BST 2008
    -----------------------------------------------------------------------------------------------------------------------------
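    Condensed, the detach/patch/reattach cycle above looks like this (device names as in the metastat output; install_cluster is the script shipped inside the Recommended Patch Cluster bundle, so adjust to however you deliver patches):
    # metadetach -f d10 d12          # d12 on c1t1d0 keeps the pre-patch copy of /
    # metadetach -f d30 d32 ; metadetach -f d40 d42 ; metadetach -f d50 d52
    # metastat                       # confirm the submirrors are detached
    # ./install_cluster              # apply the Recommended Patch Cluster
    # reboot -- -s                   # reboot to single-user mode
    # uname -a                       # verify the new kernel patch level
    # metattach d10 d12              # after the application/database teams sign off, resync each mirror
    # metattach d30 d32 ; metattach d40 d42 ; metattach d50 d52
    # metastat | grep -i sync        # watch the resyncs complete
    If the patched side will not come up, the pre-patch data is still intact on the detached submirrors on c1t1d0; rolling back means booting from that disk and rebuilding the mirrors in the other direction, which is worth rehearsing before the change window.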

    Recommended patch sets are, and to the best of my knowledge always have been, regenerated twice a month.
    I think you're thinking of maintenance releases, when they generate a new CD image that can be used to do an upgrade install.
    They try to generate those every 6 months, but the schedule often slips.
    The two most recent were sol10 11/06 and sol10 8/07.

  • Solstice Disksuite 4.2.1

    I have recently upgraded to Solaris 8 from 2.6 (and Solstice DiskSuite 4.1) and followed the instructions for installing Solstice DiskSuite 4.2.1 onto Solaris 8. I now have a package called SUNWmdg.2. I understand the '.2' indicates a 'duplicate' package. Should I have removed the old packages first? Nowhere in the DiskSuite 4.2.1 installation manual does it say to do so. What do I do now?
    Thanks
    D


  • Solstice DiskSuite Service

    Hi,
    As a security measure, I need to disable the Solstice DiskSuite service in Solaris 8 & 9, after finding out whether it is in use or not.
    Can anyone help on this
    Thanks
    Gattu

    You can find out whether DiskSuite is in use by running the metastat command.
    I'm not sure why you consider it a security issue, unless you're referring to the various
    services that ship with it, such as rpc.metad and rpc.mdcommd.
    On Solaris 9, all of those can be commented out of /etc/inetd.conf without affecting the functioning of DiskSuite, unless you're using cluster-type functionality.
    On Solaris 10 they can be shut down with svcadm.
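    A hedged sketch of that check-then-disable flow (daemon names as above; the Solaris 10 FMRIs are from memory, so verify them with svcs before disabling anything):
    # metastat -p ; metadb           # no configured metadevices or replicas => DiskSuite is not in use
    # egrep 'rpc.metad|rpc.mdcommd' /etc/inetd.conf
    (comment out the matching lines, then make inetd re-read its configuration)
    # pkill -HUP inetd
    On Solaris 10, something like:
    # svcadm disable network/rpc/meta network/rpc/mdcomm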

  • Solstice DiskSuite 4.2.1 Installation

    Hi,
    I have a Sun Fire 880 system running Solaris 8 with 10 x 72GB disks.
    Right now 8 disks are configured in Veritas Volume Manager, one disk holds root and other filesystems, and one more disk is still free, so I am planning to mirror the root disk onto the free disk with Solstice DiskSuite 4.2.1. Please advise me.
    My questions are:
    1> Is it possible?
    2> Where can I get the Solstice DiskSuite 4.2.1 packages?
    Thanks in advance..
    Bhupal

    Yes, but some versions of VxVM require that a 'rootdg' remain. You may or may not prefer to simply use VxVM to mirror root.
    The packages are on disk 2 of the installation media, in the EA folder.
    Darren
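    To answer question 1: yes, and a minimal DiskSuite root-mirroring sketch looks like the following (illustrative device names, with c1t0d0 as the current root disk, c1t1d0 as the free disk, and slice s7 set aside on each for state database replicas; repeat the submirror steps for swap and any other root-disk slices):
    # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2   # copy the partition table
    # metadb -a -f -c 3 c1t0d0s7 c1t1d0s7                            # state database replicas on both disks
    # metainit -f d11 1 1 c1t0d0s0                                   # submirror on the existing root slice
    # metainit d12 1 1 c1t1d0s0                                      # submirror on the free disk
    # metainit d10 -m d11                                            # one-way mirror
    # metaroot d10                                                   # updates /etc/vfstab and /etc/system
    # lockfs -fa ; init 6                                            # flush and reboot onto the mirror
    # metattach d10 d12                                              # attach the second side and let it resync
    # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0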

  • Solstice Backup and Solaris 8 Upgrade

    I currently have Solstice Backup v5.0.1, patched with 105658-05 on Solaris 2.6. When I upgrade to Solaris 8, do I reinstall Solstice Backup v5.0.1 or will I have to upgrade it too?
    Thanks

    Minimum release of Solstice Backup for Solaris 8 is 5.5.1. Latest is 6.1.

  • SNMP MIB for Solstice DiskSuite 4.2

    Hi all,
    Does anybody know where I can get the SNMP MIB for Solstice DiskSuite 4.2 ?
    Any help will be appreciated,
    Thanks,
    Simon


  • Solstice DiskSuite or Veritas Volume Manager

    With regard to Solaris 8:
    I plan to use DiskSuite to create a few 0+1 volumes for our new database server. Any reason for me to consider Veritas Volume Manager instead?
    We have an E5000 running Solaris 2.6 with Veritas Volume Manager 3.2, and we are only using it for 0+1 volumes (vxfs). No snapshots, fast I/O, or any of the other bells and whistles of VxVM are being used.

    Apologies, my original post went to the wrong forum.

  • Solstice disksuite

    Hi friends,
    I have started a new job managing database and backup servers running Solaris. Please tell me which command will show what type of RAID is configured on the Solaris 8 machines that use the DiskSuite software.
    Thanks & regards
    VJ.

    Hi,
    Thanks for the reply. I can also see metastat and format. But how should we conclude from the output whether the metadevice configuration is stripe/concat, RAID 1, or RAID 5?
    Thanks
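    For what it's worth, the device type is spelled out by metastat itself; an illustrative reading of the one-line metastat -p form (example devices, flags hedged from memory):
    # metastat -p
    d10 -m d11 d12 1                           # "-m"  => mirror (RAID 1), with d11/d12 as its submirrors
    d11 1 1 c1t0d0s0                           # "1 1 <slice>" => a simple concat of one slice
    d20 1 2 c2t0d0s0 c2t1d0s0 -i 32b           # one stripe of two slices => stripe/concat (RAID 0)
    d30 -r c3t0d0s0 c3t1d0s0 c3t2d0s0 -i 32b   # "-r" => RAID 5
    The full "metastat dN" output is even more explicit: its first line reads "dN: Mirror", "dN: RAID", or "dN: Concat/Stripe".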

  • How to extend a filesystem on Solstice DiskSuite

    Hi friends,
    In Solstice DiskSuite one of the filesystems had reached 90%; after doing housekeeping it is down to 80%.
    However, we plan to extend the filesystem.
    We have asked management to procure 2 x 18 GB HDDs.
    I am not familiar with Solstice DiskSuite; could you please provide a procedure to extend a filesystem under Solstice DiskSuite?
    Please find the output below:
    /dev/md/dsk/d8 52222754 41701831 9998696 81% /data
    # metastat d8
    d8: Mirror
    Submirror 0: d6
    State: Okay
    Submirror 1: d7
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 106070958 blocks
    d6: Submirror of d8
    State: Okay
    Size: 106070958 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c1t8d0s0 0 No Okay
    Stripe 1:
    Device Start Block Dbase State Hot Spare
    c1t9d0s0 3591 No Okay
    Stripe 2:
    Device Start Block Dbase State Hot Spare
    c1t10d0s0 4712 No Okay
    Stripe 3:
    Device Start Block Dbase State Hot Spare
    c1t11d0s0 4712 No Okay
    d7: Submirror of d8
    State: Okay
    Size: 106070958 blocks
    Stripe 0:
    Device Start Block Dbase State Hot Spare
    c5t0d0s0 0 No Okay
    Stripe 1:
    Device Start Block Dbase State Hot Spare
    c5t1d0s0 3591 No Okay
    Stripe 2:
    Device Start Block Dbase State Hot Spare
    c5t2d0s0 4712 No Okay
    Stripe 3:
    Device Start Block Dbase State Hot Spare
    c5t3d0s0 4712 No Okay
    Many Thanks
    Karthick

    Have a look for a solution here, it sounds like your issue: http://discussions.apple.com/thread.jspa?threadID=1780505&tstart=0. Hope it helps.
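    Back to the question of growing /data: a hedged sketch of the usual path once the two new disks arrive (c1t12d0 and c5t4d0 are made-up names for one new disk behind each controller; a slice is concatenated onto each submirror, then the mounted UFS filesystem is grown in place):
    # metattach d6 c1t12d0s0             # add a component to submirror d6
    # metattach d7 c5t4d0s0              # and the matching component to submirror d7
    # metastat d8                        # the mirror should now report the larger size
    # growfs -M /data /dev/md/rdsk/d8    # grow the mounted UFS filesystem online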

  • Solstice disksuite dual scsi controller

    Is there a way to load-balance across 2 SCSI controllers to a RAID unit with Solstice? My client won't fork over the money for Veritas.
    If I can't get load balancing, is there a way to fail over the controllers? Maybe set up the second controller as a mirror.
    System info:
    SunFire V480
    Dual Sun SCSI controller
    Winchester Flashdisk RAID

    Sorry rakman2, but your suggestions will not work with that system.
    A Sun V20z is Intel Xeon based.
    No OBP, and therefore no OBP commands or utilities (such as probe).
    Let's just wait for hero to provide more information.
    Edit:
    Corrected a typo and noticed that it has been three months.
    The original poster has not returned.
    ... and an error about a ROM on a peripheral usually means unqualified 3rd party hardware.
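    Back to the original question: DiskSuite 4.2.1 has no real multipathing, but a mirror with one submirror per controller does spread reads via the default round-robin read policy and keeps the data reachable if one controller fails; a hedged sketch with made-up target names on controllers c1 and c2:
    # metainit d21 1 1 c1t0d0s0      # submirror behind the first controller
    # metainit d22 1 1 c2t0d0s0      # submirror behind the second controller
    # metainit d20 -m d21            # mirror, round-robin reads by default
    # metattach d20 d22              # attach the second side; writes always go to both controllers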

  • DiskSuite 4.2.1 won't install via JumpStart on Solaris 8 2/04

    Trying to install DiskSuite on a Sun Solaris 8 image. The install works on Fujitsu hardware with an even older (2/02) Solaris 8 image for Fujitsu hardware, but for some reason it's not working for a Sun V120 with Sun's 2/04 Solaris 8 release.
    Here's the output from my finish script to install the packages. All other packages install just fine.
    ==============================================================================
    ACI-secure.driver: Finish script: install-disksuite.fin
    ==============================================================================
    Installing DiskSuite
    Installing DiskSuite from /a//tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages
    ==============================================================================
    Installing Package: SUNWmdr
    Root Directory: /a/
    Ask File: /a/tmp/jass-packages/noask_pkgadd
    Source Location: /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages
    Options:
    ==============================================================================
    Processing package instance <SUNWmdr> from </a/tmp/jass-packages/DiskSuite_4.2.1
    /sparc/Packages>
    Solstice DiskSuite Drivers
    (sparc) 4.2.1,REV=1999.12.03.10.00
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
    ## Executing checkinstall script.
    /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages/SUNWmdr/install/checkinstall
    : /tmp/sh141750: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdr> failed.
    No changes were made to the system.
    ==============================================================================
    Installing Package: SUNWmdx
    Root Directory: /a/
    Ask File: /a/tmp/jass-packages/noask_pkgadd
    Source Location: /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages
    Options:
    ==============================================================================
    Processing package instance <SUNWmdx> from </a/tmp/jass-packages/DiskSuite_4.2.1
    /sparc/Packages>
    Solstice DiskSuite Drivers(64-bit)
    (sparc) 4.2.1,REV=1999.11.04.18.29
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
    ## Executing checkinstall script.
    /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages/SUNWmdx/install/checkinstall
    : /tmp/sh142020: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdx> failed.
    No changes were made to the system.
    ==============================================================================
    Installing Package: SUNWmdu
    Root Directory: /a/
    Ask File: /a/tmp/jass-packages/noask_pkgadd
    Source Location: /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages
    Options:
    ==============================================================================
    Processing package instance <SUNWmdu> from </a/tmp/jass-packages/DiskSuite_4.2.1
    /sparc/Packages>
    Solstice DiskSuite Commands
    (sparc) 4.2.1,REV=1999.11.04.18.29
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
    ## Executing checkinstall script.
    /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages/SUNWmdu/install/checkinstall
    : /tmp/sh142290: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdu> failed.
    No changes were made to the system.

    /a/tmp/jass-packages/DiskSuite_4.2.1/sparc/Packages/SUNWmdr/install/checkinstall
    : /tmp/sh141750: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Checkinstall errors usually suggest that the package cannot be read by the user nobody.
    The FAQ mentions this with respect to patches, but it can affect packages as well:
    http://www.science.uva.nl/pub/solaris/solaris2.html#q5.59
    5.59) Patch installation often fails with "checkinstall" errors.
    Darren
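    A hedged workaround along those lines for the JumpStart finish script, before the pkgadd calls (the main point is just to make the spooled packages world-readable, since checkinstall runs as an unprivileged user; the df check only rules out a full or unwritable /tmp in the install miniroot, where the /tmp/shNNNNNN scratch file could not be created):
    chmod -R a+rX /a/tmp/jass-packages/DiskSuite_4.2.1   # let 'nobody' read the package tree
    df -k /tmp                                           # confirm the miniroot /tmp has space and is writable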

  • Upgrade to Solaris 8 - Solstice Backup

    Hi,
    I currently have Solstice Backup v5.0.1, patched with 105658-05 on Solaris 2.6. When I upgrade to Solaris 8 can I reinstall Solstice Backup v5.0.1 or will I have to upgrade it too?
    Thanks
    D

    Minimum release of Solstice Backup for Solaris 8 is 5.5.1. Latest is 6.1.

  • Clustering Solaris 10 (SPARC) with QFS 4.3

    I have searched to no avail for a solution to my error. The error is bolded and italicized in the information below. I would appreciate any assistance!
    System
    - Dual Sun-Fire-280R with external dual ported SCSI-3 disk arrays.
    - Solaris 10 Update 1 with the latest patch set (as of 5/2/06)
    - Clustering from Java Enterprise System 2005Q4 - SPARC
    - StorEdge_QFS_4.3
    The root/boot disk is not mirrored - I don't want to introduce another level
    of complication at this point.
    I followed an example in one of the docs, "HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager", for setting up an HA QFS file system.
    The following is additional information:
    Hosts file for PREFERRED (NOTE: the SECONDARY has the same entries, but the PREF and SEC loghost entries are switched):
    # Internet host table
    127.0.0.1 localhost
    XXX.xxx.xxx.11 PREFFERED loghost
    XXX.xxx.xxx.10 SECONDARY
    XXX.xxx.xxx.205 SECONDARY-test
    XXX.xxx.xxx.206 PREFERRED-test
    XXX.xxx.xxx.207 VIRTUAL
    Please NOTE I only have one NIC port to the public net.
    ifconfig results from the PREFERRED host, interconnects only:
    eri0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
    inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
    ether 0:3:ba:18:70:15
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
    ether 8:0:20:9b:bc:f9
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
    inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
    ether 0:0:0:0:0:1
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
    inet6 ::1/128
    eri0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 3
    inet6 fe80::203:baff:fe18:7015/10
    ether 0:3:ba:18:70:15
    hme0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 4
    inet6 fe80::a00:20ff:fe9b:bcf9/10
    ether 8:0:20:9b:bc:f9
    PLEASE NOTE: I did disable IPv6 during Solaris installation, and I have modified the defaults to use NFS v3.
    ifconfig results from the SECONDARY host, interconnects only:
    eri0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
    inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
    ether 0:3:ba:18:86:fe
    hme0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
    inet 172.16.1.2 netmask ffffff80 broadcast 172.16.1.127
    ether 8:0:20:ac:97:9f
    clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
    inet 172.16.193.2 netmask ffffff00 broadcast 172.16.193.255
    ether 0:0:0:0:0:2
    lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
    inet6 ::1/128
    eri0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 3
    inet6 fe80::203:baff:fe18:86fe/10
    ether 0:3:ba:18:86:fe
    hme0: flags=2008841<UP,RUNNING,MULTICAST,PRIVATE,IPv6> mtu 1500 index 4
    inet6 fe80::a00:20ff:feac:979f/10
    ether 8:0:20:ac:97:9f
    Again - I disabled IPv6 at install time.
    I followed all instructions and below are the final scrgadm command sequences:
    scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"
    scrgadm -a -t SUNW.HAStoragePlus
    scrgadm -a -t SUNW.nfs
    scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs
    scrgadm -a -L -g nfs-rg -l VIRTUAL_HOSTNAME
    scrgadm -c -g nfs-rg -h PREFERRED_HOST,SECONDARY_HOST
    scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/qfsnfs1 -x Filesy
    stemCheckCommand=/bin/true
    scswitch -Z -g nfs-rg
    scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs -y Resource_dependencies=qfsnfs1-res
    PREFERRED_HOST - Some shared paths in file /global/nfs/SUNW.nfs/dfstab.nfs1-res are invalid.
    VALIDATE on resource nfs1-res, resource group nfs-rg, exited with non-zero exit status.
    Validation of resource nfs1-res in resource group nfs-rg on node PREFERRED_HOST failed.
    Below are the contents of /global/nfs/SUNW.nfs/dfstab.nfs1-res:
    share -F nfs -o rw /global/qfsnfs1
    And finally, the results of the scstat command (the same for both hosts):
    (root)[503]# scstat
    -- Cluster Nodes --
    Node name Status
    Cluster node: PREF Online
    Cluster node: SEC Online
    -- Cluster Transport Paths --
    Endpoint Endpoint Status
    Transport path: PREF:hme0 SEC:hme0 Path online
    Transport path: PREF:eri0 SEC:eri0 Path online
    -- Quorum Summary --
    Quorum votes possible: 3
    Quorum votes needed: 2
    Quorum votes present: 3
    -- Quorum Votes by Node --
    Node Name Present Possible Status
    Node votes: PREF 1 1 Online
    Node votes: SEC 1 1 Online
    -- Quorum Votes by Device --
    Device Name Present Possible Status
    Device votes: /dev/did/rdsk/d3s2 1 1 Online
    -- Device Group Servers --
    Device Group Primary Secondary
    Device group servers: nfs1dg PREF SEC
    Device group servers: nfsdg PREF SEC
    -- Device Group Status --
    Device Group Status
    Device group status: nfs1dg Online
    Device group status: nfsdg Online
    -- Multi-owner Device Groups --
    Device Group Online Status
    -- Resource Groups and Resources --
    Group Name Resources
    Resources: nfs-rg VIRTUAL qfsnfs1-res
    -- Resource Groups --
    Group Name Node Name State
    Group: nfs-rg PREF Online
    Group: nfs-rg SEC Offline
    -- Resources --
    Resource Name Node Name State Status Message
    Resource: VIRTUAL PREF Online Online - LogicalHo
    stname online.
    Resource: VIRTUAL SEC Offline Offline - LogicalH
    ostname offline.
    Resource: qfsnfs1-res PREF Online Online
    Resource: qfsnfs1-res SEC Offline Offline
    -- IPMP Groups --
    Node Name Group Status Adapter Status
    IPMP Group: PREF ipmp1 Online ce0 Online
    IPMP Group: SEC ipmp1 Online ce0 Online
    Also, the system will not fail over.

    Good Morning Tim:
    Below are the contents of /global/nfs/SUNW.nfs/dfstab.nfs1-res:
    share -F nfs -o rw /global/qfsnfs1
    Below are the contents of vfstab for the Preferred host:
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c1t1d0s1 - - swap - no -
    /dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -
    #/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes size=1024M
    /dev/did/dsk/d2s3 /dev/did/rdsk/d2s3 /global/.devices/node@1 ufs 2 no global
    qfsnfs1 - /global/qfsnfs1 samfs 2 no sync_meta=1
    Below are the contents of vfstab for the Secondary host:
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c1t1d0s1 - - swap - no -
    /dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 / ufs 1 no -
    #/dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s3 /globaldevices ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes size=1024M
    /dev/did/dsk/d20s3 /dev/did/rdsk/d20s3 /global/.devices/node@2 ufs 2 no global
    qfsnfs1 - /global/qfsnfs1 samfs 2 no sync_meta=1
    Below are contents of /var/adm/messages from scswitch -Z -g nfs-rg through the offending scrgadm command:
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource qfsnfs1-res status on node PREFFERED_HOST change to R_FM_ONLINE
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource qfsnfs1-res status msg on node PREFFERED_HOST change to <>
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource qfsnfs1-res state on node PREFFERED_HOST change to R_MON_STARTING
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node PREFFERED_HOST change to RG_PENDING_ON_STARTED
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <hastorageplus_monitor_start> for resource <qfsnfs1-res>, resource group <nfs-rg>, timeout <90> seconds
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hastorageplus_monitor_start> completed successfully for resource <qfsnfs1-res>, resource group <nfs-rg>, time used: 0% of timeout <90 seconds>
    May 15 14:39:21 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource qfsnfs1-res state on node PREFFERED_HOST change to R_ONLINE
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 736390 daemon.notice] method <hafoip_monitor_start> completed successfully for resource <merater>, resource group <nfs-rg>, time used: 0% of timeout <300 seconds>
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 443746 daemon.notice] resource merater state on node PREFFERED_HOST change to R_ONLINE
    May 15 14:39:22 PREFFERED_HOST Cluster.RGM.rgmd: [ID 529407 daemon.notice] resource group nfs-rg state on node PREFFERED_HOST change to RG_ONLINE
    May 15 14:42:47 PREFFERED_HOST Cluster.RGM.rgmd: [ID 707948 daemon.notice] launching method <nfs_validate> for resource <nfs1-res>, resource group <nfs-rg>, timeout <300> seconds
    May 15 14:42:47 PREFFERED_HOST SC[SUNW.nfs:3.1,nfs-rg,nfs1-res,nfs_validate]: [ID 638868 daemon.error] /global/qfsnfs1 does not exist or is not mounted.
    May 15 14:42:47 PREFFERED_HOST SC[SUNW.nfs:3.1,nfs-rg,nfs1-res,nfs_validate]: [ID 792295 daemon.error] Some shared paths in file /global/nfs/admin/SUNW.nfs/dfstab.nfs1-res are invalid.
    May 15 14:42:47 PREFFERED_HOST Cluster.RGM.rgmd: [ID 699104 daemon.error] VALIDATE failed on resource <nfs1-res>, resource group <nfs-rg>, time used: 0% of timeout <300, seconds>
    If there is anything else that might help, please let me know. I am currently considering tearing the cluster down and rebuilding it to test with a UFS filesystem, to see whether the problem might be with QFS.
