Solaris Volume Manager - mount problems

Hi.
First I set up a RAID 5 on a StorEdge A5100:
metainit d4 -r c2t1d0s0 c2t2d0s0 .............. (13 devices) -i 64k
That works fine. Now I want to create a filesystem on d4:
newfs /dev/md/rdsk/d4
works fine.
i insert this line in vfstab:
/dev/md/dsk/d4 /dev/md/rdsk/d4 /d4 ufs 1 yes -
Now after a reboot the system hangs in maintenance mode and wants an fsck. After an fsck and Ctrl+D
the system comes up. After the next reboot, the same problem!
Without any problems I run two mirrored disk sets, / and /var on d0 and d3, and swap on d1.
Is it a problem with the size of 101 GB on the RAID 5 (A5100)?
Yes, I know I can run Veritas Volume Manager on the A5100 (a license is on the machine), but I was not able to install it because it can't find the A5100 and the license. So please help me.
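
One detail worth comparing against the mirror example further down this page: by convention the root filesystem gets fsck pass 1 and other UFS filesystems get pass 2 (see the /opt line in the vfstab there), and since Solaris 7 the UFS "logging" mount option normally lets boot skip the full fsck when the log is clean. A hedged sketch of the vfstab line with those two changes, keeping the /d4 mount point from the post; whether it cures the maintenance-mode hang would need testing:

/dev/md/dsk/d4  /dev/md/rdsk/d4 /d4     ufs     2       yes     logging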

Similar Messages

  • Solaris Volume manager

Hi,
We are in the process of developing a backup utility (CLI) using the RAID manager (firmware operating at the device level).
Requirement
When the user gives a mount point for backup:
- Get the disk information that is mounted at the given mount point; if the mount point is for a metadevice, then get all the disks under that metadevice.
- Execute the RAID manager command for disk-level copying from the mounted disks (primary disk) to the other disk (secondary disk), which can also be under a metadevice.
- Assign a backup ID, used for restoring when the user wants.
Problem
When a disk-level copy operation is performed from a primary volume to a secondary volume, both configured by SVM, is there any way to configure the secondary (which now has the same data as the primary) under SVM without any loss of data?

A new feature of Solaris Volume Manager will provide the ability to import disk sets. It should be available in a later update release of Solaris 9.
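
For reference, that feature later shipped as the metaimport(1M) command (Solaris 9 9/04 and Solaris 10). A minimal sketch of how copied secondary disks might be brought under SVM; the syntax is from memory and the disk and set names are placeholders:

# report disk sets that can be imported from attached disks
metaimport -r
# import the copied disks as a new disk set named "backupset"
metaimport -s backupset c3t0d0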

  • How Solaris Volume Manager sync submirrors

Hi Gurus,
I have a question on how Solaris Volume Manager (SVM) does re-synchronization (in RAID 1). In other words, if one submirror was modified during the boot process, how does SVM detect it, and how does SVM decide which submirror is the good copy?
One scenario I ran into: we had a software package installed that updated /etc/name_to_sysnum in a way that conflicts with the new Solaris 10 release, so the system could not boot any more (not even to single-user mode) after the software was installed. This box had its root disk mirrored. To fix this, we booted from CDROM, mounted the first mirror drive's root partition (c0t0d0s0) and removed the bad entry in /etc/system (we did not break the mirror to make the changes). Then the box was able to boot up. After the server was up, it was found that /etc/system had been rolled back with the bad entries; apparently it was synced back from the 2nd submirror. So now the question is: how does SVM decide which submirror is invalid and should be re-synced from the good submirror?
2nd scenario I saw: someone accidentally added one root filesystem submirror into a zone as a file system. But during zone installation the system panicked and rebooted. During reboot, the system kept crashing. We managed to boot from the network, break the mirror (updating /etc/vfstab and /etc/system) and finally were able to boot the system. So in case one submirror is accidentally accessed, how does SVM protect the data, and will corrupted data written to one disk slice be synced to the good submirror?
    Please share your thoughts and point me with some good references. I could not find related info in SVM doc.
    Thanks,
    Wei

SVM doesn't "sync" disks, i.e. copy data from one disk to another, except when you're first setting up a mirror, or when you're replacing a disk, etc.
Those are the circumstances in which it realises the disks are out of sync.
Once it has a mirrored pair, it keeps them in sync, since all writes go to both sides.
And reads take alternate blocks from both disks.
So if the two sides of a mirror have gotten out of sync, you will see strange results, as half your content will come from one disk and half from the other, even inside a single file, assuming the file is bigger than the stripe size.
So anything writing to one side of the mirror outside of SVM's control will corrupt things, and SVM has no mechanism for detecting this and "fixing" things up.
So it's vital to break the mirror if you're going to be writing to the disk outside of SVM's control.
If you're brave and the amount of change is small, you can try to edit both sides of the mirror.
But you have to remember that SVM works at the block level, not the filesystem level.
So if you do anything that makes the two sides even slightly different, even something as minor as updating two files in a different order, then the layout of blocks on the two halves could differ, and you're screwed.
So don't do it on any system you care about. It's really easy to make a mistake, and the consequences are usually catastrophic.
When in doubt, break the mirrors. It's the only safe way.
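
A sketch of the break-edit-resync procedure this reply recommends, using the d0/d1/d2 root-mirror names from the metastat output elsewhere on this page purely as example names:

# detach one half so SVM no longer trusts it
metadetach d0 d2
# boot from media, mount the remaining half's slice (e.g. c1t0d0s0) and fix the file
# ...
# once the system boots cleanly, reattach; SVM fully resyncs d2 from d1
metattach d0 d2
# watch the resync progress
metastat d0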

  • Solaris Volume Manager packages installation fails

    During the OS installation phase while jumpstarting, the output is as follows for Solaris Volume Manager (see below).
To begin with, why are the SUNWmdr and SUNWmdu packages trying to install? They are not mentioned in my profile file, nor are they included in the CORE OS. Is this a bug, and has anyone found a fix for it?
    /tmp/install2gaGwn/checkinstall4gaGwn: /tmp/sh68390: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdr> failed.
    No changes were made to the system.
    /tmp/installHBaqyn/checkinstallJBaqyn: /tmp/sh68530: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdu> failed.
    No changes were made to the system.
    3751 blocks
    Thanks in advance,
    David
    p.s. This is happening while installing Sol9 sparc 04/2004 version (CORE) on Blade100, V480 and V240 machines.

grestep wrote on Fri, 24 September 2004 14:34:
Unfortunately, this is a bug in the Sun jumpstart process.
You will have to install the same packages in a postinstall phase.
This bug has been there for years.
It is fixed in the latest Solaris 9 cdroms, dated 09/04.
I know of no other fix except the latest cdroms.
It is also fixed in Solaris 10.
On Solaris 8, even the most recent cdroms have the bug.
This bug only appears in a jumpstart; it works fine in a cdrom install. I haven't investigated why the jumpstart from a jumpstart server fails, but installing the packages in a postinstall phase works.
I recommend that you download the latest Solaris 9 cdroms if it is Solaris 9 that is having the problem. For Solaris 8, the postinstall phase is your only option.
I don't remember this bug in Solaris 7.
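
A hedged sketch of that postinstall workaround as a Jumpstart finish-script fragment; the package directory path and admin file are placeholders for wherever they live on your install server:

# finish script: the freshly installed system is mounted at /a
# -n plus an admin file suppresses interactive prompts
pkgadd -n -a ${SI_CONFIG_DIR}/admin -R /a \
    -d /cdrom/Solaris_9/Product SUNWmdr SUNWmdu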
Any idea how to fix this? I've jumped a lot of machines using this jumpstart server with JET, always using the SDS product from JET, and it always worked well, until I tried to jump this recently acquired and firmware-updated v210.
I got these errors and have no clue how to fix them:
    SDS: Installing sds....
    SDS: Installing SUNWmdr from: /a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
/a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc/SUNWmdr/install/checkinstall: /tmp/sh39790: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdr> failed.
    No changes were made to the system.
    SDS: SUNWmdr installation complete
    SDS: Installing SUNWmdu from: /a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
/a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc/SUNWmdu/install/checkinstall: /tmp/sh40210: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdu> failed.
    No changes were made to the system.
    SDS: SUNWmdu installation complete
    SDS: Installing SUNWmdx from: /a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
/a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc/SUNWmdx/install/checkinstall: /tmp/sh40640: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdx> failed.
    No changes were made to the system.
    SDS: SUNWmdx installation complete
    SDS: Installing SUNWmdnr from: /a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
/a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc/SUNWmdnr/install/checkinstall: /tmp/sh41060: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdnr> failed.
    No changes were made to the system.
    SDS: SUNWmdnr installation complete
    SDS: Installing SUNWmdnu from: /a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc
    Copyright 2000 Sun Microsystems, Inc. All rights reserved.
/a/var/opt/sun/jet/js_media/pkg/sds/4.2.1/sparc/SUNWmdnu/install/checkinstall: /tmp/sh41460: cannot create
    pkgadd: ERROR: checkinstall script did not complete successfully
    Installation of <SUNWmdnu> failed.
    No changes were made to the system.
    SDS: SUNWmdnu installation complete
    SDS: Apply patch 108693-25
    Using : SunOS Release 5.8 Version Generic_117350-16 64-bit
All other packages installed without problems.
Any insights?

  • Solaris Volume Manager help needed...

I am new to Solaris Volume Manager and need some assistance. I suspect that the mirroring was not set up correctly on a server I am concerned with. Here is some output:
    metadb -i
    flags first blk block count
    a m p luo 16 8192 /dev/dsk/c1t0d0s7
    a p luo 8208 8192 /dev/dsk/c1t0d0s7
    ================================================
    metastat
    d4: Mirror
    Submirror 0: d50
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 5905116 blocks (2.8 GB)
    d50: Submirror of d4
    State: Okay
    Size: 5905116 blocks (2.8 GB)
    Stripe 0:
    Device Start Block Dbase State Reloc Hot Spare
    c1t0d0s5 0 No Okay Yes
    d3: Mirror
    Submirror 0: d40
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 6292242 blocks (3.0 GB)
    d40: Submirror of d3
    State: Okay
    Size: 6292242 blocks (3.0 GB)
    Stripe 0:
    Device Start Block Dbase State Reloc Hot Spare
    c1t0d0s4 0 No Okay Yes
    d2: Mirror
    Submirror 0: d30
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 8389656 blocks (4.0 GB)
    d30: Submirror of d2
    State: Okay
    Size: 8389656 blocks (4.0 GB)
    Stripe 0:
    Device Start Block Dbase State Reloc Hot Spare
    c1t0d0s3 0 No Okay Yes
    d1: Mirror
    Submirror 0: d20
    State: Okay
    Pass: 0
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 33555735 blocks (16 GB)
    d20: Submirror of d1
    State: Okay
    Size: 33555735 blocks (16 GB)
    Stripe 0:
    Device Start Block Dbase State Reloc Hot Spare
    c1t0d0s1 0 No Okay Yes
    d0: Mirror
    Submirror 0: d10
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 16779312 blocks (8.0 GB)
    d10: Submirror of d0
    State: Okay
    Size: 16779312 blocks (8.0 GB)
    Stripe 0:
    Device Start Block Dbase State Reloc Hot Spare
    c1t0d0s0 0 No Okay Yes
    Device Relocation Information:
    Device Reloc Device ID
    c1t0d0 Yes id1,ssd@n20000004cf6f7b96
    =============================================
    metastat -p
    d4 -m d50 1
    d50 1 1 c1t0d0s5
    d3 -m d40 1
    d40 1 1 c1t0d0s4
    d2 -m d30 1
    d30 1 1 c1t0d0s3
    d1 -m d20 0
    d20 1 1 c1t0d0s1
    d0 -m d10 1
    d10 1 1 c1t0d0s0
    ========================================
    format of c1t0d0 (rootdisk)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 11615 - 17422 8.00GB (5808/0/0) 16779312
    1 swap wu 0 - 11614 16.00GB (11615/0/0) 33555735
    2 backup wm 0 - 24619 33.92GB (24620/0/0) 71127180
    3 home wm 17423 - 20326 4.00GB (2904/0/0) 8389656
    4 unassigned wm 20327 - 22504 3.00GB (2178/0/0) 6292242
    5 var wm 22505 - 24548 2.82GB (2044/0/0) 5905116
    6 unassigned wm 0 0 (0/0/0) 0
    7 unassigned wm 24549 - 24619 100.16MB (71/0/0) 205119
    =======================================================
    format of c1t1d0 (rootmirror)
    Current partition table (original):
    Total disk cylinders available: 24620 + 2 (reserved cylinders)
    Part Tag Flag Cylinders Size Blocks
    0 root wm 1 - 14519 20.00GB (14519/0/0) 41945391
    1 swap wu 14520 - 17423 4.00GB (2904/0/0) 8389656
    2 backup wu 0 - 24619 33.92GB (24620/0/0) 71127180
    3 - wu 0 - 0 1.41MB (1/0/0) 2889
    4 - wu 1 - 24619 33.91GB (24619/0/0) 71124291
    5 unassigned wm 0 0 (0/0/0) 0
    6 var wm 17424 - 23231 8.00GB (5808/0/0) 16779312
    7 unassigned wm 0 0 (0/0/0) 0
    Any help and/or links to information for proper setup of mirroring when using Solaris Volume Manager would be appreciated. I should mention that the system also uses Veritas Volume Manager 4.1, but only Solaris Volume Manager is in control of the mirroring.

    Duplicate slicing: prtvtoc /dev/rdsk/c1t0d0s7|fmthard -s - /dev/rdsk/c1t1d0s7
    Lather, rinse, repeat for each file system:
metainit the bootable side of the mirror: metainit -f d71 1 1 c1t0d0s7
metainit the other side: metainit d72 1 1 c1t1d0s7
    Attach one side of the mirror: metainit d70 -m d71
    When done: lockfs -fa;init 6
    Lather, rinse, repeat for each file system:
    After reboot attach the other side of the mirror: metattach d70 d72
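Two steps the recipe leaves implicit, sketched with example names (the replica slice here is hypothetical; pick an unused slice on each disk): state database replicas must exist before any metainit, and the root metadevice needs metaroot so /etc/system and /etc/vfstab are updated before the reboot. Non-root filesystems get their vfstab lines edited to the d* devices by hand. The metastat -p and vfstab output below show the finished layout.
# create state database replicas on both disks first
metadb -a -f -c 2 c1t0d0s4 c1t1d0s4
# for the root mirror only: updates /etc/system and /etc/vfstab
metaroot d0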
    bash-3.00# metastat -p
    d70 -m d71 d72 1
    d71 1 1 c1t0d0s7
    d72 1 1 c1t1d0s7
    d50 -m d51 d52 1
    d51 1 1 c1t0d0s5
    d52 1 1 c1t1d0s5
    d10 -m d11 d12 1
    d11 1 1 c1t0d0s1
    d12 1 1 c1t1d0s1
    d0 -m d1 d2 1
    d1 1 1 c1t0d0s0
    d2 1 1 c1t1d0s0
    d60 -m d61 d62 1
    d61 1 1 c1t0d0s6
    d62 1 1 c1t1d0s6
    bash-3.00# cat /etc/vfstab
    #device         device          mount           FS      fsck    mount   mount
    #to mount       to fsck         point           type    pass    at boot options
    fd      -       /dev/fd fd      -       no      -
    /proc   -       /proc   proc    -       no      -
    /dev/md/dsk/d10 -       -       swap    -       no      -
    /dev/md/dsk/d0  /dev/md/rdsk/d0 /       ufs     1       no      -
    /dev/md/dsk/d70 /dev/md/rdsk/d70        /usr    ufs     1       no      -
    /dev/md/dsk/d50 /dev/md/rdsk/d50        /var    ufs     1       no      -
    /dev/md/dsk/d60 /dev/md/rdsk/d60        /opt    ufs     2       yes     -
    /devices        -       /devices        devfs   -       no      -
    ctfs    -       /system/contract        ctfs    -       no      -
    objfs   -       /system/object  objfs   -       no      -
    swap    -       /tmp    tmpfs   -       yes     -

  • Solaris volume manager and RPC

I'm trying to run as few services as possible. Does anybody know if there are dependencies between Solaris Volume Manager (formerly DiskSuite) and RPC? I know that Solaris Volume Manager uses something called rpc.metamhd, but I'm not sure if this requires full-blown RPC.
    thanks...
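
For what it's worth: the SVM RPC daemons (rpc.metad, rpc.metamhd, rpc.metamedd) are, as far as I know, only needed for shared disk sets created with metaset; purely local metadevices work without them. A sketch for Solaris 10, with the service names quoted from memory and worth verifying with svcs first:

svcs -a | grep rpc/meta
svcadm disable network/rpc/meta network/rpc/metamed network/rpc/metamh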

  • How do you change volume permissions with Solaris Volume Manager?

    (Previously posted in "Talk to the Sysop" - no replies)
    I'm trying to set up Solaris 9 to run Oracle on raw partitions. I have my design nailed down and I have built all the raw partitions I need as soft partitions on top of RAID 1 volumes. All this is built using Solaris Volume Manager (SVM).
    However, all the partitions are still owned by root. Before I can create my Oracle database, I need to change the owner of the Oracle partitions to oracle:oinstall. The only reference I found telling me how to do this was in a Sun Blueprint and it essentially said "You can't change volume permissions directly or permanently using SVM and chown will only remain effective until the next reboot. To make the changes permanent, you must modify /etc/minor_perm". Unfortunately, I can't find an example of how to do this anywhere and the online man pages are not particularly helpful (at least not to me).
    I'd appreciate a quick pointer, either to a good online resource or, even better, a simple example. For background, the volumes Oracle needs to own are:
    /dev/md/rdsk/d101-109
    /dev/md/rdsk/d201-203
    /dev/md/rdsk/d301-303
    /dev/md/rdsk/d401-403
    /dev/md/rdsk/d501-505
I provide this information because I'd like to assign some, but not all, of the devices under /dev/md/rdsk to the oracle user, and I was hoping some smart person out there could illustrate an approach using simple regular expressions, at which I'm horribly poor.
    Thanks in advance,
    Adrian
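
For reference, minor_perm(4) entries have the form driver:minor_name mode owner group, and devfsadm applies them when it (re)creates the device nodes. A deliberately hypothetical sketch; the actual md minor names must be checked against ls -lL /devices/pseudo, and note that a bare wildcard matches every md device, so it cannot by itself limit ownership to only the Oracle volumes:

# /etc/minor_perm (hypothetical entry; verify the minor-name pattern first)
md:* 0660 oracle oinstall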

  • Solaris Volume Manager Enhanced Storage BUG?

All of our 64-bit SPARC machines from Solaris 8 and up used Solstice DiskSuite, now Solaris Volume Manager. The default Solaris 10 install includes the command-line metadb and other tools, but I formerly enjoyed the metatool command for configuring disks with a GUI. I start the Management Console, then select Enhanced Storage, and the Management Console just hangs. Is this because something is missing from my path, or do I need to install more packages? If so, what are these packages and where can I get them? I am also looking for the old versions of metatool for Solaris 8 and 9, and ideas on where I could find these.

  • Solaris Volume Manager API

    Do they provide an API for commands like growfs, metaxxxx?

Try downloading the Solaris Volume Manager guide in PDF format from docs.sun.com.
Sridhar
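
As far as I know there is no stable, documented C API for the meta* commands (the underlying libmeta is private), so tooling generally shells out and parses the metastat -p output, which is designed to be machine-readable. A minimal sketch, assuming a metadevice d0 exists:

# print d0 in re-creatable metainit syntax
metastat -p d0
# pull out just the component slice names
metastat -p d0 | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^c[0-9]/) print $i }'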

  • Solaris volume manager -- where?

Where is the Solaris Volume Manager for Solaris 10 located?
    On what disk?
    Can I download it?

    $ pkginfo | grep -i volume
    system SUNWdthev CDE HELP VOLUMES
    system SUNWdthez Desktop Power Pack Help Volumes
    system SUNWmdar Solaris Volume Manager Assistant (Root)
    system SUNWmdau Solaris Volume Manager Assistant (Usr)
    system SUNWmdr Solaris Volume Manager, (Root)
    system SUNWmdu Solaris Volume Manager, (Usr)
    system SUNWvolr Volume Management, (Root)
    system SUNWvolu Volume Management, (Usr)
    $
    bbr

  • Luactivate error message re: Solaris Volume Manager

    Is this something I need to be concerned about?
    2009-04-02 16:40:18 # luactivate sol10_1008_stage2
    ERROR: Solaris Volume Manager, (Root) file missing: </dev/md>.
    ERROR: Solaris Volume Manager, (Root) file missing: </dev/md>.
    Activating the current boot environment <sol10_1008_stage2> for next reboot.
    The current boot environment <sol10_1008_stage2> has been activated for the next reboot.
2009-04-02 16:40:25 #
Thanks,
Mark

  • Hal/Dbus/gnome-volume-manager mount permission problem

    Hello,
    I have hal/dbus/gnome-volume-manager installed (of course I use gnome as DE).
I need help configuring the permissions for automatically mounted disks/partitions.
When I connect an external USB HDD with one NTFS and one FAT partition, both partitions are auto-mounted and desktop icons appear for both.
As a normal user I am able to read/write the FAT partition (not, I think, because I have configured anything properly, but because FAT does not support permissions at the filesystem level), but I am not even able to read (browse) the NTFS filesystem (ERROR: You do not have permission...).
Yes, I have NTFS filesystem support enabled, because I am able to access that same NTFS partition as root.

Well, thanks for the reply, but I had not put any static entry in fstab for this external USB HDD.
Anyway, I figured out something (haven't tested it yet):
When an external USB HDD is attached, then depending on the storage.policy and volume.policy rules in /usr/share/hal, the hal daemon (hald) calls fstab-sync with appropriate parameters.
fstab-sync then generates the /etc/fstab entries automatically (with the comment=managed keyword).
I have added a user policy so that the umask and gid values are set correctly. I'll test it as soon as I get time.
In the meanwhile, if anybody has any other suggestions, please post them here.
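
For anyone searching later, a deliberately hypothetical sketch of what such a user policy file looked like in the old hal/fstab-sync scheme; the file location and the exact policy keys are from memory and should be checked against the fdi files shipped under /usr/share/hal:

<?xml version="1.0" encoding="UTF-8"?>
<!-- e.g. /usr/share/hal/fdi/95userpolicy/ntfs-policy.fdi -->
<deviceinfo version="0.2">
  <device>
    <match key="volume.fstype" string="ntfs">
      <!-- mount NTFS volumes group-readable for gid 100 (values are examples) -->
      <merge key="volume.policy.mount_option.umask=007" type="bool">true</merge>
      <merge key="volume.policy.mount_option.gid=100" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>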

  • Solaris Volume Manager or Hardware RAID?

Hi - before I build some new Solaris servers I'd like thoughts on the following, please. I've previously built our Sun servers using SVM to mirror disks, and one of the reasons is that when I patch the OS I always split the mirrors beforehand, so in the event of a failure I can just boot from the untouched mirror; this method has saved my bacon on numerous occasions. However, we have just got some T4-1 servers with hardware RAID, and although I like this as it moves away from SVM / software RAID to hardware RAID, I'm now thinking that I will no longer have this "backout plan" in the event of issues with the OS updates or otherwise, however unlikely.
    Can anyone please tell me if I have any other options?
    Thanks - Julian.

Thanks - just going through the 300-page ZFS admin guide now. I want to ditch SVM as it's clunky and not very friendly whenever we have a disk failure or need to patch the OS, as mentioned. One thing I have just read from the ZFS admin guide is that:
    "As described in “ZFS Pooled Storage” on page 51, ZFS eliminates the need for a separate volume
    manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of
    logical volumes, either software or hardware. This configuration is not recommended, as ZFS
    works best when it uses raw physical devices. Using logical volumes might sacrifice
    performance, reliability, or both, and should be avoided."
So it looks like I need to destroy my hardware RAID as well and just let ZFS manage it all. I'll try that, amend my JET template, kick off an install and see what it looks like.
    Thanks again - Julian.
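
On the backout plan specifically: with a ZFS root, the old split-mirror trick becomes a snapshot and rollback, which is quicker and does not degrade redundancy. A minimal sketch, assuming the standard rpool/ROOT/<BE-name> layout of a Solaris 10 ZFS root:

# before patching, snapshot every dataset in the root pool
zfs snapshot -r rpool@prepatch
# sanity-check pool health
zpool status rpool
# if patching goes wrong, boot failsafe/media and roll back, e.g.:
# zfs rollback -r rpool/ROOT/s10be@prepatch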

  • Solaris volume manager changing raid0 stripe interlace size (-i 1024)

    Hi,
I have two Volume Manager RAID 0 stripe volumes (d61 & d62) which are mirrored (d60 - see below). When the submirrors were created we omitted to increase the interlace size (-i 1024). Can I metaclear d61 and recreate it with -i 1024, reattach and resync, and then metaclear d62 and recreate it with -i 1024, reattach and resync? I.e. during the transition I would have one half of the mirror with one interlace size and the second half with a different interlace size. Or is it best to back up the volume, recreate the whole stripe/mirror volume, and then restore the data?
    Thanks in advance
    Kevin.....
    # metastat d60
    d60: Mirror
    Submirror 0: d61
    State: Okay
    Submirror 1: d62
    State: Okay
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 1432370625 blocks (683 GB)
    d61: Submirror of d60
    State: Okay
    Size: 1432370625 blocks (683 GB)
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c3t0d0s0 0 No Okay Yes
    c3t1d0s0 4375 No Okay Yes
    c3t2d0s0 4375 No Okay Yes
    c3t3d0s0 4375 No Okay Yes
    c3t4d0s0 4375 No Okay Yes
    d62: Submirror of d60
    State: Okay
    Size: 1432370625 blocks (683 GB)
    Stripe 0: (interlace: 32 blocks)
    Device Start Block Dbase State Reloc Hot Spare
    c4t0d0s0 0 No Okay Yes
    c4t1d0s0 4375 No Okay Yes
    c4t2d0s0 4375 No Okay Yes
    c4t3d0s0 4375 No Okay Yes
    c4t4d0s0 4375 No Okay Yes

    Hi,
Sun support confirmed that you can have the two halves of the mirror with different interlace sizes. This obviously is not the optimum setup, but it will allow me to detach d61 and recreate it with the correct interlace size, reattach d61 and let it resync, then detach d62 and recreate it with the correct interlace size, and finally reattach d62 and let that resync.
    Kevin....
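
A sketch of that sequence with the device names from the metastat output above; the -i value is kept as written in the original post, so check metainit(1M) for whether your release wants a unit suffix (e.g. 1024b for blocks):

# detach and destroy the first half
metadetach d60 d61
metaclear d61
# recreate it as a 5-way stripe with the new interlace
metainit d61 1 5 c3t0d0s0 c3t1d0s0 c3t2d0s0 c3t3d0s0 c3t4d0s0 -i 1024
# reattach and wait for the resync to finish before touching d62
metattach d60 d61
metastat d60
# then repeat for d62 with the c4 disks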

  • C Program APIs or "ioctls" to access volume information created using Solaris Volume Manager

    Hi,
Does "SUNWlvm" provide any C-language program APIs or ioctls for querying
volumes created using "Sun Volume Manager" (volume components, size, type, and the rest)? I did not find any API manual page.
    Thanks,
    Jai
