SVM metadb in Solaris 10

Hi,
I am new to Solaris and am having a problem creating a mirror with SVM on an M4000 server running Solaris 10.
When I enter the command below to create the metadb, I get the following:
# metadb -a -c 3 -f c0t0d0s6 c0t1d0s6
metadb: db: c0t0d0s6: overlaps with c0t0d0s1 which is a swap device
I installed the OS on disk1 (c0t0d0) and left disk2 (c0t1d0) untouched. Below is the partition table of both disks, from the "format" command:
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm     466 -  1249        6.74GB    (784/0/0)     14140224
  1       swap    wu       0 -   465        4.01GB    (466/0/0)      8404776
  2     backup    wm       0 - 64985      558.89GB    (64986/0/0) 1172087496
  3 unassigned    wm       0                  0       (0/0/0)              0
  4 unassigned    wm       0                  0       (0/0/0)              0
  5 unassigned    wm       0                  0       (0/0/0)              0
  6 unassigned    wm       0                  0       (0/0/0)              0
  7       home    wm    1250 - 64985      548.14GB    (63736/0/0) 1149542496
Now the metadb -i command is showing the following result:
# metadb -i
metadb: db: there are no existing databases
Please help. Your help will be highly appreciated.
Do I need to create a new partition of about 10 MB? I think there is no unused space on my system because I used the default partition layout at installation time.
Can I create the metadb inside the /home/export (c0t0d0s7) partition?
Thanks
johabd

Hello johabd,
The problem you are facing is related to slice 6: on both disks it has no cylinders assigned, so it effectively sits at cylinder 0 and overlaps slice 1 (swap), which is what the metadb error is reporting. To successfully create metadb replicas, repartition the disks so that slice 6 or slice 7 has about 20 MB (one metadb replica is about 4 MB by default, if I recall correctly).
You could create the metadb on your current slice 7, but a replica slice must not contain an existing file system or data, so you would lose what is there and waste most of its 548 GB.
Best practice for SVM is to spread the replicas across both disks. I suggest two copies on c0t0d0s6 and two copies on c0t1d0s6.
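A rough sketch of the commands involved (the exact repartitioning is up to you; shrinking a slice that already holds data means backing it up first, and the layout below is not taken from your output):
# format c0t0d0              (partition -> resize slices so s6 gets ~20 MB, then label)
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# metadb -a -f -c 2 c0t0d0s6 c0t1d0s6
# metadb -i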
Regards,

Similar Messages

  • MetaDB issue - Solaris 10u8 x86

    Hi all,
    I am running Solaris 10u8 x86 on VMware.
    I am trying to configure SVM in my virtual machine and ran into the following issue with the metadb.
    I have one HDD which contains the Solaris OS, swap, etc., and three more HDDs for holding data. I did not touch the first HDD. The other three HDDs will be configured as a RAID 5 volume using SVM (I will call these three HDDs disk2, disk3 and disk4).
    On disk2, I created slice 7 from cylinder 3 to cylinder 10 to hold the metadb, and slice 0 from cylinder 11 to the end of the HDD to be a component of the RAID 5 volume. (In this case, the slice 7 holding the metadb is located *before* slice 0.)
    The VTOC of disk2 was applied to disk3 and disk4.
    After dividing the three HDDs into slices, I created the metadb on slice 7 of each HDD, then created the RAID 5 volume using slice 0 of each. However, after I rebooted the machine, it said that my RAID 5 volume did not exist, and the metadb command showed that there was no metadb.
    I tried another case. On disk2, I created slice 7 from cylinder 1000 to cylinder 1200 to hold the metadb, and slice 0 from cylinder 1 to cylinder 999 to be a component of the RAID 5 volume. (In this case, the slice 7 holding the metadb is located *after* slice 0.)
    After dividing the three HDDs into slices, I created the metadb on slice 7 of each HDD, then created the RAID 5 volume using slice 0 of each. After rebooting the system everything went fine: my metadb exists and the RAID 5 volume can be mounted and used.
    I cannot find any reason why I could not place slice 7 (which holds the metadb) *before* the other slices which are used to hold data.
    Can anyone explain this behaviour?
    Thanks and BR.
    HuyNQ.
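    For reference, the sequence described above boils down to something like the following (device names are placeholders for disk2, disk3 and disk4, not taken from the post):
    # metadb -a -f c1t1d0s7 c1t2d0s7 c1t3d0s7
    # metainit d50 -r c1t1d0s0 c1t2d0s0 c1t3d0s0
    # metastat d50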

    Solaris makes one of the fdisk partitions active; take note of that.
    Once you are done tinkering with the partitions or loading, say, Windows/Linux, follow these steps.
    Grab any Linux distro's boot CD, start rescue mode, type grub and do this:
    root (hd0,X)   where X is the partition that has Linux, minus 1
    setup (hd0)
    Now open fdisk and change the label of the Solaris partition to 80.
    Add an entry in /boot/grub/menu.lst along the same lines as for Windows; if unsure, Google it.
    Reboot and you are done.
    Never (I said never) make any other partition active other than the one the Solaris installation activates for you.

  • ZFS root filesystem & slice 7 for metadb (SUNWjet)

    Hi,
    I'm planning to use a ZFS root file system in a Sun Cluster 3.3 environment. As written in the documentation, when we use a UFS shared diskset we need to create a small slice for the metadb on slice 7. In a standard installation we can't create slice 7 when we install Solaris with a ZFS root, but we can create it with the JumpStart profile below:
    # example Jumpstart profile -- ZFS with
    # space on s7 left out of the zpool for SVM metadb
    install_type initial_install
    cluster SUNWCXall
    filesys c0t0d0s7 32
    pool rpool auto 2G 2G c0t0d0s0
    So my question is: when we use SUNWjet (JumpStart(tm) Enterprise Toolkit), how can we write a profile similar to the JumpStart profile above?
    Thanks very much for your answers.

    This can be done with JET
    You create the template as normal.
    Then create a profile file with the slice 7 line.
    Then edit the template to use it.
    see
    ---8<
    # It is also possible to append additional profile information to the JET
    # derived one. Do this using the base_config_profile_append variable, but
    # don't forget to fill out the remaining base_config_profile variables.
    base_config_profile=""
    base_config_profile_append="
    ---8<
    This is how Ops Center (which uses JET) does it.
    JET questions are best asked on the external JET alias at Yahoo Groups (until the forum is set up on OTN).
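    As a rough illustration only (the exact quoting and variable use are an assumption; check the comments in your copy of the template), the filesys line from the profile above would end up in the template something like this:
    base_config_profile=""
    base_config_profile_append="filesys c0t0d0s7 32"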

  • "ludowngrade" - Sol 8 root filesystem trashed after mounting on Sol 10?

    I've been giving liveupgrade a shot and it seems to work well for upgrading Sol 8 to Sol 10 until a downgrade / rollback is attempted.
    To make a long story short, luactivating back to the old Sol 8 instance doesn't work because I haven't figured out a way to completely unencapsulate the Sol 8 SVM root metadevice without completely removing all SVM metadevices and metadbs from the system before the luupgrade, and we can only reboot once, to activate the new upgrade.
    This leaves the old Sol 8 root filesystem metadevice around after the upgrade (even though it is not mounted anywhere). After an luactivate back to the Sol 8 instance, something gets set wrong and the 5.8 kernel panics with all kinds of undefined symbol errors.
    Which leaves me no choice but to reboot in Solaris 10, and mount the old Solaris 8 filesystem, then edit the Sol 8 /etc/system and vfstab files to boot off a plain, non-SVM root filesystem.
    Here's the problem: once I have mounted the old Sol 8 filesystem in Sol 10, it fails fsck when booting Sol 8:
    /dev/rdsk/c1t0d0s0: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
    # fsck /dev/rdsk/c1t0d0s0
    BAD SUPERBLOCK AT BLOCK 16: BAD VALUES IN SUPER BLOCK
    LOOK FOR ALTERNATE SUPERBLOCKS WITH MKFS? y
    USE ALTERNATE SUPERBLOCK? y
    FOUND ALTERNATE SUPERBLOCK AT 32 USING MKFS
    CANCEL FILESYSTEM CHECK? n
    Fortunately, recovering the alternate superblock makes the filesystem usable in Sol 8 again. Is this supposed to happen?
    The only thing I can think of is I have logging enabled on the root FS in Sol 10, so apparently logging trashes the superblock in Sol 10 such that the FS cannot be mounted in Sol 8 without repair.
    Better yet would be a HOWTO on how to luupgrade a root filesystem encapsulated in SVM without removing the metadevices first. It seems impossible, since without any fiddling all the LU instances on the host will share the SVM metadbs on the system, which leads to problems.

    Did you upgrade the version of powerpath to a release supported on Solaris 10?

  • Jumpstart installation of flar image fails

    Guru's,
    During the night I make a flar image of the local disk of a SF480. I also created a JumpStart server (Solaris 10 03/05) so I would be able to restore this image on the 480. So far nothing very special, I think. Now comes the difficult part: the flar image of the 480 is an image of a mirrored disk (SVM). For Solaris 9 I used a postinstall script to fix the SVM files like mddb.tab, vfstab etc. to mount the original slices. After the installation of a Solaris 9 image everything was OK. When I try to do the same trick on Solaris 10, the whole procedure goes fine until the boot process reports:
    Hostname: rusland
    ERROR: svc:/system/filesystem/root:default failed to mount /usr (see 'svcs -x'
    for details)
    [ system/filesystem/root:default failed fatally (see 'svcs -x' for details) ]
    Requesting System Maintenance Mode
    (See /lib/svc/share/README for more information.)
    Console login service(s) cannot run
    Root password for system maintenance (control-d to bypass):
    I did check the slice with fsck from a cdrom but there were no errors.
    Even when I try to boot with ok> boot -m milestone=none the console services don't start. Does anybody have an idea what the problem is? I think I'm pretty close but I'm probably missing a (hopefully small) part. Thanks in advance.
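    When the system drops to maintenance mode, the usual first checks are the failed service's log and any leftover metadevice references in the restored files (a generic sketch, not specific to this flar):
    # svcs -x svc:/system/filesystem/root:default
    # cat /var/svc/log/system-filesystem-root:default.log
    # grep md /etc/vfstab /etc/system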

  • Why parition table is different between S10 3/05 and S10 6/06?

    I want to set up RAID 1 (mirroring) using the Solaris Volume Manager (SVM) on my Solaris 10 6/06 x86 machine. I have 2 identical Seagate 40GB HDD's (model ST340016A).
    The partition table on the primary disk was already defined by Solaris 10 3/05 OS before I reinstalled the OS (Solaris 10 6/06). The partition table of the primary disk is as follows:
    leopard $ prtvtoc /dev/rdsk/c0d0s2
    * /dev/rdsk/c0d0s2 partition map
    * Dimensions:
    *     512 bytes/sector
    *      63 sectors/track
    *      32 tracks/cylinder
    *    2016 sectors/cylinder
    *   38771 cylinders
    *   38769 accessible cylinders
    * Flags:
    *   1: unmountable
    *  10: read-only
    * Unallocated space:
    *           First     Sector    Last
    *           Sector     Count    Sector
    *        20984544   57173760  78158303
    *                          First     Sector    Last
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
           0      2    00        6048   8390592   8396639   /
           1      3    01     8396640   2098656  10495295
           2      5    00           0  78158304  78158303
           3      7    00    10495296   2098656  12593951   /var
           4      0    00    12593952   4195296  16789247   /opt
           5      8    00    16789248   4195296  20984543   /home
           8      1    01           0      2016      2015
           9      9    01        2016      4032      6047
    It has 38771 cylinders.
    Now I want to create exactly the same partition table on the secondary disk, but it refuses to match the primary disk's partition table. It still shows 40 GB, but the number of cylinders is not the same. Below is the partition table of the secondary disk.
    leopard $ prtvtoc /dev/rdsk/c0d1s2
    * /dev/rdsk/c0d1s2 partition map
    * Dimensions:
    *     512 bytes/sector
    *      63 sectors/track
    *     255 tracks/cylinder
    *   16065 sectors/cylinder
    *    4864 cylinders
    *    4862 accessible cylinders
    * Flags:
    *   1: unmountable
    *  10: read-only
    * Unallocated space:
    *           First     Sector    Last
    *           Sector     Count    Sector
    *           48195   78059835  78108029
    *                          First     Sector    Last
    * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
           2      5    01           0  78108030  78108029
           8      1    01           0     16065     16064
           9      9    00       16065     32130     48194
    leopard $
    It has 4864 cylinders, not 38771. Why?
    You can see the disk dimensions are different between these two disks. I tried to redefine the second disk to match the primary disk, but it keeps coming back with a different table.
    Do you know why the partition table defined by Solaris 10 6/06 differs from what Solaris 10 3/05 defined? How do I force the partition table to match the primary disk? Both disks must match for SVM mirroring to work.
    Can anyone shed any light as to why it happens? Thank you for your help.
    Trevor
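    For reference, the usual way to copy a label between matching disks is the one-liner below, but it only works when both disks report compatible geometry, which is exactly what differs here; the Solaris fdisk partition on the second disk may need to be recreated first so the disk is seen with the same geometry:
    # prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c0d1s2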

  • How to use SVM metadevices with cluster - sync metadb between cluster nodes

    Hi guys,
    I feel like I've searched the whole internet on this matter but found nothing, so hopefully someone here can help me.
    Situation:
    I have a running server with Sol10 U2. SAN storage is attached to the server but without any virtualization in the SAN network.
    The virtualization is done by Solaris Volume Manager.
    The customer has decided to extend the environment with a second server to build a cluster. According to our standards we
    have to use Symantec Veritas Cluster, but for my question it probably doesn't matter which cluster software is used.
    The SVM configuration is nothing special. The internal disks are configured with mirroring, the SAN LUNs are partitioned via format
    and each slice is a meta device.
    d100 p 4.0GB d6
    d6 m 44GB d20 d21
    d20 s 44GB c1t0d0s6
    d21 s 44GB c1t1d0s6
    d4 m 4.0GB d16 d17
    d16 s 4.0GB c1t0d0s4
    d17 s 4.0GB c1t1d0s4
    d3 m 4.0GB d14 d15
    d14 s 4.0GB c1t0d0s3
    d15 s 4.0GB c1t1d0s3
    d2 m 32GB d12 d13
    d12 s 32GB c1t0d0s1
    d13 s 32GB c1t1d0s1
    d1 m 12GB d10 d11
    d10 s 12GB c1t0d0s0
    d11 s 12GB c1t1d0s0
    d5 m 6.0GB d18 d19
    d18 s 6.0GB c1t0d0s5
    d19 s 6.0GB c1t1d0s5
    d1034 s 21GB /dev/dsk/c4t600508B4001064300001C00004930000d0s5
    d1033 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s4
    d1032 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s3
    d1031 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s1
    d1030 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004930000d0s0
    d1024 s 31GB /dev/dsk/c4t600508B4001064300001C00004870000d0s5
    d1023 s 512MB /dev/dsk/c4t600508B4001064300001C00004870000d0s4
    d1022 s 2.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s3
    d1021 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s1
    d1020 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004870000d0s0
    d1014 s 8.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s5
    d1013 s 1.7GB /dev/dsk/c4t600508B4001064300001C00004750000d0s4
    d1012 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s3
    d1011 s 256MB /dev/dsk/c4t600508B4001064300001C00004750000d0s1
    d1010 s 4.0GB /dev/dsk/c4t600508B4001064300001C00004750000d0s0
    d1004 s 46GB /dev/dsk/c4t600508B4001064300001C00004690000d0s5
    d1003 s 6.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s4
    d1002 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s3
    d1001 s 1.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s1
    d1000 s 5.0GB /dev/dsk/c4t600508B4001064300001C00004690000d0s0
    The problem is the following:
    The SVM configuration on the second server (cluster node 2) must be the same for the devices d1000-d1034.
    Generally speaking, the metadb needs to be in sync.
    - How can I manage this?
    - Do I have to use disk sets?
    - Will a copy of the md.cf/md.tab and an initialization with metainit do it?
    It would be great to have several options for how one can manage this.
    Thanks and regards,
    Markus

    Dear Tim,
    Thank you for your answer.
    I can confirm that Veritas Cluster doesn't support SVM by default. Of course they want to sell their own volume manager ;o).
    But that wouldn't be the big problem. With SVM I expect the same behaviour as with VxVM if I have to use disk sets,
    and for that I can write a custom agent.
    My problem is not the cluster implementation. It's more likely a fundamental problem with syncing the SVM config for a set
    of meta devices between two hosts. I'm far from implementing the devices into the cluster config as long as I don't know
    how to let both nodes know about the devices.
    Currently only the host that initialized the volumes knows about them. The second node doesn't know anything about the
    devices d1000-d1034.
    What I need to know at this stage is:
    - How can I "register" the already initialized meta devices d1000-d1034 on the second cluster node?
    - Do I have to use disk sets?
    - Can I just copy and paste the appropriate lines of md.cf/md.tab?
    - Generally speaking: how can one configure SVM so that different hosts see the same meta devices?
    Hope that someone can help me!
    Thanks,
    Markus
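    For what it's worth, sharing SVM devices between hosts is exactly what disk sets (the metaset command) are designed for. A minimal sketch with placeholder set and host names, using one of the LUNs from the listing above; note that adding a drive to a disk set normally repartitions it, so this is not a drop-in migration for LUNs that already carry data:
    # metaset -s datads -a -h node1 node2
    # metaset -s datads -a c4t600508B4001064300001C00004690000d0
    # metainit -s datads d1000 1 1 /dev/dsk/c4t600508B4001064300001C00004690000d0s0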

  • Can not use SVM disc mirror on Sun X2200 M2 with solaris u7

    Last year I tried to build a cluster with Solaris 10 u6, but I got the error "Insufficient metadevice database replicas located" after setting up SVM.
    I tried the latest Solaris 10 u7 (09/05) on a Sun X2200 M2 server again, but I got the same error.
    I think it may be a driver problem, so I tested Solaris 10 u7 again on an old x86 machine that has a plain SCSI card and SCSI disks.
    I also tried it with an IDE drive, and everything was fine with no error messages.
    I found that Solaris 10 u5 treats the SATA disks on the X2200 M2 as IDE disks, so SVM works fine.
    But after upgrading to Solaris 10 u7, the system treats the SATA disks as SCSI drives, and SVM does not work.
    So is this a bug in Solaris 10 u7?

    -bash-3.00# metadb -i
            flags           first blk       block count
         a m  p  lu         16              8192            /dev/dsk/c0t0d0s7
         a    p  l          8208            8192            /dev/dsk/c0t0d0s7
         a    p  l          16400           8192            /dev/dsk/c0t0d0s7
          M     u           16              unknown         /dev/dsk/c0t1d0s7
          M     u           8208            unknown         /dev/dsk/c0t1d0s7
          M     u           16400           unknown         /dev/dsk/c0t1d0s7
    r - replica does not have device relocation information
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
    I don't have the "set md:mirrored_root_flag=1" line in /etc/system.
    Do you mean the boot device is a SCSI HDD? It is a SATA HDD:
    -bash-3.00# ls -l /dev/rdsk/c0t0d0s0
    lrwxrwxrwx 1 root root 51 Jul 7 17:55 /dev/rdsk/c0t0d0s0 -> ../../devices/pci@0,0/pci108e,534b@5/disk@0,0:a,raw
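    If the slices on the second disk are actually readable, one common way to recover replicas flagged M (master block problems) is simply to delete and re-create them on that slice; this is only a sketch and does not address the SATA-as-SCSI driver change described above:
    # metadb -d c0t1d0s7
    # metadb -a -c 3 c0t1d0s7
    # metadb -i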

  • Solaris 10 X86 - Hardware RAID - SMC/SVM question...

    I have gotten back into Sun Solaris system administration after a five-year hiatus... My skills are a little rusty and some of the tools have changed, so here are my questions...
    I have installed Solaris 10 release 1/06 on a Dell 1850 with an attached PowerVault 220v connected to a Perc 4 Di controller. The RAID is configured via the BIOS interface to my liking; Solaris is installed and sees all the partitions I created during the install.
    For testing purposes, the server's internal disk is used for the OS, and the PowerVault is split into 2 RAIDs - one a mirror, one a stripe...
    The question is: do I manage the RAID using the controller's own tools, or do I use SMC (Sun Management Console)?
    When I launch SMC and go into Enhanced Storage... I do not see any RAIDs... If I select "Disks" I do see them, but when I select them, it wants to run "FDISK" on them... now this is OK since they are blank, but I want to make sure I am not doing something I shouldn't.
    If the PERC controller is controlling the RAID, what do I need SMC for?

    You can use SMC for other purposes but it won't help you with RAID.
    Sol 10 1/06 has raidctl, which handles LSI1030 and LSI1064 RAID-enabled controllers (from raidctl(1M)).
    Some of the PERCs (most?) are LSI-based, but I don't know if those are the chipsets used by your PowerEdge (I doubt it).
    Generally you can break it down like this for x86:
    If you are using hardware RAID with Solaris 10 x86 you have to use pre-Solaris (i.e. on the RAID controller) management or hope that the manufacturer of the device has a Solaris management agent/interface (good luck).
    The only exception to this that I know of is the RAID that comes with V20z, V40z, X4100, X4200.
    Otherwise you will want to go with SVM or VxVM and manage RAID within Solaris (software RAID).
    SMC etc are only going to show you stuff if SVM is involved and VxVM has its own interface, otherwise the disks are controlled by PERC and just hanging out as far as Solaris is concerned.
    Hope this helps.
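    For the controllers that raidctl does support, the usage is roughly as follows (disk names are placeholders, and this does not apply to the PERC discussed above):
    # raidctl                      (show any existing RAID volumes)
    # raidctl -c c1t0d0 c1t1d0     (create a hardware mirror from two disks)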

  • Solaris 10 cannot mirror using SVM for SFV440 server

    I have tried to use LVM (SVM) to configure mirroring on c1t2d0s0 and c1t3d0s0, but it fails.
    Below are the commands I used:
    #metadb -f -a -c 4 /dev/dsk/c1t2d0s7
    #metadb -a -c 4 /dev/dsk/c1t3d0s7
    #metainit -f d10 1 1 c1t2d0s0
    #metainit -f d20 1 1 c1t3d0s0
    #metainit d0 -m d10
    (modify the /etc/vfstab file to /dev/md/dsk/d0 and /etc/md/rdsk/d0)
    #init 6
    After rebooting the server, the metadbs are deleted and I cannot mount /dev/md/dsk/d0.
    I have 8 SFV440 servers in total. The 7 SFV440 servers with motherboard p/n 501-6910 cannot mirror;
    the 1 SFV440 server with motherboard p/n 501-6344 is working fine and can use LVM to mirror the data disk.
    All the configurations are the same (hardware and OS, Solaris 10 03/05).

    Try the following commands:
    First make sure that the metadb partitions are unmounted and big enough to hold the metadbs.
    Second make sure both sides of the mirrors are the exact same size. Mirroring will fail if the 2nd half of the mirror is smaller than the first half.
    I would then newfs and fsck the metadb partitions.
    #metadb -f -a -c 2 /dev/dsk/c1t2d0s7
    #metadb -f -a -c 2 /dev/dsk/c1t3d0s7
    # make sure the metadbs exist
    # metadb -i
    #metainit -f d10 1 1 c1t2d0s0
    #metainit -f d20 1 1 c1t3d0s0
    #metainit d0 -m d10
    (modify the /etc/vfstab file to /dev/md/dsk/d0 and /dev/md/rdsk/d0)
    # check out to see if the one way mirror exists
    metastat
    #init 6
    # Don't forget to attach the 2nd half of the mirror
    # metattach d0 d20
    # check to see that the mirror is syncing
    # metastat

  • Solaris 8 - 9 with RAID-5 SVM

    I intend to upgrade a system from Solaris 8 to Solaris 9 by splitting the root mirrors and reinstalling Solaris 9 on one side.
    The rest of the disks are grouped together in a RAID 5 config.
    What's the best way to deal with these meta devices post-upgrade, to get them into the new build? Will they need to be recreated from scratch, losing the data on the drives, or can they be recreated in the same way as a mirror, preserving the data?
    Thanks in advance.

    Easiest is probably to grab the configuration from the current machine and create a metadevice on the new system with the same parameters.
    If you create the raid5 device with the -k option, it won't zero the disks. Then you can see if the filesystem is visible on the metadevice.
    If you leave off the -k, it'll destroy the data.
    Make sure you do the disks in the correct order.
    Darren
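    A rough sketch of what that looks like (metadevice and slice names are placeholders; the component order and count must match the original RAID 5 exactly, and -k tells metainit not to initialize the columns):
    # metainit d45 -r c1t1d0s0 c1t2d0s0 c1t3d0s0 -k
    # metastat d45
    # fsck -n /dev/md/rdsk/d45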

  • Steps to repair SVM on rootdisk

    Hi - I have a T2000 server that failed to boot after patching. I booted from "disk1", so it's running again as it was prior to patching, except that I have no disk redundancy now. My question is: does anyone have steps for how I can set up SVM again? At present I have:
    df -h |grep dsk
    /dev/dsk/c0t1d0s0 12G 8.8G 3.0G 75% /
    /dev/dsk/c0t1d0s3 3.9G 3.0G 897M 78% /var
    /dev/md/dsk/d7 33G 17G 15G 53% /logs
    /dev/dsk/c0t1d0s5 17G 272M 16G 2% /export
    /dev/dsk/c0t3d0s0 135G 2.9G 130G 3% /export/zones
    The only partition I didn't split prior to patching was /logs.
    I have a few ideas about what I should be doing, but could do with some guidance if anyone has done this in the past. My main concern is that I don't have console access to this server, so I am working remotely, and if I reboot this server again I need to be sure it will come back up.
    Any help would be appreciated.
    Thanks - Julian.

    Hello,
    As I understand it, you want to mirror (RAID 1) the root disk using SVM? If so, you need a spare disk of the same size as the root disk.
    Assuming that all of the file systems below should be mirrored, follow this procedure:
    /var
    /logs
    /export
    1. Prepare the spare disk by having the same VTOC as your current root disk:
    #prtvtoc /dev/rdsk/c0t1d0s2 > /tmp/c0t1d0s2.txt
    #fmthard -s /tmp/c0t1d0s2.txt /dev/rdsk/<sparedisk>s2
    2. Create state database replicas (if they do not exist):
    #metadb -f -a <slice> //creates 1 state replica, at least 3 must exist
    #metadb //print state database replicas
    3. Create metadevices:
    Mirror the /var file system:
    #metainit -f d11 1 1 c0t1d0s3
    #metainit -f d12 1 1 <slice on spare disk>
    #metainit d10 -m d11
    Edit /etc/vfstab, replacing the /var entries with the metadevices:
    /dev/md/dsk/d10 /dev/md/rdsk/d10
    4. Repeat step 3 for each non root filesystem by creating new metadevices.
    5. Reboot
    6. After reboot issue:
    #metattach d10 d12 //attach the second submirror and start the resync
    For the root file system you should use the metaroot command so the system uses metadevices to boot.
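    For the root slice specifically, a rough sketch of the usual sequence (metadevice numbers and slices are placeholders; metaroot edits /etc/system and /etc/vfstab for you, and on SPARC you also want a boot block on the spare disk):
    #metainit -f d1 1 1 c0t1d0s0
    #metainit -f d2 1 1 <slice 0 on spare disk>
    #metainit d0 -m d1
    #metaroot d0
    #lockfs -fa
    #init 6
    after the reboot:
    #metattach d0 d2
    #installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/<slice 0 on spare disk>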
    There is also a useful guide in the Solaris documentation library on how to do it, but this is basically the procedure for fully mirroring the root disk using SVM.
    Regards,
    Rei

  • Issues w/ LiveUpgrade from Solaris 10 x86 06/06 to 08/07 w/ SVM metadevices

    Hello all...
    I am trying to upgrade a system from Sol 10 x86 6/06 to 8/07. For this example, we'll assume the following:
    * The system has only 2 disks mirrored with SVM
    * There are 0 free slices/partitions
    * /, /usr, /var are separate slices/filesystems/mirrors
    * There are 6 metadbs on 50MB slices, three on each disk
    I run the following lucreate command (after having made sure that the appropriate packages and patches exist on the system):
    lucreate -c s10u2 -n s10u4 -m /:/dev/md/dsk/d110:ufs,mirror -m /:/dev/dsk/c0d1s0:detach,attach,preserve \
    -m /usr:/dev/md/dsk/d120:ufs,mirror -m /usr:/dev/dsk/c0d1s3:detach,attach,preserve \
    -m /var:/dev/md/dsk/d130:ufs,mirror -m /var:/dev/dsk/c0d1s4:detach,attach,preserve
    I can then run luupgrade and every indication leads me to believe that it worked. So, I activate the new BE and init 6 and attempt to boot to the new BE. The boot ends up failing, citing that it cannot mount /usr. So, I can either reboot and lumount the filesystems or boot into failsafe mode and take a look at the vfstab. The /etc/vfstab looks fine with the new metadevices identified as the /, /usr, and /var filesystems. However, those metadevices are never initialized. When I look at /etc/lvm/md.cf on the source BE, those metadevices are initialized there. However, they are not initialized in the secondary BE.
    Am I missing something somewhere? Wondering why SVM does not appear to be updated on the secondary BE to add/initialize the new metadevices in the secondary BE.
    TIA,
    Rick

    Ok...update...
    I have found that there are bugs in LU where it relates to SVM-mirrored /'s and whatnot. So, I modified my approach to the upgrade and "deconfigured" the /, /usr, and /var mirrors. I then executed the procedure as described in the docs to create a new boot environment on the secondary disk. It worked!
    Great! With that done, I could then remove the old boot environment and remirror the /, /usr, and /var filesystems as they were prior to the LU procedure. Wrong answer! There were two problems; the first was fairly simple to overcome...
    Problem 1) When attempting to delete the old BE, I would get the following error:
    The boot environment <s10u2> contains the GRUB menu.
    Attempting to relocate the GRUB menu.
    /usr/sbin/ludelete: lulib_relocate_grub_slice: not found
    ERROR: Cannot relocate the GRUB menu in the boot environment <s10u2>.
    ERROR: Cannot delete the boot environment <s10u2>.
    Unable to delete boot environment.
    Through some Googling, I found that apparently the Solaris 10 LiveUpgrade is attempting to use a function in /usr/lib/lu/lulib that only exists in OpenSolaris (whatever happened to backporting functionality?). Ok... easy enough. Back up the library and copy it over from an OpenSolaris box. Done... and it worked, but only led to problem #2...
    Problem 2) Now when I run ludelete on the old BE, I get the following error:
    The boot environment <s10u2> contains the GRUB menu.
    Attempting to relocate the GRUB menu.
    ERROR: No suitable candidate slice for GRUB menu on boot disk: </dev/rdsk/c0d0p0>
    INFORMATION: You will need to create a new Live Upgrade boot environment on the boot disk to find a new candidate for the grub menu.
    ERROR: Cannot relocate the GRUB menu in boot environment <s10u2>.
    Unable to delete boot environment.
    I have attempted to manually move/install GRUB on the secondary disk and make it the active boot disk, all to no avail. ludelete returns the same error every time.
    Thoughts? Ideas?
    TIA,
    Rick

  • SVM - "Unavailable" devices, destroyed SVM config

    I have a Solaris 10 host with mirrored SVM volumes that seems to have destroyed its SVM config. We rebooted and all the metadbs on two of the disks came up bad, and none of the DBs were online so the system came up in single user mode due to insufficient replicas:
    a m p lu 16 8192 /dev/dsk/c1t0d0s7
    a p l 8208 8192 /dev/dsk/c1t0d0s7
    a p l 16400 8192 /dev/dsk/c1t0d0s7
    a p l 24592 8192 /dev/dsk/c1t0d0s7
    a p l 16 8192 /dev/dsk/c1t1d0s7
    a p l 8208 8192 /dev/dsk/c1t1d0s7
    a p l 16400 8192 /dev/dsk/c1t1d0s7
    a p l 24592 8192 /dev/dsk/c1t1d0s7
    a u 16 8192 /dev/dsk/c1t2d0s7
    a u 8208 8192 /dev/dsk/c1t2d0s7
    a u 16400 8192 /dev/dsk/c1t2d0s7
    a u 24592 8192 /dev/dsk/c1t2d0s7
    a u 16 8192 /dev/dsk/c1t3d0s7
    a u 8208 8192 /dev/dsk/c1t3d0s7
    a u 16400 8192 /dev/dsk/c1t3d0s7
    a u 24592 8192 /dev/dsk/c1t3d0s7
    # metastat
    WARNING: Stale state database replicas. Metastat output may be inaccurate.
    So I dropped the databases on c1t2 and c1t3 and rebooted again. Now, further disaster: all the metadevices on the two physical disks that had the bad metadbs are "unavailable":
    d55: Mirror
    Submirror 0: d56
    State: Unavailable
    Submirror 1: d57
    State: Unavailable
    Pass: 4
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 146389552 blocks (69 GB)
    d56: Submirror of d55
    State: Unavailable
    Reconnect disk and invoke: metastat -i
    Size: 146389552 blocks (69 GB)
    Stripe 0:
    Device Start Block Dbase State Reloc Hot Spare
    c1t2d0s4 0 No - Yes
    The disks are spun up and seem to be fine, but when I run "metastat -i" I get
    WARNING: md d55: open error on (Unavailable)
    NOTICE: md_probe_one: err 6 mnum 55
    I can fsck the underlying /dev/dsk/cXtXdXsX filesystems just fine.
    So, what in the heck happened here?
    We have a twin of this host with exactly the same OS installation, physical hardware, and metadevices, and it's fine. I did discover one difference between the two hosts - the host with the SVM problem was mistakenly booted with local-mac-address?=false and has the two network interfaces set up to multipath, so the network connection was flapping. But that shouldn't cause disk problems?
    The two disks in question are 300GB SCSI disks, Fujitsu MAW3300NC - not official Sun disks.
    This looks similar to http://forum.java.sun.com/thread.jspa?forumID=840&threadID=5109431
    where the root cause seems to have been "WTF?" :-)

  • JumpStart and SVM mirroring error.

    Hi,
    In essence, I am trying to mirror a number of slices on the rootdisk and rootmirror. Most cases I have tried work, except when I try to mirror
    c0t1d0s6 to c0t0d0s7 or c0t0d0s6 to c0t1d0s7. I can successfully do c0t0d0s7 to c0t1d0s7 and c0t1d0s7 to c0t1d0s7.
    (Obviously this may not be best practice, but I don't see any reason I shouldn't be able to do it.)
    The error below appears spurious, as the disk layout is actually identical even though the error message indicates it is not.
    Here is the disk part of the profile causing the issues.
    metadb          c0t0d0s3 size 8192 count 4
    metadb          c0t1d0s3 size 8192 count 4
    filesys mirror:d10 c0t0d0s0 c0t1d0s0 1024 /
    filesys mirror:d20 c0t0d0s1 c0t1d0s1 4000 swap
    filesys mirror:d30 c0t0d0s4 c0t1d0s4 5128 /var
    filesys mirror:d40 c0t0d0s5 c0t1d0s5 5000 /u02
    filesys mirror:d50 c0t1d0s6 c0t0d0s7 1005 /u04
    The error message is included below.
            - Selecting all disks
            - Configuring boot device
            - Configuring SVM State Database Replica on  (c0t0d0s3)
            - Configuring SVM State Database Replica on  (c0t1d0s3)
            - Configuring SVM Mirror Volume d10 on / (c0t0d0s0)
            - Configuring SVM Mirror Volume d10 on  (c0t1d0s0)
            - Configuring SVM Mirror Volume d20 on swap (c0t0d0s1)
            - Configuring SVM Mirror Volume d20 on SWAP_MIRROR (c0t1d0s1)
            - Configuring SVM Mirror Volume d30 on /var (c0t0d0s4)
            - Configuring SVM Mirror Volume d30 on  (c0t1d0s4)
            - Configuring SVM Mirror Volume d40 on /u02 (c0t0d0s5)
            - Configuring SVM Mirror Volume d40 on  (c0t1d0s5)
            - Configuring SVM Mirror Volume d50 on /u04 (c0t1d0s6)
            - Configuring SVM Mirror Volume d50 on  (c0t0d0s7)
    ERROR: SVM mirroring is not possible. submirror2 (c0t0d0s7) is smaller than the
    submirror1 (c0t1d0s6)
    Disk layout for selected disks
        Disk c0t0d0
                                 Solaris Slice Table
              Slice     Start  Cylinder         MB  Preserve  Directory
                  0       403       104       1034        no  /
                  1         0       403       4005        no  swap
                  2         0     14087     139990        no  overlap
                  3       507         2         20        no
                  4       509       517       5138        no  /var
                  5      1026       504       5009        no  /u02
                  7      1530       102       1014        no
              Usable space: 139990 MB (14087 cylinders)   Free space: 123772 MB
        Disk c0t1d0
                                 Solaris Slice Table
              Slice     Start  Cylinder         MB  Preserve  Directory
                  0       403       104       1034        no
                  1         0       403       4005        no
                  2         0     14087     139990        no  overlap
                  3       507         2         20        no
                  4       509       517       5138        no
                  5      1026       504       5009        no
                  6      1530       102       1014        no  /u04
              Usable space: 139990 MB (14087 cylinders)   Free space: 123772 MB
    Verifying disk configuration
    Verifying space allocation
            - Total software size:  205.85 Mbytes
    ERROR: System installation failed
    Any ideas or hints, or should I just file this as a bug? I hope I'm not missing something blindingly obvious...
    thanks
    Mike

