Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

I would like to set up a system like this:
1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
2. Non-global zones on external storage array in individual ZFS pools e.g.
zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
3. Ability to use liveupgrade
I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
Am I missing something that would allow me to do 1, 2, and 3? If not, is such a configuration planned to be supported? Any guess at when?
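For concreteness, the setup for one zone would look roughly like this (a sketch only; the device name c3t0d0 is just an example):

# one external pool per zone, mounted at the zonepath
zpool create alpha-pool c3t0d0
zfs create alpha-pool/root
zfs set mountpoint=/zones/alpha alpha-pool/root
chmod 700 /zones/alpha
zonecfg -z alpha
zonecfg:alpha> create
zonecfg:alpha> set zonepath=/zones/alpha
zonecfg:alpha> commit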
--Frank

Nope, that is still a work in progress. Quite frankly, I wonder if you would even want such a feature, considering the way the filesystem works. It is possible to recover if your OS doesn't boot anymore by forcing your rescue environment to import the ZFS pool, but it's less elegant than merely mounting a specific slice.
I think ZFS is ideal for data and data-like places (/opt, /export/home, /opt/local), but I somewhat question the advantages of moving slices like / or /var into it. It's too early to draw conclusions since the product isn't ready yet, but at this moment I can only think of disadvantages.

Similar Messages

  • ZFS boot device - UFS external storage - Solaris Volume Manager

    Hi All,
    If a system is running ZFS boot, can one use Solaris Volume Manager on external UFS storage devices? If so, where do you store the metadb?

    It should work, though you need to have a slice somewhere where you can store the metadbs.
    Perhaps you can store them on the external storage, unless those devices are frequently removed.
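    A minimal sketch of placing the state database replicas on a spare slice (the slice name c2t0d0s7 is only an example):

    # add three metadb replicas on a small dedicated slice
    metadb -a -f -c 3 c2t0d0s7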
    .7/M.

  • Need Best Practice for creating BE in ZFS boot environment with zones

    Good Afternoon -
    I have a SPARC system with a ZFS root file system and zones. I need to create a BE for whenever we do patching or upgrades to the O/S. I have run into issues when testing booting off of the new BE, where the zones did not show up. I tried to go back to the original BE by running luactivate on it and received errors. I did a fresh install of the O/S from CD-ROM on a ZFS filesystem, then ran the following commands to create the zones, create the BE, activate it, and boot off of it. Please tell me if any steps were left out or if the sequence was incorrect.
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones
    # zfs mount rpool/ROOT/s10be/zones
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z1
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z2
    # zfs mount rpool/ROOT/s10be/zones/z1
    # zfs mount rpool/ROOT/s10be/zones/z2
    # chmod 700 /zones/z1
    # chmod 700 /zones/z2
    # zonecfg -z z1
    z1: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:z1> create
    zonecfg:z1> set zonepath=/zones/z1
    zonecfg:z1> verify
    zonecfg:z1> commit
    zonecfg:z1> exit
    # zonecfg -z z2
    z2: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:z2> create
    zonecfg:z2> set zonepath=/zones/z2
    zonecfg:z2> verify
    zonecfg:z2> commit
    zonecfg:z2> exit
    # zoneadm -z z1 install
    # zoneadm -z z2 install
    # zlogin -C -e 9. z1
    # zlogin -C -e 9. z2
    Output from zoneadm list -v:
    # zoneadm list -v
      ID  NAME    STATUS   PATH       BRAND   IP
       0  global  running  /          native  shared
       2  z1      running  /zones/z1  native  shared
       4  z2      running  /zones/z2  native  shared
    Now for the BE create:
    # lucreate -n newBE
    # zfs list
    rpool/ROOT/newBE 349K 56.7G 5.48G /.alt.tmp.b-vEe.mnt <--showed this same type mount for all f/s
    # zfs inherit -r mountpoint rpool/ROOT/newBE
    # zfs set mountpoint=/ rpool/ROOT/newBE
    # zfs inherit -r mountpoint rpool/ROOT/newBE/var
    # zfs set mountpoint=/var rpool/ROOT/newBE/var
    # zfs inherit -r mountpoint rpool/ROOT/newBE/zones
    # zfs set mountpoint=/zones rpool/ROOT/newBE/zones
    and did it for the zones too.
    When I ran luactivate newBE it came up with errors, so I changed the mountpoints again and rebooted.
    Once it came up, I ran luactivate newBE again and it completed successfully. lustatus showed:
    # lustatus
    Boot Environment           Is        Active  Active     Can     Copy
    Name                       Complete  Now     On Reboot  Delete  Status
    -------------------------- --------  ------  ---------  ------  ------
    s10s_u8wos_08a             yes       yes     no         no      -
    newBE                      yes       no      yes        no      -
    Ran init 0, then at the ok prompt:
    ok boot -L
    Picked item two, which was newBE, then booted.
    The system came up, but df showed no zones, zfs list showed no zones, and cd into /zones found nothing there.
    Please help!
    thanks julie

    The issue here is that lucreate adds entries to the vfstab in newBE for the ZFS filesystems of the zones. You need to lumount newBE /mnt, then edit /mnt/etc/vfstab and remove the entries for any ZFS filesystems. Then if you luumount it you can continue. It's my understanding that this has been reported to Sun, and the fix is in the next release of Solaris.
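    A sketch of that workaround, using the BE name from this thread:

    # lumount newBE /mnt
    # vi /mnt/etc/vfstab      <- remove (or comment out) the entries for the zones' zfs filesystems
    # luumount newBE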

  • How to back up a ZFS boot disk ?

    Hello all,
    I have just installed Solaris 10 update 6 (10/08) on a Sparc machine (an Ultra 45 workstation) using ZFS for the boot disk.
    Now I want to port a custom UFS boot disk backup script to ZFS.
    Basically, this script copies the boot disk to a secondary disk and makes the secondary disk bootable.
    With UFS, I had to play with the vfstab a bit to allow booting off the secondary disk, but this is not necessary with ZFS.
    How can I perform such a backup of my ZFS boot disk?
    I tried the following (source disk: c1t0d0, target disk: c1t1d0):
    # zfs list
    NAME                  USED  AVAIL  REFER  MOUNTPOINT
    rpool                 110G   118G    94K  /rpool
    rpool/ROOT           4.58G   118G    18K  legacy
    rpool/ROOT/root      4.58G  25.4G  4.50G  /
    rpool/ROOT/root/var  79.2M  4.92G  79.2M  /var
    rpool/dump           16.0G   118G  16.0G  -
    rpool/export         73.3G  63.7G  73.3G  /export
    rpool/homelocal      21.9M  20.0G  21.9M  /homelocal
    rpool/swap             16G   134G    16K  -
    # zfs snapshot -r rpool@today
    # zpool create -f -R /mnt rbackup c1t1d0
    # zfs send -R rpool@today | zfs receive -F -d rbackup               <- This one fails (see below)
    # installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    The send/receive command fails after transferring the "/" filesystem (4.5 GB) with the following error message:
    cannot mount '/mnt': directory is not empty
    There may be some kind of unwanted recursion here (trying to back up the backup or something) but I cannot figure it out.
    I tried a workaround: creating the mount point outside the snapshot:
    zfs snapshot -r rpool@today
    mkdir /var/tmp/mnt
    zpool create -f -R /var/tmp/mnt rbackup c1t1d0
    zfs send -R rpool@today | zfs receive -F -d rbackup
    But it still fails, this time when mounting "/var/tmp/mnt".
    So how does one back up the ZFS boot disk to a secondary disk in a live environment?

    OK, this post requires some clarification.
    First, thanks to robert.cohen and rogerfujii for giving some elements.
    The objective is to make a backup of the boot disk on another disk of the same machine. The backup must be bootable just like the original disk.
    The reason for doing this instead of (or, even better, in addition to) mirroring the boot disk is to be able to quickly recover a stable operating system in case anything gets corrupted on the boot disk. Corruption includes hardware failures, but also any software corruption which could be caused by a virus, an attacker or an operator mistake (rm -rf ...).
    After doing lots of experiments, I found two potential solutions to this need.
    Solution 1 looks like what rogerfujii suggested, albeit with a few practical additions.
    It consists of using ZFS mirroring and breaking the mirror after resilvering:
         - Configure the backup disk as a mirror of the boot disk:
         zpool attach -f rpool <boot disk>s0 <backup disk>s0
         - Copy the boot block to the backup disk:
         installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
         - Monitor the mirror resilvering:
         zpool status rpool
         - Wait until the "action" field disappears (this can be scripted; see the sketch after this solution's steps).
         - Prevent any further resilvering:
         zpool offline rpool <backup disk>s0
         Note: this step is mandatory because detaching the disk without offlining it first results in a non-bootable backup disk.
         - Detach the backup disk from the mirror:
         zpool detach rpool <backup disk>s0
         POST-OPERATIONS:
         After booting on the backup disk, assuming the main boot disk is unreachable:
         - Log in as super-user.
         - Detach the main boot disk from the mirror:
         zpool detach rpool <boot disk>s0
    This solution has many advantages, including simplicity and using no dirty tricks. However, it has two major drawbacks:
    - When booting on the backup disk, if the main boot disk is online, it will be resilvered with the old data.
    - There is no easy way to access the backup disk data without rebooting.
    So if you accidentally lose one file on the boot disk, you cannot easily recover it from the backup.
    This is because the pool name is the same on both disks, therefore effectively preventing any pool import.
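    As noted in the steps above, the resilver wait can be scripted; a minimal sketch, assuming the English output of zpool status:

    # poll until the "action" field disappears from zpool status
    while zpool status rpool | grep -q 'action:'; do
         sleep 60
    done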
    Here is now solution 2, which I favor.
    It is more complex and dependent on the disk layout and ZFS implementation changes, but overall offers more flexibility.
    It may need some additions if there are other disks than the boot disk with ZFS pools (I have not tested that case yet).
    ***** HOW TO BACKUP A ZFS BOOT DISK TO ANOTHER DISK *****
    1. Backup disk partitioning
    - Clean up ZFS information from the backup disk:
    The first and last megabyte of the backup disk, which hold ZFS information (plus other stuff), are erased:
    dd if=/dev/zero seek=<backup disk #blocks minus 2048> count=2048 of=/dev/rdsk/<backup disk>s2
    dd if=/dev/zero count=2048 of=/dev/rdsk/<backup disk>s2
    - Label and partition the backup disk in SMI:
    format -e <backup disk>
         label
         0          -> SMI label
         y
         (If more questions are asked: press Enter 3 times.)
         partition
         (Create a single partition, number 0, filling the whole disk)
         label
         0
         y
         quit
         quit
    2. Data copy
    - Create the target ZFS pool:
    zpool create -f -o failmode=continue -R /mnt -m legacy rbackup <backup disk>s0
    Note: the chosen pool name is here "rbackup".
    - Create a snapshot of the source pool:
    zfs snapshot -r rpool@today
    - Copy the data:
    zfs send -R rpool@today | zfs receive -F -d rbackup
    - Remove the snapshot, plus its copy on the backup disk:
    zfs destroy -r rbackup@today
    zfs destroy -r rpool@today
    3. Backup pool reconfiguration
    - Edit the following files:
    /mnt/etc/vfstab
    /mnt/etc/power.conf
    /mnt/etc/dumpadm.conf
    In these files, replace the source pool name "rpool" with the backup pool name "rbackup".
    - Remove the ZFS mount list:
    rm /mnt/etc/zfs/zpool.cache
    4. Making the backup disk bootable
    - Note the name of the current boot filesystem:
    df -k /
    E.g.:
    # df -k /
    Filesystem       kbytes    used     avail     capacity  Mounted on
    rpool/ROOT/root  31457280  4726390  26646966  16%       /
    - Configure the boot filesystem on the backup pool:
    zpool set bootfs=rbackup/ROOT/root rbackup
    Note: "rbackup/ROOT/root" is derived from the main boot filesystem name "rpool/ROOT/root".
    - Copy the ZFS boot block to the backup disk:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
    5. Cleaning up
    - Export the target pool:
    zpool export rbackup
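    Once exported, booting from the backup disk later means pointing the OBP at it; a sketch, assuming the backup disk has an OBP devalias such as disk1 (check with devalias at the ok prompt):

    ok boot disk1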
    I hope this howto will be useful to those like me who need to change all their habits while migrating to ZFS.
    Regards.
    HL

  • ZFS boot partition?

    Hi All,
    I'm currently being asked to look at creating a number of servers with mirroring. Ideally the whole server will use ZFS, which would mean it would also need to boot from ZFS.
    Looking at the OpenSolaris website:
    http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
    It looks like it's possible to do this. My problem is that I need to use vanilla Solaris on SPARC, and some bits and pieces from those instructions don't work.
    Is this even possible? If it's not, what's the best I can hope for?
    I'm currently testing on x86 (in VMWare), I'll be deploying on SPARC.
    Thanks for any help.....

    Darren_Dunham wrote:
    Boot procedures on x86 and SPARC are still quite different. So your testing on one doesn't necessarily help on the other. You'll note that the link you show is only for x86.
    A new boot method for SPARC is being developed that will work with ZFS, but it hasn't been released the way the x86 side has. So you can't boot from ZFS on SPARC just yet.
    Darren
    I've also had issues with ZFS root in the x86 world. I won't go into the details, as I'm not 100% certain whether it was the Nevada build I was using (76) or user error.
    The current roadmap has ZFS boot for mainline Solaris coming in either U5 or U6, according to a briefing we had recently. That puts it anywhere from mid to very late 2008.
    Dev Express will likely have it sooner, but we're back to "Is this production?" If it is, I would wait for it to be solid, stable, and in mainline Solaris.
    Cheers,

  • ZFS boot block size and UFS boot block size

    Hi,
    In the Solaris UFS file system, the boot block resides in sectors 1 to 15; at 512 bytes per sector, that makes 7680 bytes:
    bash-3.2# pwd
    /usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/ufs
    bash-3.2# ls -ltr
    total 16
    -r--r--r--   1 root     sys        7680 Sep 21  2008 bootblk
    For the ZFS file system, the boot block size is:
    bash-3.2# pwd
    /usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/zfs
    bash-3.2# ls -ltr
    total 32
    -r--r--r--   1 root     sys        15872 Jan 11  2013 bootblk
    When we install the ZFS bootblk on a disk using the installboot command, how many sectors will it use to write the bootblk?
    Thanks,
    SriKanth Muvva

    Thanks for your reply.
    My query is: if the ZFS boot block is 16K, but sectors 1 to 15 on disk (where the boot block gets installed) hold only about 8K, does that mean that of the 16K only 8K is written to the disk?
    If you don't mind, will you please explain this in depth?
    I am referring to the doc for UFS, page 108, kernel bootstrap and initialization (it's old and it's for Solaris 8):
    http://books.google.co.in/books?id=r_cecYD4AKkC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
    Please help me find a doc on kernel bootstrap and initialization for Solaris 10 with ZFS and the boot archive.
    Thanks in advance .
    Srikanth

  • Installing Solaris 10 with RAID + ZFS ?

    Hi, I've recently built a home-made server for racking with a local co-location provider. I've currently got Fedora 9 on it, but would really like to have Solaris 10 to get ZFS support.
    I've got a couple of questions before I wreck my Fedora 9 install, if anyone would be so kind.
    I've downloaded the Solaris 10 5/08 ISO and burned that to DVD ready for install. When I install Solaris, will it format the drives with ZFS by default, or is this something that has to be done afterwards? Also, with the x86 install, will it be able to boot from ZFS?
    Ideally I've got two identical drives in there for RAID 1, so for the entire machine to be using ZFS would be the ideal setup, running headless with just SSH to connect.
    Any thoughts?

    If you are running Solaris 10 U5, you can only have mirroring using SVM, and the root file system must be UFS.
    However, you can make a slice in a ZFS partition and mirror that (I wouldn't recommend using SVM to mirror on top of ZFS).
    OpenSolaris 2008.05 does ZFS boot, and Nevada could do ZFS boot since build 72 ( http://sol10frominnerspace.blogspot.com/2007/09/setup-zfs-boot-for-build-72.html ), though bits and pieces were not working or missing (like swap in a zvol). The big roll-in was in build 94 ( http://www.opensolaris.org/os/community/on/flag-days/91-95/ ),
    so if you get the latest, your chances of running into any issues are lower.
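    For reference, an SVM root mirror on U5 would look roughly like this (a sketch; the disk names and metadb slices are examples):

    # state database replicas, one submirror per disk, then the mirror
    metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
    metainit -f d11 1 1 c0t0d0s0
    metainit d12 1 1 c0t1d0s0
    metainit d10 -m d11
    metaroot d10        # rewrites /etc/vfstab and /etc/system for the mirrored root
    # after a reboot, attach the second half:
    metattach d10 d12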
    -r

  • [SOLVED] Installing on ZFS root: "ZFS: cannot find bootfs" on boot.

    I have been experimenting with ZFS filesystems on external HDDs for some time now to get more comfortable with using ZFS in the hopes of one day reinstalling my system on a ZFS root.
    Today, I tried installing a system on an USB external HDD, as my first attempt to install on ZFS (I wanted to try in a safe, disposable environment before I try this on my main system).
    My partition configuration (from gdisk):
    Command (? for help): p
    Disk /dev/sdb: 3907024896 sectors, 1.8 TiB
    Logical sector size: 512 bytes
    Disk identifier (GUID): 2FAE5B61-CCEF-4E1E-A81F-97C8406A07BB
    Partition table holds up to 128 entries
    First usable sector is 34, last usable sector is 3907024862
    Partitions will be aligned on 8-sector boundaries
    Total free space is 0 sectors (0 bytes)
    Number  Start (sector)  End (sector)  Size        Code  Name
         1              34          2047  1007.0 KiB  EF02  BIOS boot partition
         2            2048        264191  128.0 MiB   8300  Linux filesystem
         3          264192    3902828543  1.8 TiB     BF00  Solaris root
         4      3902828544    3907024862  2.0 GiB     8300  Linux filesystem
    Partition #1 is for grub, obviously. Partition #2 is an ext2 partition that I mount on /boot in the new system. Partition #3 is where I make my ZFS pool.
    Partition #4 is an ext4 filesystem containing another minimal Arch system for recovery and setup purposes. GRUB is installed on the other system on partition #4, not in the new ZFS system.
    I let grub-mkconfig generate a config file from the system on partition #4 to boot that. Then, I manually edited the generated grub.cfg file to add this menu entry for my ZFS system:
    menuentry 'ZFS BOOT' --class arch --class gnu-linux --class gnu --class os {
    load_video
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    insmod ext2
    set root='hd0,gpt2'
    echo 'Loading Linux core repo kernel ...'
    linux /vmlinuz-linux zfs=bootfs zfs_force=1 rw quiet
    echo 'Loading initial ramdisk ...'
    initrd /initramfs-linux.img
    }
    My ZFS configuration:
    # zpool list
    NAME         SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
    External2TB  1.81T  6.06G  1.81T   0%  1.00x  ONLINE  -
    # zpool status :(
    pool: External2TB
    state: ONLINE
    scan: none requested
    config:
    NAME                                                       STATE   READ  WRITE  CKSUM
    External2TB                                                ONLINE     0      0      0
      usb-WD_Elements_1048_575836314135334C32383131-0:0-part3  ONLINE     0      0      0
    errors: No known data errors
    # zpool get bootfs
    NAME         PROPERTY  VALUE                       SOURCE
    External2TB  bootfs    External2TB/ArchSystemMain  local
    # zfs list
    NAME                        USED   AVAIL  REFER  MOUNTPOINT
    External2TB                 14.6G  1.77T    30K  none
    External2TB/ArchSystemMain   293M  1.77T   293M  /
    External2TB/PacmanCache     5.77G  1.77T  5.77G  /var/cache/pacman/pkg
    External2TB/Swap            8.50G  1.78T    20K  -
    The reason for the above configuration is that after I get this system to work, I want to install a second system in the same zpool on a different dataset, and have them share a pacman cache.
    GRUB "boots" successfully, in that it loads the kernel and the initramfs as expected from the 2nd GPT partition. The problem is that the kernel does not load the ZFS:
    ERROR: device '' not found. Skipping fsck.
    ZFS: Cannot find bootfs.
    ERROR: Failed to mount the real root device.
    Bailing out, you are on your own. Good luck.
    and I am left in busybox in the initramfs.
    What am I doing wrong?
    Also, here is my /etc/fstab in the new system:
    # External2TB/ArchSystemMain
    #External2TB/ArchSystemMain / zfs rw,relatime,xattr 0 0
    # External2TB/PacmanCache
    #External2TB/PacmanCache /var/cache/pacman/pkg zfs rw,relatime,xattr 0 0
    UUID=8b7639e2-c858-4ff6-b1d4-7db9a393578f /boot ext4 rw,relatime 0 2
    UUID=7a37363e-9adf-4b4c-adfc-621402456c55 none swap defaults 0 0
    I also tried to boot using "zfs=External2TB/ArchSystemMain" in the kernel options, since that was the more logical way to approach my intention of having multiple systems on different datasets. It would allow me to simply create separate grub menu entries for each, with different boot datasets in the kernel parameters. I also tried setting the mount points to "legacy" and uncommenting the zfs entries in my fstab above. That didn't work either and produced the same results, and that was why I decided to try to use "bootfs" (and maybe have a script for switching between the systems by changing the ZFS bootfs and mountpoints before reboot, reusing the same grub menuentry).
    Thanks in advance for any help.
    Last edited by tajjada (2013-12-30 20:03:09)

    Sounds like a zpool.cache issue. I'm guessing your zpool.cache inside your arch-chroot is not up to date. So on boot the ZFS hook cannot find the bootfs. At least, that's what I assume the issue is, because of this line:
    ERROR: device '' not found. Skipping fsck.
    If your zpool.cache was populated, it would spit out something other than an empty string.
    Some assumptions:
    - You're using the ZFS packages provided by demizer (repository or AUR).
    - You're using the Arch Live ISO or some version of it.
    On cursory glance your configuration looks good. But verify anyway. Here are the steps you should follow to make sure your zpool.cache is correct and up to date:
    Outside arch-chroot:
    - Import pools (not using '-R') and verify the mountpoints.
    - Make a copy of the /etc/zfs/zpool.cache before you export any pools. Again, make a copy of the /etc/zfs/zpool.cache before you export any pools. The reason for this is once you export a pool the /etc/zfs/zpool.cache gets updated and removes any reference to the exported pool. This is likely the cause of your issue, as you would have an empty zpool.cache.
    - Import the pool containing your root filesystem using the '-R' flag, and mount /boot within.
    - Make sure to copy your updated zpool.cache to your arch-chroot environment.
    Inside arch-chroot:
    - Make sure your bootloader is configured properly (i.e. read 'mkinitcpio -H zfs').
    - Use the 'udev' hook and not the 'systemd' one in your mkinitcpio.conf. The zfs-utils package does not have a ported hook (as of 0.6.2_3.12.6-1).
    - Update your initramfs.
    Outside arch-chroot:
    - Unmount filesystems.
    - Export pools.
    - Reboot.
    Inside new system:
    - Make sure to update the hostid then rebuild your initramfs. Then you can drop the 'zfs_force=1'.
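    A condensed sketch of the sequence above, using the pool name from this thread (paths are examples):

    # outside the chroot
    zpool import External2TB                         # no -R; verify mountpoints
    cp /etc/zfs/zpool.cache /root/zpool.cache.bak    # copy BEFORE any export
    zpool export External2TB
    zpool import -R /mnt External2TB                 # import the root pool under an altroot
    cp /root/zpool.cache.bak /mnt/etc/zfs/zpool.cache
    # inside the chroot: fix the mkinitcpio.conf hooks, then regenerate
    mkinitcpio -p linux
    # outside again: unmount everything, zpool export External2TB, reboot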
    Good luck. I enjoy root on ZFS myself. However, I wouldn't recommend swap on ZFS. Despite what the ZoL tracker says, I still ran into deadlocks on occasion (as of a month ago). I cannot say definitively what the cause was, but it resolved when I moved swap off ZFS to a dedicated partition.
    Last edited by NVS (2013-12-29 14:56:44)

  • ZFS boot and other goodies

    Hi everyone,
    With the new funky OS-features in Solaris 10/08, does anyone know if such features are going to get support in the OSP/SUNWjet/N1SPS? ZFS boot would be nice, for a change :)
    I haven't seen any updated versions of the OSP plugin for N1SPS for quite a while now, is it still under development?
    Cheers,
    Ino!~

    Hi Ino,
    as far as I know (and I might be mistaken), OSP is not under any active development, and all bare-metal OS provisioning activities are now the domain of xVM Ops Center, which is built on top of Jet, which already supports ZFS root/boot installation.
    If you want to get hacky, you can replace the SUNWjet package on your Jet server by hand (pkgrm/pkgadd), drop in the fresh one, and SPS/OSP should happily work with it (read: I have not tested it myself)...
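    The by-hand swap would look something like this (untested, as noted; the package stream path is only an example):

    # on the Jet server
    pkgrm SUNWjet
    pkgadd -d /var/tmp/SUNWjet-new.pkg SUNWjet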
    If you want to get supported, then go the xVM OC 2.0 way...
    HTH,
    Martin

  • CLUSTERING WITH ZFS BOOT DISK

    hi guys,
    I'm looking to create a new cluster on two standalone servers.
    The two servers boot from a ZFS rpool, and I don't know whether the installation procedure laid out the boot disk with a dedicated slice for the global devices.
    Is it possible to install Sun Cluster with a ZFS rpool boot disk?
    What do I have to do?
    Alessio

    Hi!
    I have a 10-node Sun Cluster.
    All nodes have a mirrored ZFS rpool.
    Is it better to create the mirrored ZFS boot disk after the installation of Sun Cluster or not? I created the ZFS mirror when installing the Solaris 10 OS,
    but I don't see any problem with doing it after the installation of Sun Cluster or Solaris 10 either.
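    For completeness, attaching the second half of an rpool mirror after installation is just the following (a sketch; disk names are examples):

    zpool attach rpool c0t0d0s0 c0t1d0s0
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0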
    P.S. You may use a UFS global devices filesystem with a ZFS root.
    Anatoly S. Zimin

  • ZFS Snapshots/ZFS Clones of Database on sun/solaris

    Our production database is on Sun/Solaris 10 (SunOS odin 5.10 Generic_127127-11 sun4u sparc SUNW,SPARC-Enterprise) with Oracle 10.1.0. It is about 1TB in size. We have also created our MOCK and DEVELOPMENT databases from the production database. To save disk space, we created these databases as ZFS snapshots/ZFS clones at the OS level, and being clones they are using less than 10GB each as of now. Now I want to upgrade the production database from Oracle 10.1 to 11.2, but I don't want to upgrade the MOCK and DEVELOPMENT databases for the time being and want them to continue to run as clones on 10.1. After the upgrade, Prod will run from an 11g Oracle tree on one machine and MOCK/DEVL from a 10g tree on another machine. Will the upgrade of Production from 10.1 to 11.2 INVALIDATE the cloned MOCK and DEVELOPMENT databases? There might be data types/features in 11g which do not exist in 10g.
    Below are the links to the documentation we used to create the snapshots.
    http://docs.huihoo.com/opensolaris/solaris.../html/ch06.html
    http://docs.huihoo.com/opensolaris/solaris...ml/ch06s02.html
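    For context, the OS-level cloning described above boils down to a snapshot plus a clone per database (a sketch; the dataset names are hypothetical):

    zfs snapshot dbpool/prod@mock
    zfs clone dbpool/prod@mock dbpool/mock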

    Hi,
    The links mentioned in the post are not working.
    I would suggest you raise an official S.R. with http://support.oracle.com prior to upgrading your database.
    Also, you can try this out with a 10g DB installation on a TEST machine and create databases as ZFS snapshots/ZFS clones at the OS level for MOCK. Then upgrade the 10g database and test it.
    Refer:
    *429825.1 -- Complete Checklist for Manual Upgrades to 11gR1*
    *837570.1 -- Complete Checklist for Manual Upgrades to 11gR2*
    Regards,
    X A H E E R

  • W2100z Supplemental 2.5 XpReburn produces non-bootable ISO in Solaris Sparc

    Just posting in case helps someone else...
    The XpReburn script from the W2100z Supplemental 2.5 CD is commented as being able to run on Solaris Sparc:
    # This script was developed on Solaris X86. It will run on Solaris Sparc
    # and Red Hat Enterprise Linux 3. It is not guaranteed to work unmodified
    # across all versions of Linux. This script is experimental. Some end user
    # modifications may be necessary for it to run on other linux distributions
    # and system configurations.
    Running XpReburn under Solaris Sparc 10 produced an .ISO image apparently without error, but the resulting image would not boot.
    Found the cause of the non-bootable image was the following section of XpReburn. It was using dd and od to extract 16-bit LBA values. On Solaris Sparc it ended up reading little-endian values as big-endian, hence extracting the boot loader from the wrong part of the XP CD. To get it to work under Solaris Sparc, I added conv=swab twice as shown below:
    # Read the boot loader from the CD into the image directory as a file. This process takes
    # several steps because the boot record has to be consulted to locate the RBA (the
    # starting LBA of the boot catalog). The boot record always resides at LBA 17
    ${ECHO} "Reading WinXP Boot Loader from CD..."
    ${ECHO} "  Looking up boot catalog lba from boot record..."
    ${DD} if=$CD_DEVICE bs=2048 count=1 skip=17 | ${DD} bs=71 skip=1 count=1 | ${DD} conv=swab bs=4 count=1 of=$SCRIPTDIR/foo
    BOOTCATLBA=`od -d $SCRIPTDIR/foo | ${AWK} '{print $2}'`
    ${ECHO} "  Boot Catalog is at LBA $BOOTCATLBA, looking up boot image LBA..."
    ${DD} if=$CD_DEVICE bs=2048 count=1 skip=$BOOTCATLBA | ${DD} bs=40 skip=1 count=1 | ${DD} conv=swab bs=4 count=1 of=$SCRIPTDIR/foo
    BOOTIMGLBA=`od -d $SCRIPTDIR/foo | ${AWK} '{print $2}'`
    ${ECHO} "  Reading Boot Image from LBA $BOOTIMGLBA..."
    ${DD} if=$CD_DEVICE bs=2048 count=1 skip=$BOOTIMGLBA | ${DD} bs=2048 count=1 of=$IMAGEDIR/$BOOTIMAGE
    ${RM} -f $SCRIPTDIR/foo
    With the above mod, XpReburn generated an image which booted the XP Pro installer and allowed XP to be installed.

    MAAL wrote:
    Have you thought about submitting this hint/article to the BigAdmin community?
    http://www.sun.com/bigadmin/common/submittals.html
    Michael,
    Reading the submission guidelines, it appears BigAdmin is for your own work:
    By submitting content to the BigAdmin portal, you are attesting that your submission is your own non-confidential property and does not infringe the intellectual property rights of any party.
    Whereas my "hint" above is really a bug report / enhancement request for a Sun-supplied script.
    I only have a basic support plan, so I don't think I can raise a bug report.
    Also:
    1) The modification listed was a quick hack to get the script to work under Sparc Solaris, but which would have caused the script to generate a non-bootable ISO if run under X86 Solaris.
    2) If you run the script on a ZFS filesystem you get an error that the filesystem isn't local - the script should be updated to consider a ZFS filesystem as local.
    3) If the script is run on a Solaris system with two optical drives, it fails with an error because it does not correctly read the volume name of the XP CD. I can't remember the details; I just hacked the script to work.
    So, for now I will just leave the "hint" in this thread.

  • ODSM installation failing on Solaris Sparc

    Hi Guys,
    we are trying to install ODSM on a Solaris server (Solaris Sparc 11). However, the installer throws the following error while creating the domain -
    [2013-01-10T15:50:52.888-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] onConfigurationStatus: 92185386-b8be-44a0-9a5f-0d0bc9657eb4
    [2013-01-10T15:50:52.888-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] [OOB IDM CONFIG EVENT] onConfigurationStatus -> Description: Starting Domain.
    [2013-01-10T15:50:52.888-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] [OOB IDM CONFIG EVENT] onConfigurationStatus -> State: START
    [2013-01-10T15:50:52.889-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] [OOB IDM CONFIG EVENT] onConfigurationStatus -> Component Name : StartDomain
    [2013-01-10T15:50:52.889-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] [OOB IDM CONFIG EVENT] onConfigurationStatus -> Component Type : WLSDomain
    [2013-01-10T15:50:52.889-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] ________________________________________________________________________________
    [2013-01-10T15:50:52.889-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationStatus] [OOB IDM CONFIG EVENT] onConfigurationStatus ->92185386-b8be-44a0-9a5f-0d0bc9657eb4 StatusMsg:Starting Domain.
    [2013-01-10T15:50:52.890-06:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] reportStartConfigAction: EXIT........
    [2013-01-10T17:04:38.884-06:00] [as] [ERROR] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0]
    [2013-01-10T17:04:38.886-06:00] [as] [ERROR] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [[
    oracle.as.provisioning.util.ConfigException:
    Error while starting the domain.
    Cause:
    Starting the Admin_Server timed out.
    Action:
    See logs for more details.
    at oracle.as.provisioning.util.ConfigException.createConfigException(ConfigException.java:123)
    at oracle.as.provisioning.weblogic.ASDomain.startDomain(ASDomain.java:3150)
    at oracle.as.provisioning.weblogic.ASDomain.startDomain(ASDomain.java:3040)
    at oracle.as.provisioning.engine.WorkFlowExecutor._startAdminServer(WorkFlowExecutor.java:1645)
    at oracle.as.provisioning.engine.WorkFlowExecutor._createDomain(WorkFlowExecutor.java:635)
    at oracle.as.provisioning.engine.WorkFlowExecutor.executeWLSWorkFlow(WorkFlowExecutor.java:391)
    at oracle.as.provisioning.engine.Config.executeConfigWorkflow_WLS(Config.java:866)
    at oracle.as.idm.install.config.BootstrapConfigManager.doExecute(BootstrapConfigManager.java:690)
    at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:371)
    at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
    at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
    at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
    at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:64)
    at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:160)
    at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
    at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
    at java.lang.Thread.run(Thread.java:662)
    [2013-01-10T17:04:38.888-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationError] [OOB IDM CONFIG EVENT] onConfigurationError -> configGUID 92185386-b8be-44a0-9a5f-0d0bc9657eb4
    [2013-01-10T17:04:38.889-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationError] [OOB IDM CONFIG EVENT] ErrorID: 35091
    [2013-01-10T17:04:38.889-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationError] [OOB IDM CONFIG EVENT] Description: [[
    Error while starting the domain.
    Cause:
    An error occurred while starting the domain.
    Action:
    See logs for more details.
    [2013-01-10T17:04:38.891-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationError] ________________________________________________________________________________
    [2013-01-10T17:04:38.892-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationError] [OOB IDM CONFIG EVENT] onConfigurationError -> eventResponse ==oracle.as.provisioning.engine.ConfigEventResponse@50cb14aa
    [2013-01-10T17:04:38.892-06:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [OOB IDM CONFIG EVENT] onConfigurationError -> Configuration Status: -1
    [2013-01-10T17:04:38.892-06:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [OOB IDM CONFIG EVENT] onConfigurationError -> Asking User for RETRY or ABORT
    [2013-01-10T17:04:38.893-06:00] [as] [NOTIFICATION] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [OOB IDM CONFIG EVENT] onConfigurationError -> ActionStep:Create_Domain
    [2013-01-10T17:04:38.895-06:00] [as] [TRACE] [] [oracle.as.provisioning] [tid: 12] [ecid: 0000JkaSLdW7a695Nf4Eye1GvnD8000003,0] [SRC_CLASS: oracle.as.idm.install.config.event.IdMProvisionEventListener] [SRC_METHOD: onConfigurationError] [OOB IDM CONFIG EVENT] onConfigurationError -> wait for User Input ....
    [2013-01-10T17:21:34.980-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] Writing profile to file:/u01/app/oraInventory/logs/installProfile2013-01-10_03-31-48PM.log
    [2013-01-10T17:21:34.981-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] outputFile:/u01/app/oraInventory/logs/installProfile2013-01-10_03-31-48PM.log
    [2013-01-10T17:21:34.981-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] in writeProfile method..
    [2013-01-10T17:21:34.982-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] Adding Element:INTERVIEW_TIME_ID for writing.
    [2013-01-10T17:21:34.983-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] Adding Element:COPY_TIME_ID for writing.
    [2013-01-10T17:21:34.983-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] Adding Element:LINK_TIME_ID for writing.
    [2013-01-10T17:21:34.983-06:00] [as] [NOTIFICATION] [] [oracle.as.install.engine.modules.statistics] [tid: 11] [ecid: 0000JkaOCte7a695Nf4Eye1GvnD8000002,0] Adding Element:CONFIGURATION_TIME_ID for writing.
    We couldn't find any way forward for this error. Can anyone please advise if they have seen this error in their environment and what the way forward is? Thanks

    khaleel2 wrote:
    Hi Gurus,
    Too frequent INS-32025 errors. Tried everything possible, finally found in oraInstall2012-05-06_07-50-25PM.err file......
    ---# Begin Stacktrace #---------------------------
    ID: oracle.install.driver.oui.OUISetupDriver:13
    oracle.cluster.verification.VerificationException: An internal error occurred within cluster verification framework
    <Line 206, Column 12>: XML-20211: (Fatal Error) '--' is not allowed in comments.
    <Line 206, Column 12>: XML-20211: (Fatal Error) '--' is not allowed in comments.
    at oracle.ops.verification.framework.util.VerificationUtil.isPreReqSupported(VerificationUtil.java:4505)
    at oracle.ops.verification.framework.util.VerificationUtil.isPreReqSupported(VerificationUtil.java:4443)
    at oracle.cluster.verification.ClusterVerification.isPreReqSupported(ClusterVerification.java:6382)
    at oracle.install.driver.oui.OUISetupDriver.verifyEnvironment(OUISetupDriver.java:299)
    at oracle.install.driver.oui.OUISetupDriver.load(OUISetupDriver.java:422)
    Please help soon. I'd appreciate it if you give the main points instead of providing document links.

    The errors indicate that a cluster (RAC) is involved.
    At which step in the cluster configuration does this failure occur?

  • Creating Custom Solaris 9 x86 boot CDs

    Does anyone know how to create custom Solaris 9 boot CDs for the x86 platform? I want to try to remove some of the packages that I don't need and see if I can get the boot CDs down to 1 CD (as I did with the Solaris Sparc version). I tried the steps laid out in "Building a bootable JumpStart Installation CD-ROM", but the CDs do not appear to be in the correct format. If I put these new CDs into a SPARC station, they appear to be laid out correctly. After some research I determined that the x86 CDs use El Torito with the .boot-image in the s2 slice, but it still doesn't work.
    Here are the steps I have tried:
    From the Jumpstart whitepaper
    1. Extract slice 2 from the CD: find . -print | cpio -pudm /bicd/s2
    2. Stop volume management: /etc/init.d/volmgt stop
    3. Extract slice 0 from the CD: dd if=/dev/dsk/c1t0d0s0 of=/bicd/s0.image
    4. Modify /bicd/s2
    5. Make an ISO file: mkisofs -R -d -L -l -o /bicd/s9.image -B /bicd/s0.image /bicd/s2
    6. Burn the CD: cdrw -i s9.image
    I modified step 5 for an El Torito image:
    Make an ISO file: mkisofs -R -d -L -l -o /bicd/s9.image -B /bicd/s0.image -b .boot-image /bicd/s2
    The Jumpstart whitepaper says to extract the vtoc separately, then calculate the cylinder boundaries and include it on the new CD. Whenever I include the extracted vtoc, the CD is completely unreadable. If I leave this step out on the Solaris Sparc version, everything works okay.
    Any help would be appreciated. Thanks.


  • Oracle 8.1.6 on Solaris (SPARC) & EJB's

    Hi all,
    I would like to know if anybody is successfully using Oracle 8.1.6 on Solaris (SPARC) and EJB's or any other JServer Technology? If you have any advice for implementing JServer technologies on Solaris, please feel free to elaborate.
    Thanks in advance,
    Rob

    Could you tell us how to do that?
    We all met the ORA-01034 problem!
    Originally posted by Wile E. Coyote:
    "Yes, that's no problem. I've installed 8.1.6 on several machines running Solaris 8 without any problem."

    Hi everyone, For some reason my mac doesn't seem to be mounting my PDW-HD1500 to my desktop. I have tried it on some of my other macs with exactly the same software but no results. I have tried 4 different versions of Sony XDCAM transfer and nothing.