ZFS boot block size and UFS boot block size

Hi,
In the Solaris UFS file system, the boot block resides in sectors 1 to 15; each sector is 512 bytes, so the boot block area totals 7680 bytes:
bash-3.2# pwd
/usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/ufs
bash-3.2# ls -ltr
total 16
-r--r--r--   1 root     sys        7680 Sep 21  2008 bootblk
For the ZFS file system, the boot block size is:
bash-3.2# pwd
/usr/platform/SUNW,SPARC-Enterprise-T5220/lib/fs/zfs
bash-3.2# ls -ltr
total 32
-r--r--r--   1 root     sys        15872 Jan 11  2013 bootblk
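For reference, the sector arithmetic behind the two sizes above (512-byte sectors):
bash-3.2# echo $((7680 / 512))     # UFS bootblk: 15 sectors
15
bash-3.2# echo $((15872 / 512))    # ZFS bootblk: 31 sectors
31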
When we install the ZFS bootblk on the disk using the installboot command, how many sectors does it use to write the bootblk?
Thanks,
SriKanth Muvva

Thanks for your reply.
My query is: the ZFS boot block is about 16K, while sectors 1 to 15 on disk (where the boot block is installed) hold only about 8K.
Does that mean that of the 16K, only 8K gets written to the disk?
If you don't mind, will you please explain this to me in depth?
I am referring to the UFS documentation, page 108, kernel bootstrap and initialization (it is old and covers Solaris 8):
http://books.google.co.in/books?id=r_cecYD4AKkC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false
Please help me find a document on kernel bootstrap and initialization for Solaris 10 with ZFS and the boot archive.
Thanks in advance.
Srikanth
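One way to see for yourself exactly how much installboot writes, and at which offsets, is to trace its system calls (a sketch only; point it at a slice whose boot block you can afford to rewrite, and treat c0t1d0s0 below as a placeholder):
bash-3.2# truss -f -t open,write,pwrite,pwrite64 -o /tmp/installboot.truss \
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
bash-3.2# grep write /tmp/installboot.truss
The sizes and offsets of the writes to the raw device show how many 512-byte sectors are actually covered.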

Similar Messages

  • ZFS boot and other goodies

    Hi everyone,
    With the new funky OS-features in Solaris 10/08, does anyone know if such features are going to get support in the OSP/SUNWjet/N1SPS? ZFS boot would be nice, for a change :)
    I haven't seen any updated versions of the OSP plugin for N1SPS for quite a while now, is it still under development?
    Cheers,
    Ino!~

    Hi Ino,
    as far as I know (and I might be mistaken) OSP is not under any active development, and all bare-metal OS provisioning activities are now the domain of xVM Ops Center, which is built on top of Jet and already supports ZFS root/boot installation.
    If you want to get hacky, you can replace the SUNWjet package on your Jet server by hand (pkgrm/pkgadd), drop in the fresh one, and SPS/OSP should happily work with it (read: I have not tested this myself)...
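    For example (untested; the datastream path is only a placeholder for wherever the newer SUNWjet package lives):
    # pkgrm SUNWjet
    # pkgadd -d /var/tmp/SUNWjet.pkg SUNWjet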
    If you want to get supported, then go the xVM OC 2.0 way...
    HTH,
    Martin

  • ZFS boot device - UFS external storage - Solaris volume manger

    Hi All,
    If a system is running ZFS boot can one use Solaris Volume Manager on external UFS storage devices? If so, where do you store the metadb?

    It should work, though you will need a slice somewhere to store the metadbs.
    Perhaps you can store them on the external storage, unless those devices are frequently removed.
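    For example, with a small dedicated slice (s7 here, purely as an illustration) on one of the disks:
    # metadb -a -f -c 3 c2t0d0s7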
    .7/M.

  • How to back up a ZFS boot disk ?

    Hello all,
    I have just installed Solaris 10 update 6 (10/08) on a Sparc machine (an Ultra 45 workstation) using ZFS for the boot disk.
    Now I want to port a custom UFS boot disk backup script to ZFS.
    Basically, this script copies the boot disk to a secondary disk and makes the secondary disk bootable.
    With UFS, I had to play with the vfstab a bit to allow booting off the secondary disk, but this is not necessary with ZFS.
    How can I perform such a backup of my ZFS boot disk ?
    I tried the following (source disk: c1t0d0, target disk: c1t1d0):
    # zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool 110G 118G 94K /rpool
    rpool/ROOT 4.58G 118G 18K legacy
    rpool/ROOT/root 4.58G 25.4G 4.50G /
    rpool/ROOT/root/var 79.2M 4.92G 79.2M /var
    rpool/dump 16.0G 118G 16.0G -
    rpool/export 73.3G 63.7G 73.3G /export
    rpool/homelocal 21.9M 20.0G 21.9M /homelocal
    rpool/swap 16G 134G 16K -
    # zfs snapshot -r rpool@today
    # zpool create -f -R /mnt rbackup c1t1d0
    # zfs send -R rpool@today | zfs receive -F -d rbackup               <- This one fails (see below)
    # installboot /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    The send/receive command fails after transferring the "/" filesystem (4.5 GB) with the following error message:
    cannot mount '/mnt': directory is not empty
    There may be some kind of unwanted recursion here (trying to back up the backup or something) but I cannot figure it out.
    I tried a workaround: creating the mount point outside the snapshot:
    zfs snapshot -r rpool@today
    mkdir /var/tmp/mnt
    zpool create -f -R /var/tmp/mnt rbackup c1t1d0
    zfs send -R rpool@today | zfs receive -F -d rbackup
    But it still fails, this time with mounting "/var/tmp/mnt".
    So how does one back up the ZFS boot disk to a secondary disk in a live environment ?

    OK, this post requires some clarification.
    First, thanks to robert.cohen and rogerfujii for giving some elements.
    The objective is to make a backup of the boot disk on another disk of the same machine. The backup must be bootable just like the original disk.
    The reason for doing this instead of (or, even better, in addition to) mirroring the boot disk is to be able to quickly recover a stable operating system in case anything gets corrupted on the boot disk. Corruption includes hardware failures, but also any software corruption which could be caused by a virus, an attacker or an operator mistake (rm -rf ...).
    After doing lots of experiments, I found two potential solutions to this need.
    Solution 1 looks like what rogerfujii suggested, albeit with a few practical additions.
    It consists of using ZFS mirroring and breaking the mirror after resilvering:
         - Configure the backup disk as a mirror of the boot disk :
         zpool attach -f rpool <boot disk>s0 <backup disk>s0
         - Copy the boot block to the backup disk:
         installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
         - Monitor the mirror resilvering:
         zpool status rpool
         - Wait until the "action" field disappears (this can be scripted; see the sketch after these steps).
         - Prevent any further resilvering:
         zpool offline rpool <backup disk>s0
         Note: this step is mandatory because detaching the disk without offlining it first results in a non-bootable backup disk.
         - Detach the backup disk from the mirror:
         zpool detach rpool <backup disk>s0
         POST-OPERATIONS:
         After booting on the backup disk, assuming the main boot disk is unreachable:
         - Log in as super-user.
         - Detach the main boot disk from the mirror
         zpool detach rpool <boot disk>s0
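    A minimal sketch of how the "wait for resilvering" step can be scripted (it simply waits for the "action" field to disappear from zpool status; adjust the pattern if your output wording differs):
    while zpool status rpool | grep "action:" > /dev/null
    do
         sleep 60
    done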
    This solution has many advantages, including simplicity and using no dirty tricks. However, it has two major drawbacks:
    - When booting on the backup disk, if the main boot disk is online, it will be resilvered with the old data.
    - There is no easy way to access the backup disk data without rebooting.
    So if you accidentally lose one file on the boot disk, you cannot easily recover it from the backup.
    This is because the pool name is the same on both disks, therefore effectively preventing any pool import.
    Here is now solution 2, which I favor.
    It is more complex and dependent on the disk layout and ZFS implementation changes, but overall offers more flexibility.
    It may need some additions if there are other disks than the boot disk with ZFS pools (I have not tested that case yet).
    ***** HOW TO BACKUP A ZFS BOOT DISK TO ANOTHER DISK *****
    1. Backup disk partitioning
    - Clean up ZFS information from the backup disk:
    The first and last megabyte of the backup disk, which hold ZFS information (plus other data), are erased (one way to compute the block count is sketched after these steps):
    dd if=/dev/zero seek=<backup disk #blocks minus 2048> count=2048 of=/dev/rdsk/<backup disk>s2
    dd if=/dev/zero count=2048 of=/dev/rdsk/<backup disk>s2
    - Label and partition the backup disk in SMI :
    format -e <backup disk>
         label
         0          -> SMI label
         y
         (If more questions asked: press Enter 3 times.)
         partition
         (Create a single partition, number 0, filling the whole disk)
         label
         0
         y
         quit
         quit
    2. Data copy
    - Create the target ZFS pool:
    zpool create -f -o failmode=continue -R /mnt -m legacy rbackup <backup disk>s0
    Note: the chosen pool name is here "rbackup".
    - Create a snapshot of the source pool :
    zfs snapshot -r rpool@today
    - Copy the data :
    zfs send -R rpool@today | zfs receive -F -d rbackup
    - Remove the snapshot, plus its copy on the backup disk :
    zfs destroy -r rbackup@today
    zfs destroy -r rpool@today
    3. Backup pool reconfiguration
    - Edit the following files:
    /mnt/etc/vfstab
    /mnt/etc/power.conf
    /mnt/etc/dumpadm.conf
    In these files, replace the source pool name "rpool" with the backup pool name "rbackup".
    - Remove the ZFS mount list:
    rm /mnt/etc/zfs/zpool.cache
    4. Making the backup disk bootable
    - Note the name of the current boot filesystem:
    df -k /
    E.g.:
    # df -k /
    Filesystem kbytes used avail capacity Mounted on
    rpool/ROOT/root 31457280 4726390 26646966 16% /
    - Configure the boot filesystem on the backup pool:
    zpool set bootfs=rbackup/ROOT/root rbackup
    Note: "rbackup/ROOT/root" is derived from the main boot filesystem name "rpool/ROOT/root".
    - Copy the ZFS boot block to the backup disk:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<backup disk>s0
    5. Cleaning up
    - Detach the target pool:
    zpool export rbackup
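    A footnote on step 1 (my own sketch, not part of the original procedure): assuming the backup disk already carries an SMI label with the usual slice 2 covering the whole disk, the block count needed for the first dd can be read from prtvtoc:
    BLOCKS=`prtvtoc /dev/rdsk/<backup disk>s2 | awk '$1 == "2" {print $5}'`
    dd if=/dev/zero seek=`expr $BLOCKS - 2048` count=2048 of=/dev/rdsk/<backup disk>s2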
    I hope this howto will be useful to those like me who need to change all their habits while migrating to ZFS.
    Regards.
    HL

  • Need Best Practice for creating BE in ZFS boot environment with zones

    Good Afternoon -
    I have a Sparc system with ZFS Root File System and Zones. I need to create a BE for whenever we do patching or upgrades to the O/S. I have run into issues when testing booting off of the newBE where the zones did not show up. I tried to go back to the original BE by running the luactivate on it and received errors. I did a fresh install of the O/S from cdrom on a ZFS filesystem. Next ran the following commands to create the zones, and then create the BE, then activate it and boot off of it. Please tell me if there are any steps left out or if the sequence was incorrect.
    # zfs create -o canmount=noauto rpool/ROOT/S10be/zones
    # zfs mount rpool/ROOT/S10be/zones
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z1
    # zfs create -o canmount=noauto rpool/ROOT/s10be/zones/z2
    # zfs mount rpool/ROOT/s10be/zones/z1
    # zfs mount rpool/ROOT/s10be/zones/z2
    # chmod 700 /zones/z1
    # chmod 700 /zones/z2
    # zonecfg -z z1
    Myzone: No such zone configured
    Use 'create' to begin configuring a new zone
    Zonecfg:myzone> create
    Zonecfg:myzone> set zonepath=/zones/z1
    Zonecfg:myzone> verify
    Zonecfg:myzone> commit
    Zonecfg:myzone> exit
    # zonecfg -z z2
    Myzone: No such zone configured
    Use 'create' to begin configuring a new zone
    Zonecfg:myzone> create
    Zonecfg:myzone> set zonepath=/zones/z2
    Zonecfg:myzone> verify
    Zonecfg:myzone> commit
    Zonecfg:myzone> exit
    # zoneadm -z z1 install
    # zoneadm -z z2 install
    # zlogin -C -e 9. z1
    # zlogin -C -e 9. z2
    Output from zoneadm list -v:
    # zoneadm list -v
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    2 z1 running /zones/z1 native shared
    4 z2 running /zones/z2 native shared
    Now for the BE create:
    # lucreate -n newBE
    # zfs list
    rpool/ROOT/newBE 349K 56.7G 5.48G /.alt.tmp.b-vEe.mnt <--showed this same type mount for all f/s
    # zfs inherit -r mountpoint rpool/ROOT/newBE
    # zfs set mountpoint=/ rpool/ROOT/newBE
    # zfs inherit -r mountpoint rpool/ROOT/newBE/var
    # zfs set mountpoint=/var rpool/ROOT/newBE/var
    # zfs inherit -r mountpoint rpool/ROOT/newBE/zones
    # zfs set mountpoint=/zones rpool/ROOT/newBE/zones
    and did it for the zones too.
    When I ran luactivate newBE it came up with errors, so I changed the mountpoints again, then rebooted.
    Once it came up, I ran luactivate newBE again and it completed successfully. Running lustatus gave:
    # lustatus
    Boot Environment Is Active Active Can Copy
    Name Complete Now On Reboot Delete Status
    s10s_u8wos_08a yes yes no no -
    newBE yes no yes no -
    Ran init 0
    ok boot -L
    picked item two which was newBE
    then boot.
    It came up, but df showed no zones, zfs list showed no zones, and when I cd into /zones there is nothing there.
    Please help!
    thanks julie

    The issue here is that lucreate adds entries to the vfstab in newBE for the zones' ZFS file systems. You need to lumount newBE /mnt, then edit /mnt/etc/vfstab and remove the entries for any ZFS file systems. Then, once you luumount it, you can continue. It is my understanding that this has been reported to Sun, and the fix is in the next release of Solaris.
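    In other words, the workaround looks roughly like this (a sketch of the steps described above):
    # lumount newBE /mnt
    # vi /mnt/etc/vfstab               <- remove the entries for the zones' ZFS file systems
    # luumount newBE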

  • ZFS boot partition?

    Hi All,
    I'm currently being asked to look creating a number of servers with mirroring. Ideally the whole server will use ZFS which would mean it would also need to boot from ZFS.
    Looking at the OpenSolaris website:
    http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
    It looks like it's possible to do this. My problem is that I need to use vanilla Solaris on SPARC, and some bits and pieces from those instructions don't work.
    Is this even possible? If it's not, what's the best I can hope for?
    I'm currently testing on x86 (in VMWare), I'll be deploying on SPARC.
    Thanks for any help.....

    Darren_Dunham wrote:
    Boot procedures on x86 and SPARC are still quite different. So your testing on one doesn't necessarily help on the other. You'll note that the link you show is only for x86.
    A new boot method for SPARC is being developed that will work with ZFS, but it hasn't been released the way the x86 side has. So you can't boot from ZFS on SPARC just yet.
    Darren
    I've also had issues with ZFS root in the x86 world. I won't go into the details as I'm not 100% certain whether it was the Nevada build I was using (76) or if it was user error.
    The current roadmap has ZFS boot for mainline Solaris as coming in either U5 or U6 according to a briefing we had recently. That puts it out anywhere from mid to very late 2008.
    Dev Express will likely have it sooner, but we're back to "Is this production"? If it is, I would wait for it to be solid and stable and in main-line Solaris.
    Cheers,

  • Convert ZFS root file system to UFS with data.

    Hi, I need to convert my ZFS root file systems to UFS and boot from the other disk as a slice (/dev/dsk/c1t0d0s0).
    I am OK with splitting a disk out of the root pool mirror. Any ideas on how this can be achieved?
    Please suggest. Thanks,
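    As a rough outline only (untested; it assumes a single root dataset, an SMI-labelled slice 0 on the detached disk, and enough space on it; adjust before relying on it):
    # zpool detach rpool c1t0d0s0
    # newfs /dev/rdsk/c1t0d0s0
    # mount /dev/dsk/c1t0d0s0 /mnt
    # cd / ; find . -mount -print | cpio -pdmu /mnt
    # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0
    Then edit /mnt/etc/vfstab so that / is mounted from /dev/dsk/c1t0d0s0 before booting from that disk.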

    from the same document that was quoted above in the Limitations section:
    Limitations
    Version 2.0 of the Oracle VM Server for SPARC P2V Tool has the following limitations:
    Only UFS file systems are supported.
    Only plain disks (/dev/dsk/c0t0d0s0), Solaris Volume Manager metadevices (/dev/md/dsk/dNNN), and VxVM encapsulated boot disks are supported on the source system.
    During the P2V process, each guest domain can have only a single virtual switch and virtual disk server. You can add more virtual switches and virtual disk servers to the domain after the P2V conversion.
    Support for VxVM volumes is limited to the following volumes on an encapsulated boot disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris 9 operating systems that use VxVM.
    You cannot convert Solaris 10 systems that are configured with zones.

  • Zfs on solaris 10 and home directory creation

    I am using samba and a root preexec script to automatically create individual ZFS filesystem home directories with quotas on a Solaris 10 server that is a member of a Windows 2003 domain.
    There are about 60,000 users in Active Directory.
    My question is about best practice.
    I am worried about the overhead of having 60,000 ZFS filesystems to mount and run on Solaris 10?

    Testing results as follows -
    Solaris 10 10/09 running as a VM on a VMware ESX server with 7 GB RAM, 1 CPU, 64-bit.
    The ZFS pool was created with three 50 GB FC LUNs from our SAN (hardware RAID5). These are shared to the ESX server and presented to the Solaris VM as Raw Device Mappings (no VMFS).
    I set up a simple script to create 3000 ZFS filesystem home directories
    #!/usr/bin/bash
    for i in {1..3000}
    do
        zfs create tank/users/test$i
        echo "$i created"
    done
    The first 1000 created very quickly.
    By the time I reached about 2000 each filesystem was taking almost 5 seconds to create. Way too long. I gave up after about 2500.
    So I rebooted.
    The 2500 ZFS filesystems mounted in about 4 seconds, so no problem there.
    The problem I have is: why does the ZFS file system creation time drop off and become unworkable? I tried adding to the pool again after the reboot and saw the same slow creation time.
    Am I better off with just one ZFS file system with 60,000 userquotas applied and lots of ordinary user home directories created under that with mkdir?
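    For comparison, the single-filesystem approach with user quotas would look something like this (user name and quota are only placeholders; as far as I recall, userquota properties are available from Solaris 10 10/09 onwards, which is what you are running):
    # zfs set userquota@jsmith=500m tank/users
    # mkdir /tank/users/jsmith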

  • Can /globaldevices be UFS on a zvol in zfs boot environment.

    I have tested using a zvol to "host" a UFS file system for the /globaldevices rather than using the lofi method and all seems to work properly.
    Will Sun support this in a production environment?
    Thanks!
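    For context, the configuration being described would be set up roughly like this (size and names are illustrative only):
    # zfs create -V 512m rpool/globaldevices
    # newfs /dev/zvol/rdsk/rpool/globaldevices
    # mount /dev/zvol/dsk/rpool/globaldevices /globaldevices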

    Currently Sun do not support this in any environment, development, production or otherwise. I think the important word here is 'seems'. Unfortunately, that isn't quite good enough. We'd have to put it through a lot more testing to confirm that there aren't any problems using this configuration.
    I do have a vague recollection that there might be an issue with boot ordering here which might be why we don't support it.
    Anyway, unfortunately, your answer for now is no.
    Tim
    ---

  • Solaris 10 (sparc) + ZFS boot + ZFS zonepath + liveupgrade

    I would like to set up a system like this:
    1. Boot device on 2 internal disks in ZFS mirrored pool (rpool)
    2. Non-global zones on external storage array in individual ZFS pools e.g.
    zone alpha has zonepath=/zones/alpha where /zones/alpha is mountpoint for ZFS dataset alpha-pool/root
    zone bravo has zonepath=/zones/bravo where /zones/bravo is mountpoint for ZFS dataset bravo-pool/root
    3. Ability to use liveupgrade
    I need the zones to be separated on external storage because the intent is to use them in failover data services within Sun Cluster (er, Solaris Cluster).
    With Solaris 10 10/08, it looks like I can do 1 & 2 but not 3 or I can do 1 & 3 but not 2 (using UFS instead of ZFS).
    Am I missing something that would allow me to do 1, 2, and 3? If not is such a configuration planned to be supported? Any guess at when?
    --Frank
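    For illustration, the layout in points 1 and 2 would be set up roughly like this for zone alpha (the LUN name is a placeholder):
    # zpool create alpha-pool c4t0d0
    # zfs create -o mountpoint=/zones/alpha alpha-pool/root
    # chmod 700 /zones/alpha
    # zonecfg -z alpha
    zonecfg:alpha> create
    zonecfg:alpha> set zonepath=/zones/alpha
    zonecfg:alpha> commit
    zonecfg:alpha> exit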

    Nope, that is still work in progress. Quite frankly I wonder if you would even want such a feature, considering the way the file system works. It is possible to recover if your OS doesn't boot anymore by forcing your rescue environment to import the zfs pool, but it's less elegant than merely mounting a specific slice.
    I think zfs is ideal for data and data-like places (/opt, /export/home, /opt/local) but I somewhat question the advantages of moving slices like / or /var into it. It's too early to draw conclusions since the product isn't ready yet, but at this moment I can only think of disadvantages.

  • CLUSTERING WITH ZFS BOOT DISK

    hi guys,
    I'm looking to create a new cluster on two standalone servers.
    The two servers boot from a ZFS rpool, and I don't know whether the installation procedure laid out the boot disks with a dedicated slice for the global devices.
    Is it possible to install Sun Cluster with a ZFS rpool boot disk?
    What do I have to do?
    Alessio

    Hi!
    I have a 10-node Sun Cluster.
    All nodes have a mirrored ZFS rpool.
    Is it better to create the mirrored ZFS boot disk after the installation of Sun Cluster or not? I created the ZFS mirror when installing the Solaris 10 OS,
    but I don't see any problem with doing this after the installation of Sun Cluster or Solaris 10.
    P.S. And you may use UFS global devices with a ZFS root.
    Anatoly S. Zimin

  • ZFS boot record creating problem....

    Hi
    I am using ZFS as the root file system, mirrored to another disk, with Solaris 10 on a SPARC T3-1 machine, and
    when I try to write the boot record on the ZFS root mirror drive (the 2nd drive) with this command:
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    I get this message:
    /usr/platform/uname -i/lib/fs/zfs/bootblk: File not found
    Any good suggestions?

    Revised somewhat...
    I copied and pasted your syntax, applied it to one of my spare disks, and it completed without error.
    The error message is somewhat alarming if the /usr/platform/`uname -i`/lib/fs/zfs/bootblk file
    is missing on your system.
    On another Solaris 10 system, you might determine which pkg provides this file or whether a symlink
    is missing.
    # pkgchk -l -p /usr/platform/`uname -i`/lib/fs/zfs/bootblk
    Thanks,
    Cindy
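    A couple of quick checks along the same lines (illustrations only, nothing system-specific):
    # uname -i
    # ls -l /usr/platform/`uname -i`/lib/fs/zfs/bootblk
    # ls -l /usr/platform/
    On most SPARC systems the /usr/platform/<platform-name> entry is a symlink into a shared directory such as sun4v, so these commands show whether the file and the link are actually in place.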

  • Zfs boot issues

    Hi,
    I've got the following issue: when I try to boot my server I get the following message:
    Executing last command: boot disk11
    Boot device: /pci@8f,2000/scsi@1/disk@1,0  File and args:
    SunOS Release 5.10 Version Generic_147440-26 64-bit
    Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
    NOTICE: Can not read the pool label from '/pci@8f,2000/scsi@1/disk@1,0:a'
    NOTICE: spa_import_rootpool: error 5
    Cannot mount root on /pci@8f,2000/scsi@1/disk@1,0:a fstype zfs
    panic[cpu0]/thread=1810000: vfs_mountroot: cannot mount root
    000000000180d950 genunix:vfs_mountroot+370 (1898400, 18c2000, 0, 1295400, 1299000, 1)
      %l0-3: 00000300038a6008 000000000188f9e8 000000000113a400 00000000018f8c00
      %l4-7: 0000000000000600 0000000000000200 0000000000000800 0000000000000200
    000000000180da10 genunix:main+120 (189c400, 18eb000, 184ee40, 0, 1, 18f5800)
      %l0-3: 0000000000000001 0000000070002000 0000000070002000 0000000000000000
      %l4-7: 0000000000000000 000000000181d400 000000000181d6a8 0000000001297c00
    skipping system dump - no dump device configured
    rebooting...
    Resetting ...
    I'm able to boot the server in failsafe mode and successfully import the ZFS pool. The pool is fine.
    Here is zfs pool configuration:
    # zpool status
      pool: zfsroot
    state: ONLINE
    scan: scrub repaired 0 in 0h35m with 0 errors on Thu Jan 30 13:10:36 2014
    config:
            NAME          STATE     READ WRITE CKSUM
            zfsroot       ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c2t0d0s2  ONLINE       0     0     0
                c2t1d0s2  ONLINE       0     0     0
    errors: No known data errors
    Could somebody help?
    Thanks in advance.
    PS:
    SunOS Release 5.10 Version Generic_147440-01 64-bit
    Server: Fujitsu PRIMEPOWER850 2-slot 12x SPARC64 V
    OBP: 3.21.9-1

    Here is the format output:
    AVAILABLE DISK SELECTIONS:
           0. c0t1d0 <FUJITSU-MAP3735NC-3701 cyl 24345 alt 2 hd 8 sec 737>
              /pci@87,2000/scsi@1/sd@1,0
           1. c2t0d0 <COMPAQ-BD14689BB9-HPB1 cyl 65533 alt 2 hd 5 sec 875>
              /pci@8f,2000/scsi@1/sd@0,0
           2. c2t1d0 <COMPAQ-BD14689BB9-HPB1 cyl 65533 alt 2 hd 5 sec 875>
              /pci@8f,2000/scsi@1/sd@1,0
    show-disks in OBP:
    {0} ok show-disks
    a) /pci@8d,4000/scsi@3,1/disk
    b) /pci@8d,4000/scsi@3/disk
    c) /pci@8f,2000/scsi@1,1/disk
    d) /pci@8f,2000/scsi@1/disk
    e) /pci@85,4000/scsi@3,1/disk
    f) /pci@85,4000/scsi@3/disk
    g) /pci@87,2000/scsi@1,1/disk
    h) /pci@87,2000/scsi@1/disk
    q) NO SELECTION
    Enter Selection, q to quit:
    The boot-device value is "disk1:c", but I switched off autoboot (for debugging purposes) and boot the server manually with the command "boot disk11..."
    Here is devalias output:
    {0} ok devalias
    tape                     /pci@87,2000/scsi@1,1/tape@5,0
    cdrom                    /pci@87,2000/scsi@1,1/disk@4,0:f
    disk11                   /pci@8f,2000/scsi@1/disk@1,0
    disk10                   /pci@8f,2000/scsi@1/disk@0,0
    disk1                    /pci@87,2000/scsi@1/disk@1,0
    disk0                    /pci@87,2000/scsi@1/disk@0,0
    disk                     /pci@87,2000/scsi@1/disk@0,0
    scsi                     /pci@87,2000/scsi@1
    obp-net                  /pci@87,4000/network@1,1
    net                      /pci@87,4000/network@1,1
    ttyb                     /pci@87,4000/ebus@1/FJSV,se@14,400000:b
    ttya                     /pci@87,4000/ebus@1/FJSV,se@14,400000:a
    scf                      /pci@87,4000/ebus@1/FJSV,scfc@14,200000

  • ZFS work with Ldom and zvol REFER size can't reduce

    Can anyone help take a look at the following?
    I ran the ZFS simulation below, with the output shown.
    1) Create image file
    root@solaris:/a# mkfile 200m img.img
    2)Create test pool use img.img file
    root@solaris:/a# zpool create -f test /a/img.img
    root@solaris:/a# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool1 3.79G 11.8G 92.5K /rpool1
    rpool1/ROOT 2.24G 11.8G 31K legacy
    rpool1/ROOT/solaris 2.24G 11.8G 2.23G /
    rpool1/dump 768M 11.8G 768M -
    rpool1/export 89.5K 11.8G 32K /export
    rpool1/export/home 57.5K 11.8G 57.5K /export/home
    rpool1/swap 817M 12.5G 123M -
    test 91K 163M 31K /test
    3) Create zvol on the test pool
    root@solaris:/a# zfs create -V 100m test/zvol
    root@solaris:/a# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    rpool1 3.79G 11.8G 92.5K /rpool1
    rpool1/ROOT 2.24G 11.8G 31K legacy
    rpool1/ROOT/solaris 2.24G 11.8G 2.23G /
    rpool1/dump 768M 11.8G 768M -
    rpool1/export 89.5K 11.8G 32K /export
    rpool1/export/home 57.5K 11.8G 57.5K /export/home
    rpool1/swap 817M 12.5G 123M -
    test 155M 7.89M 31K /test
    test/zvol 103M 163M 16K -
    root@solaris:/a# zpool status
    pool: rpool1
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    rpool1 ONLINE 0 0 0
    c7t0d0s0 ONLINE 0 0 0
    errors: No known data errors
    pool: test
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    test ONLINE 0 0 0
    /a/img.img ONLINE 0 0 0
    errors: No known data errors
    4) Use the zvol to create apool
    root@solaris:/a# zpool create -f apool /dev/zvol/dsk/test/zvol
    root@solaris:/a# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    apool 92.5K 63.4M 31K /apool
    rpool1 3.79G 11.8G 92.5K /rpool1
    rpool1/ROOT 2.24G 11.8G 31K legacy
    rpool1/ROOT/solaris 2.24G 11.8G 2.23G /
    rpool1/dump 768M 11.8G 768M -
    rpool1/export 89.5K 11.8G 32K /export
    rpool1/export/home 57.5K 11.8G 57.5K /export/home
    rpool1/swap 817M 12.5G 123M -
    test 155M 7.88M 31K /test
    test/zvol 155M 162M 1.13M -
    root@solaris:/a# zpool status
    pool: apool
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    apool ONLINE 0 0 0
    /dev/zvol/dsk/test/zvol ONLINE 0 0 0
    errors: No known data errors
    pool: rpool1
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    rpool1 ONLINE 0 0 0
    c7t0d0s0 ONLINE 0 0 0
    errors: No known data errors
    pool: test
    state: ONLINE
    scan: none requested
    config:
    NAME STATE READ WRITE CKSUM
    test ONLINE 0 0 0
    /a/img.img ONLINE 0 0 0
    errors: No known data errors
    5) Make file on the /apool
    root@solaris:/apool# mkfile 10m test
    root@solaris:/apool# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    apool 9.39M 54.1M 9.28M /apool
    rpool1 3.79G 11.8G 92.5K /rpool1
    rpool1/ROOT 2.24G 11.8G 31K legacy
    rpool1/ROOT/solaris 2.24G 11.8G 2.23G /
    rpool1/dump 768M 11.8G 768M -
    rpool1/export 89.5K 11.8G 32K /export
    rpool1/export/home 57.5K 11.8G 57.5K /export/home
    rpool1/swap 817M 12.5G 123M -
    test 155M 7.55M 31K /test
    test/zvol 103M 151M 11.5M -
    6) Remove the 10m test file from apool
    root@solaris:/apool# rm test
    root@solaris:/apool# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    apool 316K 63.1M 31K /apool
    rpool1 3.79G 11.8G 92.5K /rpool1
    rpool1/ROOT 2.24G 11.8G 31K legacy
    rpool1/ROOT/solaris 2.24G 11.8G 2.23G /
    rpool1/dump 768M 11.8G 768M -
    rpool1/export 89.5K 11.8G 32K /export
    rpool1/export/home 57.5K 11.8G 57.5K /export/home
    rpool1/swap 817M 12.5G 123M -
    test 155M 7.53M 31K /test
    test/zvol 103M 150M 12.9M -
    7) Make snapshot on the zvol
    root@solaris:/apool# zfs snapshot test/zvol@snap
    root@solaris:/# zfs send test/zvol@snap > /tmp/zvol.snap
    root@solaris:/tmp# du -sh *
    14M zvol.snap
    ** The snapshot file and the REFER size are the same, but the real space used in apool is 31K. How can I update the zvol's REFER size? **

    Hi,
    Please show the result of zfs list.
    Check REFER for:
    apool
    test/zvol@snap
    test/zvol
    You made a backup of test/zvol@snap, not of apool,
    so its REFER size will be different.
    If you need a backup of apool you should do:
    root@solaris:/apool# zfs snapshot apool@snap
    root@solaris:/# zfs send apool@snap > /tmp/apool.snap
    The used space on the device does not correspond to the used space in the file system, because the device does not know what is really needed;
    it only knows what was written to it at some point.
    Regards.
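    For example, to list the REFER values mentioned above:
    # zfs list -o name,used,refer -r test
    # zfs list -o name,used,refer -t snapshot -r test
    # zfs list -o name,used,refer apool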

  • Monitoring ZFS arc cache value and application interaction

    Hello,
    I have a question regarding the interaction between ZFS ARC cache memory consumption and the memory needs of the applications running on a server. I am running into a situation where I believe the primary enterprise application, which uses an Oracle DB running on the server, is not able to get additional RAM when it requires more memory. I have read the ZFS documentation and the ARC is supposed to release RAM for applications, but I am unsure whether this is happening. How can I test this? I am running a T5120, Solaris 10, 32 GB of RAM.
    The primary application's documentation says to size the server for 16 GB; it usually runs with 6 GB on a day-to-day basis, which is 17% of memory. The ZFS ARC cache is always around 69-70% of memory (23 GB). I need to figure out whether I am not seeing additional memory use by the application because it is poorly written, or because the ARC is not being released when requested. That is my running theory.
    Also, polling the server for data, I found a high rate of page faults per second. The server seems to be running at 350 to 650 page faults/second and can spike to 1.8k during periods of high load.
    Is lowering the maximum value of the ARC the only option? This is suggested when using a DB. I would appreciate any suggestions. This is my first run in with ZFS.
    Page Summary                Pages                MB  %Tot
    Kernel                     319422              2495    8%
    ZFS File Data             2890547             22582   70%
    Anon                       703625              5497   17%
    Exec and libs               28374               221    1%
    Page cache                  35260               275    1%
    Free (cachelist)             9518                74    0%
    Free (freelist)            114959               898    3%
    Total                     4101705             32044
    Physical                  4070897             31803

    From my experience, the ZFS ARC cache doesn't always release memory when there is contention (in theory it should).
    I always reduce the ZFS cache to a bare minimum if the app data is on shared storage like a SAN, in other words if you don't use JBODs for your app data (be it a DB or a file server). If you do use local storage for your app, then you should give the ARC a reasonable amount of memory for better performance.
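    For what it's worth, the current ARC size can be watched with kstat, and the cap is usually set via /etc/system (the 4 GB value below is only an example, and a reboot is required for the change to take effect):
    # kstat -p zfs:0:arcstats:size
    # kstat -p zfs:0:arcstats:c_max
    Then add a line like the following to /etc/system:
    set zfs:zfs_arc_max = 4294967296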
