SunStorage 7110 + zfs snapshot

I have a snapshot of a zfs file system on a server and I am trying to send it to the storage array using this zfs command. Does anyone know why I get the output shown below?
zfs send testpool/[email protected] | ssh [email protected] zfs recv -F pool-0/[email protected]
aksh: invalid command "zfs recv -F pool-0/[email protected]"
aksh: invalid command "IûIûIÊ(Ê(IIIIf:ÿÿ Á!III:I:I:I:IûIIûIIIISISISISIÿIIIIIIIIIIIIIIIûIp}ûI A
@# |øI"
aksh: invalid command "Ûÿÿÿÿÿÿ"
aksh: invalid command "ûI"
This keeps repeating, so I pressed Ctrl+C to stop it.
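For reference, a plain ZFS send/receive between two ordinary Solaris hosts (where the remote side runs a normal shell and the zfs command) looks like the sketch below; the host and dataset names are placeholders, not from my setup. The 7110's aksh is the appliance's own restricted CLI rather than a Unix shell, so it presumably rejects the zfs recv command and then tries to parse the raw send stream as commands, which would explain the garbled "invalid command" lines above.
# placeholder names throughout; both ends are assumed to be regular Solaris hosts
zfs snapshot testpool/testfs@backup
zfs send testpool/testfs@backup | ssh root@backuphost zfs recv -F backuppool/testfs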

Hi Robert,
Thanks for your response. I suspected this might be the case, but it seems like I get conflicting information from the Sun website. It still says recommended and security patches are free everywhere I looked except when I went to download them. We got this machine in October and I obtained and installed a recommended patch cluster as well as a bunch of ZFS patches (it might have even been early November, shortly before the update), using only a valid account with no contract.
It would have been nice to know the policy on patch clusters was changing shortly, since now I want to use the snapshots as a backup for users.
For us at least, an upgrade install would be a royal pain in the butt, since this machine is sitting in a data center in the basement and that would entail me signing in there and sitting on the floor while it installs from DVD media.

Similar Messages

  • Zfs snapshots and booting ...

    Hello,
    In Solaris 9, filesystem snapshots did not survive reboots. Do ZFS snapshots in Solaris 10 persist across reboots?
    Can I boot off of a ZFS partition?
    thanks.

    Does this mean that when new machines appear with ZFS support, or when I can update my PROM, I will be able to boot a ZFS partition?
    ZFS isn't out yet, so your question is premature. We'll get a look at it within a few weeks, hopefully.
    However, a few months ago it was widely reported by the developers that the initial release would not have boot support. Who knows if this has changed or not.
    I don't see any particular reason that PROM or hardware support is required; it should just need a boot loader that understands ZFS. I don't think there's any UFS support in the existing PROMs, just code that understands the VTOC label and how to load and execute a few blocks from a particular slice.
    Darren

  • Zfs snapshot question

    Hi guys,
    I will really appreciate it if someone can answer this for me.
    I do understand that you can use snapshots to back up file systems. But they also use up pool space when their file systems grow.
    So, is it necessary to create zfs snapshots even when you already have a full system back up in place?
    Thank you very much for your kind explanation.
    Arrey

    985798 wrote:
    So, is it necessary to create zfs snapshots even when you already have a full system back up in place?
    Nobody will force you to create or keep snapshots, and if you are happy with taking "classic" backups then there may be no need for additional snapshots. And since snapshots also take up space in your pool, it is usually a good idea to keep them only for a short period and delete them periodically. I like to use snapshots for two purposes:
    - Create a snapshot, write that snapshot to tape, and destroy it afterwards. That way you can guarantee that the tape backup is consistent.
    - Create snapshots at regular intervals and keep them around for a few days, so that if I need to restore a file from just a day ago I don't have to go back to tape but can fetch it from the snapshot. That would be in addition to regular backups.
    cheers
    bjoern
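    A minimal sketch of the two patterns described above; the dataset name and the tape device are placeholders, not from this thread:
    # 1) consistent tape backup: snapshot, stream the snapshot to tape, then discard it
    zfs snapshot -r tank/home@tapebackup
    zfs send -R tank/home@tapebackup > /dev/rmt/0
    zfs destroy -r tank/home@tapebackup
    # 2) short-lived rolling snapshots for quick single-file restores
    zfs snapshot -r tank/home@`date +%Y%m%d`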

  • ZFS snapshot and SCP

    Hi,
    Can anyone share the difference between using ZFS snapshots (and restoring data from them) and an ordinary SCP copy to another host?
    Regards
    Siva

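    To make the difference concrete, a rough sketch with placeholder host and dataset names: scp copies file contents only, while a snapshot plus zfs send/receive replicates the dataset itself and can later send only incremental changes.
    # plain file copy with scp
    scp -rp /tank/data user@otherhost:/backup/
    # dataset replication via a snapshot and zfs send/receive
    zfs snapshot tank/data@xfer
    zfs send tank/data@xfer | ssh otherhost zfs recv backup/data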

  • Receiving zfs snapshots from remote system

    Hi guys,
    if you create zfs snapshots on systemA and then send them to systemB, how do you recover the snapshots from systemB back to systemA?
    Thanks guys .

    The same way: just do a send/receive from B back to A. But that creates a new filesystem on A; it does not put the snapshot back into the original ZFS filesystem.
    cheers
    bjoern
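    A rough sketch of the point above, with placeholder names: receiving on systemA creates a new dataset rather than merging the data back into the original filesystem.
    # on systemB, push the stored snapshot back to systemA as a new filesystem
    zfs send poolB/backups/data@snap1 | ssh systemA zfs recv poolA/restored_data
    # files can then be copied from poolA/restored_data into the original filesystem if needed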

  • Zfs snapshot of "zoned" ZFS dataset

    I have a ZFS filesystem (e.g. tank/zone1/data) which is delegated to a zone as a dataset.
    As root in the global zone, I can "zfs snapshot" and "zfs send" this filesystem (zfs snapshot tank/zone1/data and zfs send tank/zone1/data) without any problem. When I "zfs allow" another user (e.g. amanda) with:
    zfs allow -ldu amanda mount,create,rename,snapshot,destroy,send,receive
    this user amanda CAN do zfs snapshot and zfs send on ZFS filesystems in the global zone, but she cannot run these commands on the delegated dataset (whilst root can), and I get a permission denied. A truss shows me:
    ioctl(3, ZFS_IOC_SNAPSHOT, 0x080469D0)          Err#1 EPERM [sys_mount]
    fstat64(2, 0x08045BF0)                          = 0
    write(2, " c a n n o t   c r e a t".., 53)      = 53
    cannot create snapshot 'tank/zone1/data@test'
    Which setting am I missing to allow user amanda to do this?
    Anyone experiencing the same?
    Regards,
    Marcel
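    Two commands that may help inspect the delegation state; whether the zoned property is what blocks non-root users here is only an assumption, not something established in this thread:
    # list the permissions currently delegated on the dataset
    zfs allow tank/zone1/data
    # check whether the dataset is flagged as delegated to a non-global zone
    zfs get zoned tank/zone1/data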

  • ZFS Snapshots/ZFS Clones of Database on sun/solaris

    Our production database is on Sun/Solaris 10 (SunOS odin 5.10 Generic_127127-11 sun4u sparc SUNW,SPARC-Enterprise) with Oracle 10.1.0 and is about 1 TB in size. We have also created MOCK and DEVELOPMENT databases from the production database. To save disk space, we created these databases as ZFS snapshots/ZFS clones at the OS level, and being clones they currently use less than 10 GB each. Now I want to upgrade the production database from Oracle 10.1 to 11.2, but I don't want to upgrade the MOCK and DEVELOPMENT databases for the time being and want them to continue running as clones on 10.1. After the upgrade, production will run from an 11g Oracle tree on one machine and MOCK/DEVL from a 10g tree on another machine. Will the upgrade of production from 10.1 to 11.2 INVALIDATE the cloned MOCK and DEVELOPMENT databases? There might be data types/features in 11g which do not exist in 10g.
    Below are the links to the documentation we used to create the snapshots.
    http://docs.huihoo.com/opensolaris/solaris.../html/ch06.html
    http://docs.huihoo.com/opensolaris/solaris...ml/ch06s02.html

    Hi,
    The links mentioned in the post are not working.
    I would suggest you raise an official SR at http://support.oracle.com prior to upgrading your database.
    You can also try this out with a 10g installation on a TEST machine, creating the MOCK database as a ZFS snapshot/clone at the OS level, then upgrading the 10g database and testing it.
    Refer:
    429825.1 - Complete Checklist for Manual Upgrades to 11gR1
    837570.1 - Complete Checklist for Manual Upgrades to 11gR2
    Regards,
    X A H E E R
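    For reference, a minimal sketch of the kind of OS-level snapshot/clone the reply refers to; the dataset names are placeholders, and the database would typically need to be quiesced or in hot backup mode when the snapshot is taken for the clone to be usable:
    # snapshot the production datafile filesystem and clone it for a MOCK copy
    zfs snapshot dbpool/oradata@mockbase
    zfs clone dbpool/oradata@mockbase dbpool/oradata_mock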

  • Is there any way to refresh the zfs snapshot other than creating another one

    Hi,
    I want to use zfs send/recv replication. After I create a snapshot and send/recv it to the remote filesystem, is there any way to send and recv from the same snapshot again after making some changes to the original filesystem, other than creating another snapshot every time before each send/recv?
    Thanking you
    Ushas Symon

    No, you'd have to take another snapshot and then send it, but you don't need to save all the snapshots after you are done. If you script it out to take snapshots fairly often during the day, the amount of data would be small. You can also send to a file and then backup that file to tape or whatever, you wouldn't have to save a million snapshots. You can also look into OpenSolaris' Time Slider function.
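    A sketch of that snapshot-per-send workflow with placeholder names: each run takes a new snapshot, sends only the delta against the previous one, and the older snapshot can be dropped once the newer one is the common base on both sides.
    # initial full send
    zfs snapshot tank/data@s1
    zfs send tank/data@s1 | ssh remotehost zfs recv -F backup/data
    # later: take a new snapshot and send only the changes since @s1
    zfs snapshot tank/data@s2
    zfs send -i tank/data@s1 tank/data@s2 | ssh remotehost zfs recv backup/data
    zfs destroy tank/data@s1     # @s2 is now the common base for the next increment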

  • ZFS clones and snapshots... can't delete the snapshot a clone is based on

    root@solaris [/] # zfs list -r
    NAME   USED  AVAIL  REFER  MOUNTPOINT
    home   100K  9,78G    21K  /datahome
    root@solaris [/] # zpool list
    NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    home  9,94G   108K  9,94G     0%  ONLINE  -
    root@solaris [/] # zfs create home/test
    root@solaris [/] # zfs snapshot home/test@today
    root@solaris [/] # zfs clone home/test@today home/myclone
    root@solaris [/] # zfs list -r
    NAME              USED  AVAIL  REFER  MOUNTPOINT
    home              138K  9,78G    23K  /datahome
    home/myclone         0  9,78G    21K  /datahome/myclone
    home/test          21K  9,78G    21K  /datahome/test
    home/test@today      0      -    21K  -
    root@solaris [/] # zfs promote home/myclone
    root@solaris [/] # zfs list -r
    NAME                 USED  AVAIL  REFER  MOUNTPOINT
    home                 140K  9,78G    24K  /datahome
    home/myclone          21K  9,78G    21K  /datahome/myclone
    home/myclone@today      0      -    21K  -
    home/test               0  9,78G    21K  /datahome/test
    root@solaris [/] # zfs destroy home/myclone
    cannot destroy 'home/myclone': filesystem has children
    use '-r' to destroy the following datasets:
    home/myclone@today
    root@solaris [/] # zfs destroy home/myclone@today
    cannot destroy 'home/myclone@today': snapshot has dependent clones
    use '-R' to destroy the following datasets:
    home/test
    root@solaris [/] #
    Why can't I destroy the snapshot? home/myclone is now a filesystem that is not linked to home/test, so I would expect to be able to delete the snapshot from myclone.
    Maybe I misunderstand something about how this works, or I have the wrong expectations: I would expect a clone to be something like a copy that is independent of the filesystem being cloned.

    The idea is that when you create a clone, it is lightweight and based on the snapshot. That's what makes it so fast. You're not copying every block in the filesystem. So the snapshot is what ties together the parent filesystem and the clone.
    For the clone to be independent, you'd have to copy all the blocks. There's no option to do that within the clone process. So as long as both the parent filesystem and the clone filesystem are around, the snapshot has to exist as well.
    Darren
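    If the goal is an independent copy rather than a clone, a sketch using the dataset names from the example above (starting from the original layout, before the promote): send the snapshot into a new filesystem, after which the snapshot is no longer needed.
    # replicate the snapshot into a brand-new, independent filesystem
    zfs send home/test@today | zfs recv home/mycopy
    # home/mycopy has no dependency on home/test, so the snapshot can go
    zfs destroy home/test@today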

  • Cloning a ZFS rooted zone does a copy rather than snapshot and clone?

    Solaris 10 05/08 and 10/08 on SPARC
    When I clone an existing zone that is stored on a ZFS filesystem the system creates a copy rather than take a ZFS snapshot and clone as the documentation suggests;
    Using ZFS to Clone Non-Global Zones and Other Enhancements
    Solaris 10 6/06 Release: When the source zonepath and the target zonepath both reside on ZFS and are in the same pool,
    zoneadm clone now automatically uses the ZFS clone feature to clone a zone. This enhancement means that zoneadm
    clone will take a ZFS snapshot of the source zonepath and set up the target zonepath.
    Currently I have a ZFS root pool for the global zone; the boot environment is s10u6:
    rpool 10.4G 56.5G 94K /rpool
    rpool/ROOT 7.39G 56.5G 18K legacy
    rpool/ROOT/s10u6 7.39G 56.5G 6.57G /
    rpool/ROOT/s10u6/zones 844M 56.5G 27K /zones
    rpool/ROOT/s10u6/zones/moetutil 844M 56.5G 844M /zones/moetutil
    My first zone is called moetutil and is up and running. I create a new zone ready to clone the original one;
    -bash-3.00# zonecfg -z newzone 'create; set autoboot=true; set zonepath=/zones/newzone; add net; set address=192.168.0.10; set physical=ce0; end; verify; commit; exit'
    -bash-3.00# zoneadm list -vc
    ID NAME STATUS PATH BRAND IP
    0 global running / native shared
    - moetutil installed /zones/moetutil native shared
    - newzone configured /zones/newzone native shared
    Now I clone it;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning zonepath /zones/moetutil...
    I'm expecting to see;
    -bash-3.00# zoneadm -z newzone clone moetutil
    Cloning snapshot rpool/ROOT/s10u6/zones/moetutil@SUNWzone1
    Instead of copying, a ZFS clone has been created for this zone.
    What am I missing?
    Thanks
    Mark

    Hi Mark,
    Sorry, I don't have an answer but I'm seeing the exact same behavior - also with S10u6. Please let me know if you get an answer.
    Thanks!
    Dave
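    Not an answer, but the quoted documentation implies one thing worth checking: both the source and target zonepaths must be ZFS datasets in the same pool for the automatic clone to kick in. The commands below are only a sketch for inspecting that, using the names from the post:
    # is the source zonepath its own dataset, and does the target land in the same pool?
    zfs list -r -o name,mountpoint rpool/ROOT/s10u6/zones
    zpool list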

  • How to snapshot zfs volume

    Hi All,
    Does anyone have experience with ZFS volume snapshots and clones? What I am doing here is creating a ZFS volume and presenting it as the root disk to an LDom guest domain. I would like the flexibility to snapshot and clone the volume before any maintenance. We have already tested using image files, and we are comparing the performance and maintenance complexity of image files versus ZFS volumes.
    Any suggestion is welcome.
    Thanks,

    Thanks! Could you please provide more details? A zfs snapshot of a file system is straightforward, but snapshotting volumes (which underneath are just raw blocks) does not seem to be that easy. FYI, the ZFS volume is only partitioned inside the guest domain; from the LDom control domain, prtvtoc shows only one slice. That means that inside the LDom control domain this volume just looks like a raw device.
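    For what it's worth, ZFS volumes (zvols) can be snapshotted and cloned with the same commands as filesystems; the usual caveat is that the guest should be quiesced or shut down first so the block image is consistent. The names below are placeholders, not from this post:
    # create a zvol used as an LDom guest boot disk, then snapshot and clone it
    zfs create -V 20G ldompool/guest1-disk0
    zfs snapshot ldompool/guest1-disk0@premaint
    zfs clone ldompool/guest1-disk0@premaint ldompool/guest1-disk0-test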

  • How to enable snapshot-schedule in Solaris 10

    Hi,
    I have installed Solaris 10 with ZFS as the root file system.
    I would like to enable the automatic snapshot schedule for rpool.
    While checking, I found only a GUI option for this; can anyone help me enable it from the command line?
    Thanks in advance.
    Note: I do not have GUI access to the server.
    Thanks
    Rajan

    Hi,
    What about using cron?
    To snapshot everyday at 19:00 all zfs under zones_pool, we just add these lines in crontab:
    00 19 * * 1 /usr/sbin/zfs destroy -r zones_pool@monday_19:00;/usr/sbin/zfs snapshot -r zones_pool@monday_19:00
    00 19 * * 2 /usr/sbin/zfs destroy -r zones_pool@tuesday_19:00;/usr/sbin/zfs snapshot -r zones_pool@tuesday_19:00
    00 19 * * 3 /usr/sbin/zfs destroy -r zones_pool@wednesday_19:00;/usr/sbin/zfs snapshot -r zones_pool@wednesday_19:00
    00 19 * * 4 /usr/sbin/zfs destroy -r zones_pool@thursday_19:00;/usr/sbin/zfs snapshot -r zones_pool@thursday_19:00
    00 19 * * 5 /usr/sbin/zfs destroy -r zones_pool@friday_19:00;/usr/sbin/zfs snapshot -r zones_pool@friday_19:00
    Marco
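    A variation on the crontab above, sketched as a small helper script so one cron entry covers every weekday; the script path is hypothetical, and this assumes /usr/bin/date supports the %A (weekday name) format:
    #!/bin/sh
    # /usr/local/bin/zfs_daily_snap.sh - rotate a per-weekday recursive snapshot
    DAY=`date +%A`
    /usr/sbin/zfs destroy -r zones_pool@$DAY 2>/dev/null
    /usr/sbin/zfs snapshot -r zones_pool@$DAY
    With a single crontab entry such as:
    0 19 * * 1-5 /usr/local/bin/zfs_daily_snap.sh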

  • ZFS mount point - problem

    Hi,
    We are using ZFS to take snapshots on our Solaris 10 servers. I have a problem when setting ZFS mount points.
    The Solaris server we use is:
    SunOS emch-mp89-sunfire 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-V440
    System = SunOS
    Steps:
    1. I have created the zfs pool named as lmspool
    2. Then created the file system lmsfs
    3. Now I want to set the mountpoint for this ZFS file system (lmsfs) as "/opt/database" directory (which has some .sh files).
    4. Then need to take the snapshot of the lmsfs filesystem.
    5. For the mountpoint set, I tried two ways.
    1. zfs set mountpoint=/opt/database lmspool/lmsfs
    it returns the message "cannot mount '/opt/database/': directory is not empty
    property may be set but unable to remount filesystem".
    If I run the same command a second time, the mount point is set properly, and then I took a snapshot of the ZFS filesystem (lmsfs). After making some modifications in the database directory (deleting some files), I rolled back the snapshot, but the original database directory was not recovered. :-(
    2. In second way, I used the "legacy" option for mounting.
    # zfs set mountpoint=legacy lmspool/lmsfs
    # mount -F zfs lmspool/lmsfs /opt/database
    After running this command, I can't see the files of the database directory inside /opt, so I can't modify anything inside /opt/database.
    Can someone please suggest a solution to this problem, or any other way to take ZFS snapshots of a mount point whose data currently lives on a UFS file system?
    Thanks,
    Muthukrishnan G

    You'll have to explain the problem more clearly. What exactly is the problem? What is "the original database directory"? The directory with the .sh files? Why are you trying to mount onto a directory that already has files in it in the first place?
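    If the intent is simply to move the existing /opt/database content onto the new ZFS filesystem so it can be snapshotted, one common approach is sketched below; it assumes the files in /opt/database can be taken offline briefly while the directories are swapped:
    # mount the new filesystem at a temporary path, copy the data in, then swap mountpoints
    zfs set mountpoint=/opt/database.new lmspool/lmsfs
    cd /opt/database && find . -print | cpio -pdmu /opt/database.new
    mv /opt/database /opt/database.old
    zfs set mountpoint=/opt/database lmspool/lmsfs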

  • Best practise for zone ZFS backups

    We have an M5000 server that will eventually have at least 40 zones configured. The zone root and data files will be on ZFS. There is a requirement to run daily and weekly backups. We also have Netbackup as an enterprise backup system.
    My plan was to run daily zfs snapshots and to weekly "send" the snapshot to disk to be picked up by Netbackup.
    Questions have been raised over the possible disk performance hit of keeping 7 snapshots per filesystem online. Has anyone experienced this or can suggest an alternative backup procedure?
    Thanks.

    You should be aware that using zones on zfs currently prevents liveupgrade from being used.
    So it will make future patching trickier.
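    A sketch of the plan described above (daily snapshots, a weekly send to a file for Netbackup to pick up); the pool name, staging path, and <YYYYMMDD> stamps are placeholders:
    # daily: take a dated recursive snapshot of the zone pool
    zfs snapshot -r zonepool@<YYYYMMDD>
    # weekly: dump the newest snapshot to a staging file for Netbackup
    zfs send -R zonepool@<YYYYMMDD> > /backupstage/zonepool-<YYYYMMDD>.zfs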

  • ZFS Root Pool Restore of a EFI labelled Disk

    Hi Team
    Please let me know the procedure for backup and restore of a EFI labelled root pool using zfs send/receive.
    Note - the operating system is installed on a T5 server with the latest firmware; here the default disk label is EFI instead of SMI, as was the case with earlier firmware versions.
    The operating system is Solaris 11.1.
    I also need to know how to expand a LUN formatted with an EFI-labelled disk without losing its data.
    Expecting a positive response soon
    Regards
    Arun

    Hi ,
    What you need to do is very easy; here is a procedure that I use:
    1)  make a snapshot of the rpool
           zfs snapshot -r rpool@<snapshotname>
    2) send that snapshot somewhere safe (drop the dump and swap snapshots first so they are not included in the stream)
           zfs destroy rpool/dump@<snapshotname>
           zfs destroy rpool/swap@<snapshotname>
           zfs send -R rpool@<snapshotname> | gzip > /net/<ipaddress>/<share>/<snapshotname>.gz
    3) Once the above is done you can do the following.
         Boot from DVD, make sure you have a disk available, and start creating the rpool.
         The rpool can be created with an EFI or SMI label:
         for example, to use an EFI label:  zpool create rpool c0d0
         to use an SMI label:  zpool create rpool c0d0s0 => make sure the disk is labelled and that all cylinders are in s0
    4) create the new root pool on the chosen disk
          zpool create rpool <disk>
    5) import the data again.
         gzcat /mnt/<snapshotname>.gz | zfs receive -Fv rpool
          zfs create -V 4G rpool/dump
         zfs create -V 4G rpool/swap
    6) check the list of boot environments, reinstall the boot loader, and activate the restored boot environment:
            beadm list
            beadm mount <bootenv> /tmp/mnt
            bootadm install-bootloader -P rpool
           devfsadm -Cn -r /tmp/mnt
           touch /tmp/mnt/reconfigure
           beadm umount <bootenv>
           beadm activate <bootenv>
    This is for Solaris 11, but it also works for Solaris 10; only the last part, step 6, is different.
    I need to look this up again, but if I remember correctly, for Solaris 10 you need to set the bootfs property on the rpool.
    If you want, I have a script that makes a backup of the rpool to an NFS share.
    Hope this helps
    Regards
    Filip
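    On the Solaris 10 difference mentioned above: as far as I recall (treat this as an assumption, not part of the verified procedure), the boot filesystem is recorded with the pool's bootfs property and the boot block is reinstalled explicitly, roughly like this on SPARC:
    # Solaris 10: point the pool at the boot environment and reinstall the boot block
    zpool set bootfs=rpool/ROOT/<bootenv> rpool
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<rootdisk>s0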
