ZFS filesystem screwed up?
Hi,
I am running S10U3 (all patches applied).
Today, by mistake, I extracted a big (4.5 GB) tar archive into my home directory (on ZFS). It ran out of space and the tar command terminated with the error "Disk quota exceeded" (shouldn't it have been something like "No space left on device"?).
I think the ZFS filesystem got corrupted. Now I am unable to delete any file with rm, as unlink(2) fails with error 49 (EDQUOT).
I can't log in because there is no space left on /home.
I even tried to delete files as root but I still get EDQUOT.
Files can be read though.
I tried zpool scrub (not sure what that does) and it shows no errors.
zpool status shows no errors either.
I am confident that my drive is not faulty.
Restarting the system didn't help either.
I had put all my important stuff on that zfs FS thinking that it would be safe but I never expected that such a problem would ever occur.
What should I do? Any suggestions?
Is zfs completely reliable or are there any known problems?
Robert,
ZFS uses atomic operations to update filesystem metadata. This is implemented as follows: when a directory is updated, a shadow copy of it and all its parents is created, all the way up to the root "superblock".
Then the existing superblock is swapped for the shadow superblock in one atomic operation.
A file deletion is a metadata operation like any other and requires making shadow copies.
So what I think has happened is that the filesystem is so full that it can't find space to make the shadow copies needed to complete a delete.
Thanks for the explanation; that's probably what happened, but I would consider it a very weak design if a user can cripple the FS just by filling it up.
So one way out is to add an extra device, even a small one, to the pool. That will give you enough space to delete.
Of course, since you can never remove a device from a pool, you'll be stuck with it.
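One more workaround sometimes suggested for a 100%-full pool (not from this thread, so treat it as an assumption to verify): truncate a large file in place before removing it, since freeing its data blocks may need less new metadata than the unlink itself. A minimal sketch, using a scratch file as a stand-in for the big archive:

```shell
# Sketch: truncate in place, then remove. On the real system, point
# bigfile at the large tar extraction that filled the pool.
bigfile=$(mktemp)
head -c 1048576 /dev/zero > "$bigfile"   # simulate a large file (1 MiB here)
: > "$bigfile"                           # truncate in place: frees the data
                                         # blocks without an unlink
size=$(wc -c < "$bigfile")
echo "size after truncate: $size"
rm -f "$bigfile"                         # with space freed, rm may now succeed
```

There is no guarantee this works on every completely full pool, but it is cheaper to try than adding a device.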
I would have certainly liked to do this but this is just my desktop computer and I have only 1 hard disc with no extra space.
You could try asking on the OpenSolaris ZFS forums. They might have a special technique for dealing with this.
The people on the OpenSolaris forums don't like answering Solaris problems, but I will give it a try anyway.
Thankfully, I lost no data because I had backups and because the damaged ZFS was readable, so the only damage done was a loss of confidence in ZFS.
Similar Messages
-
Does SAP support Solaris 10 ZFS filesystem when using DB2 V9.5 FP4?
Hi,
I'm installing NW7 (BI usage). SAPINST has failed in the step "ABAP LOAD" due to the DB2 error message
"Unsupported file system type zfs for Direct I/O". It appears my Unix admin decided to set up these filesystems as ZFS on this new server.
I have several questions requiring your expertise.
1) Does SAP support ZFS filesystems on Solaris 10 (SPARC hardware)? I cannot find any reference in SDN or the Service Marketplace. Any reference will be much appreciated.
2) How can I confirm my sapdata filesystems are ZFS?
3) What actions do you recommend to resolve the SAPINST errors? Should I follow Note 995050 - "DB6: NO FILE SYSTEM CACHING for Tablespaces" to disable Direct I/O for all DB2 tablespaces? I have seen Markus Doehr's forum thread "Re: DB2 on Solaris x64 - ZFS as filesystem possible?", but it does not state exactly how he overcame the error.
regards
Benny
Hi Frank,
Thanks for your input.
I have also found the command "zfs list" that would display any ZFS filesystems.
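For question 2), on Solaris 10 `df -n` also prints the filesystem type of each mount point. A tiny sketch of pulling the type out of that output, fed here from a captured sample line (the path is a guess at a typical sapdata layout, not your actual one):

```shell
# Sample Solaris `df -n` output line, captured as a string; on a live
# system you would use:  df -n /oracle/SID/sapdata1
sample='/oracle/SID/sapdata1 : zfs'
fstype=$(printf '%s\n' "$sample" | awk '{print $NF}')   # last field is the type
echo "$fstype"
```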
We have also gone back to UFS as the ZFS deployment schedule does not meet this particular SAP BW implementation timeline.
Has anyone come across an SAP statement that NW7 can be deployed with ZFS for a DB2 database on the Solaris SPARC platform? If not, I'll open an OSS message.
regards
Benny -
How to count number of files on zfs filesystem
Hi all,
Is there a way to count the number of files on a ZFS filesystem, similar to how "df -o i /ufs_filesystem" works? I am looking for a way to do this without using find, as I suspect there are millions of files on a particular ZFS filesystem, which may be what is causing its occasional slow performance.
Thanks.
So I have finished 90% of my testing and, in the absence of a known built-in ZFS method, I have accepted _df -t /filesystem | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'_ as acceptable. My main concern was with the reduction of available files in the df -t output as more files were added. I used a one-liner for loop to create empty files, to conserve the space used up, so I would have a better chance of seeing what happens if the available file count reached 0.
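The accepted one-liner can be wrapped in a small function. Here it runs against a captured two-line `df -t` sample (figures modeled on the transcript that follows) so the subtraction is visible; on a live system you would pipe `df -t /zfstest` into it instead:

```shell
# used_files: total file slots (line 2) minus free file slots (line 1)
# from Solaris `df -t` output; the count is the second-to-last field.
used_files() {
  awk '{ if (NR==1) F=$(NF-1); if (NR==2) print $(NF-1) - F }'
}
used=$(used_files <<'EOF'
/zfstest (pool1 ): 7237508 blocks 7237508 files
total: 10257408 blocks 12372310 files
EOF
)
echo "$used"
```

Keep in mind ZFS has no fixed inode table, so the "total files" figure is itself a moving target, as the experiments below show.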
root@fj-sol11:/zfstest/dir4# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
_5133680_
root@fj-sol11:/zfstest/dir4# df -t /zfstest
/zfstest (pool1 ): 7237508 blocks *7237508* files
total: 10257408 blocks 12372310 files
root@fj-sol11:/zfstest/dir4#
root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
_6742772_
root@fj-sol11:/zfstest/dir7# df -t /zfstest
/zfstest (pool1 ): 6619533 blocks *6619533* files
total: 10257408 blocks 13362305 files
root@fj-sol11:/zfstest/dir7# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
_7271716_
root@fj-sol11:/zfstest/dir7# df -t /zfstest
/zfstest (pool1 ): 6445809 blocks *6445809* files
total: 10257408 blocks 13717010 files
root@fj-sol11:/zfstest# df -t /zfstest | awk ' { if ( NR==1) F=$(NF-1) ; if ( NR==2) print $(NF-1) - F }'
_12359601_
root@fj-sol11:/zfstest# df -t /zfstest
/zfstest (pool1 ): 4494264 blocks *4494264* files
total: 10257408 blocks 16853865 files
I noticed the total files kept increasing, and creating another 4 million files (4494264) after the above example was taking more time than I had, after already creating 12 million plus ( _12359601_ ), which took 2 days on and off (mostly on) on a slow machine. If anyone has any idea for creating them quicker than "touch filename$loop" in a for loop, let me know :)
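On creating files quicker: the per-file cost of a `touch` in a for loop is mostly process startup, so batching thousands of names into each touch invocation with xargs is usually far faster. A sketch using a scratch directory (`seq` is assumed available; on stock Solaris 10 you might generate the names with perl instead):

```shell
# Create 10000 empty files with only a handful of touch invocations:
# xargs packs as many file names as fit into each command line.
dir=$(mktemp -d)
seq 1 10000 | sed "s|^|$dir/file|" | xargs touch
count=$(ls "$dir" | wc -l)
echo "created $count files"
rm -rf "$dir"
```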
In the end I decided to use a really small (100 MB) filesystem on a virtual machine to test what happens as the free file count approaches 0. It turns out it never does ... it somehow increases:
bash-3.00# df -t /smalltest/
/smalltest (smalltest ): 31451 blocks *31451* files
total: 112640 blocks 278542 files
bash-3.00# pwd
/smalltest
bash-3.00# mkdir dir4
bash-3.00# cd dir4
bash-3.00# for arg in {1..47084}; do touch file$arg; done <--- I created 47084 files here, more than the free count listed above ( *31451* )
bash-3.00# zfs list smalltest
NAME USED AVAIL REFER MOUNTPOINT
smalltest 47.3M 7.67M 46.9M /smalltest
bash-3.00# df -t /smalltest/
/smalltest (smalltest ): 15710 blocks *15710* files
total: 112640 blocks 309887 files
bash-3.00#
The other 10% of my testing will be to see what happens when I try a find on 12 million plus files and pipe it to wc -l :) -
Hi
in zone:
bash-3.00# reboot
[NOTICE: Zone rebooting]
SunOS Release 5.10 Version Generic_144488-17 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Hostname: dbspfox1
Reading ZFS config: done.
Mounting ZFS filesystems: (1/10) cannot mount '/zonedev/dbspfox1/biblio/P622/dev': directory is not empty (10/10)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Nov 4 10:07:33 svc.startd[12427]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
Nov 4 10:07:33 svc.startd[12427]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
The directory is indeed not empty, but the other mount points are not empty either.
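For what it's worth, this error usually means stray files were written into the mount point directory while the dataset was unmounted. A sketch of the check, simulated on a plain scratch directory (on the real system you would run `zfs unmount zonedev/dbspfox1/biblio/P622/dev`, inspect the underlying directory with `ls -lA`, and either move the stray files aside or overlay-mount with `zfs mount -O`):

```shell
# Simulate the precondition: `zfs mount` refuses a non-empty mount point
# unless given -O (overlay). A scratch dir stands in for the mount point.
mp=$(mktemp -d)
touch "$mp/stray-file"            # a file left behind while the dataset was unmounted
if [ -n "$(ls -A "$mp")" ]; then
  status="not empty: zfs mount would fail"
else
  status="empty: zfs mount would succeed"
fi
echo "$status"
rm -rf "$mp"
```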
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zonedev 236G 57.6G 23K /zonedev
zonedev/dbspfox1 236G 57.6G 1.06G /zonedev/dbspfox1
zonedev/dbspfox1/biblio 235G 57.6G 23K /zonedev/dbspfox1/biblio
zonedev/dbspfox1/biblio/P622 235G 57.6G 10.4G /zonedev/dbspfox1/biblio/P622
zonedev/dbspfox1/biblio/P622/31mars 81.3G 57.6G 47.3G /zonedev/dbspfox1/biblio/P622/31mars
zonedev/dbspfox1/biblio/P622/31mars/data 34.0G 57.6G 34.0G /zonedev/dbspfox1/biblio/P622/31mars/data
zonedev/dbspfox1/biblio/P622/dev 89.7G 57.6G 50.1G /zonedev/dbspfox1/biblio/P622/dev
zonedev/dbspfox1/biblio/P622/dev/data 39.6G 57.6G 39.6G /zonedev/dbspfox1/biblio/P622/dev/data
zonedev/dbspfox1/biblio/P622/preprod 53.3G 57.6G 12.9G /zonedev/dbspfox1/biblio/P622/preprod
zonedev/dbspfox1/biblio/P622/preprod/data 40.4G 57.6G 40.4G /zonedev/dbspfox1/biblio/P622/preprod/data
bash-3.00# svcs -xv
svc:/system/filesystem/local:default (local file system mounts)
State: maintenance since Fri Nov 04 10:07:33 2011
Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
See: http://sun.com/msg/SMF-8000-KS
See: /var/svc/log/system-filesystem-local:default.log
Impact: 33 dependent services are not running:
svc:/system/webconsole:console
svc:/system/filesystem/autofs:default
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default
svc:/application/autoreg:default
svc:/application/stosreg:default
svc:/application/graphical-login/cde-login:default
svc:/application/cde-printinfo:default
svc:/network/smtp:sendmail
svc:/application/management/seaport:default
svc:/application/management/snmpdx:default
svc:/application/management/dmi:default
svc:/application/management/sma:default
svc:/network/sendmail-client:default
svc:/network/ssh:default
svc:/system/sysidtool:net
svc:/network/rpc/bind:default
svc:/network/nfs/nlockmgr:default
svc:/network/nfs/client:default
svc:/network/nfs/status:default
svc:/network/nfs/cbd:default
svc:/network/nfs/mapid:default
svc:/network/inetd:default
svc:/system/sysidtool:system
svc:/system/postrun:default
svc:/system/filesystem/volfs:default
svc:/system/cron:default
svc:/application/font/fc-cache:default
svc:/system/boot-archive-update:default
svc:/network/shares/group:default
svc:/network/shares/group:zfs
svc:/system/sac:default
svc:/network/rpc/gss:default (Generic Security Service)
State: uninitialized since Fri Nov 04 10:07:31 2011
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M gssd
Impact: 17 dependent services are not running:
svc:/network/nfs/client:default
svc:/system/filesystem/autofs:default
svc:/system/webconsole:console
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default
svc:/application/autoreg:default
svc:/application/stosreg:default
svc:/application/graphical-login/cde-login:default
svc:/application/cde-printinfo:default
svc:/network/smtp:sendmail
svc:/application/management/seaport:default
svc:/application/management/snmpdx:default
svc:/application/management/dmi:default
svc:/application/management/sma:default
svc:/network/sendmail-client:default
svc:/network/ssh:default
svc:/application/print/server:default (LP print server)
State: disabled since Fri Nov 04 10:07:31 2011
Reason: Disabled by an administrator.
See: http://sun.com/msg/SMF-8000-05
See: man -M /usr/share/man -s 1M lpsched
Impact: 1 dependent service is not running:
svc:/application/print/ipp-listener:default
svc:/network/rpc/smserver:default (removable media management)
State: uninitialized since Fri Nov 04 10:07:32 2011
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M rpc.smserverd
Impact: 1 dependent service is not running:
svc:/system/filesystem/volfs:default
svc:/network/rpc/rstat:default (kernel statistics server)
State: uninitialized since Fri Nov 04 10:07:31 2011
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M rpc.rstatd
See: man -M /usr/share/man -s 1M rstatd
Impact: 1 dependent service is not running:
svc:/application/management/sma:default
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/ 59G 1.1G 58G 2% /
/dev 59G 1.1G 58G 2% /dev
/lib 261G 7.5G 253G 3% /lib
/platform 261G 7.5G 253G 3% /platform
/sbin 261G 7.5G 253G 3% /sbin
/usr 261G 7.5G 253G 3% /usr
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
mnttab 0K 0K 0K 0% /etc/mnttab
objfs 0K 0K 0K 0% /system/object
swap 2.1G 248K 2.1G 1% /etc/svc/volatile
fd 0K 0K 0K 0% /dev/fd
swap 2.1G 0K 2.1G 0% /tmp
swap 2.1G 16K 2.1G 1% /var/run
zonedev/dbspfox1/biblio
293G 23K 58G 1% /zonedev/dbspfox1/biblio
zonedev/dbspfox1/biblio/P622
293G 10G 58G 16% /zonedev/dbspfox1/biblio/P622
zonedev/dbspfox1/biblio/P622/31mars
293G 47G 58G 46% /zonedev/dbspfox1/biblio/P622/31mars
zonedev/dbspfox1/biblio/P622/31mars/data
293G 34G 58G 38% /zonedev/dbspfox1/biblio/P622/31mars/data
zonedev/dbspfox1/biblio/P622/dev/data
293G 40G 58G 41% /zonedev/dbspfox1/biblio/P622/dev/data
zonedev/dbspfox1/biblio/P622/preprod
293G 13G 58G 19% /zonedev/dbspfox1/biblio/P622/preprod
zonedev/dbspfox1/biblio/P622/preprod/data
293G 40G 58G 42% /zonedev/dbspfox1/biblio/P622/preprod/data
What did I miss? What happened with the zfs dev directory?
thank you
Walter
Hi
I finally found the problem.
ZFS naming restrictions:
names must begin with a letter
Walter -
What is the best way to backup ZFS filesystem on solaris 10?
Normally on Linux environment, I'd use mondorescue to create image (full & incremental) so it can be easily restored (full or file/folders) to a new similar server environment for restore purposes in case of disaster.
I'd like to know the best way to back up a ZFS filesystem to SAN storage and restore it from there with minimal downtime, preferably with tools already available on Solaris 10.
Thanks.
The plan is to back up the whole OS and the configuration files.
2 servers to be backed up
server A zpool:
- rootpool
- usr
- usrtmp
server B zpool:
- rootpool
- usr
- usrtmp
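For what it's worth, the tools already on Solaris 10 for this are `zfs snapshot` and `zfs send`/`zfs receive`: a snapshot stream can be written to a file on any mounted storage and replayed later. A sketch that only assembles the command strings so the shape is visible (dataset, snapshot, and target names are placeholders):

```shell
# Emit the commands a simple snapshot-to-file backup and restore would use.
# On a real system you would run the emitted lines rather than echo them.
backup_cmds() {
  ds=$1; snap=$2; target=$3
  echo "zfs snapshot ${ds}@${snap}"
  echo "zfs send ${ds}@${snap} > ${target}"
  echo "zfs receive ${ds}_restored < ${target}"
}
backup_cmds rootpool/usr backup1 /mnt/san/usr-backup.zfs
```

An incremental stream since an earlier snapshot would use `zfs send -i oldsnap dataset@newsnap` in place of the full send.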
If we were to cut hardware costs, is it possible to back up to a Samba share?
any suggestions? -
DskPercent not returned for ZFS filesystems?
Hello.
I'm trying to monitor the space usage of some ZFS filesystems on a Solaris 10 10/08 (137137-09) Sparc system with SNMP. I'm using the Systems Management Agent (SMA) agent.
To monitor the stuff, I added the following to /etc/sma/snmp/snmpd.conf and restarted svc:/application/management/sma:default:
# Bug in SMA?
# Reporting - NET-SNMP, Solaris 10 - has a bug parsing config file for disk space.
# -> http://forums.sun.com/thread.jspa?threadID=5366466
disk /proc 42% # dummy value; will be wrongly ignored...
disk / 5%
disk /tmp 10%
disk /apps 4%
disk /data 3%
Now I'm checking what I get via SNMP:
--($ ~)-- snmpwalk -v2c -c public 10.0.1.26 dsk
UCD-SNMP-MIB::dskIndex.1 = INTEGER: 1
UCD-SNMP-MIB::dskIndex.2 = INTEGER: 2
UCD-SNMP-MIB::dskIndex.3 = INTEGER: 3
UCD-SNMP-MIB::dskIndex.4 = INTEGER: 4
UCD-SNMP-MIB::dskPath.1 = STRING: /
UCD-SNMP-MIB::dskPath.2 = STRING: /tmp
UCD-SNMP-MIB::dskPath.3 = STRING: /apps
UCD-SNMP-MIB::dskPath.4 = STRING: /data
UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/md/dsk/d200
UCD-SNMP-MIB::dskDevice.2 = STRING: swap
UCD-SNMP-MIB::dskDevice.3 = STRING: apps
UCD-SNMP-MIB::dskDevice.4 = STRING: data
UCD-SNMP-MIB::dskMinimum.1 = INTEGER: -1
UCD-SNMP-MIB::dskMinimum.2 = INTEGER: -1
UCD-SNMP-MIB::dskMinimum.3 = INTEGER: -1
UCD-SNMP-MIB::dskMinimum.4 = INTEGER: -1
UCD-SNMP-MIB::dskMinPercent.1 = INTEGER: 5
UCD-SNMP-MIB::dskMinPercent.2 = INTEGER: 10
UCD-SNMP-MIB::dskMinPercent.3 = INTEGER: 4
UCD-SNMP-MIB::dskMinPercent.4 = INTEGER: 3
UCD-SNMP-MIB::dskTotal.1 = INTEGER: 25821143
UCD-SNMP-MIB::dskTotal.2 = INTEGER: 7150560
UCD-SNMP-MIB::dskTotal.3 = INTEGER: 0
UCD-SNMP-MIB::dskTotal.4 = INTEGER: 0
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 13584648
UCD-SNMP-MIB::dskAvail.2 = INTEGER: 6471520
UCD-SNMP-MIB::dskAvail.3 = INTEGER: 0
UCD-SNMP-MIB::dskAvail.4 = INTEGER: 0
UCD-SNMP-MIB::dskUsed.1 = INTEGER: 11978284
UCD-SNMP-MIB::dskUsed.2 = INTEGER: 679040
UCD-SNMP-MIB::dskUsed.3 = INTEGER: 0
UCD-SNMP-MIB::dskUsed.4 = INTEGER: 0
UCD-SNMP-MIB::dskPercent.1 = INTEGER: 47
UCD-SNMP-MIB::dskPercent.2 = INTEGER: 9
UCD-SNMP-MIB::dskPercent.3 = INTEGER: 0
UCD-SNMP-MIB::dskPercent.4 = INTEGER: 0
UCD-SNMP-MIB::dskPercentNode.1 = INTEGER: 9
UCD-SNMP-MIB::dskPercentNode.2 = INTEGER: 0
UCD-SNMP-MIB::dskPercentNode.3 = INTEGER: 0
UCD-SNMP-MIB::dskPercentNode.4 = INTEGER: 0
UCD-SNMP-MIB::dskErrorFlag.1 = INTEGER: noError(0)
UCD-SNMP-MIB::dskErrorFlag.2 = INTEGER: noError(0)
UCD-SNMP-MIB::dskErrorFlag.3 = INTEGER: noError(0)
UCD-SNMP-MIB::dskErrorFlag.4 = INTEGER: noError(0)
UCD-SNMP-MIB::dskErrorMsg.1 = STRING:
UCD-SNMP-MIB::dskErrorMsg.2 = STRING:
UCD-SNMP-MIB::dskErrorMsg.3 = STRING:
UCD-SNMP-MIB::dskErrorMsg.4 = STRING:
As expected, dskPercent.1 and dskPercent.2 (i.e. */* and */tmp*) return good values. But why does Solaris/SNMP return 0 for dskPercent.3 (*/apps*) and dskPercent.4 (*/data*)? Those directories are on two ZFS filesystems on separate zpools:
--($ ~)-- zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
apps 39.2G 20.4G 18.9G 51% ONLINE -
data 136G 121G 15.2G 88% ONLINE -
--($ ~)-- zfs list apps data
NAME USED AVAIL REFER MOUNTPOINT
apps 20.4G 18.3G 20K /apps
data 121G 13.1G 101K /data
Or is it supposed to be that way? I'm pretty much confused, because I found a blog posting by a guy called asyd at http://sysadmin.asyd.net/home/en/blog/asyd/zfs+snmp. Copying from there:
snmpwalk -v2c -c xxxx katsuragi.global.asyd.net UCD-SNMP-MIB::dskTable
UCD-SNMP-MIB::dskPath.1 = STRING: /
UCD-SNMP-MIB::dskPath.2 = STRING: /home
UCD-SNMP-MIB::dskPath.3 = STRING: /data/pkgsrc
UCD-SNMP-MIB::dskDevice.1 = STRING: /dev/dsk/c1d0s0
UCD-SNMP-MIB::dskDevice.2 = STRING: data/home
UCD-SNMP-MIB::dskDevice.3 = STRING: data/pkgsrc
UCD-SNMP-MIB::dskTotal.1 = INTEGER: 1017935
UCD-SNMP-MIB::dskTotal.2 = INTEGER: 0
UCD-SNMP-MIB::dskTotal.3 = INTEGER: 0
UCD-SNMP-MIB::dskAvail.1 = INTEGER: 755538
UCD-SNMP-MIB::dskAvail.2 = INTEGER: 0
UCD-SNMP-MIB::dskAvail.3 = INTEGER: 0
UCD-SNMP-MIB::dskPercent.1 = INTEGER: 21
UCD-SNMP-MIB::dskPercent.2 = INTEGER: 18
UCD-SNMP-MIB::dskPercent.3 = INTEGER: 5
What I find confusing are his dskPercent.2 and dskPercent.3 outputs: he gets dskPercent back for what seem to be directories on ZFS filesystems.
Because of that, I'm wondering how it is supposed to be: should Solaris return dskPercent values for ZFS?
Thanks a lot,
Alexander
I don't have the ability to test my theory, but I suspect that you are using the default mounts created for the zpools you've created (apps & data), as opposed to specific ZFS filesystems, which is what the asyd blog shows.
Remember, the elements reported on in the asyd blog ARE zfs file systems; they are not just directories. They are indeed mountpoints, and it is reporting the utilization of those zfs file systems in the pool ("data") on which they are constructed. In the case of /home, the administrator has specifically set the mountpoint of the ZFS file system to be /home instead of the default /data/home.
To test my theory, instead of using your zpool default mount point, try creating a zfs file system under each of your pools and using that as the entry point for your application to write data into the zpools. I suspect you will be rewarded with the desired result: reporting of "disk" (really, pool) percent usage. -
Mount options for ZFS filesystem on Solaris 10
Do you have any recommendations for mount options for SAP on Oracle with data on a ZFS filesystem?
We also need the recommended block sizes.
We assume that the filesystem with datafiles should have an 8 kB block size and offline redo logs the default (128 kB).
But what about online redo logs?
Best regards
Andy
SUN Czech installed new production hardware for a Czech customer, with ZFS filesystems holding the data, redo, and archive log files.
Now we have a performance problem, and currently there is no SAP recommendation for the ZFS filesystem.
The new hardware, which benchmarks at about twice the power, has worse response times than the old hardware.
a) There is a bug in Solaris 10: ZFS buffers, once allocated, are not released (in general we do not want buffering anyway, to prevent double buffering).
b) The ZFS buffers take about 20 GB (of 32 GB total) of memory on the DB server, so we are not able to define a large shared pool and DB cache. (It may be possible to set a special parameter in /etc/system to reduce the maximum size of the ZFS buffers to e.g. 4 GB.)
c) We are looking for a proven mount option for ZFS to enable asynchronous/concurrent I/O for database filesystems.
d) There is no proven, clear answer on support for ZFS/Solaris/Oracle/SAP.
SAP says it is an Oracle problem; Oracle has not certified filesystems since Jan 2007 and says to ask your OS provider; and Sun looks happy. Meanwhile performance goes down, which is not so funny for a system with a 1 TB database growing by over 30 GB per month.
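On point (b): as far as I know, the /etc/system tunable alluded to is zfs_arc_max, which caps the ZFS ARC (the "ZFS buffers"). Capping it at 4 GB would look like the fragment below; the value is in bytes, a reboot is required, and the exact behavior should be verified against your Solaris 10 release:

```
* /etc/system fragment (sketch): cap the ZFS ARC at 4 GB
set zfs:zfs_arc_max = 4294967296
```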
Andy -
Confused about ZFS filesystems created with Solaris 11 Zone
Hello.
Installing a blank Zone in Solaris *10* with "zonepath=/export/zones/TESTvm01" just creates one zfs filesystem:
+"zfs list+
+...+
+rzpool/export/zones/TESTvm01 4.62G 31.3G 4.62G /export/zones/TESTvm01"+
Doing the same steps with Solaris *11* creates more filesystems:
+"zfs list+
+...+
+rpool/export/zones/TESTvm05 335M 156G 32K /export/zones/TESTvm05+
+rpool/export/zones/TESTvm05/rpool 335M 156G 31K /rpool+
+rpool/export/zones/TESTvm05/rpool/ROOT 335M 156G 31K legacy+
+rpool/export/zones/TESTvm05/rpool/ROOT/solaris 335M 156G 310M /export/zones/TESTvm05/root+
+rpool/export/zones/TESTvm05/rpool/ROOT/solaris/var 24.4M 156G 23.5M /export/zones/TESTvm05/root/var+
+rpool/export/zones/TESTvm05/rpool/export 62K 156G 31K /export+
+rpool/export/zones/TESTvm05/rpool/export/home 31K 156G 31K /export/home"+
I don't understand why Solaris 11 does that. Just one FS (as in Solaris 10) would be better for my setup; I want to configure all created volumes myself.
Is it possible to deactivate this automatic "feature"?
There are several reasons that it works like this, all guided by the simple idea "everything in a zone should work exactly like it does in the global zone, unless that is impractical." By having this layout we get:
* The same zfs administrative practices within a zone that are found in the global zone. This allows, for example, compression, encryption, etc. of parts of the zone.
* beadm(1M) and pkg(1) are able to create boot environments within the zone, thus making it easy to keep the global zone software in sync with non-global zone software as the system is updated (equivalent of patching in Solaris 10). Note that when Solaris 11 updates the kernel, core libraries, and perhaps other things, a new boot environment is automatically created (for the global zone and each zone) and the updates are done to the new boot environment(s). Thus, you get the benefits that Live Upgrade offered without the severe headaches that sometimes come with Live Upgrade.
* The ability to have a separate /var file system. This is required by policies at some large customers, such as the US Department of Defense via the DISA STIG.
* The ability to perform a p2v of a global zone into a zone (see solaris(5) for examples) without losing the dataset hierarchy or properties (e.g. compression, etc.) set on datasets in that hierarchy.
When this dataset hierarchy is combined with the fact that the ZFS namespace is virtualized in a zone (a feature called "dataset aliasing"), you see the same hierarchy in the zone that you would see in the global zone. Thus, you don't have confusing output from df saying that / is mounted on / and such.
Because there is integration between pkg, beadm, zones, and zfs, there is no way to disable this behavior. You can remove and optionally replace /export with something else if you wish.
If your goal is to prevent zone administrators from altering the dataset hierarchy, you may be able to accomplish this with immutable zones (see zones admin guide or file-mac-profile in zonecfg(1M)). This will have other effects as well, such as making all or most of the zone unwritable. If needed, you can add fs or dataset resources which will not be subject to file-mac-profile and as such will be writable. -
Slow down in zfs filesystem creation
Solaris 10 10/09 running as VM on Vmware ESX server with 7 GB RAM 1 CPU 64 bit
I wondered if anyone had seen the following issue, or indeed could see if they could replicate it -
Try creating a script that creates thousands of ZFS filesystems in one pool.
For example -
#!/usr/bin/bash
for i in {1..3000}
do
zfs create tank/users/test$i
echo "$i created"
done
I have found that after about 1000 filesystems the creation time slows down massively, and it can take up to 4 seconds for each new filesystem to be created within the pool.
If I do the same for ordinary directories (mkdir) then I have no delays at all.
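One way to quantify the slowdown is to time fixed-size batches. The harness below does it with mkdir (the fast control case); on a real pool you would swap the mkdir line for `zfs create tank/users/test$i` and run the harness repeatedly to watch the per-batch time grow:

```shell
# Time creating 1000 directories in one batch; repeat with growing totals
# to expose any per-iteration slowdown (swap mkdir for `zfs create` on a
# real pool to reproduce the reported behavior).
base=$(mktemp -d)
start=$(date +%s)
i=1
while [ "$i" -le 1000 ]; do
  mkdir "$base/test$i"
  i=$((i + 1))
done
elapsed=$(( $(date +%s) - start ))
made=$(ls "$base" | wc -l)
echo "created $made dirs in ${elapsed}s"
rm -rf "$base"
```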
I was under the impression that ZFS filesystem were as easy to create as directories (folders), but this does not seem to be the case.
This sounds like it could be a bug. I have been able to replicate it several times on my system, but need others to verify this.
Might be worth raising on the OpenSolaris forums, where there's at least a chance it will be read by a ZFS developer.
-
If zfs manages /etc in a separate ZFS filesystem it fails to boot
In my most recent installation I wanted to keep /etc in a separate ZFS filesystem so I could give it higher compression.
But Arch fails to boot; it seems dbus needs the /etc directory mounted before the ZFS daemon actually mounts it.
Has anyone had this problem? Is it possible to mount the partitions earlier?
Thanks
Last edited by ezzetabi (2013-02-17 15:29:23)
It happened to me ...
Doing a reset of the BIOS is not enough; you must take the battery out for some ten minutes. In my case I tried half an hour and it worked.
Try that and let us know.
ZFS ACL screwed up after migrating system
Hi, I started my ZFS server with NexentaStor to serve Windows clients via CIFS. I upgraded NexentaStor from 1.0.4 to 1.1.7 with no problem, and switched from NexentaStor to Solaris Express B109 with no problem; in all of those past migrations I did not do a clean export/import. Then I decided to upgrade from B109 to B114: I exported, imported again with the ZFS web GUI, and now none of the original folders and files show proper security settings on the Windows clients any more. E.g., when I try to view the security settings for a file or folder on a Windows machine, it says "unable to display security information". Does anyone have any experience like this? How do I start the diagnosing process?
-
ZFS Filesystem for FUSE/Linux progressing
About
ZFS is an advanced modern filesystem from Sun Microsystems, originally designed for Solaris/OpenSolaris.
This project is a port of ZFS to the FUSE framework for the Linux operating system.
It is being sponsored by Google, as part of the Google Summer of Code 2006 program.
Features
ZFS has many features which can benefit all kinds of users - from the simple end-user to the biggest enterprise systems. ZFS list of features:
Provable integrity - it checksums all data (and meta-data), which makes it possible to detect hardware errors (hard disk corruption, flaky IDE cables..). Read how ZFS helped to detect a faulty power supply after only two hours of usage, which was previously silently corrupting data for almost a year!
Atomic updates - means that the on-disk state is consistent at all times, there's no need to perform a lengthy filesystem check after forced reboots/power failures.
Instantaneous snapshots and clones - it makes it possible to have hourly, daily and weekly backups efficiently, as well as experiment with new system configurations without any risks.
Built-in (optional) compression
Highly scalable
Pooled storage model - creating filesystems is as easy as creating a new directory. You can efficiently have thousands of filesystems, each with its own quotas and reservations, and different properties (compression algorithm, checksum algorithm, etc.).
Built-in stripes (RAID-0), mirrors (RAID-1) and RAID-Z (it's like software RAID-5, but more efficient due to ZFS's copy-on-write transactional model).
Among others (variable sector sizes, adaptive endianness, ...)
http://www.wizy.org/wiki/ZFS_on_FUSE
http://developer.berlios.de/project/sho … up_id=6836
One workaround for this test was to drop down to NFSv3. That's fine for testing, but when I get ready to roll this thing into production, I hope there are no problems doing v4 from my NetApp hardware.
-
I have 5 history databases that total about 2.2 Terabytes. They use about 15 filesystems, but I can start with 2 that are isolated to only one database. I have asked for ZFS compression to be set on the filesystems, but since these are read-only tablespaces, I do not think any compression will happen. Can I simply offline, copy and rename the files from one filesystem to the other to make the compression happen?
Just rsync the files to a compressed zpool. Do this using shadow migration, and you only lose access to the data for a few seconds.
1) make new dataset with compression
2) enable shadow migration between the new and old
3) change the database to use the new location
4) watch as data is automatically copied and compressed :-)
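Steps 1) and 2) might look like the sketch below, which only assembles the `zfs create` command (pool and path names are invented; the shadow property is a Solaris 11 feature, so verify it exists on your release):

```shell
# Build the dataset-creation command for shadow migration: the new
# compressed dataset is pointed at the old data via shadow=file://...,
# and the contents are then migrated in the background.
old_path="/oldpool/histdata"            # placeholder: current data location
new_ds="newpool/histdata"               # placeholder: compressed target dataset
cmd="zfs create -o compression=on -o shadow=file://$old_path $new_ds"
echo "$cmd"
```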
The downside: you need extra space to pull this off. -
Failover on zone cluster configured for apache on ZFS filesystem takes 30 minutes
Hi all
I have configured a zone cluster for the Apache service, using a ZFS filesystem as the highly available storage.
The failover takes around 30 minutes, which is not acceptable. My configuration steps are outlined below:
1) configured a 2 node physical cluster.
2) configured a quorum server.
3) configured a zone cluster.
4) created a resource group in the zone cluster.
5) created a resource for logical hostname and added to the above resource group
6) created a resource for Highavailable storage ( ZFS here) and added to the above resource group
7) created a resource for apache and added to the above resource group
The failover takes 30 minutes and shows "pending offline/online" most of the time.
I reduced the number of retries to 1, but to no avail.
Any help will be appreciated
Thanks in advance
Sid
Sorry guys for the late reply.
I tried switching ownership of the RG between the two nodes, which takes a reasonable amount of time, but the failover for a dry run takes 30 minutes.
The same setup with SVM works fine, but I want to have ZFS in my zone cluster.
Thanks in advance
Sid -
Keeping mountpoints/attributes on a replicated (zoned=on) zfs filesystem
Hi,
I have two identical servers: one is active and the other acts as a hot failover. Data on both servers should be identical. Both share the same IPs and hostnames. To ensure data integrity, I'm sending ZFS snapshots from the active server to the failover many times an hour, and one full snapshot every 24 hrs.
The layout is like this (both servers are, again, identical):
global zone, mainly unused except by sysadmins, with one "production" zone
- in the zone I have a zpool :
- in the zpool I have pool/prodServer, mounted in /zones/prodSrv, which contains my production zone
- I also have pool/home-prodSrv , which is delegated to the zone, and mounted in /export/home in the zone.
It is important that I separate the /export/home from other data.
Since pool/home-prodSrv is set with the attribute zoned=on, it is not seen in the global zone with df -h (it is, of course, seen with zfs list).
To replicate my data, in both global zones (active server and failover) I have a user called zfsman who sends/receives snapshots like this:
[from the active zone] :
sudo zfs snapshot pool/prodServer@full
ssh zfsman@FAILOVER sudo zfs destroy pool/prodServer (so that the full will succeed)
sudo zfs send pool/prodServer@full | ssh zfsman@FAILOVER sudo zfs recv pool/prodServer
Now, if I shut down the active zone on the main server and start it up on the failover, I get a problem:
- pool/home-prodSrv is mounted over /export/home in the global zone
- quota/reservation/other attributes are unset
- pool/home-prodSrv is mounted in pool/prodServer in the production zone instead of /export/home
I tried this, from the active zone :
ssh zfsman@FAILOVER sudo zfs set mountpoint=/export/home pool/prodServer (it will complain that /export/home is already mounted, but no matter)
ssh zfsman@FAILOVER sudo zfs set zoned=on pool/prodServer
It works, but it just doesn't look/feel clean...
I'm obviously missing the way to keep the attributes on the receiving end as they were set in the sending zone.
Any idea, anybody, how to fix this ?
Regards,
Jeff
Edited by: J.F.Gratton on Nov 15, 2008 11:33 AM
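For reference, one property-preserving variant of the send/recv procedure in the question, as a sketch only: it assumes a ZFS version recent enough to support zfs send -R and zfs receive -u, and reuses the poster's pool/prodServer and zfsman@FAILOVER names:

```shell
# -r/-R snapshot and send the dataset recursively, carrying all properties
# (mountpoint, zoned, quota, reservation) along with the data.
sudo zfs snapshot -r pool/prodServer@full

# -u on the receiving side keeps the received datasets unmounted, so
# nothing gets mounted over /export/home in the failover's global zone;
# -F rolls the target back so the full receive succeeds without a
# preliminary destroy.
sudo zfs send -R pool/prodServer@full | \
    ssh zfsman@FAILOVER sudo zfs recv -u -F pool/prodServer
```

This avoids having to re-apply mountpoint and zoned by hand after every transfer, at the cost of requiring matching pool layouts on both ends.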