Mounting zfs in Solaris 10
Currently I have three hard drives with:
Disk1: Solaris 10
Disk2: SXDE
Disk3: OpenSolaris (ZFS)
What is the proper way to mount Disk3 from Solaris 10 or SXDE?
Hi
I think that the command that you need is 'zpool import' (on its own it will list pools that are available to import).
You may have problems if the version of ZFS on disk 3 is higher than the drivers on either Solaris 10 or SXDE (I recently tried importing an SXCE created ZFS partition into FreeBSD 7, but it failed for this reason).
Paul
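The import workflow Paul describes can be sketched as below. This is a dry-run sketch: the pool name `tank` and the alternate name `olddata` are illustrative assumptions, not taken from the thread, and `run()` prints each command instead of executing it.

```shell
# Dry-run sketch of importing a ZFS pool created by another install.
# Replace the body of run() with "$@" to actually execute the commands.
# The pool name "tank" is hypothetical.
run() { echo "+ $*"; }

run zpool import               # with no arguments: list pools available for import
run zpool import tank          # import the pool under its original name
run zpool import tank olddata  # or import it under a new name to avoid clashes
run zpool upgrade -v           # show which on-disk ZFS versions this host supports
```

The last command is the check for the version mismatch Paul mentions: if the pool was created with a newer ZFS version than the importing host supports, the import will fail.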
Similar Messages
-
Hi
in zone:
bash-3.00# reboot
[NOTICE: Zone rebooting]
SunOS Release 5.10 Version Generic_144488-17 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Hostname: dbspfox1
Reading ZFS config: done.
Mounting ZFS filesystems: (1/10) cannot mount '/zonedev/dbspfox1/biblio/P622/dev': directory is not empty (10/10)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
Nov 4 10:07:33 svc.startd[12427]: svc:/system/filesystem/local:default: Method "/lib/svc/method/fs-local" failed with exit status 95.
Nov 4 10:07:33 svc.startd[12427]: system/filesystem/local:default failed fatally: transitioned to maintenance (see 'svcs -xv' for details)
The directory is indeed not empty, but then the other directories are not empty either.
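The failure mode in the log above is generic: `zfs mount` refuses a mountpoint that already contains files. A quick way to see the condition, simulated here with a temporary directory (on the real system you would point `$mnt` at the failing mountpoint from the error, `/zonedev/dbspfox1/biblio/P622/dev`):

```shell
# Simulate the "directory is not empty" condition with a temp directory;
# a leftover file like this underneath the mountpoint blocks zfs mount.
mnt=$(mktemp -d)
touch "$mnt/stray-file"
if [ -n "$(ls -A "$mnt")" ]; then
  echo "not empty: $(ls -A "$mnt")"   # zfs mount would fail at this point
fi
rm -rf "$mnt"
```

On the real mountpoint, `ls -A` before mounting shows exactly which stray entries are in the way.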
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zonedev 236G 57.6G 23K /zonedev
zonedev/dbspfox1 236G 57.6G 1.06G /zonedev/dbspfox1
zonedev/dbspfox1/biblio 235G 57.6G 23K /zonedev/dbspfox1/biblio
zonedev/dbspfox1/biblio/P622 235G 57.6G 10.4G /zonedev/dbspfox1/biblio/P622
zonedev/dbspfox1/biblio/P622/31mars 81.3G 57.6G 47.3G /zonedev/dbspfox1/biblio/P622/31mars
zonedev/dbspfox1/biblio/P622/31mars/data 34.0G 57.6G 34.0G /zonedev/dbspfox1/biblio/P622/31mars/data
zonedev/dbspfox1/biblio/P622/dev 89.7G 57.6G 50.1G /zonedev/dbspfox1/biblio/P622/dev
zonedev/dbspfox1/biblio/P622/dev/data 39.6G 57.6G 39.6G /zonedev/dbspfox1/biblio/P622/dev/data
zonedev/dbspfox1/biblio/P622/preprod 53.3G 57.6G 12.9G /zonedev/dbspfox1/biblio/P622/preprod
zonedev/dbspfox1/biblio/P622/preprod/data 40.4G 57.6G 40.4G /zonedev/dbspfox1/biblio/P622/preprod/data
bash-3.00# svcs -xv
svc:/system/filesystem/local:default (local file system mounts)
State: maintenance since Fri Nov 04 10:07:33 2011
Reason: Start method exited with $SMF_EXIT_ERR_FATAL.
See: http://sun.com/msg/SMF-8000-KS
See: /var/svc/log/system-filesystem-local:default.log
Impact: 33 dependent services are not running:
svc:/system/webconsole:console
svc:/system/filesystem/autofs:default
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default
svc:/application/autoreg:default
svc:/application/stosreg:default
svc:/application/graphical-login/cde-login:default
svc:/application/cde-printinfo:default
svc:/network/smtp:sendmail
svc:/application/management/seaport:default
svc:/application/management/snmpdx:default
svc:/application/management/dmi:default
svc:/application/management/sma:default
svc:/network/sendmail-client:default
svc:/network/ssh:default
svc:/system/sysidtool:net
svc:/network/rpc/bind:default
svc:/network/nfs/nlockmgr:default
svc:/network/nfs/client:default
svc:/network/nfs/status:default
svc:/network/nfs/cbd:default
svc:/network/nfs/mapid:default
svc:/network/inetd:default
svc:/system/sysidtool:system
svc:/system/postrun:default
svc:/system/filesystem/volfs:default
svc:/system/cron:default
svc:/application/font/fc-cache:default
svc:/system/boot-archive-update:default
svc:/network/shares/group:default
svc:/network/shares/group:zfs
svc:/system/sac:default
svc:/network/rpc/gss:default (Generic Security Service)
State: uninitialized since Fri Nov 04 10:07:31 2011
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M gssd
Impact: 17 dependent services are not running:
svc:/network/nfs/client:default
svc:/system/filesystem/autofs:default
svc:/system/webconsole:console
svc:/system/system-log:default
svc:/milestone/multi-user:default
svc:/milestone/multi-user-server:default
svc:/application/autoreg:default
svc:/application/stosreg:default
svc:/application/graphical-login/cde-login:default
svc:/application/cde-printinfo:default
svc:/network/smtp:sendmail
svc:/application/management/seaport:default
svc:/application/management/snmpdx:default
svc:/application/management/dmi:default
svc:/application/management/sma:default
svc:/network/sendmail-client:default
svc:/network/ssh:default
svc:/application/print/server:default (LP print server)
State: disabled since Fri Nov 04 10:07:31 2011
Reason: Disabled by an administrator.
See: http://sun.com/msg/SMF-8000-05
See: man -M /usr/share/man -s 1M lpsched
Impact: 1 dependent service is not running:
svc:/application/print/ipp-listener:default
svc:/network/rpc/smserver:default (removable media management)
State: uninitialized since Fri Nov 04 10:07:32 2011
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M rpc.smserverd
Impact: 1 dependent service is not running:
svc:/system/filesystem/volfs:default
svc:/network/rpc/rstat:default (kernel statistics server)
State: uninitialized since Fri Nov 04 10:07:31 2011
Reason: Restarter svc:/network/inetd:default is not running.
See: http://sun.com/msg/SMF-8000-5H
See: man -M /usr/share/man -s 1M rpc.rstatd
See: man -M /usr/share/man -s 1M rstatd
Impact: 1 dependent service is not running:
svc:/application/management/sma:default
bash-3.00# df -h
Filesystem size used avail capacity Mounted on
/ 59G 1.1G 58G 2% /
/dev 59G 1.1G 58G 2% /dev
/lib 261G 7.5G 253G 3% /lib
/platform 261G 7.5G 253G 3% /platform
/sbin 261G 7.5G 253G 3% /sbin
/usr 261G 7.5G 253G 3% /usr
proc 0K 0K 0K 0% /proc
ctfs 0K 0K 0K 0% /system/contract
mnttab 0K 0K 0K 0% /etc/mnttab
objfs 0K 0K 0K 0% /system/object
swap 2.1G 248K 2.1G 1% /etc/svc/volatile
fd 0K 0K 0K 0% /dev/fd
swap 2.1G 0K 2.1G 0% /tmp
swap 2.1G 16K 2.1G 1% /var/run
zonedev/dbspfox1/biblio
293G 23K 58G 1% /zonedev/dbspfox1/biblio
zonedev/dbspfox1/biblio/P622
293G 10G 58G 16% /zonedev/dbspfox1/biblio/P622
zonedev/dbspfox1/biblio/P622/31mars
293G 47G 58G 46% /zonedev/dbspfox1/biblio/P622/31mars
zonedev/dbspfox1/biblio/P622/31mars/data
293G 34G 58G 38% /zonedev/dbspfox1/biblio/P622/31mars/data
zonedev/dbspfox1/biblio/P622/dev/data
293G 40G 58G 41% /zonedev/dbspfox1/biblio/P622/dev/data
zonedev/dbspfox1/biblio/P622/preprod
293G 13G 58G 19% /zonedev/dbspfox1/biblio/P622/preprod
zonedev/dbspfox1/biblio/P622/preprod/data
293G 40G 58G 42% /zonedev/dbspfox1/biblio/P622/preprod/data
What did I miss? What happened with the zfs dev directory?
thank you
Walter
Hi
I finally found the problem.
ZFS naming restrictions:
names must begin with a letter
Walter -
Zfs on solaris 10 and home directory creation
I am using samba and a root preexec script to automatically create individual ZFS filesystem home directories with quotas on a Solaris 10 server that is a member of a Windows 2003 domain.
There are about 60,000 users in Active Directory.
My question is about best practice.
I am worried about the overhead of having 60,000 ZFS filesystems to mount and run on Solaris 10.
Edited by: fatfish on Apr 29, 2010 2:51 AM
Testing results as follows -
Solaris 10 10/09 running as VM on Vmware ESX server with 7 GB RAM 1 CPU 64 bit.
The ZFS pool was created with three 50 GB FC LUNs from our SAN (hardware RAID5). These are shared to the ESX server and presented to the Solaris VM as Raw Device Mappings (no VMFS).
I set up a simple script to create 3000 ZFS filesystem home directories
#!/usr/bin/bash
for i in {1..3000}
do
zfs create tank/users/test$i
echo "$i created"
done
The first 1000 created very quickly.
By the time I reached about 2000 each filesystem was taking almost 5 seconds to create. Way too long. I gave up after about 2500.
So I rebooted.
The 2500 ZFS filesystems mounted in about 4 seconds, so no problem there.
The problem I have is: why does the ZFS filesystem creation time drop off and become unworkable? I tried adding to the pool again after the reboot and saw the same slow creation time.
Am I better off with just one ZFS file system with 60,000 userquotas applied and lots of ordinary user home directories created under that with mkdir? -
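The single-filesystem alternative asked about at the end can be sketched as follows. User names and the quota value are illustrative assumptions, and the commands are printed rather than executed (ZFS user quotas also require a recent-enough pool version):

```shell
# Dry-run sketch: one dataset plus per-user quotas instead of 60,000
# datasets. Replace the body of run() with "$@" to execute for real.
run() { echo "+ $*"; }

run zfs create tank/users
for user in alice bob carol; do                    # hypothetical user names
  run zfs set "userquota@${user}=5G" tank/users    # per-user quota, no new dataset
  run mkdir -p "/tank/users/${user}"               # ordinary home directory
done
```

The trade-off: per-user quotas avoid the mount and creation overhead of tens of thousands of datasets, but you lose per-user snapshots, properties, and delegation.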
Boot from second hard drive with ZFS in Solaris 10 x86
Hi,
The usual menu.lst entries to boot Solaris 10 x86 from a boot environment on ZFS are these:
title Solaris 10 5/08 s10x_nbu6wos_nightly X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttyb
module /boot/x86.miniroot-safe
I understand that to add an alternate boot disk, I have to play with installgrub and then attach it in the pool.
But my question is: is there a way to boot from my alternate hard disk by adding a new entry in menu.lst, or is the only way to modify the BIOS parameters?
Thanks for your help,
Groucho_fr
Groucho_fr wrote:
Thanks for your answer Alan,
But OK, I imagine I can add a GRUB entry. But what I want to know is the exact way to boot from an alternate hard disk on Solaris 10u6 x86.
AFAIK there are only two ways to do this. Select the disk via the BIOS, which becomes a royal PITA in a short amount of time, or play with a boot manager such as GRUB until you get the syntax right.
Personally, I put all OSes on the primary disk if possible to avoid these situations; if I have other disks to play with, I can easily mount them in or set them up as a D: drive, etc.
The only exception would be in an enterprise setting where you need to do things such as mirroring but then again in those situations it's only one OS per machine anyways so it's much easier.
So in the end, I am not sure what I have to configure and what the procedure is to boot from my alternate boot disk.
Try the GRUB homepage for the GRUB manual and hopefully for a mailing list, or try a search engine.
http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view
has an example to follow of using -B.
And moreover, installgrub is not working for me:
root@mac1 # zpool status
pool: rpool
state: ONLINE
scrub: resilver completed after 0h2m with 0 errors on Thu Mar 26 10:56:24 2009
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c5t0d0s0 ONLINE 0 0 0
c5t4d0s0 ONLINE 0 0 0
errors: No known data errors
root@mac1 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0
cannot open/stat device /dev/rdsk/c5t4d0s2
You shouldn't need to do an installgrub if you installed Solaris. Try c5t4d0s0 not s2.
alan -
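For reference, a menu.lst entry for the alternate disk might look like the sketch below, built from the entries quoted earlier in the thread. The `(pool_rpool,1,a)` device tuple is a guess for a second disk, and the title is made up; the actual findroot signature depends on how GRUB was installed on that disk, so treat this as a template rather than a working entry.

```
title Solaris 10 (alternate boot disk - hypothetical entry)
findroot (pool_rpool,1,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
```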
Cannot mount USB on Solaris 10 10/08
I inserted the USB drive and tried the following, however it didn't work.
devfsadm -Cv
Format>
In format, I'm seeing only 1 disk (that is my hard disk):
0 -> c1t0d0
I did a few more things:
mount -F pcfs /dev/dsk/c2t0d0p0:c /mnt
How I found out it's c2: from the iostat -En output.
Please someone assist.
Thanks,
Sylvester, Prasanth
I tried making a ZFS file system on the pendrive:
zpool create myusb c2t0d0p0
zpool list myusb
Now I can see the file system, however Windows doesn't recognise it now :( screwed again!!!
Right, Windows has no idea what ZFS is, so there is no way that it can mount the drive.
In order for it to work on both Windows and Solaris then you need to format the device with a filesystem in common.
The most common way to do this is to format it with a FAT32 filesystem or you could use NTFS if you load up the drivers for it on the Solaris side.
After you format it FAT32 then Windows should have no problems with it since it's a native format and Solaris vold mounts my FAT32 USB devices automatically.
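The reformat step Alan describes might look like the dry-run sketch below on the Solaris side. The device name `c2t0d0p0` is taken from the thread, but verify yours with `rmformat` or `iostat -En` before writing to it, since these commands destroy whatever is on the stick:

```shell
# Dry-run sketch of putting a FAT32 filesystem back on the USB stick.
# Replace the body of run() with "$@" to execute for real (as root).
run() { echo "+ $*"; }

run zpool destroy myusb                          # release the ZFS label first
run fdisk /dev/rdsk/c2t0d0p0                     # interactively recreate a primary FAT32 partition
run mkfs -F pcfs -o fat=32 /dev/rdsk/c2t0d0p0:c  # build the FAT32 filesystem on it
```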
alan -
Can't boot with zfs root - Solaris 10 u6
Having installed Solaris 10 u6 on one disk with native UFS, I made this work by adding the following entries:
/etc/driver_aliases
glm pci1000,f
/etc/path_to_inst
<long PCI string for my SCSI controller> glm
which are needed since the driver selected by default is the ncrs SCSI controller driver, which does not work in 64-bit mode.
Now I would like to create a new boot env. on a second disk on the same scsi controller, but use zfs instead.
Using Live Upgrade to create a new boot env on the second disk with zfs as file system worked fine.
But when trying to boot of it I get the following error
spa_import_rootpool: error 22
panic[cpu0]/thread=fffffffffbc26ba0: cannot mount root path /pci@0,0-pci1002,4384@14,4/pci1000@1000@5/sd@1,0:a
Well, that's the same error I got with UFS before making the above-mentioned changes to /etc/driver_aliases and /etc/path_to_inst.
But that seems not to be enough when using ZFS.
What am I missing??
Hmm, I dropped the live upgrade from UFS to ZFS because I was not 100% sure it worked.
Then I did a reinstall selecting to use zfs during the install and made the changes to driver_aliases and path_to_inst before the 1'st reboot.
The system came up fine on the 1'st reboot and did use the glm scsi driver and running in 64bit.
But that was it. When the system then was rebooted (where it made a new boot-archive) it stopped working. Same error as before.
I have managed to get it to boot in 32-bit mode but still get the same error (that's independent of which SCSI driver is used).
In all cases it does show the SunOS Release banner, it does load the driver (ncrs or glm), and it detects the disks with the correct path and numbering.
But it fails to load the file system.
So basically the current status is no-go if you need to use the ncrs/glm SCSI driver to access the disks with your ZFS root pool.
Failsafe mode works and can mount the ZFS root pool, but that's no fun as a server OS :( -
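One thing worth checking, given the symptom that the system boots once and then breaks after a new boot archive is built: edits to /etc/path_to_inst and /etc/driver_aliases also have to land in the boot archive that the kernel reads at early boot. This is a guess at the cause, not something confirmed in the thread; a dry-run sketch of the check:

```shell
# Dry-run sketch: rebuild and inspect the x86 boot archive after editing
# /etc/driver_aliases and /etc/path_to_inst. Replace the body of run()
# with "$@" to execute for real (as root).
run() { echo "+ $*"; }

run bootadm update-archive    # rebuild the boot archive in place
run bootadm list-archive      # verify the edited files are included
```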
Dear friends,
I am using Sun SPARC Solaris 10. I am trying to mount the USB drive using the following command. I was previously able to mount it successfully, but now surprisingly I find a problem.
the command I used in SU mode is
mount -F pcfs /dev/dsk/c2t0d0s0:c /mnt
mount: No such file or directory
Please can anyone tell me why this occurs?
Your immediate reply would be very helpful for me.
Thanks in Advance
Raja
Quote:
mount -F pcfs /dev/dsk/c2t0d0s0:c /mnt
mount: No such file or directory
Hello.
1) Check if the device entry /dev/dsk/c2t0d0s0 exists
(by typing ls -L /dev/dsk/c2t0d0s0 - a block file must be displayed)
2) Are you sure the disk is /dev/dsk/c2t0d0s0 and not /dev/dsk/c2t0d0s2?
3) Make sure that the /mnt directory exists and is not mounted.
I hope this helps.
Martin -
How can I mount the NTFS partition of my Windows XP on Solaris 10?
Can somebody help me????
Is it possible?
Package FSWfsmisc now mounts NTFS partitions directly. No special mount_ntfs program is required. It is based on Martin Rosenau's mount_ntfs. Package FSWpart comes with prtpart, which displays all the partitions, including extended partitions, and partition types.
See http://www.genunix.org/distributions/belenix_site/binfiles/README.FSWfsmisc.txt
Download from http://www.belenix.org/binfiles/FSWpart.tar.gz and http://www.belenix.org/binfiles/FSWfsmisc.tar.gz
Install with pkgadd -d . (in the directory you unpacked the above two tarballs).
Here's a sample /etc/vfstab line:
/dev/dsk/c0d0p1 - /c ntfs - no ro
And sample output from mount and xlsmounts:
# mount
/c on 127.0.0.1:/ remote/read only/setuid/devices/port=33249/public/vers=2/proto=udp/xattr/dev=4700004 on Sun Nov 26 19:42:29 2006
# xlsmounts
PHYSICAL DEVICE LOGICAL DEVICE FS PID ADDR Mounted on
/dev/dsk/c0d0p1 /dev/dsk/c0d0p1 ntfs 3354 127.0.0.1:/ /c
Also, both mount_ntfs and FSWfsmisc work with Solaris 10 and OpenSolaris ("Nevada"). -
Mounting problem with Solaris 10
Hi everybody!
I am new to Solaris admin (I come from HP!)
Here is my problem:
I have to create a FS (it would contain only data) that would be shared by 2 zones working on a fail-over principle.
So I created this FS directly under the global zone.
But I can't find in the man pages the mount options that would let the 2 zones take turns mounting this new FS.
waiting for your advices, thanks in advance.
Titine.
You don't need to mount the FS by NFS unless you're talking about zones on two different machines.
Two zones on the same machine can import the same filesystem with no problems.
But I'm unclear as to what you're trying to achieve. Two zones on the same machine provide no real failover, because if the machine goes down so do both zones.
Your best option is to have the FS on one machine and NFS-export it to 2 other machines.
There's no real advantage to using zones here, of course. But you can if you prefer.
Of course the machine hosting the FS is still a single point of failure unless you use a SAN and HA-NFS. -
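For the same-machine case mentioned above, the usual way to give two zones access to one global-zone filesystem is a loopback (lofs) mount in each zone's configuration. The zone name and both paths in this zonecfg transcript are illustrative, not taken from the thread:

```
# zonecfg -z zone1   (repeat for zone2; names and paths are hypothetical)
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/shared/data
zonecfg:zone1:fs> set special=/export/shared/data
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> commit
```

With lofs, both zones see the same global-zone directory simultaneously; it does not by itself provide the alternating fail-over behaviour the poster asked about.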
Ask about mounting NTFS in Solaris 10
Hello everybody,
I have an 80 GB hard drive
on which are installed Windows XP, Linux Mandrake 9.2, and recently Solaris 10.
The partitions of my disk are:
Primary 1:
FAT32 partition [32 MB], primary (is the booting one)
FAT32 partition [6.6 GB] for data storage purposes
Primary 2:
Solaris partition [15 GB] where Solaris is installed,
with / , /export/home, and all the typical Unix mounts
Primary 3:
NTFS partition [10 GB] for data purposes
NTFS partition [10 GB] for data purposes
NTFS partition [15 GB] for data purposes
NTFS partition [14 GB] where Windows XP is installed
Linux swap partition [512 MB]
ext2 partition [3 GB] where Linux is installed
free space [<500 MB]
I hope this is detailed enough to get the right solution to see all partitions when running Solaris.
I don't care about write access to NTFS.
Personally, I successfully mounted the 2 FAT32 partitions in Solaris with
#mount -F pcfs -o rw /dev/dsk/c0d0p0:c /mnt/C
#mount -F pcfs -o rw /dev/dsk/c0d0p0:c /mnt/C
and I failed to see the NTFS partitions.
So please, if there is a way to do it, tell me how,
or if you need more details I will explain better.
Still have no answer!!
but when -
How to convert ufs to zfs in Solaris 10
After installing Solaris 10 I want to convert my UFS to ZFS. How can I do that? Unfortunately there is no documentation available on docs.sun.com. Any help is greatly appreciated.
I found a document on the Sun site a long time ago titled "The best file system in the world" (Peter Baer Galvin's, as far as I remember). I believe it can still be found somewhere there, and here are some citations that may be helpful:
There are many things that ZFS is, currently, and a few that it is not. The most frustrating current limit is that ZFS cannot be the root file system. A project is underway to resolve that issue, however. It certainly would be nice to install a system with ZFS as the root file system and then to have features like snapshots available for systems work. Consider taking a snapshot, making a change that causes a problem on the system (e.g., installing a bad patch), and then reverting the system to its pre-patch state by restoring the snapshot.
Also, ZFS can be imported and exported, but it is not a true "clustered file system" in that only one host can access the file system at one time.
An open issue is the support of ISVs, such as Oracle, for the use of a ZFS file system to store their data. I'm sure this will come over time.
Hot spares are not implemented currently. If a disk fails, a zpool replace command must be executed for the bad disk to be replaced by a good one.
A mirror can be removed from a mirror pair by the zpool detach command, but RAID-Z sets and non-mirrored disks currently cannot be removed from a pool.
Also, currently there is no built-in encryption, and at this point ZFS is a Solaris-only feature. Whether ports will be done to other operating systems remains to be seen.
Just a brief word on the readiness of ZFS for production use. Usually, a new file system would not even be considered for production use for quite a while after its first ship. ZFS, on the other hand, may well garner production use immediately on its production ship. The testing that has gone into ZFS is astounding, and in fact testing was considered a first-class component of the ZFS design and implementation.
More on it: http://www.opensolaris.org/os/community/zfs/docs/
HTH, Rev -
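The snapshot scenario from the quoted article can be sketched as follows. Dataset and snapshot names are illustrative assumptions, and the commands are printed rather than executed:

```shell
# Dry-run sketch of the snapshot/rollback workflow described above.
# Replace the body of run() with "$@" to execute for real.
run() { echo "+ $*"; }

run zfs snapshot tank/home@pre-change   # checkpoint before risky work
run zfs rollback tank/home@pre-change   # revert if the change went badly
run zfs destroy tank/home@pre-change    # or discard the snapshot once happy
```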
How to config ZFS in solaris 10
hi,
I'm testing Solaris 10 x86 edition on my PC, but I can't
find any information about how to configure ZFS in 10 (x86) ;-(
If anybody knows, please give me some information.
Thanks a lot.
ZFS has not yet been integrated into Solaris 10...
At this time you need either the "Software Express for Solaris 2/06"
( http://www.sun.com/software/solaris/solaris-express/ ) or the
"Solaris Express: Community Release"
( http://www.opensolaris.org/os/downloads/on/ ) to play with ZFS.
. -
Mounting FAT32 in Solaris 10 step by step
Hi!
I had problems doing this and finally solved them. I saw that many people have problems with mounting FAT32, so I decided to write up how I did it.
1. FAT32 must be a primary partition!
2. # smc
3. In This Computer-> Storage-> Disks check device name. I have c1d0.
4. Close smc and type # fdisk <device name>p0
for example: # fdisk c1d0p0
Check the number of the FAT32 partition. In my case it is no. 4. Type 6 to exit.
5. Make the directory /pcfs/c by creating a new folder, or with # mkdir /pcfs and # mkdir /pcfs/c
6. # mount -F pcfs /dev/dsk/<device name>p<nr of FAT32 partition>:c /pcfs/c
for example: # mount -F pcfs /dev/dsk/c1d0p4:c /pcfs/c
That's all!
Hi,
I tried exactly as you suggested but it did not work for me. It shows unrecoverable error messages during system startup.
I have Windows 47 gb, Linux 28 gb, Solaris 38 gb and FAT32 76 gb.
The FAT32 partition is being detected fine from Linux and XP as well.
I did the partitioning using the utility provided in Ubuntu Linux installation CD.
I am not sure if the FAT32 partition is primary or not. How can I verify this, and if it is not, how can I make it a primary partition? Please help. Thanks a lot.
Ravi -
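A follow-up note on step 6 of the walkthrough: a pcfs mount can be made persistent across reboots with an /etc/vfstab line along the lines of the sketch below. The device comes from the example above; verify yours first, and for a removable device "mount at boot" is better left as no (then mount it on demand with `mount /pcfs/c`):

```
/dev/dsk/c1d0p4:c  -  /pcfs/c  pcfs  -  no  rw
```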
Hi all,
Two months ago I downloaded Solaris 10, and it doesn't have the ZFS filesystem, so I want to know if Solaris has been updated and now contains the ZFS filesystem.
Thanks.
Willits
There are no Solaris 10 update releases at this time.
There are newer Solaris Express releases, but they don't include
ZFS. -
In the Solaris 10 installation, I can only set the filesystem to UFS. I want to know if ZFS is in Solaris 10 for x86 and, if yes, how I can enable it.
Thanks.
Willits
I'm afraid ZFS didn't make it in time for the first official
Solaris 10 release ('GA') for either sparc or x86.
Promised for an early update release.
BTW, does anyone know what GA stands for?