NFSv4 and ZFS ACLs
Hello All,
Is there any specific shell command on Solaris 10 to find out whether a file has NFSv4/ZFS-style ACLs set on it?
-- I see there is a system call available for this purpose, but I did not come across any specific command that just reports the presence of such ACLs.
-- "ls -v" displays the ACLs themselves, but it is too verbose.
Also, are there any Perl modules available for checking these ACLs?
Edited by: manjunathbs on Dec 12, 2008 6:03 PM
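One low-tech answer: on Solaris, `ls -l`/`ls -ld` appends a `+` to the mode string when a file carries a non-trivial ACL, so a quick check only needs to look at the last character of that field. A hedged sketch (the helper name `acl_marker` is made up for this example):

```shell
# acl_marker is a hypothetical helper; it inspects the mode column that
# Solaris `ls -ld` prints and reports whether the trailing '+' (marking a
# non-trivial ACL) is present.
acl_marker() {
  awk '{ if (substr($1, length($1)) == "+") print "non-trivial ACL"; else print "trivial" }'
}

# Sample line as printed for a directory that carries extra ACL entries:
echo "drwxr-xr-x+  2 webservd webservd 29 Mar 29 02:00 logs" | acl_marker
# prints: non-trivial ACL
# In real use: ls -ld /path/to/file | acl_marker
```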
You will need separate IPv4 and IPv6 ACLs applied concurrently to the interface.
Access lists are address-family specific in their syntax and features, so they cannot be mixed. An indicative example is shown below.
interface Ethernet1/1
 ip access-group test-v4 in
 ipv6 traffic-filter test-v6 in
!
ip access-list extended test-v4
 permit ip any host 1.1.1.1
 deny ip any any
!
ipv6 access-list test-v6
 permit ipv6 any host 2001:DB8::1
 deny ipv6 any any
Similar Messages
-
Hi,
Can anyone share what the maximum file size is on Solaris 10 UFS and ZFS?
What is the maximum size of a file compressed with tar/gzip?
Regards
Siva

From 'man ufs':
A sparse file can have a logical size of one terabyte.
However, the actual amount of data that can be stored
in a file is approximately one percent less than one
terabyte because of file system overhead.
As for ZFS, well, it's a 128-bit filesystem, but the maximum size of a single file or directory is 2^64 bytes, which works out to 16 EiB (roughly 18.4 exabytes).
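When the calculator gives up, awk doesn't; shell `$((...))` arithmetic overflows a signed 64-bit integer at 2^63, but awk's floating point handles the conversion to EiB (2^60 bytes each) exactly:

```shell
# 2^64 bytes expressed in EiB; plain $((...)) arithmetic would overflow a
# signed 64-bit integer, while awk's doubles represent these powers of two exactly.
awk 'BEGIN { printf "%d EiB\n", 2^64 / 2^60 }'
# prints: 16 EiB
```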
http://www.sun.com/software/solaris/ds/zfs.jsp
.7/M.
Edited by: abrante on Feb 28, 2011 7:31 AM
fixed layout and 2^64 -
SunCluster, MPXIO, Clariion and ZFS?
Hi,
we have a 2-node cluster (Sun Cluster 3.2). Our storage is an EMC CLARiiON CX700. We have created some zpools and integrated them into the cluster.
We cannot use PowerPath 5.1 or 5.2 for this because Sun Cluster and ZFS are not supported in that environment. So we want to use MPxIO. Our question is: if there is an SP failover on the CLARiiON, does MPxIO support this so that everything keeps working without problems?
Thanks!
Greets
Björn

Hi,
What you need to do is the following:
edit the file /kernel/drv/scsi_vhci.conf
and follow the directions in this link:
http://www.filibeto.org/sun/lib/nonsun/emc/SolarisHostConectivity.pdf
regards
Filip -
Hi All,
I'm using clustered zones and ZFS and I get these messages below.
Is this something that I need to be worried about?
Have I missed something when I created the resource, which actually
is configured "by the book"?
Will HAStoragePlus work as expected?
Can I somehow verify that the zpool is monitored?
Apr 4 15:38:07 dceuxa2 SC[SUNW.HAStoragePlus:4,dceux08a-rg,dceux08a-hasp,hastorageplus_postnet_stop]: [ID 815306 daemon.warning] Extension properties GlobalDevicePaths and FilesystemMountPoints are both empty.
/Regards
Ulf

Thanks for your quick replies.
The HASP resource was created with -x Zpools="orapool1,orapool2"
and all other properties are at their defaults.
part of clrs show -v...
Resource: dceux08a-hasp
Type: SUNW.HAStoragePlus:4
Type_version: 4
Group: dceux08a-rg
R_description:
Resource_project_name: default
Enabled{dceuxa1:dceux08a}: True
Enabled{dceuxa2:dceux08a}: True
Monitored{dceuxa1:dceux08a}: True
Monitored{dceuxa2:dceux08a}: True
FilesystemMountPoints: <NULL>
GlobalDevicePaths: <NULL>
Zpools: orazpool1 orazpool2
(Solaris10u3/Sparc SC3.2, EIS 27-Feb)
/BR
Ulf -
We have ZfD running on one server for approx. 600 users (Sybase db on
NetWare 6.5).
We use it for: WS registration, WS Inventory, Application Mgmt, NAL
database, and Imaging.
I have a mixture of Microsoft Windows and Novell NetWare servers.
Approximately:
30 Microsoft Windows servers (2000 and 2003)
10 Novell NetWare servers (NW 5.1 SP7 and NW 6.5 SP3)
Q1: Is it feasible to have the ZfS backend running on the same server that
hosts the ZfD backend?
We are trying to find a way to monitor all servers for disk usage. Ideally
we want to get a view/report of all servers (regardless of Novell or
Microsoft) to see where each disk is at with regards to available space, and
also see historical trends for disk usage.
Q2: Can ZfS do this for us? We are licensed to use it, but so far we've
only implemented ZfD 6.5.2 and are quite pleased with the results.
Q3: Also, since we are licensed to use the latest ZfD and ZfS, any reason
to implement ZfS 7 instead of ZfS 6.5? We know that ZfD 7 is pretty much
the same as ZfD 6.5.2 so we've decided to hold back on this upgrade. If we
move forward with ZfS, I'm guessing that sticking with same version being
used with ZfD is a good idea?
Thanks for any answers!
MarcMarc Charbonneau,
>Q1: Is it feasable to have the ZfS backend running on the same server that
>hosts the ZfD backend ?
>
>We are trying to find a way to monitor all server for disk usage. Ideally
>we want to get a view/report of all servers (regardless of Novell or
>Microsoft) to see where each disk is at with regards to available space and
>also see historical trends for disk usage.
Yes, it's very workable with both ZfD and ZfS on the same box. ZfS can
monitor any of these features; it uses SNMP to do this on both NetWare and
Windows.
>
>Q2: Can ZfS do this for us? We are licensed to use it but so far we've
>only implemented the ZfD 6.5.2 and are quite please with the results.
>
Glad to hear ZFD is working for you.
>Q3: Also, since we are licensed to use the latest ZfD and ZfS, any reason
>to implement ZfS 7 instead of ZfS 6.5? We know that ZfD 7 is pretty much
>the same as ZfD 6.5.2 so we've decided to hold back on this upgrade. If we
>move forward with ZfS, I'm guessing that sticking with same version being
>used with ZfD is a good idea?
Yes, although ZfS 7 subscribers can run on XP; I don't think 6.5 can.
In a way, ZfD and ZfS are very separate and the patches do not have to
match, but if you can keep them the same, then do. :)
Hope that helps.
Jared
Systems Analyst at Data Technique, INC.
jjennings at data technique dot com
Posting with XanaNews 1.17.6.6 in WineHQ
Check out Novell WIKI
http://wiki.novell.com/index.php/IManager -
How can I tell, just by looking at an ACL, whether it is a router ACL or a port ACL?
Is there any syntax difference between these two ACLs, or do they look the same?
It depends on where the ACL is applied:
Layer-3 interface (SVI, routed port): Router ACL
Layer-2 interface (physical switch interfaces): Port ACL
Is there any syntax difference between these two ACLs?
Both support Standard and Extended ACLs; Port ACLs additionally support MAC Extended ACLs.
Link: c3560 Configuring Network Security with ACLs -
interface gix/y
ip address A.B.C.D 255.255.255.192
ip access-group ACL-Inbound in
ip access-group ACL-Outbound out
exit
In ACL-Inbound I have allowed SMTP traffic from 6 source addresses to 4 destination servers. One sample entry among the 24 ACEs is given below.
permit tcp host E.F.G.H host I.J.K.L eq 25
I haven't applied any specific rule for SMTP traffic in the outbound direction. My understanding is that the destinations will be able to reply to the request. Does that need to be specified in the ACL?
As Fahad has already noted, if you're going to use both an in and an out ACL, you'll need to account for the traffic allowed in both directions. Normally, the in and out ACEs are just mirror entries, so for your example of:
in
permit tcp host E.F.G.H host I.J.K.L eq 25
out would be:
permit tcp host I.J.K.L eq 25 host E.F.G.H
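For a couple of dozen such entries, building the mirrors is easy to script. A sketch (the helper name `mirror_ace` is invented, and it assumes exactly this eight-token "permit tcp host SRC host DST eq PORT" shape; real ACLs with ranges, wildcards, or other operators would need more parsing):

```shell
# mirror_ace is hypothetical; it only understands the fixed shape
#   permit tcp host SRC host DST eq PORT
# and emits the corresponding ACE for the return direction.
mirror_ace() {
  awk '{ print $1, $2, $5, $6, $7, $8, $3, $4 }'
}

echo "permit tcp host E.F.G.H host I.J.K.L eq 25" | mirror_ace
# prints: permit tcp host I.J.K.L eq 25 host E.F.G.H
```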
Fahad also mentioned using a Reflexive ACL. These generate a stateful mirror ACE for the reverse traffic. The reverse ACE stays active for a short duration after seeing the traffic that created it, and then it times out and removes itself. Normally you would only use one on the trusted side of the device, for locally generated flows. When used on a trusted side, the ACEs are often made more generic; for example, any inside-to-outside HTTP flow would add an ACE for the return traffic. -
DNFS and ZFS Storage Appliance: What are the benefits and drawbacks
Hello All:
I have a client who has a 4TB database and wants to easily make clones for their DEV environment. The conventional methods (RMAN duplicate) are taking too long because of the size of the db. I am looking into dNFS standalone as an alternative, and into the ZFS Storage Appliance as well. What are the benefits of dNFS configured alone? I'm sure you can clone easily based on the copy-on-write capabilities, but I am weighing the dNFS option alone (no additional cost) against using the ZFS Storage Appliance (which uses dNFS as its protocol) but costs money. Your thoughts?

Dear Kirkladb,
as far as I understand your question, you would like to know the road blocks for using dNFS in combination with a ZFS Storage Appliance.
First, I would like to mention that dNFS is not a feature on the appliance; dNFS traffic is perceived as regular NFS traffic, so there is currently no feature which needs extra licenses on the ZFS Storage Appliance if you run dNFS on your client. dNFS is client driven and requires software on the client. Second, the use of dNFS does not preclude having snapshots or clones on the appliance, although cloning requires a license to be bought.
As mentioned by Nitin, the appliance offers many features; some are based on ZFS, some come from the underlying OS, and some come from additional software. You seem to be interested in NFS, which I guess mainly means NFSv3 and NFSv4. The appliance will see dNFS from the clients as regular incoming NFS requests, meaning the client side makes the difference, and it is therefore important to have dNFS and maybe the InfiniBand drivers at a current level.
To get a copy of your production database you could go different ways: the appliance offers snapshots (free of charge), which are read-only, and clones (additional cost) on top of a snapshot. You have mentioned RMAN; as additional methods, the Snap Management Utility (Clone license) will also help here, and maybe Data Guard. I am sure there are additional ways not listed.
The point I wanted to make is that cloning on the ZFS-SA and NFS are different topics.
Best Regards
Peter -
Hi All,
On our proxy server, I would like to have access granted to our monitoring software userid to any log file created. The proxy server rotates its logs once per day, by appending the date and time to the rotated log file.
It appears, from the ZFS Administration guide, that I can do this by setting an ACE in the ACL of the log directory using the file_inherit inheritance flag. However, I cannot seem to make this function correctly.
For my logs directory, I did the following:
chmod A+user:patrold:read_data/execute:file_inherit:allow logs
This results in the following ACL for the logs directory:
root@testpxyt1# ls -ldv logs
drwxr-xr-x+ 2 webservd webservd 29 Mar 29 02:00 logs
0:user:patrold:read_data/execute:file_inherit:allow
1:owner@::deny
2:owner@:list_directory/read_data/add_file/write_data/add_subdirectory
/append_data/write_xattr/execute/write_attributes/write_acl
/write_owner:allow
3:group@:add_file/write_data/add_subdirectory/append_data:deny
4:group@:list_directory/read_data/execute:allow
5:everyone@:add_file/write_data/add_subdirectory/append_data/write_xattr
/write_attributes/write_acl/write_owner:deny
6:everyone@:list_directory/read_data/read_xattr/execute/read_attributes
/read_acl/synchronize:allow
However, when I look at the ACL for a newly created log file, I see this ACL:
root@testpxyt1# ls -lv access.201003290000
-rw-------+ 1 webservd webservd 396686 Mar 29 00:00 access.201003290000
0:user:patrold:read_data/execute:deny
1:user:patrold:read_data/execute:allow
2:owner@:execute:deny
3:owner@:read_data/write_data/append_data/write_xattr/write_attributes
/write_acl/write_owner:allow
4:group@:read_data/write_data/append_data/execute:deny
5:group@::allow
6:everyone@:read_data/write_data/append_data/write_xattr/execute
/write_attributes/write_acl/write_owner:deny
7:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
Somehow, the patrold account is first denied access and then allowed access. Unfortunately this seems to result in a deny, as the ID is unable to read the file.
Did I misunderstand the way that this is supposed to work, or have I implemented it incorrectly?
Thanks,
Chris

Thanks, that is exactly what it was.
Once I set the aclmode to passthrough, the permissions were set correctly.
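For reference, the property change boils down to one command; the dataset name below is an assumption, substitute your own:

```shell
# "tank/logs" is a placeholder dataset name
zfs set aclmode=passthrough tank/logs
zfs get aclmode tank/logs          # verify the setting took effect
# aclinherit=passthrough is the related property to check if inherited
# ACEs are still being trimmed at file-creation time.
```

With the default aclmode, the mode bits requested by the creating application cause deny/allow ACEs to be synthesized ahead of the inherited entries, which is what produced the leading deny seen above.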
Thanks again! -
Cache Flushes Solaris10 StorageTek D280 and NFS and ZFS
I am getting complaints from users who are connected via NFS to a Sun Solaris 10 server.
The server is connected via Fibre Channel to a StorageTek D280.
The performance on the server itself is okay.
However, on the clients connected via NFS, the performance is poor.
I found this document, and want to try to disable the cache flushes on the server.
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
However, I would rather have the StorageTek D280 act as a well-behaved ZFS storage device than tweak the operating system.
But I cannot find any document on how to configure the cache-flush behavior on this device.
Does anyone know how to set up this StorageTek D280 correctly so that it ignores the cache flush commands generated by ZFS?
Kind regards

806760 wrote:
Thanks for the response.
I don't know how the D280 internally has been setup. It should use raid 5. That's about the only thing I know about it.
It is under control of an ICT department.
However, if the D280 is poorly configured, would the effect only show on the NFS clients connected to the Solaris server?
I have ruled out the network configuration. This is a 1 Gb connection, and for diagnosis I tried a different switch as well as a direct connection.
But it did not change the poor performance of the client using NFS.
As a test, I just extract a tar file with a large number of empty files.
This goes over 25 times slower on the clients than on the server.
I have installed about 8 of those systems, but none performs this badly.
Since everything on all systems is about the same configuration, the only things which are out of my control are the network and the SAN.
I tried to test the network, but I don't see any problems with it.
So in my mind, the only thing left would be the SAN device.
Searching on this topic, I found some explanations about ZFS with NFS performing poorly because NFS commits with regular synchronous writes (NFS COMMIT). However, I'd rather not go down that path.
I also cannot find any description of how to configure a D280.
It would be nice, if you could provide some settings which has to be set in a D280.
The configuration is two cluster nodes, and two clients.
The cluster node mainly task is to provide the nfs shares.
The clients and servers are in one 19" rack.
The San, I don't know where it is.
It has a 2Gb fibre coupling. ( On the server side there are 4Gb Emulex HBA's installed )
Kind regards

If a tar file extracts 25 times faster on the server than it does over the network, yet both times the data is being written to the SAN LUNs on the D280, the problem is the network.
That tar file extracts slower across the network for two reasons: bandwidth and latency.
There's only so much data you can stuff through a GigE network. Your single 1 GigE link can handle about 100 MB/sec read and 100 MB/sec write combined, total, for all users. That may be part of your performance problem, because the LUN layout of that D280 would have to be really, REALLY bad for it to be unable to handle that relatively small amount of IO. You CAN test the performance of the LUNs being presented to your server: just use your favorite benchmarking tool to do various reads from the "/dev/rdsk/...." device files that make up your filesystem(s). Just make doggone sure you ONLY do reads; if you write to those LUNs your filesystem(s) will be corrupted. Something like "dd if=/dev/rdsk/... of=/dev/null bs=1024k count=10000" will tell you how fast that one LUN can stream data, but it won't tell you how many IO ops/sec the LUN can support, as you'd need to do random small reads for that. Any halfway-decently configured D280 LUN should be able to stream data at a constant 200 MB/sec while you're reading from it.
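The two floors described here (link bandwidth for bulk data, one round trip per file for lots of small files) can be put into rough numbers; every value below is an assumed example, not a measurement from this system:

```shell
awk 'BEGIN {
  # bulk data: a GigE link tops out near 100 MB/s of payload
  gb = 4; mb_per_s = 100
  printf "bulk floor for %d GB: %.0f s\n", gb, gb * 1024 / mb_per_s

  # many small files: at least one synchronous round trip per file
  files = 10000; rtt_ms = 0.5
  printf "latency floor for %d files: %.1f s\n", files, files * rtt_ms / 1000
}'
# prints: bulk floor for 4 GB: 41 s
# prints: latency floor for 10000 files: 5.0 s
```

Note how a tar full of tiny or empty files is bounded almost entirely by the second number, which is why it extracts so much faster locally.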
And even if the bandwidth were much higher, you still have to deal with the additional latency of doing all communications across your network. No matter how fat the pipe is, it still takes more time to send data across the network and wait for a reply. What do your ping times look like between client and server? Even with that added latency, there are some things you can do on your hosts: increase your TCP buffer sizes, mount the filesystems on your Linux clients with the "rsize=32768,wsize=32768,intr,noatime" options, and maybe use NFSv3 instead of NFSv4 (make sure you change both the server and client settings). And work with your network admins to get jumbo frames enabled. Moving more data per packet is a good way to address latency because you wind up waiting for a response far fewer times. -
Failure in retrieving quotas: cDOT 8.2 , NFSv4 and Centos 7.1
Hi everyone,
I would like to get some help on a tedious quota issue I am facing while using NFSv4 on cDOT 8.2.1 and Linux CentOS 7 (kernel version: 3.10.0-229.el7.x86_64). Basically I get an "operation not permitted" every time I try to get quotas from the filer. The server (clustered ONTAP 8.2) reports that the quotas are working and enabled:
mycluster::> volume quota show -vserver myserver -volume vol1
Vserver Name: myvserver
Volume Name: vol1
Quota State: on
Scan Status: -
Logging Messages: on
Logging Interval: 1h
Sub Quota Status: none
Last Quota Error Message: -
Collection of Quota Errors: -

The rquotad daemon is enabled:
mycluster::> nfs show -vserver myserver -fields rquota
vserver rquota
myserver enabled
The quotas also work
mycluster::> quota report -vserver myvserver -volume vol1
Vserver: myserver
----Disk---- ----Files----- Quota
Volume Tree Type ID Used Limit Used Limit Specifier
vol1 user * 0B 10GB 0 - *
vol1 qtree_home user * 0B 10GB 0 - *
vol1 user root 0B - 2 -
vol1 user user1 818.3MB 10GB 10337 - *
vol1 user user2 2.22GB 10GB 12577 - *
vol1 user user3 42.14MB 10GB 1523 - *
vol1 user user4 18.41MB 10GB 501 - *
vol1 user user5 36.20MB 10GB 395 - *
vol1 qtree_home user root 0B - 1 -
9 entries were displayed.
From the client perspective I have the following configuration. NFSv4 is mounted via autofs; /etc/auto.master:
/misc /etc/auto.misc
/net -hosts
+dir:/etc/auto.master.d
/- /etc/auto.home --timeout=600 --ghost
+auto.master

and, for instance, auto.home:
/home -fstype=nfs -nfsvers=4 x.x.x.x:/vol1

NFS config file (/etc/sysconfig/nfs):
MOUNTD_NFS_V2="no"
MOUNTD_NFS_V3="no"
RPCNFSDARGS="-N 2 -N 3"
RPCNFSDARGS=""
RPCMOUNTDOPTS=""
STATDARG=""
SMNOTIFYARGS=""
RPCIDMAPDARGS=""
RPCGSSDARGS=""
GSS_USE_PROXY="yes"
RPCSVCGSSDARGS=""
BLKMAPDARGS=""
NFSMAPID_DOMAIN="my.cool.domain" The user system authentication is not local and is mediated by openldap. And there is an error when I do a user triage since I am not using AD I guess but openLDAP.mycluster::*> diag secd authentication show-creds -vserver myserver -node mycluster-02 -unix-user-name user1
Vserver: myserver (internal ID: 3)
Get user credentials procedure succeeded
[ 7] Determined UNIX id 5000 is UNIX user 'user1'
[ 8] Using a cached connection to ldap.server.ip
Error: command failed: Failed to get user credentials. Reason: "SecD Error: configuration not found".

To end with this long post (sorry about that), when I try to get quotas for a user from the client I get this message:
uname -a
Linux client 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
quota
quota: error while getting quota from x.x.x.x:/vol1 for user1 (id 5000): Operation not permitted
quota --version
Quota utilities version 4.01. Compiled with: USE_LDAP_MAIL_LOOKUP EXT2_DIRECT HOSTS_ACCESS RPC RPC_SETQUOTA BSD_BEHAVIOUR
I also tried quota -m and -v without success. Using Wireshark I see a conversation between the cDOT NFSv4 server and the CentOS client which ends in a "not permitted" error:
785 10.423592000 client server Portmap 98 V2 GETPORT Call (Reply In 786) RQUOTA(100011) V:2 UDP
786 10.423927000 server client Portmap 70 V2 GETPORT Reply (Call In 785) PROGRAM_NOT_AVAILABLE
787 10.423974000 client server Portmap 98 V2 GETPORT Call (Reply In 788) RQUOTA(100011) V:1 UDP
788 10.424303000 server client Portmap 70 V2 GETPORT Reply (Call In 787) Port:4049
789 10.424333000 client server RQUOTA 126 V1 GETQUOTA Call (Reply In 790)
790 10.424899000 server client RQUOTA 70 V1 GETQUOTA Reply (Call In 789) status: EPERM (3)

Finally, the triage for secd gives me this error:
mycluster::*> diag secd authentication show-creds -vserver myserver -node mycluster-02 -unix-user-name user1
Vserver: myserver (internal ID: 3)
Get user credentials procedure succeeded
[ 7] Determined UNIX id 5000 is UNIX user 'user1'
[ 8] Using a cached connection to ldap.server.ip
Error: command failed: Failed to get user credentials. Reason: "SecD Error: configuration not found".

SecD logs this error:
Time Node Severity Event
6/25/2015 11:28:14 mycluster-02 ERROR secd.nameTrans.noNameMapping: vserver (myserver) could not map name (user1): (No rule exists to map name of user from unix-win).
Thank you in advance for your patience.

Hi all,
I am also facing a similar problem, but in my case I used Sun Web Server 6.1 and installed Policy Agent 2.2. After authenticating in Sun Access Manager it throws me a server error:
"This server has encountered an internal error which prevents it from fulfilling your request. The most likely cause is a misconfiguration. Please ask the administrator to look for messages in the server's error log."
and in the policy agent log file it throws the following exception:
Error 12560:8f45a0 PolicyEngine: am_policy_evaluate: InternalException in AuthService::processLoginStatus() with error message:Exception message=[Invalid user ID and password. Try again.] errorCode='103' templateName=login_failed_template.jsp and code:3
Error 12560:8f45a0 PolicyAgent: validate_session_policy() status: Access Manager authentication service failure.
I deleted my agent profile and recreated it, but the message is the same. I also checked my directory server; my agent profile is present there.
Can anyone tell me why this happens?
Thanks,
Rita -
After updating kernel and ZFS modules, system cannot boot
Starting Import ZFS pools by cache file...
[ 4.966034] VERIFY3(0 == zap_lookup(ddt->ddt_os, ddt->ddt_spa->spa_ddt_stat_object, name, sizeof (uint64_t), sizeof (ddt_histogram_t) / sizeof (uint64_t), &hht->ddt_histogram[type][class])) failed (0 == 6)
[ 4.966100] PANIC at ddt.c:124:ddt_object_load()
[*** ] A start job is running for Import ZFS pools by cache (Xmin Ys / no limit)
And then occasionally I see
[ 240.576219] Tainted: P O 3.19.2-1-ARCH #1
Anyone else experiencing the same? Thanks!

I did the same and it worked... kind of. On the first three reboots it failed (but did not stop the system from booting), producing:
zpool[426]: cannot import 'data': one or more devices is currently unavailable
systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
The second boot also resulted in a kernel panic, but as far as I can tell it was unrelated to ZFS.
After reboots one and three I imported the pool manually.
From the fourth reboot on, loading from the cache file always succeeded. However, it takes fairly long (~8 seconds) and even shows
[*** ] A start job is running for Import ZFS pools by cache (Xmin Ys / no limit)
briefly. Although I might only notice that because the recent updates sped up other parts of the boot process. Did you observe a slowdown during boot time, too, kinghajj?
Last edited by robm (2015-03-22 01:21:05) -
Solaris 10 upgrade and zfs pool import
Hello folks,
I am currently running "Solaris 10 5/08 s10x_u5wos_10 X86" on a Sun Thumper box where two drives are a mirrored UFS boot volume and the rest are used in ZFS pools. I would like to upgrade the system to "10/08 s10x_u6wos_07b X86" to be able to use ZFS for the boot volume. I've seen documentation that describes how to break the mirror, create a new BE, and so on. This system is only used as an iSCSI target for Windows systems, so there is really nothing on the box that I need other than my ZFS pools. Could I simply pop the DVD in, perform a clean install, and select my current UFS drives as the install location, basically telling Solaris to wipe them clean and create an rpool out of them? Once the installation is complete, would I be able to import my existing ZFS pools?
Thank you very much

Sure. As long as you don't write over any of the disks in your ZFS pool, you should be fine.
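A minimal sketch of the sequence (the pool name `datapool` is assumed; substitute your own):

```shell
# before the reinstall -- optional, but leaves the pool cleanly closed:
zpool export datapool

# ... perform the clean install onto the former UFS boot disks ...

# after first boot of the new system:
zpool import            # lists importable pools found on attached disks
zpool import datapool   # add -f if the pool was not exported beforehand
```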
Darren -
Solaris 10 JET install and ZFS
Hi - so following on from Solaris Volume Manager or Hardware RAID? - I'm trying to get my client templates switched to ZFS but it's failing with:
sudo ./make_client -f build1.zfs
Gathering network information..
Client: xxx.14.80.196 (xxx.14.80.0/255.255.252.0)
Server: xxx.14.80.199 (xxx.14.80.0/255.255.252.0, SunOS)
Solaris: client_prevalidate
Clean up /etc/ethers
Solaris: client_build
Creating sysidcfg
WARNING: no base_config_sysidcfg_timeserver specified using JumpStart server
Creating profile
Adding base_config specifics to client configuration
Adding zones specifics to client configuration
ZONES: Using JumpStart server @ xxx.14.80.199 for zones
Adding sbd specifics to client configuration
SBD: Setting Secure By Default to limited_net
Adding jass specifics to client configuration
Solaris: Configuring JumpStart boot for build1.zfs
Solaris: Configure bootparams build
Starting SMF services for JumpStart
Adding Ethernet number for build1 to /etc/ethers
cleaning up preexisting install client "build1"
removing build1 from bootparams
removing /tftpboot/inetboot.SUN4V.Solaris_10-1
svcprop: Pattern 'network/tftp/udp6:default/:properties/restarter/state' doesn't match any entities
enabling network/tftp/udp6 service
svcadm: Pattern 'network/tftp/udp6' doesn't match any instances
updating /etc/bootparams
copying boot file to /tftpboot/inetboot.SUN4V.Solaris_10-1
Force bootparams terminal type
-Restart bootparamd
Running '/opt/SUNWjet/bin/check_client build1.zfs'
Client: xxx.14.80.196 (xxx.14.80.0/255.255.252.0)
Server: xxx.14.80.199 (xxx.14.80.0/255.255.252.0, SunOS)
Checking product base_config/solaris
Checking product custom
Checking product zones
Product sbd does not support 'check_client'
Checking product jass
Checking product zfs
WARNING: ZFS: ZFS module selected, but not configured to to anything.
Check of client build1.zfs
-> Passed....
So what is "WARNING: ZFS: ZFS module selected, but not configured to to anything." referring to? I've amended my template and commented out all references to UFS so I now have this:
base_config_profile_zfs_disk="slot0.s0 slot1.s0"
base_config_profile_zfs_pool="rpool"
base_config_profile_zfs_be="BE1"
base_config_profile_zfs_size="auto"
base_config_profile_zfs_swap="65536"
base_config_profile_zfs_dump="auto"
base_config_profile_zfs_compress=""
base_config_profile_zfs_var="65536"
I see there is a zfs.conf file in /opt/SUNWjet/Products/zfs/zfs.conf do I need to edit that as well?
Thanks - J.

Hi Julian,
You MUST create /var as part of the installation in base_config, as stuff gets put there really early during the install.
The ZFS module allows you to create additional filesystems/volumes in the rpool, but does not let you modify the properties of existing datasets/volumes.
So,
you still need
base_config_profile_zfs_var="yes" if you want a /var dataset.
/export and /export/home are created by default as part of the installation. You can't modify that as part of the install.
For your zones dataset, seems to be fine and as expected, however, the zfs_rpool_filesys needs to list ALL the filesystems you want to create. It should read zfs_rpool_filesys="logs zones". This makes JET look for variables of the form zfs_rpool_filesys_logs and zfs_rpool_filesys_zones. (The last variable is always picked up, in your case the zones entry. Remember, the template is a simple name=value set of variables. If you repeat the "name" part, it simply overwrites the value.)
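That overwrite behaviour is just ordinary sh assignment semantics, which a two-line experiment demonstrates:

```shell
# A JET template is plain name=value shell assignments; repeating a name
# silently replaces the earlier value, which is why the filesystem list
# must go in ONE zfs_rpool_filesys line.
zfs_rpool_filesys="logs"
zfs_rpool_filesys="zones"        # the "logs" value is now lost
echo "$zfs_rpool_filesys"        # prints: zones

zfs_rpool_filesys="logs zones"   # correct: every filesystem in one assignment
echo "$zfs_rpool_filesys"        # prints: logs zones
```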
So you really want:
zfs_rpool_filesys="logs zones"
zfs_rpool_filesys_logs="mountpoint=/logs quota=32g"
zfs_rpool_filesys_zones="mountpoint=/zones quota=200g reservation=200g"
(incidentally, you don't need to put zfs_pools="rpool" as JET assumes this automatically.)
So, if you want to alter the properties of /var and /export, the syntax you used would work, if the module was set up to allow you to do that. (It does not currently do it, but I may update it in the future to allow it).
(Send me a direct e-mail and I can send you an updated script which should then work as expected, check my profile and you should be able to guess my e-mail address)
Alternatively, I'd suggest writing a simple script and stick it into the /opt/SUNWjet/Clients/<clientname> directory with the following lines in them:
varexportquotas:
#!/bin/sh
zfs set quota=24g rpool/export
zfs set quota=24g rpool/ROOT/10/var
and then running it in custom_scripts_1="varexportquotas"
(Or you could simply type the above commands the first time you log in after the build. :-) )
Mike
Edited by: mramcha on Jul 23, 2012 1:39 PM
Edited by: mramcha on Jul 23, 2012 1:45 PM