ZFS Mountpoints and iSCSI
It seems that when I create a ZFS file system with a command like 'zfs create -V 2g mypool/iscsi_vol_0', I can export it with iscsitadm. It also shows a mountpoint of '-' when I do a 'zfs list'.
-bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 4.00G 62.4G 18K /mypool
mypool/iscsi_vol_0 2G 62.6G 1.84G -
mypool/iscsi_vol_1 18K 10.0G 18K none
mypool/iscsi_vol_2 2G 64.4G 30K -
If I create a ZFS file system with a command like 'zfs create -o quota=10G mypool/iscsi_vol_1', it gets mounted, so I issue 'zfs set mountpoint=none mypool/iscsi_vol_1' and check whether it's mounted ('ls /mypool/' or 'mount') and it's not, yet I still can't export it?
-bash-3.00# iscsitadm create target -b /dev/zvol/dsk/mypool/iscsi_vol_1 name-tgt1
iscsitadm: Error Failed to stat(2) backing for 'disk'
What is the significance of the '-' mountpoint that allows the target to be exported via iSCSI?
Thanks!
jcasale wrote:
It seems that when I create a ZFS file system with a command like 'zfs create -V 2g mypool/iscsi_vol_0', I can export it with iscsitadm. It also shows a mountpoint of '-' when I do a 'zfs list'.
Right, the -V makes this a "volume". It's just an empty set of blocks (with a specific size). A normal filesystem allocates space in the pool as needed, depending on the files that are added. Since a volume is just a set of blocks, you can't access it directly through a mount point. You'd have to put a filesystem on it to do that.
If I create a ZFS file system with a command like 'zfs create -o quota=10G mypool/iscsi_vol_1', it gets mounted, so I issue 'zfs set mountpoint=none mypool/iscsi_vol_1' and check whether it's mounted ('ls /mypool/' or 'mount') and it's not, yet I still can't export it?
iSCSI (and SCSI) provide access to block devices, not filesystems. So only volumes can be used over iSCSI, not normal ZFS (or other) filesystems.
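A minimal sketch of the contrast described above, using the old Solaris 10 iscsitadm tooling (pool and target names are illustrative):

```shell
# A volume (-V) is a fixed-size block device under /dev/zvol
# and can back an iSCSI target:
zfs create -V 2g mypool/iscsi_vol_0
iscsitadm create target -b /dev/zvol/dsk/mypool/iscsi_vol_0 tgt0

# A plain filesystem has no device node under /dev/zvol, so
# iscsitadm has nothing to stat(2) -- hence the error above:
zfs create -o quota=10G mypool/iscsi_vol_1
ls /dev/zvol/dsk/mypool/iscsi_vol_1    # fails: no such device

# ZFS can also share a volume directly, without iscsitadm:
zfs set shareiscsi=on mypool/iscsi_vol_0
```

These commands need a live pool and root privileges, so treat them as a transcript sketch rather than a script.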
Darren
Similar Messages
-
ZFS 7320c and T4-2 server mount points for NFS
Hi All,
We have an Oracle ZFS 7320c and T4-2 servers. Apart from the on-board 1 GbE Ethernet, we also have 10 GbE connectivity between the servers and the storage,
configured as a 10.0.0.0/16 network.
We have created a few NFS shares but are unable to mount them automatically after a reboot inside Oracle VM Server for SPARC guest domains.
The following document helped us in configuration:
Configure and Mount NFS shares from SUN ZFS Storage 7320 for SPARC SuperCluster [ID 1503867.1]
However, we can manually mount the file systems after reaching run level 3.
The NFS mount points are /orabackup and /stage and the entries in /etc/vfstab are as follows:
10.0.0.50:/export/orabackup - /orabackup nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
10.0.0.50:/export/stage - /stage nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
On the ZFS storage, the following are the properties for shares:
zfsctrl1:shares> select nfs_prj1
zfsctrl1:shares nfs_prj1> show
Properties:
aclinherit = restricted
aclmode = discard
atime = true
checksum = fletcher4
compression = off
dedup = false
compressratio = 100
copies = 1
creation = Sun Jan 27 2013 11:17:17 GMT+0000 (UTC)
logbias = latency
mountpoint = /export
quota = 0
readonly = false
recordsize = 128K
reservation = 0
rstchown = true
secondarycache = all
nbmand = false
sharesmb = off
sharenfs = on
snapdir = hidden
vscan = false
sharedav = off
shareftp = off
sharesftp = off
sharetftp =
pool = oocep_pool
canonical_name = oocep_pool/local/nfs_prj1
default_group = other
default_permissions = 700
default_sparse = false
default_user = nobody
default_volblocksize = 8K
default_volsize = 0
exported = true
nodestroy = false
space_data = 43.2G
space_unused_res = 0
space_unused_res_shares = 0
space_snapshots = 0
space_available = 3.97T
space_total = 43.2G
origin =
Shares:
Filesystems:
NAME SIZE MOUNTPOINT
orabackup 31K /export/orabackup
stage 43.2G /export/stage
Children:
groups => View per-group usage and manage group
quotas
replication => Manage remote replication
snapshots => Manage snapshots
users => View per-user usage and manage user quotas
zfsctrl1:shares nfs_prj1> select orabackup
zfsctrl1:shares nfs_prj1/orabackup> show
Properties:
aclinherit = restricted (inherited)
aclmode = discard (inherited)
atime = true (inherited)
casesensitivity = mixed
checksum = fletcher4 (inherited)
compression = off (inherited)
dedup = false (inherited)
compressratio = 100
copies = 1 (inherited)
creation = Sun Jan 27 2013 11:17:46 GMT+0000 (UTC)
logbias = latency (inherited)
mountpoint = /export/orabackup (inherited)
normalization = none
quota = 200G
quota_snap = true
readonly = false (inherited)
recordsize = 128K (inherited)
reservation = 0
reservation_snap = true
rstchown = true (inherited)
secondarycache = all (inherited)
shadow = none
nbmand = false (inherited)
sharesmb = off (inherited)
sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.13/16:@10.0.0.200/16:@10.0.0.203/16
snapdir = hidden (inherited)
utf8only = true
vscan = false (inherited)
sharedav = off (inherited)
shareftp = off (inherited)
sharesftp = off (inherited)
sharetftp = (inherited)
pool = oocep_pool
canonical_name = oocep_pool/local/nfs_prj1/orabackup
exported = true (inherited)
nodestroy = false
space_data = 31K
space_unused_res = 0
space_snapshots = 0
space_available = 200G
space_total = 31K
root_group = other
root_permissions = 700
root_user = nobody
origin =
zfsctrl1:shares nfs_prj1> select stage
zfsctrl1:shares nfs_prj1/stage> show
Properties:
aclinherit = restricted (inherited)
aclmode = discard (inherited)
atime = true (inherited)
casesensitivity = mixed
checksum = fletcher4 (inherited)
compression = off (inherited)
dedup = false (inherited)
compressratio = 100
copies = 1 (inherited)
creation = Tue Feb 12 2013 11:28:27 GMT+0000 (UTC)
logbias = latency (inherited)
mountpoint = /export/stage (inherited)
normalization = none
quota = 100G
quota_snap = true
readonly = false (inherited)
recordsize = 128K (inherited)
reservation = 0
reservation_snap = true
rstchown = true (inherited)
secondarycache = all (inherited)
shadow = none
nbmand = false (inherited)
sharesmb = off (inherited)
sharenfs = sec=sys,rw,[email protected]/16:@10.0.0.218/16:@10.0.0.215/16:@10.0.0.212/16:@10.0.0.209/16:@10.0.0.206/16:@10.0.0.203/16:@10.0.0.200/16
snapdir = hidden (inherited)
utf8only = true
vscan = false (inherited)
sharedav = off (inherited)
shareftp = off (inherited)
sharesftp = off (inherited)
sharetftp = (inherited)
pool = oocep_pool
canonical_name = oocep_pool/local/nfs_prj1/stage
exported = true (inherited)
nodestroy = false
space_data = 43.2G
space_unused_res = 0
space_snapshots = 0
space_available = 56.8G
space_total = 43.2G
root_group = root
root_permissions = 755
root_user = root
origin =
Can anybody please help?
Regards.
Try this:
svcadm enable nfs/client
cheers
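A hedged sketch of how the suggested fix is typically applied and verified on Solaris 10 (share names taken from the thread):

```shell
svcs nfs/client                 # check whether the service is enabled
svcadm enable -r nfs/client     # enable it plus its dependencies
# After the next reboot, vfstab entries with "mount at boot" = yes
# (e.g. /orabackup and /stage) should come up automatically:
mount | grep -E 'orabackup|stage'
```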
bjoern -
Difference between a ZFS volume and a ZFS filesystem?
Hi,
Does anyone know the difference between a ZFS volume and a ZFS filesystem?
On one of the existing nodes I saw the following entries, one of each type:
root@node11> zfs get all rpool/dump
NAME PROPERTY VALUE SOURCE
rpool/dump type volume -
rpool/dump creation Thu Feb 18 13:55 2010 -
rpool/dump used 1.00G -
rpool/dump available 261G -
rpool/dump referenced 1.00G -
rpool/dump compressratio 1.00x -
rpool/dump reservation none default
rpool/dump volsize 1G -
rpool/dump volblocksize 128K -
root@node11> zfs get all rpool/ROOT/firstbe/opt/SMAW
NAME PROPERTY VALUE SOURCE
rpool/ROOT/firstbe/opt/SMAW type filesystem -
rpool/ROOT/firstbe/opt/SMAW creation Thu Feb 18 14:03 2010 -
rpool/ROOT/firstbe/opt/SMAW used 609M -
rpool/ROOT/firstbe/opt/SMAW available 264G -
rpool/ROOT/firstbe/opt/SMAW referenced 609M -
rpool/ROOT/firstbe/opt/SMAW compressratio 1.00x -
rpool/ROOT/firstbe/opt/SMAW mounted yes -
rpool/ROOT/firstbe/opt/SMAW quota none default
rpool/ROOT/firstbe/opt/SMAW reservation 4G local
rpool/ROOT/firstbe/opt/SMAW recordsize 128K default
rpool/ROOT/firstbe/opt/SMAW mountpoint /opt/SMAW inherited from rpool/ROOT/firstbe
root@node11> zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool/dump 1.00G 261G 1.00G -
rpool/ROOT/firstbe/opt/SMAW 609M 264G 609M /opt/SMAW
Regards,
Nitin K
nitin.k wrote:
Hi,
Does anyone know the difference between a ZFS volume and a ZFS filesystem?
A volume is a block device. A filesystem is mounted at a mount point for file access.
For most users, a volume isn't normally necessary except for 'dump'.
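A quick way to see the distinction (a sketch; dataset names are made up):

```shell
# A filesystem: gets a mountpoint, grows as files are added
zfs create rpool/data
zfs get type rpool/data          # type = filesystem

# A volume: a fixed-size set of blocks exposed as a device node,
# shown with '-' for its mountpoint in 'zfs list'
zfs create -V 1g rpool/vol1
zfs get type rpool/vol1          # type = volume
ls -l /dev/zvol/dsk/rpool/vol1   # block device; put a filesystem
newfs /dev/zvol/rdsk/rpool/vol1  # on it if you want to mount it
```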
Darren -
Hello.
I have a problem with OEL 6.5 and ocfs2.
When I mount the ocfs2 partitions with the 'mount -a' command, they all mount and work, but after a reboot no ocfs2 partitions mount automatically. There are no error messages in the log. I use DAS FC and iSCSI.
fstab:
UUID=32130a0b-2e15-4067-9e65-62b7b3e53c72 /some/4 ocfs2 _netdev,defaults 0 0
#UUID=af522894-c51e-45d6-bce8-c0206322d7ab /some/9 ocfs2 _netdev,defaults 0 0
UUID=1126b3d2-09aa-4be0-8826-0b2a590ab995 /some/3 ocfs2 _netdev,defaults 0 0
#UUID=9ea9113d-edcf-47ca-9c64-c0d4e18149c1 /some/8 ocfs2 _netdev,defaults 0 0
UUID=a368f830-0808-4832-b294-d2d1bf909813 /some/5 ocfs2 _netdev,defaults 0 0
UUID=ee816860-5a95-493c-8559-9d528e557a6d /some/6 ocfs2 _netdev,defaults 0 0
UUID=3f87634f-7dbf-46ba-a84c-e8606b40acfe /some/7 ocfs2 _netdev,defaults 0 0
UUID=5def16d7-1f58-4691-9d46-f3fa72b74890 /some/1 ocfs2 _netdev,defaults 0 0
UUID=0e682b5a-8d75-40d1-8983-fa39dd5a0e54 /some/2 ocfs2 _netdev,defaults 0 0
What is the output of:
# chkconfig --list o2cb
# chkconfig --list ocfs2
# cat /etc/ocfs2/cluster.conf -
Ask the Expert: Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI
Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco UCS Troubleshooting Boot from SAN with FC and iSCSI with Vishal Mehta and Manuel Velasco.
The current industry trend is to use SAN (FC/FCoE/iSCSI) for booting operating systems instead of using local storage.
Boot from SAN offers many benefits, including:
Server without local storage can run cooler and use the extra space for other components.
Redeployment of servers caused by hardware failures becomes easier with boot from SAN servers.
SAN storage allows the administrator to use storage more efficiently.
Boot from SAN offers reliability because the user can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
Cisco UCS takes away much of the complexity with its service profiles and associated boot policies to make boot from SAN deployment an easy task.
Vishal Mehta is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco Nexus 5000, Cisco UCS, Cisco Nexus 1000v, and virtualization. He has presented at Cisco Live in Orlando 2013 and will present at Cisco Live Milan 2014 (BRKCOM-3003, BRKDCT-3444, and LABDCT-2333). He holds a master’s degree from Rutgers University in electrical and computer engineering and has CCIE certification (number 37139) in routing and switching and service provider.
Manuel Velasco is a customer support engineer for Cisco’s Data Center Server Virtualization TAC team based in San Jose, California. He has been working in the TAC for the past three years with a primary focus on data center technologies such as Cisco UCS, Cisco Nexus 1000v, and virtualization. Manuel holds a master’s degree in electrical engineering from California Polytechnic State University (Cal Poly) and VMware VCP and CCNA certifications.
Remember to use the rating system to let Vishal and Manuel know if you have received an adequate response.
Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Data Center community, under the Unified Computing subcommunity, shortly after the event. This event lasts through April 25, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
Hello Evan,
Thank you for asking this question. The most common TAC cases we see for boot-from-SAN failures are due to misconfiguration.
So our methodology is to verify the configuration and troubleshoot from the server, to the storage switches, to the storage array.
Before diving into troubleshooting, make sure you have a clear understanding of the topology. This is vital in any troubleshooting scenario. Know what devices you have and how they are connected, how many paths are connected, switch/NPV mode, and so on.
Always troubleshoot one path at a time, and verify that the setup is in compliance with the SW/HW interop matrix tested by Cisco.
Step 1: Check at server
a. Make sure you have a uniform firmware version across all components of UCS.
b. Verify that the VSAN is created and the FC uplinks are configured correctly. VSANs/FCoE VLANs should be unique per fabric.
c. Verify the vHBA configuration at the service-profile level; each fabric's vHBA should have a unique VSAN number.
Note down the WWPN of your vHBA. This will be needed in step 2 for zoning on the SAN switch and in step 3 for LUN masking on the storage array.
d. Verify that the boot policy of the service profile is configured to boot from SAN; the boot order and its parameters, such as LUN ID and WWN, are extremely important.
e. Finally, at the UCS CLI, verify the FLOGI of the vHBAs (for NPV mode the command, from NX-OS, is: show npv flogi-table).
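For step 1e, a rough sketch of that verification from the fabric interconnect (a transcript outline, not a script):

```shell
# On the fabric interconnect, drop to the NX-OS shell of fabric A:
connect nxos a
# In NPV mode, list the fabric logins seen from UCS:
show npv flogi-table
# Each vHBA should appear with its WWPN and the expected VSAN;
# a vHBA missing here never logged into the fabric, and boot
# from SAN cannot work on that path.
```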
Step 2: Check at Storage Switch
a. Verify the mode (by default UCS is in FC end-host mode, so storage switch has to be in NPIV mode; unless UCS is in FC Switch mode)
b. Verify the switch port connecting to UCS is UP as an F-Port and is configured for correct VSAN
c. Check if both the initiator (Server) and the target (Storage) are logged into the fabric switch (command for MDS/N5k - show flogi database vsan X)
d. Once confirmed that initiator and target devices are logged into the fabric, query the name server to see if they have registered themselves correctly. (command - show fcns database vsan X)
e. The most important configuration to check on the storage switch is the zoning.
Zoning is basically access control from our initiator to targets. The most common design is to configure one zone per initiator/target pair.
Zoning requires you to configure a zone, put that zone into your current zoneset, and then ACTIVATE it. (command - show zoneset active vsan X)
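The zoning workflow above might look like this on an MDS/N5k (VSAN number and WWPNs are placeholders, not values from this thread):

```shell
configure terminal
zone name z_server1_array1 vsan 10
  member pwwn 20:00:00:25:b5:00:00:0a   # initiator (vHBA WWPN)
  member pwwn 50:06:01:60:3b:e0:11:22   # target (array port WWPN)
zoneset name zs_fabric_a vsan 10
  member z_server1_array1
zoneset activate name zs_fabric_a vsan 10
end
show zoneset active vsan 10   # both members should be listed, marked once logged in
```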
Step 3: Check at Storage Array
When the storage array logs into the SAN fabric, it queries the name server to see which devices it can communicate with.
LUN masking is a crucial step on the storage array; it gives a particular host (server) access to a specific LUN.
Assuming that both the storage and the initiator have FLOGI'd into the fabric and the zoning is correct (as per steps 1 and 2), the following needs to be verified at the storage-array level:
a. Are the WWPNs of the initiators (the vHBAs of the hosts) visible on the storage array?
b. If the above is yes, then is LUN masking applied?
c. What LUN number is presented to the host? This is the number we see in the LUN ID of the 'Boot Order' in step 1.
The document below has details and troubleshooting outputs:
http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/115764-ucs-san-tshoot-00.html
Hope this answers your question.
Thanks,
Vishal -
Migrade old NW-cluster to VmWare and iSCSI
That's the job to be done,
question is just; which way to go;
Setup today is;
2x Single-CPU XeonServers, both connected
to 2xPromise 15HD/Raidkabinetts over 160mb SCSI.
This install beeing approx 5 years old and is getting slow and full.
Idea is to buy new, fast 6core Intel servers, run VmWare which makes
use of the power and make our life easier....
One question Im wondering about and have a guess regarding the answer
to is;
iSCSI based storage for these servers, should it be assigned and
supplied through VmWare or should the virtual NWserver running under
VmWare connect directly to the iSCSI storage ?
Usually, having an additional step in the path adds overhead, but with regard to support for load-balancing NICs, memory, etc., my guess
is that the opposite could be true here; meaning the VMware
box should handle all storage, with NetWare getting its storage through
VMware. But again, just my guess... any input?
Another question, of course, is NetWare vs. Linux OES.
We have long-time experience with NetWare and find our way around it well;
the Linux part is still something we don't really feel at home
with. The installs of OES on Linux we've done for labs, tests, etc.
have always felt rather unfinished. One would think that, coming from
the Novell/NetWare camp, a logical step would be Linux/OES, but
... even following the online docs to do a basic test setup works
quite badly: too many manual fixes to get stuff working, too much
hassle getting registration and updates to work.
Still, going virtual might make it easier to switch to
OES/Linux, since it will be easier to have a backup/image each time one
tries an update, fix, etc.
In the end, the needs are basic, one Groupwise server and one
FileServer. Going virtual enables us to over time migrate other
resources to....
Thanks for the quick reply, Massimo.
Well, iSCSI or not, there's the other part of the question.
The time probably IS here to replace NetWare, that much is obvious.
With the existing setup, we have working backups;
with our W2k/03/08 boxes, we have working backups and disaster-recovery plans.
Moving forward, using OES seems, for us at least, a more difficult path,
while any move from NetWare today would probably give us better
throughput. Using VMware seems like a manageable solution, since
updates/upgrades and backups could be done easily. Having an image to
revert to if an update goes wrong is much easier/faster than a
re-install each time....
On Mon, 22 Nov 2010 08:54:15 GMT, Massimo Rosen
<[email protected]> wrote:
>Hi,
>
>[email protected] wrote:
>>
>> That's the job to be done,
>> question is just; which way to go;
>>
>> Setup today is;
>> 2x Single-CPU XeonServers, both connected
>> to 2xPromise 15HD/Raidkabinetts over 160mb SCSI.
>>
>> This install beeing approx 5 years old and is getting slow and full.
>
>Hmmmm....
>
>> Idea is to buy new, fast 6core Intel servers, run VmWare which makes
>> use of the power and make our life easier....
>> One question Im wondering about and have a guess regarding the answer
>> to is;
>>
>> iSCSI based storage for these servers, should it be assigned and
>> supplied through VmWare or should the virtual NWserver running under
>> VmWare connect directly to the iSCSI storage ?
>
>If your current setup is slow, you shouldn't be using iSCSI at all. It
>won't be much faster, and iSCSI is becoming increasingly stale
>currently, unless you have 10GBE. And even then it isn't clear if iSCSI
>over 10GBE is really much faster than when using 1GB. TCP/IP needs a lot
>of tuning to achieve that speed.
>
>>
>> In the end, the needs are basic, one Groupwise server and one
>> FileServer. Going virtual enables us to over time migrate other
>> resources to....
>
>I would *NEVER* put a Groupwise Volume into a VMDK. That said, my
>suggestion would be OES2, and a RDM at the very least for Groupwise.
>
>CU, -
Fault Tolerance of NFS and iSCSI
Hello,
I'm currently designing a new data center core environment. In this case there are also Nexus 5548s with FEXs involved. On these FEXs there are some servers that speak NFS and iSCSI.
While changing the core components there will be a disruption between the servers.
What is the maximum timeout the NFS or iSCSI protocol can handle while the components are being changed? There may be a disruption of at most one second.
Regards
Udo
Sent from Cisco Technical Support iPad App
JDW1: In case you haven't received the ISO document yet, the relevant section of the cited ISO 11898-2:2003 you want to look at is section 7.6, "Bus failure management", specifically Table 12, "Bus failure detection", and Figure 19, "Possible failures of bus lines".
-
Hi,
I'm in the early stages of designing a RAC installation and am wondering if iSCSI is a possibility for the shared storage. As far as I have been able to tell, the only certified shared storage systems are all FC based.
I'm wondering if anyone here has any thoughts/experiences with RAC and iSCSI that they'd like to share.
regards
iain
I built my iSCSI solution using Linux and off-the-shelf RAID hardware - originally just using gig-E cards, but I have since upgraded to accelerated cards (they recognize iSCSI and offload the processing from the host CPU) and it works great.
There are some really affordable solutions either way these days. Fibre Channel is still more expensive on the drive side, considering modern SATA drives are catching up in performance.
I built a 3.2 TB iSCSI network that is shared between 6 servers running a web index of 100 million pages with about 60k queries a day, and it works great. I was able to roll it out on the colo's switches without getting specialized cabinets or Fibre Channel switches, and only sent in upgraded NICs. Sooner or later, as the queries increase, I'll send in a separate gigE switch - for now it's just on its own VLAN.
For me it was a lot of work since I built the system from scratch - today lots of vendors offer off-the-shelf components at bargain-basement prices compared to when I ventured into it :) -
NFS and ISCSI using ip hash load balance policy
As I understand it, the best practice for iSCSI is to use a single NIC with one standby, with "route based on port ID". But I have seen at a client site that NFS and iSCSI are configured to use "route based on IP hash" with multiple NICs, and it has been working all along. I cannot see that iSCSI does multipathing there. I was told by the sysadmin that this is OK since both protocols are configured on the same storage and it does not make sense to separate them; his explanation was that if we want separate policies, we should use separate storage - one for NFS and the other for iSCSI. I don't buy that, but I might be wrong. He pointed to the link below, saying that you can use IP hash: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalI....
Is it OK to use "route based on IP hash" for iSCSI, as in the link?
This topic first appeared in the Spiceworks Community.
When you create your uplink port profile, you simply use the auto-channel command in your config:
channel-group auto mode on
This will create a static etherchannel when two or more ports are added to the uplink port profile from the same host. Assuming your upstream switch config is still set to "mode on" for the etherchannel config, there's nothing to change.
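For reference, a hedged sketch of the matching upstream-switch side for a static ("mode on") EtherChannel; interface and channel numbers are placeholders:

```shell
# Catalyst-style upstream config matching a static channel:
interface range GigabitEthernet1/0/1 - 2
 switchport mode trunk
 channel-group 10 mode on    # static: must match the N1kv "mode on"
interface Port-channel10
 switchport mode trunk
```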
Regards,
Robert -
Is it possible to create /var as its own ZFS dataset when using liveupgrade? With ufs, there's the -m option to lucreate. It seems like any liveupgrade to a ZFS root results in just the root, dump, and swap datasets for the boot environment.
merill
Hey man,
I banged my head against the wall over the same question :-)
One thing that might help you anyway: I found a way to move UFS filesystems to the new ZFS pool.
Let's say you have a UFS filesystem with, say, an application server and its files on /app, which is on c1t0d0s6.
When you create the new ZFS-based BE, /app is shared.
In order to move it to the new BE, all you need to do is comment out the lines in /etc/vfstab you want moved,
then run lucreate to create the ZFS BE.
After that, create a new dataset for /app, but give it a different mountpoint.
Copy all your files over,
rename the original /app,
and set the dataset's mountpoint.
That's it - all your files are now on ZFS.
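Putting those steps together, a sketch (pool, BE, and mountpoint names are examples; the vfstab edit is done by hand first):

```shell
# 1. Comment out the /app (c1t0d0s6) line in /etc/vfstab, then:
lucreate -n zfsBE -p rpool               # create the ZFS boot environment
# 2. Create the dataset at a temporary mountpoint:
zfs create -o mountpoint=/app.new rpool/app
# 3. Copy the data, swap the directories, fix the mountpoint:
cd /app && find . -print | cpio -pdum /app.new
mv /app /app.old                         # keep the UFS copy until verified
zfs set mountpoint=/app rpool/app
```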
Hope this is useful,
ZFS clones and snapshots... can't delete the snapshot the clone is based on
root@solaris [/] # zfs list -r
NAME USED AVAIL REFER MOUNTPOINT
home 100K 9,78G 21K /datahome
root@solaris [/] # zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
home 9,94G 108K 9,94G 0% ONLINE -
root@solaris [/] # zfs create home/test
root@solaris [/] # zfs snapshot home/test@today
root@solaris [/] # zfs clone home/test@today home/myclone
root@solaris [/] # zfs list -r
NAME USED AVAIL REFER MOUNTPOINT
home 138K 9,78G 23K /datahome
home/myclone 0 9,78G 21K /datahome/myclone
home/test 21K 9,78G 21K /datahome/test
home/test@today 0 - 21K -
root@solaris [/] # zfs promote home/myclone
root@solaris [/] # zfs list -r
NAME USED AVAIL REFER MOUNTPOINT
home 140K 9,78G 24K /datahome
home/myclone 21K 9,78G 21K /datahome/myclone
home/myclone@today 0 - 21K -
home/test 0 9,78G 21K /datahome/test
root@solaris [/] # zfs destroy home/myclone
cannot destroy 'home/myclone': filesystem has children
use '-r' to destroy the following datasets:
home/myclone@today
root@solaris [/] # zfs destroy home/myclone@today
cannot destroy 'home/myclone@today': snapshot has dependent clones
use '-R' to destroy the following datasets:
home/test
root@solaris [/] #
Why can't I destroy the snapshot? home/myclone is now a dataset that is not linked to home/test.
So I would expect to be able to delete the snapshot from myclone.
Maybe I misunderstand something about how this works or I have the wrong expectations.
I would expect a clone to be something like a copy that is independent of the volume being cloned.
The idea is that when you create a clone, it is lightweight and based on the snapshot. That's what makes it so fast: you're not copying every block in the filesystem. So the snapshot is what ties the parent filesystem and the clone together.
For the clone to be independent, you'd have to copy all the blocks. There's no option to do that within the clone process. So as long as both the parent filesystem and the clone filesystem are around, the snapshot has to exist as well.
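When a fully independent copy is what's wanted, zfs send/receive duplicates every block, so no snapshot stays pinned (a sketch using the thread's dataset names, assuming no clone has been created yet):

```shell
# Replicate the snapshot into a brand-new dataset:
zfs send home/test@today | zfs receive home/mycopy
zfs destroy home/mycopy@today    # the received snapshot isn't needed
# home/mycopy shares no dependency with home/test, so the
# original snapshot can now be destroyed freely:
zfs destroy home/test@today
```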
Darren -
Hello all,
I have a setup with 2 x T4-4 servers and 1 x ZFS 7320 storage appliance, with 10G Ethernet connectivity between them. Each T4 server is configured with 4 LDoms, in which the OS disk is an iSCSI LUN allocated from the ZFS storage.
Now I would like to know:
1. Are there any known issues with allocating iSCSI LUNs as LDom OS disks?
2. What is the best way to allocate a LUN from the ZFS 7320 to these LDoms for a filesystem?
3. How can I allocate a raw LUN for Oracle ASM in the LDom? Is iSCSI a good option? Are there any known issues?
Thanks and regards
Anz -
Windows 7 answer file deployment and iscsi boot
Hi, I am trying to prepare an image with Windows 7 Enterprise that has been installed, went through "audit" mode, and was then shut down with:
OOBE+Generalize+Shutdown
So that I can clone this image and the next time it boots, it will use answer file to customize, join domain etc.
The complication is that I am using iSCSI boot for my image, and working within VMware ESX.
I can install Windows without any issues, get the drivers working properly, reboot, and OOBE on the same machine - no issues.
The problems come when I clone the VM; the only part that changes (that I think really matters) is the MAC address of the network card. The new clone, when it comes up after the OOBE reboot, hangs for about 10 minutes and then proceeds without joining the domain.
Using Panther logs and network traces, I saw that the domain-join command was timing out and that in fact no traffic was being sent to the DC. So the network was not up. The rest of the answer-file customization works fine.
As a test I brought up this new clone (with the new MAC) in audit mode, and Windows reported that it found and installed drivers for a new device - VMXNET3 Driver 2. So it does in fact consider this a new device.
Even though it iSCSI-boots from this new network card, later in the process it is unable to use it until the driver is reinstalled.
In my answer file I tried with and without the portion below, but it didn't help:
<settings pass="generalize">
<component>
<DoNotCleanUpNonPresentDevices>true</DoNotCleanUpNonPresentDevices>
<PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
</component>
</settings>
I also tried with the E1000 NIC, but couldn't get Windows to boot properly after the CD-ROM installation part.
So my question: is my only option to use workarounds like post-OOBE scripts for the domain join, etc.?
Is it possible to let Windows boot, then initiate an extra reboot once the driver is installed, and only then let it go to the Customize phase?
thank you!
Hi,
This might be caused by the iSCSI boot.
iSCSI boot is supported only on Windows Server. Client versions of Windows, such as Windows Vista® or Windows 7, are not supported.
Detailed information, please check:
About iSCSI Boot
Best regards
Michael Shao
TechNet Community Support -
Zfs snapshots and booting ...
Hello,
In Solaris 9, filesystem snapshots did not survive reboots. Do ZFS snapshots in Solaris 10 persist across reboots?
Can I boot off of a ZFS partition?
thanks.
Does this mean that when new machines appear with ZFS support, or when I can update my PROM, I will be able to boot a ZFS partition?
ZFS isn't out yet, so your question is premature. We'll get a look at it within a few weeks, hopefully.
However, a few months ago it was widely reported by the developers that the initial release would not have boot support. Who knows if this has changed or not.
I don't see any particular reason that PROM or hardware support is required, it should just need a bootloader that understands ZFS. I don't think that there's any UFS support in the existing proms. Just stuff that understands the VTOC label and how to load and execute a few blocks from a particular slice.
Darren -
Hi everyone,
With the new funky OS-features in Solaris 10/08, does anyone know if such features are going to get support in the OSP/SUNWjet/N1SPS? ZFS boot would be nice, for a change :)
I haven't seen any updated versions of the OSP plugin for N1SPS for quite a while now, is it still under development?
Cheers,
Ino!
Hi Ino,
as far as I know (and I might be mistaken) OSP is not under any active development, and all bare-metal OS provisioning activities are now the domain of xVM Ops Center, which is built on top of Jet, which already supports ZFS root/boot installation.
If you want to get hacky, you can replace the SUNWjet package on your Jet server by hand (pkgrm/pkgadd), put there the fresh one and SPS/OSP should happily work with it (read: I have not tested it myself)...
If you want to get supported, then go the xVM OC 2.0 way...
HTH,
Martin