ZFS and Veritas DMP

Does anyone know whether ZFS is supported on DMP devices with Storage Foundation 5.1? It is not supported on Storage Foundation 5.0 MP3, but I was wondering whether support had been introduced in 5.1:
http://seer.entsupport.symantec.com/docs/324001.htm
The 5.1 documentation is not clear as to whether it is supported or not.

Ended up using EMC PowerPath instead. It was much easier and worked like a champ the first time. I didn't have to fiddle with creating disk groups and logical volumes; I just changed the ownership of the /dev/rdsk/emcpower* devices to grid:asmadmin, ran them through format to build partitions that start at cylinder 1, and the installer picked them up right away.

Similar Messages

  • 11gR2 on Solaris and Veritas DMP

    I am in the process of building a 2-node cluster to test some upgrades we have coming up. Our current clusters use Veritas for the clustering software. For this new cluster (2 new machines), we want to use 11gR2 clustering and ASM.
    My question is this:
    We want to take advantage of the Veritas multi-pathing (DMP). One document I have read says that "ASM disks must be configured as logical volumes to utilize DMP multipathing. ASM must use the DMP device which resides over the VxVM logical volumes: /dev/vx/rdmp/cxtydzsx".
    One of my problems is that all of the devices under /dev/vx/rdmp are owned by root. My understanding is that in order for ASM to see them, they need to be owned by grid:asmadmin. On a whim, I chown'd all of those devices to grid, but that change isn't persistent across a reboot.
    My other problem is that my second node doesn't see the disk groups/volumes I created on the first node. The second node can see the same disks, just not the disk groups/volumes. I have been told that in order to see the disk groups and such on the second node, they will have to be deported from node 1 and imported on node 2. That makes sense. We can't import them shared because we don't have the clusterware installed yet.
    All of these disk devices are raw and don't have any type of filesystem on them. I was able to form a cluster with 11gR2 before, but that was by pointing directly to the block devices under /dev/rdsk/cxtydzsx, which didn't provide any multipathing protection. The disks are LUNs presented from an EMC CX480, and the same LUNs are presented to both nodes.
    I'm not finding the provided documentation very clear as to exactly how I need to set up the disks and what I need to do to get DMP working.
    Any advice would be appreciated.

    Ended up using EMC PowerPath instead. It was much easier and worked like a champ the first time. I didn't have to fiddle with creating disk groups and logical volumes; I just changed the ownership of the /dev/rdsk/emcpower* devices to grid:asmadmin, ran them through format to build partitions that start at cylinder 1, and the installer picked them up right away.
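    For reference, the device preparation described above amounts to something like this (a sketch only; the emcpower device names and slice letters are examples, not the actual ones used):
    # chown grid:asmadmin /dev/rdsk/emcpower0c /dev/rdsk/emcpower1c
    # chmod 660 /dev/rdsk/emcpower0c /dev/rdsk/emcpower1c
    # format
    In format, a slice is created on each emcpower pseudo-device starting at cylinder 1 (so the disk label in cylinder 0 is left alone), and the installer is then pointed at those /dev/rdsk/emcpower* slices as candidate ASM disks.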

  • Setting up Veritas DMP with a 6140...

    Hello!
    I'm somewhat experienced with StorEdge 3510 arrays and can set them up in my sleep, but I'm having no luck setting up Veritas DMP with a 6140 I've somehow become responsible for configuring. Does anyone have any information on ensuring the 6140 array is configured properly so that VxVM DMP functions? I've got the LUNs configured, the host all cabled up, and the LUNs under VxVM control on a Sun V440 running Solaris 10. I can see the LUNs and write data to them, but I can't seem to collect the information needed for DMP to function. As it is, the vxdmpadm command shows only one path to the array despite there being four physical FC connections between the 6140 and the V440 server.
    Any help is greatly appreciated! I'm sure I'm just "missing something" since this is new hardware to me, but I can't seem to find out what.

    All the 6140 setup documentation is on www.sun.com/documentation/.
    There are no special steps as long as you have a version of VxVM qualified for the 6140 and you install the correct ASL.
    At that point you will need to create the initiators using CAM, with the right host type. In CAM you can select different host types; the host type defines how the 6140 firmware has to behave when there is a failover.
    If you use VxVM on the host with DMP, you have to use the host type 'Solaris DMP' for your initiators; the 6140 firmware will then know that AVT (Automatic Volume Transfer) has to be turned ON.
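    A few commands that are useful for verifying this (illustrative only; the controller and disk names are placeholders for whatever your host actually reports):
    # vxddladm listsupport
    # vxdmpadm listctlr all
    # vxdmpadm listenclosure all
    # vxdmpadm getsubpaths ctlr=c2
    # vxdisk list c2t0d0s2
    vxddladm listsupport shows whether an ASL for the array is installed, and vxdisk list on a single disk reports the number of paths (numpaths) that DMP has discovered for it.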
    Regards

  • WebStart Flash and Veritas Volume Manager

    I would like to use WebStart Flash for backup of the system
    disk. The goal is to be able to recover the system disk rapidly.
    It works perfectly for systems without Veritas Volume
    Manager on the system disk.
    However, if Veritas Volume Manager is installed and used
    for mirroring the root disk, the system is not able to boot using
    WebStart Flash. This is probably because the "private region"
    of the disk is not included in the flash archive.
    Does anybody have a solution for this, or do any of
    you successfully combine WebStart Flash and Veritas
    Volume Manager?
    I use Jumpstart and the install_type is configured to
    flash_install.
    The question was also asked in the newsgroup
    comp.unix.solaris.
    Rgds,
    Henrik

    For many reasons, today you cannot save the VxVM
    private region information as an implicit part of a
    flash archive. The procedure would likely be to
    unencapsulate the root drive, create the flash archive,
    then re-encapsulate the root drive. This is an ugly
    procedure and may cause more pain than it is worth.
    When a root disk is encapsulated, an entry is put
    into the /etc/system file which says to use the VxVM
    or SVM logical volume for the rootdev rather than
    the actual device from which the system was originally
    booted. When you create a flash archive, this modification
    to the /etc/system is carried along. But, when you install
    it on a new system which doesn't have VxVM properly
    installed already (a chicken-and-the-egg problem)
    then the change of the rootdev to the logical volume
    will fail. The result is an unbootable system (without
    using 'boot -a' and pointing to a different /etc/system
    file like /dev/null).
    The current recommended process is to use a prototype
    system which does not have an encapsulated root
    to create the flash archive.
    VxVM also uses the ELM license manager which will
    tie the VxVM runtime license to the hostid. This makes
    moving flash archives with VxVM to other machines
    impractical or difficult.
    The long term solution would be to add logical volume
    management support to the JumpStart infrastructure.
    I'm not counting on this anytime soon :-(
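    As an outline, the unencapsulate / archive / re-encapsulate workaround mentioned above would look roughly like this (illustrative only; the archive name and path are examples, and it should be tested carefully before being relied upon):
    # /etc/vx/bin/vxunroot
    # flarcreate -n rootbackup -c /var/tmp/rootbackup.flar
    # vxdiskadm
    vxunroot converts the root file systems back onto plain disk slices, flarcreate builds the archive from the now-unencapsulated root, and vxdiskadm's encapsulation option puts the boot disk back under VxVM control afterwards.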
    -- richard

  • EBS 7.4 with ZFS and Zones

    The EBS 7.4 product claims to support ZFS and zones, yet it fails to explain how to recover systems running this type of configuration.
    Has anyone out there been able to recover a server with the EBS software when it is running ZFS file systems in both the global zone and non-global zones? (NB: the server's system file systems /, /usr and /var are UFS for all zones.)
    Edited by: neilnewman on Apr 3, 2008 6:42 AM


  • Sun c4 and veritas netbackup 6 mp4 installation

    Hello, please help me!
    We have a SAN with two ST6140s and three Brocade 200E switches. One site has two 200Es and three Linux hosts; the other site has a V240 server (one FC port) with Solaris preinstalled and Veritas NetBackup 6.0 MP4 installed, one 200E switch, and a Sun C4 library with FC ports.
    On the second site I connected the C4's FC host port to the 200E switch, and the FC port on the V240 is also connected to the 200E switch.
    Can you describe a common procedure to configure a server correctly with Veritas and the C4?
    I can't add a "device" in the Veritas Administration Console right now, because it can't see anything.
    How do I have to connect the C4 and the Veritas server to the SAN correctly?

    Create a soft zone on your Brocade containing your C4's FC ports and your NetBackup master server. For NetBackup, use version 6.5, not 6.0.
    Once you activate the zone, run cfgadm -al -o show_FCP_dev; you should be able to see your paths - one or more from your server and two or more from your C4 library's fabrics. NetBackup 6.5 installs very cleanly and is able to recognize your drives; once the software is installed, go to the GUI to configure the devices.
    Also, use the latest version of Solaris 10 on this host; it is much cleaner to work with NetBackup 6.5.
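    The verification on the Solaris side looks roughly like this (a sketch; the controller number is a placeholder for whatever cfgadm shows once the zone is active):
    # cfgadm -al -o show_FCP_dev
    # cfgadm -c configure c3
    # devfsadm -c tape
    # ls -l /dev/rmt/
    Once the tape drives show up under /dev/rmt, the NetBackup device configuration wizard should be able to discover them.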

  • ZFS and fragmentation

    I do not see Oracle on ZFS often; in fact, I was called in to meet my first one. The database was experiencing heavy IO problems, both from undersized IOPS capability and from a lack of performance on the backups - the reading part of them. The IOPS capability was easily extended by adding more LUNs, so I was left with the very poor bandwidth experienced by RMAN reading the datafiles. iostat showed that during a simple datafile copy (both cp and dd with a 1 MiB blocksize), the average IO blocksize was very small and varying wildly. I feared fragmentation, so I set off to test.
    I wrote a small C program that initializes a 10 GiB datafile on ZFS and repeatedly does:
    1 - 1000 random 8 KiB writes with random data (contents) at 8 KiB boundaries (mimicking an 8 KiB database block size)
    2 - a full read of the datafile from start to finish in 128*8 KiB = 1 MiB IOs (mimicking datafile copies, RMAN backups, full table scans, index fast full scans)
    3 - goto 1
    So it's a datafile that gets random writes and is then fully scanned, to see the impact of the random writes on multiblock read performance. Note that the datafile is not grown; all writes are over existing data.
    Even though I expected fragmentation (it must have come from somewhere), I was appalled by the results. ZFS truly sucks big time in this scenario. Where EXT3, on which I ran the same tests (on the exact same storage), gave stable read timings (around 10 ms for a 1 MiB IO), ZFS started off at 10 ms and went up to 35 ms per 128*8 KiB IO after 100,000 random writes into the file. It has not reached the end of the test yet - the service times are still increasing, so the test is taking very long. I do expect it to stop somewhere, as the file would eventually be completely fragmented and cannot be fragmented any further.
    I started noticing statements that seem to acknowledge this behavior in some Oracle whitepapers, such as the otherwise unexplained advice to copy datafiles regularly. Indeed, copying the file back and forth defragments it. I don't have to tell you this means downtime.
    On the production server this issue has gotten so bad that migrating to a different filesystem by copying the files will take much longer than restoring from disk backup - the disk backups are written once and are not fragmented. They are lucky the application does not require full table scans or index fast full scans; or perhaps unlucky, because then this issue would have become impossible to ignore earlier.
    I observed the fragmentation with all settings for logbias and recordsize that are recommended by Oracle for ZFS. The ZFS caches were allowed to use 14 GiB of RAM (and mostly did), bigger than the file itself.
    The question is, of course: am I missing something here? Who else has seen this behavior?
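    For reference, the write/scan cycle above can be approximated with nothing but dd (a rough ksh/bash sketch, not the original C program; the pool name, path and block counts are placeholders sized for a 10 GiB file):
    dd if=/dev/zero of=/pool/test/df1 bs=1024k count=10240
    i=0
    while [ $i -lt 1000 ]; do
        blk=$(( (RANDOM * 32768 + RANDOM) % 1310720 ))   # one of the 1,310,720 8 KiB blocks in the file
        dd if=/dev/urandom of=/pool/test/df1 bs=8k count=1 seek=$blk conv=notrunc 2>/dev/null
        i=$((i + 1))
    done
    dd if=/pool/test/df1 of=/dev/null bs=1024k
    The first dd creates the file, the loop does one batch of random 8 KiB overwrites at 8 KiB-aligned offsets, and the last dd is the 1 MiB sequential scan; watching iostat -xn in another window during the scan shows the average IO size drop as more random-write passes are applied.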

    Stephan,
    "well i got a multi billion dollar enterprise client running his whole Oracle infrastructure on ZFS (Solaris x86) and it runs pretty good."
    For random reads there is almost no penalty, because randomness is not increased by fragmentation. The problem is in scan reads (aka scattered reads). The SAN cache may reduce the impact; and in the case of tiered storage, SSDs obviously do not suffer as much from fragmentation as rotational devices do.
    "In fact ZFS introduces a "new level of complexity", but it is worth for some clients (especially the snapshot feature for example)."
    Certainly, ZFS has some very nice features.
    "Maybe you hit a sync I/O issue. I have written a blog post about a ZFS issue and its sync I/O behavior with RMAN: [Oracle] RMAN (backup) performance with synchronous I/O dependent on OS limitations
    Unfortunately you have not provided enough information to confirm this."
    Thanks for that article. In my case it is a simple fact that the datafiles are getting fragmented by random writes; this is easily established by doing large scanning read IOs and observing the average block size during the read. Moreover, fragmentation MUST be happening, because that is what ZFS is designed to do with random writes - it allocates a new block for each write; data is not overwritten in place. I can 'make' test files fragmented simply by doing random writes to them, and this reproduces on both Solaris and Linux. Obviously this ruins scanning read performance on rotational devices (i.e. devices for which the seek time is a function of the distance between consecutive file offsets).
    "How does the ZFS pool layout look like?"
    Separate pools for datafiles, redo+control, archives, disk backups and oracle_home+diag. There is no separate device for the ZIL (ZFS intent log), but I tested with setups that do have a separate ZIL device, and fragmentation still occurs.
    "Is the whole database in the same pool?"
    As in all the datafiles: yes.
    "At first you should separate the log and data files into different pools. ZFS works with "copy on write""
    It's already configured like that.
    "How does the ZFS free space look like? Depending on the free space of the ZFS pool you can delay the "ZFS ganging" or sometimes let (depending on the pool usage) it disappear completely."
    Yes, I have read that. We never surpassed 55% pool usage.
    Thanks!

  • Co-existence of HDLM and VxVM DMP 5.0

    Can HDLM and VxVM DMP 5.0 coexist on the same host?

    Hi,
    You definitely want to install into a different folder so you
    don't overwrite things like the jdeveloper.ini files.
    If you have 2.0 installed in a directory like 'C:\JDeveloper',
    install 3.0 in 'C:\JDeveloper 3.0'. You should not have any
    problems with a space in the directory name.
    Laura
    Raphael Roale (guest) wrote:
    : John,
    : I am now using both JDev 2 and 3, and I did not have any
    : problems. Try to install in different folder.
    : Raphael
    : John Salvo (guest) wrote:
    : : I already have JDeveloper 2.0 on my machine, and I recently
    : : downloaded JDeveloper 3.0.
    : : However, I would like to be able to keep both until such time
    : : as I am satisfied with JDeveloper 3.0.
    : : Can these two versions co-exist safely on a single machine???
    : : Thanks,
    : : John Salvo

  • Asmlib and VX DMP

    I set up a disk group and raw volumes in Veritas to use for ASM. I was able to run createdisk on them and see them OK; however, after a reboot they don't show up, and I suspect it is an issue with scandisks, judging by the logs.
    # oracleasm querydisk /dev/vx/dsk/mon2a-raw/raw-data1
    Device "/dev/vx/dsk/mon2a-raw/raw-data1" is marked an ASM disk with the label "DATA1"
    There are several others, all of which can be seen with querydisk. However, when I try to run scandisks it hangs for 10-15 minutes, and the oracleasm log shows it looking up bogus VxVM devices -
    Creating /dev/oracleasm mount point: /dev/oracleasm
    Loading module "oracleasm": oracleasm
    Mounting ASMlib driver filesystem: /dev/oracleasm
    Reloading disk partitions: done
    Cleaning any stale ASM disks...
    Scanning system for ASM disks...
    oracleasm-read-label: Unable to open device "/dev/VxDMP1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1p1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1p1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1p1": No such file or directory
    oracleasm-read-label: Unable to open device "/dev/VxDMP1p1": No such file or directory
    I've tried updating /etc/sysconfig/oracleasm to reflect the DMP-style naming, but I'm not sure this is even close to correct - the only documentation I've been able to find is for Linux multipath.
    # ORACLEASM_SCANORDER: Matching patterns to order disk scanning
    ORACLEASM_SCANORDER="vx"
    # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
    ORACLEASM_SCANEXCLUDE="VxVM"
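    After editing /etc/sysconfig/oracleasm, the scan can be re-run and checked like this (shown for completeness; it does not by itself answer which scan-order pattern actually matches the DMP device names):
    # /etc/init.d/oracleasm scandisks
    # /etc/init.d/oracleasm listdisks
    If listdisks shows the labels after a manual scan but they disappear again after a reboot, the scan-order/exclude patterns are the first thing to revisit.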

    Thanks Wim. Thanks a lot. You are the only one who gave me the right answer to what I have been asking
    for almost three years. In fact, you've just confirmed what I always thought (or at least suspected).
    Namely, after having examined the raw driver source code on Linux kernel source tree as well as
    Stephen Tweedie's `raw' binding utility, I was continuously trying to convince my colleagues that
    Linux raw-bound device interface was nothing else but a fata morgana, a mirage of something called
    "Sun Solaris native raw character device interface", doing nothing else but opening a block device
    with O_DIRECT. Thanks a lot.
    So, if I got you right, using ASM on Linux implies that setting the Oracle startup parameters
    'disk_asynch_io=TRUE' and 'filesystemio_options=SETALL' is obsolete (redundant), except in the case
    where a traditional Oracle filesystem is combined with ASM.
    One more question, please, and I'll finish. If ASMLib is just an add-on module whose primary function
    is to simplify the management and discovery of ASM disks, but which doesn't necessarily give us much of
    an IO performance benefit, as you wrote, why does it exist only for Linux? There is no ASMLib for other
    Unixes (Solaris, AIX, HP-UX, etc.), only for Linux. Why would the management and discovery of ASM
    disks be more complicated on Linux than on other Unixes, so that we need ASMLib only for Linux?
    Hundred times - thanks a lot.
    Regards
    N.J.

  • Performance problems when running PostgreSQL on ZFS and tomcat

    Hi all,
    I need help with some analysis and problem solution related to the below case.
    The long story:
    I'm running into some massive performance problems on two 8-way HP ProLiant DL385 G5 servers with 14 GB of RAM and a ZFS storage pool in raidz configuration. The servers are running Solaris 10 x86 10/09.
    The configuration of the two is pretty much the same, so the problem seems generic to the setup.
    Within a non-global zone I’m running a tomcat application (an institutional repository) connecting via localhost to a PostgreSQL database (the OS-provided version). The processor load is typically not very high, as seen below:
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
        49 postgres  749M  669M   4,7%   7:14:38  13%
         1 jboss    2519M 2536M    18%  50:36:40 5,9%
    We are not 100% sure why we run into performance problems, but when it happens the application slows down and swaps out (see the second snapshot below). When it settles, everything seems to return to normal. When the problem is acute, the application is totally unresponsive.
    NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
         1 jboss    3104M  913M   6,4%   0:22:48 0,1%
    #sar -g 5 5
    SunOS vbn-back 5.10 Generic_142901-03 i86pc    05/28/2010
    07:49:08  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
    07:49:13    27.67   316.01   318.58 14854.15     0.00
    07:49:18    61.58   664.75   668.51 43377.43     0.00
    07:49:23   122.02  1214.09  1222.22 32618.65     0.00
    07:49:28   121.19  1052.28  1065.94  5000.59     0.00
    07:49:33    54.37   572.82   583.33  2553.77     0.00
    Average     77.34   763.71   771.43 19680.67     0.00
    Making more memory available to tomcat seemed to worsen the problem or at least didn’t prove to have any positive effect.
    My suspicion is currently focused on PostgreSQL. Turning off fsync boosted performance and made the problem appear less often.
    An unofficial performance evaluation of the database with “vacuum analyze” took 19 minutes on the server and only 1 minute on a desktop PC. This is horrific when taking the hardware into consideration.
    The short story:
    I’m trying different steps but running out of ideas. We’ve read that the database block size and the filesystem record size should match: PostgreSQL uses 8 kB blocks, while the ZFS recordsize defaults to 128 kB. I didn’t find much information on the matter, so if anyone can help, please recommend how to make this change…
    Any other recommendations or ideas we could follow? We know from other installations that the above setup runs without a single problem on Linux on much smaller hardware without specific tuning. What makes Solaris in this configuration so darn slow?
    Any help is appreciated, and I will try to provide additional information on request if needed…
    Thanks in advance,
    Kasper

    raidz isn't a good match for databases. Databases tend to require good write performance, for which mirroring works better.
    Adding a pair of SSDs as a ZIL would probably also help, but chances are it's not an option for you.
    You can change the record size with "zfs set recordsize=8k <dataset>".
    It will only take effect for newly written data, not existing data.
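    A minimal sketch of both suggestions (the dataset, pool and device names are placeholders; because the recordsize only applies to newly written blocks, the database files have to be rewritten afterwards, e.g. via dump/restore, for it to take effect):
    # zfs set recordsize=8k tank/pgdata
    # zfs get recordsize tank/pgdata
    # zpool add tank log c1t4d0
    The last command adds a dedicated log device (ZIL) to the pool, which is where an SSD would help the synchronous commit traffic that fsync generates.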

  • Lucreate not working with ZFS and non-global zones

    I replied to this thread: Re: lucreate and non-global zones so as not to duplicate content, but for some reason it was locked. So I'll post here... I'm experiencing the exact same issue on my system. Below is the lucreate and zfs list output.
    # lucreate -n patch20130408
    Creating Live Upgrade boot environment...
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10s_u10wos_17b>.
    Current boot environment is named <s10s_u10wos_17b>.
    Creating initial configuration for primary boot environment <s10s_u10wos_17b>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10s_u10wos_17b> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <patch20130408>.
    Source boot environment is <s10s_u10wos_17b>.
    Creating file systems on boot environment <patch20130408>.
    Populating file systems on boot environment <patch20130408>.
    Temporarily mounting zones in PBE <s10s_u10wos_17b>.
    Analyzing zones.
    WARNING: Directory </zones/APP> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/APP-patch20130408>.
    WARNING: Device <tank/zones/APP> is shared between BEs, remapping to <tank/zones/APP-patch20130408>.
    WARNING: Directory </zones/DB> zone <global> lies on a filesystem shared between BEs, remapping path to </zones/DB-patch20130408>.
    WARNING: Device <tank/zones/DB> is shared between BEs, remapping to <tank/zones/DB-patch20130408>.
    Duplicating ZFS datasets from PBE to ABE.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b> on <rpool/ROOT/s10s_u10wos_17b@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b@patch20130408> on <rpool/ROOT/patch20130408>.
    Creating snapshot for <rpool/ROOT/s10s_u10wos_17b/var> on <rpool/ROOT/s10s_u10wos_17b/var@patch20130408>.
    Creating clone for <rpool/ROOT/s10s_u10wos_17b/var@patch20130408> on <rpool/ROOT/patch20130408/var>.
    Creating snapshot for <tank/zones/DB> on <tank/zones/DB@patch20130408>.
    Creating clone for <tank/zones/DB@patch20130408> on <tank/zones/DB-patch20130408>.
    Creating snapshot for <tank/zones/APP> on <tank/zones/APP@patch20130408>.
    Creating clone for <tank/zones/APP@patch20130408> on <tank/zones/APP-patch20130408>.
    Mounting ABE <patch20130408>.
    Generating file list.
    Finalizing ABE.
    Fixing zonepaths in ABE.
    Unmounting ABE <patch20130408>.
    Fixing properties on ZFS datasets in ABE.
    Reverting state of zones in PBE <s10s_u10wos_17b>.
    Making boot environment <patch20130408> bootable.
    Population of boot environment <patch20130408> successful.
    Creation of boot environment <patch20130408> successful.
    # zfs list
    NAME                                          USED  AVAIL  REFER  MOUNTPOINT
    rpool                                        16.6G   257G   106K  /rpool
    rpool/ROOT                                   4.47G   257G    31K  legacy
    rpool/ROOT/s10s_u10wos_17b                   4.34G   257G  4.23G  /
    rpool/ROOT/s10s_u10wos_17b@patch20130408     3.12M      -  4.23G  -
    rpool/ROOT/s10s_u10wos_17b/var                113M   257G   112M  /var
    rpool/ROOT/s10s_u10wos_17b/var@patch20130408  864K      -   110M  -
    rpool/ROOT/patch20130408                      134M   257G  4.22G  /.alt.patch20130408
    rpool/ROOT/patch20130408/var                 26.0M   257G   118M  /.alt.patch20130408/var
    rpool/dump                                   1.55G   257G  1.50G  -
    rpool/export                                   63K   257G    32K  /export
    rpool/export/home                              31K   257G    31K  /export/home
    rpool/h                                      2.27G   257G  2.27G  /h
    rpool/security1                              28.4M   257G  28.4M  /security1
    rpool/swap                                   8.25G   257G  8.00G  -
    tank                                         12.9G   261G    31K  /tank
    tank/swap                                    8.25G   261G  8.00G  -
    tank/zones                                   4.69G   261G    36K  /zones
    tank/zones/DB                                1.30G   261G  1.30G  /zones/DB
    tank/zones/DB@patch20130408                  1.75M      -  1.30G  -
    tank/zones/DB-patch20130408                  22.3M   261G  1.30G  /.alt.patch20130408/zones/DB-patch20130408
    tank/zones/APP                               3.34G   261G  3.34G  /zones/APP
    tank/zones/APP@patch20130408                 2.39M      -  3.34G  -
    tank/zones/APP-patch20130408                 27.3M   261G  3.33G  /.alt.patch20130408/zones/APP-patch20130408

    "I replied to this thread: Re: lucreate and non-global zones as to not duplicate content, but for some reason it was locked. So I'll post here..."
    The thread was locked because you were not replying to it; you were hijacking that other person's discussion from 2012 to ask your own new question.
    You have now properly asked your question, and people can pay attention to you and not confuse you with that other person.

  • ZFS and Windows 2003 Diskpart

    I was given some space on a Thumper that has ZFS drives; I connect from a Windows 2003 server using iSCSI. I was running out of space and the admin gave me more space, which ended up with my losing the drive, but it came back (is that normal?). When I went to use diskpart to expand the drive into the additional space, it would not work. Can I not use diskpart to extend the drive size, or do I need to do something additional?
    Thanks for your help.
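    For what it's worth, when the space is a ZFS zvol exported over iSCSI, the usual sequence looks roughly like this (a sketch only; the dataset name, size and volume number are placeholders, and on Windows 2003 diskpart can only extend a basic NTFS data volume, not the system volume). On the Thumper side, the backing zvol is grown first (the iSCSI target may also need to be updated or re-shared depending on the target software):
    # zfs set volsize=200G tank/iscsi/lun0
    Then on the Windows 2003 side, rescan and extend:
    C:\> diskpart
    DISKPART> rescan
    DISKPART> list volume
    DISKPART> select volume 2
    DISKPART> extend
    If the drive still shows the old size after the rescan, disconnecting and reconnecting the iSCSI session may be needed before extend can see the new space.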

    Earl,
    I'm stuck with the 2003 itunes install problem, too. Can you post or email your solution?
    THANKS,
    Glenn

  • Solaris 10 6/06 ZFS and Zones, not quite there yet...

    I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs which drained my excitement:
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    Essentially it says that ZFS should not be used for non-global zone root file systems. I was hoping to do exactly this and make it easy: global zone root on UFS, and another disk all ZFS, where all the non-global whole-root zones would live.
    One can only do so much with only 4 drives that require mirroring! (x4200's, not utilizing an array)
    Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then)...
    Dave
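    For reference, the layout described above (UFS global root, whole-root zones on a mirrored ZFS pool) would be set up roughly like this; the pool, disk and zone names are placeholders, and the upgrade caveat discussed in the reply below still applies:
    # zpool create zonepool mirror c0t2d0 c0t3d0
    # zfs create -o mountpoint=/zones zonepool/zones
    # zonecfg -z app1 "create -b ; set zonepath=/zones/app1 ; commit"
    # mkdir -m 700 /zones/app1
    # zoneadm -z app1 install
    (create -b gives a whole-root zone configuration, i.e. no inherit-pkg-dirs.)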

    "I was all excited to get ZFS working in our environment, but alas, a warning appeared in the docs, which drained my excitement! :
    http://docs.sun.com/app/docs/doc/817-1592/6mhahuous?a=view
    essentially it says that ZFS should not be used for non-global zone root file systems.."
    Yes. If you can live with the warning it gives (you may not be able to upgrade the system), then you can do it. The problem is that the installer packages (which get run during an upgrade) don't currently handle ZFS.
    "Sigh.. Maybe in the next release (I'll assume ZFS will be supported to be 'bootable' by then..."
    Certainly one of the items needed for bootable ZFS is awareness in the installer. So yes, it should be fixed by the time full support for ZFS root filesystems is released. However, last I heard, full ZFS root support was being targeted for update 4, not update 3.
    Darren

  • IO issues on Red Hat Linux SAN storage and veritas volume manager

    Oracle 11.0.1.7:
    We recently moved from a Solaris SPARC box to a Linux box, and since then we have been seeing IO issues; it's quite obvious in iostat that the IO latency goes as high as 100 ms. So I am wondering if someone can advise whether there are any mount options we need to give, or whether we need to use the vxodmfs file system. Below are the details of this environment:
    1. It's using veritas volume manager. (Same as solaris)
    2. It's using vxfs file system (Same as solaris)
    3. It's using veritas ODM libraries that helps us in IO. (Same as solaris)
    4. SAN EMC storage (Same as solaris)
    5. Mount options /dev/vx/dsk/orapdata1dg/oradata1vol1 on /u01/oradata type vxfs (rw,delaylog,largefiles,ioerror=mwdisable)
    Edited by: user628400 on Aug 7, 2009 5:48 PM
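    For comparison, when the ODM library is not actually in use, VxFS file systems holding datafiles are commonly mounted with direct-I/O style options as well; a sketch using the device and mount point from the listing above (verify the option names against the Storage Foundation documentation for your release):
    # mount -t vxfs -o delaylog,largefiles,mincache=direct,convosync=direct /dev/vx/dsk/orapdata1dg/oradata1vol1 /u01/oradata
    With ODM active these extra options are generally unnecessary, so it is worth first confirming in the alert log that the instance really is running with ODM before changing mount options.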

    Not sure if this really applies here but, from Metalink note 359515.1, the mount options on NAS devices should be as follows:
    Linux x86
      Binaries:         rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
      Oracle datafiles: rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
    Linux x86-64
      Binaries:         rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
      Oracle datafiles: rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600
    Give it a try and let us know....
    Edited by: JRodriguez on Aug 10, 2009 6:35 PM

  • Zfs and encryption

    We are looking for a filesystem-level encryption technology. At this point most of our services are on ZFS. At one time I saw encryption on the roadmap of ZFS features. Where does this stand now?
    Are there test-bed versions of OpenSolaris where we can test this?
    Is it known whether and when ZFS encryption will be in Solaris 10 or beyond?
    Thanks.

    I don't believe that the feature is ready yet, but you may find some more information about the project here: [http://hub.opensolaris.org/bin/view/Project+zfs-crypto/]
    You would probably also be better off asking for a status update on the forum/mailing list for the project: [http://opensolaris.org/jive/forum.jspa?forumID=105]
    Edited by: Tenzer on May 11, 2010 9:31 AM
