ZFS file system mount in Solaris 11

Create a ZFS file system for the package repository in the root pool:
# zfs create rpool/export/repoSolaris11
# zfs list
The atime property controls whether the access time for files is updated when the files are read.
Turning this property off avoids producing write traffic when reading files.
# zfs set atime=off rpool/export/repoSolaris11
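You can confirm the property took effect with zfs get (a hedged check):
# zfs get atime rpool/export/repoSolaris11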
Create the required pkg repository infrastructure so that you can copy the repository:
# pkgrepo create /export/repoSolaris11
# cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > \
sol-11-1111-repo-full.iso
# mount -F hsfs /export/repoSolaris11/sol-11-1111-repo-full.iso /mnt
# ls /mnt
# df -k /mnt
Using the tar command as shown in the following example can be a faster way to move the
repository from the mounted file system to the repository ZFS file system.
# cd /mnt/repo; tar cf - . | (cd /export/repoSolaris11; tar xfp -)
# cd /export/repoSolaris11
# ls /export/repoSolaris11
   pkg5.repository README
   publisher sol-11-1111-repo-full.iso
# df -k /export/repoSolaris11
# umount /mnt
# pkgrepo -s /export/repoSolaris11 refresh
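After the refresh, the repository would typically be set as the origin for the solaris publisher. A hedged sketch (the exact path must match where the repository files actually live on your system):
# pkg set-publisher -G '*' -g file:///export/repoSolaris11/ solaris
# pkg publisher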
=============================================
# zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
==============================================
I am trying to reconfigure the package repository using the steps above. When I reached the step below:
# zfs create -o mountpoint=/export/repoSolaris11 rpool/repoSolaris11
it created the mount point but did not mount the file system, giving the error message:
cannot mount: directory not empty
When I restarted the box, it dropped to the service administration screen with an error that it was not able to mount all mount points. Please advise, and thanks in advance.

Hi.
Don't confuse the contents of the directory that serves as the mount point with what you see after the file system is mounted. Once the ZFS file system is mounted there, you are looking at the contents of that file system, not of the underlying directory. The same is true for other ZFS file systems: their mount-point directories are also empty underneath. To check, you can unmount any other ZFS file system and see that its mount-point directory is empty.
Regards.
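A minimal way to see this, following the suggestion above (dataset names are examples; adjust to your pool):
# zfs unmount rpool/export/repoSolaris11     # unmount the dataset
# ls /export/repoSolaris11                   # anything still listed here lives in the underlying directory, not in the dataset
# zfs mount rpool/export/repoSolaris11       # remount; ZFS refuses to mount over a non-empty directory
A new dataset such as rpool/repoSolaris11 will only mount at /export/repoSolaris11 once that path is an empty directory, i.e. once nothing else is mounted or stored there.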

Similar Messages

  • How do I protect my root file system? - x86 Solaris 10 - ZFS data pools

    Hello all:
    I'm new to ZFS and am trying to understand it better before I start building a new file server. I'm looking for a low-cost file server for smaller projects I support and would like to use the ZFS capabilities. If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root file system? The files in the ZFS file system are well protected, but what about my operating system files down in the root UFS file system? If the root file system gets corrupted, do I lose the ZFS file system too? Or can I independently rebuild the root file system and just remount the ZFS file system? Should I install Solaris 10 on a mirrored set of drives? Can the root file system be ZFS too? I'd like to be able to use a fairly simple PC to do this, perhaps one that doesn't have built-in RAID. I'm not looking for 10 terabytes of storage, maybe just four 500 GB SATA disks connected into a raidz zpool.
    thanks,

    patrickez wrote:
    "If I install Solaris 10 on an x86 platform and add a bunch of drives to it to create a zpool (raidz), how do I protect my root filesystem?"
    Solaris 10 doesn't yet support ZFS for a root filesystem, but it is working in some OpenSolaris distributions. You could use Sun Volume Manager to create a mirror for your root filesystem.
    "The files in the ZFS file system are well protected, but what about my operating system files down in the root UFS filesystem? If the root filesystem gets corrupted, do I lose the ZFS filesystem too?"
    No. They're separate filesystems.
    "Or can I independently rebuild the root filesystem and just remount the ZFS filesystem?"
    Yes. (Actually, you can import the ZFS pool you created.)
    "Should I install Solaris 10 on a mirrored set of drives?"
    If you have one, that would work as well.
    "Can the root filesystem be ZFS too?"
    Not currently in Solaris 10. The initial root support in OpenSolaris will require the root pool be only a single disk or mirrors. No striping, no raidz.
    Darren
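    A hedged sketch of what "import the ZFS pool" looks like after the OS is rebuilt (the pool name tank is an example):
    # zpool import              # scan attached disks and list importable pools
    # zpool import tank         # import the raidz pool by name; its file systems mount automatically
    # zfs list -r tank          # confirm the datasets and their mount points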

  • New zone and inherited file system mount point error

    Hi - would anyone be able to help with the following error please. I've tried to create a new zone that has the following inherited file system:
    inherit-pkg-dir:
    dir: /usr/local/var/lib/sudo
    But when I try to install it fails with:
    root@tdukunxtest03:~ 532$ zoneadm -z tdukwbprepz01 install
    A ZFS file system has been created for this zone.
    Preparing to install zone <tdukwbprepz01>.
    ERROR: cannot create zone <tdukwbprepz01> inherited file system mount point </export/zones/tdukwbprepz01/root/usr/local/var/lib>
    ERROR: cannot setup zone <tdukwbprepz01> inherited and configured file systems
    ERROR: cannot setup zone <tdukwbprepz01> file systems inherited and configured from the global zone
    ERROR: cannot create zone boot environment <tdukwbprepz01>
    I added this because, unknown to me, when I installed sudo from sunfreeware in the global zone it requires access to /usr/local/var/lib/sudo - sudo itself installs in /usr/local. And when I try to run any sudo command in the new zone it gives this:
    sudo ls
    Password:
    sudo: Can't open /usr/local/var/lib/sudo/tdgrunj/8: Read-only file system
    Thanks - Julian.

    Think I've just found the answer to my problem: I'd already inherited /usr, and as sudo from sunfreeware installs in /usr/local I guess this is never going to work. I can only think to try the sudo version from the Solaris Companion DVD, or whatever it's called.
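    For reference, a hedged sketch of dropping the conflicting entry before reinstalling the zone (zone name taken from the thread; verify against your own configuration first):
    # zonecfg -z tdukwbprepz01 info inherit-pkg-dir
    # zonecfg -z tdukwbprepz01
    zonecfg:tdukwbprepz01> remove inherit-pkg-dir dir=/usr/local/var/lib/sudo
    zonecfg:tdukwbprepz01> commit
    zonecfg:tdukwbprepz01> exit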

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to lay out the ZFS file systems to deal with files that are constantly changing during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRadius server with log files that are constantly updating and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
    rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
    There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
    </zone>
    The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
    Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
    What I would really like is to not have to worry about any of this and just install the application software wherever the software supplier sets its defaults. It would be great if this system somehow magically knew to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
    --Jason

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
    Then the solution is:
    01.- Create an alternate boot enviroment
    a.- In a new rpool
    b.- In the same rpool
    02.- Upgrade this new enviroment
    03.- Then I've seen that you have the "radious-zone" in a sparse zone (it's that right??) so, when you update the alternate boot enviroment you will (at the same time) upgrading the "radious-zone".
    This maybe sound easy but you should be carefull, please try this in a development enviroment
    Good luck
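    A hedged sketch of those steps using the Live Upgrade commands (boot environment names, pool name and media path are examples):
    # lucreate -n newBE                                 # 01b: alternate BE in the same rpool
    # lucreate -n newBE -p rpool2                       # 01a: or place it in a different root pool instead
    # luupgrade -u -n newBE -s /cdrom/sol_10_upgrade    # 02: upgrade the inactive BE from install media
    # luactivate newBE                                  # activate the upgraded BE
    # init 6                                            # reboot into it (use init/shutdown, not reboot)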

  • Root disk crashed: how to retrieve the data from ZFS file systems

    Hi Friends,
    The Solaris 10 OS (root disk) has crashed. I had configured all disks except the root disk for ZFS file systems, which we use for an application. Is it now possible to retrieve the data?
    Server model - V880
    Please help me.
    Thanks in advance.

    If the OS wasn't on ZFS, then just rebuild the server, hook up the drives and run 'zpool import'. It should find the pool on the disks and offer it up to be imported.
    Darren
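    A hedged sketch of that recovery (the pool name is an example; -f may be needed because the pool was last in use on the crashed host):
    # zpool import                 # list pools found on the attached disks
    # zpool import -f datapool     # force-import a pool that still appears active to ZFS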

  • ZFS file system options

    Dear Friends,
    Which is the command to create a ZFS file system similar to the following command?
    newfs -i 200000 -c 256 -C 8 -m 1 /dev/rdsk/c1t1d0s0
    Thanks,
    Sal.

    Dear Darren & Friends,
    I am trying to install Lotus Domino on a T5240 server, where the mail files will be stored on a ZFS file system. I would like to tune the ZFS file system for Domino before going to production. Unfortunately Sun doesn't have any documentation on ZFS file system tuning for Domino, but there is plenty for UFS. In the UFS tuning, Sun suggests the following:
    "Modern disk subsystems can perform larger data transfers than their predecessors, and Solaris can take
    advantage of this to read more data than Domino asks for in hopes of having the next piece of a file already in
    memory as soon as Domino asks for it. Unfortunately, Domino doesn't always use the "anticipated" data but
    instead next asks for data from an entirely different region of the file, so the effort spent reading the extra data
    may be wasted. In extreme circumstances and with modern disk systems Solaris can be fooled into reading
    fifty or sixty times as much data as is actually needed, wasting more than 98% of the I/O effort.
    To prevent this pathological behavior, build or tune the file systems that hold NSF databases so that Solaris
    won't try to read more than about 64KB at a time from them. The instructions that follow show how to do this for
    the default Unix File System (UFS); if you are using an alternative file system, consult its documentation.
    If you are in a position to build or rebuild the file systems, we suggest using the command
    newfs -i 200000 -c 256 -C 8 -m 1 /dev/rdsk/...
    The important option is -C 8, which limits the amount of read-ahead to no more than eight pages of 8KB each.
    The other options are less important, and may even cause newfs to issue warning messages for some disk
    sizes; these warnings can be ignored since newfs will adjust the values to suit the actual disk."
    Can I build a ZFS file system with at least an option similar to -C 8?
    Thanks,
    Sal.
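    For reference, a hedged sketch of the closest ZFS knob (not from this thread): ZFS has no newfs-style flags, but the per-dataset recordsize property caps the block size ZFS reads and writes, which serves a similar purpose to limiting read-ahead (pool and dataset names are examples):
    # zfs create -o recordsize=64K rpool/domino      # cap ZFS block size at 64 KB for the Domino data
    # zfs get recordsize rpool/domino                # verify the setting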

  • ASM vs ext3 File system(mount point)

    Please suggest which one is better for small databases.
    ASM or ext3 File system(mount point)?
    Any metalink note.

    ASM is better if you do not want to play with I/O tuning (if you tune the ext3 file system it would be about the same from a performance point of view),
    but administering database files in ASM is more complicated than in an ordinary file system.
    Oracle recommends using ASM for the database file system.
    I would think that if you have a development database and need a lot of cloning and moving of data files, it is better to use an ordinary file system,
    so you can use OS copy commands; it is not so complicated.
    If you need striping, mirroring, or snapshotting with ext3, you can use LVM on Unix/Linux.
    I am not sure, but I think striping and mirroring are better in ASM than in LVM, because ASM does them better for database I/O.
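    A hedged sketch of the LVM snapshot idea mentioned above (volume group and volume names are examples):
    # lvcreate -L 20G -n dblv vg01                     # logical volume for the ext3 data file system
    # lvcreate -s -L 5G -n dblv_snap /dev/vg01/dblv    # snapshot of it, e.g. for cloning or backup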

  • ZFS file system space issue

    Hi All,
    Kindly help me resolve a ZFS file system space issue. I have deleted nearly 20 GB, but the file system usage remains the same. Kindly advise.
    Thanks,
    Kumar

    The three reasons that I'm aware of that cause deleting files not to return space are: 1) the file is linked to multiple names; 2) the files are still open by a process; and 3) the files are backed up by a snapshot. While it is possible you deleted 20 GB in multiply-linked and/or open files, I'd guess snapshots to be the most likely case.
    For multiple "hard" links, you can see the link count (before you delete the file) in the "ls -l" command (the second field). If it is greater than one, deleting the file won't free up space. You have to delete all file names linked to the file to free up the space.
    For open files, you can use the pfiles command to see what files a process has open. The file space won't be recovered until all processes with a file open close it. Killing the process will do the job. If you like to use a big hammer, a reboot kills everything.
    For snapshots: Use the "zfs list -t snapshot" command and look at the snapshots. The space used by a snapshot indicates how much space is held by that snapshot. Deleting the snapshot will free up space unless the space is still being held by another snapshot. To free space held by a file, you have to delete all snapshots which contain that file.
    Hopefully, I got all of this right.
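    A hedged sketch of the snapshot check (dataset and snapshot names are examples):
    # zfs list -t snapshot                      # list snapshots and how much space each one holds
    # zfs destroy datapool/fs@20120101          # destroying a snapshot releases the space it pins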

  • OC 11gR1 Update 3: doesn't show ZFS file systems created on brownfield zone

    Subject is a pretty concise description here. I have several brownfield Solaris 10U10 containers running on M5000s, and I have delegated three zpools to each container for use by Oracle. Below is the relevant output from zonecfg export for one of these containers. They were all built in the same manner, then placed under management by OC. (Wish I'd been able to build them as greenfield with Ops Center, but there just wasn't enough time to learn how to configure Ops Center the way I needed to use it.)
    set name=Oracle-DB-Instance
    set type=string
    set value="Oracle e-Business Suite PREPROD"
    end
    add dataset
    set name=PREPRODredoPOOL
    end
    add dataset
    set name=PREPRODarchPOOL
    end
    add dataset
    set name=PREPRODdataPOOL
    end
    The problem is, none of the file systems built on these delegated pools in the container appear in the Ops Center File System Utilization charts. Does anyone have a suggestion for how to get OC to monitor the file systems in the zone?
    Here's the output from zfs list within the zone described by the zonecfg output above:
    [root@acdpreprod ~]# zfs list
    NAME                 USED  AVAIL  REFER  MOUNTPOINT
    PREPRODarchPOOL      8.91G  49.7G    31K  none
    PREPRODarchPOOL/d05  8.91G  49.7G  8.91G  /d05
    PREPRODdataPOOL       807G   364G    31K  none
    PREPRODdataPOOL/d02  13.4G  36.6G  13.4G  /d02
    PREPRODdataPOOL/d03   782G   364G   782G  /d03
    PREPRODdataPOOL/d06  11.4G  88.6G  11.4G  /d06
    PREPRODredoPOOL      7.82G  3.93G    31K  none
    PREPRODredoPOOL/d04  7.82G  3.93G  7.82G  /d04
    None of the file systems in the delegated datasets appear in Ops Center for this zone. Are there any suggestions for how I correct this?

    Do you mean adopt the zone? That requires the zone be halted and it also says something about copying all file systems to the pool created for the zone. Of the 12 zones I have (four on each of three M5000s), seven of them are already in "production" status, and four of those seven now support 7x24 world-wide operations. A do-over is not an option here.

  • ECC 6.0 file system setup on Solaris

    Hi,
    I want to install ECC 6.0 on a Solaris system.
    Can anyone provide the file system setup needed before starting the installation, e.g. /usr/sap and similar file systems?
    It would be helpful for our installation.
    Regards,
    Venkat

    Hi,
    I think you are not clear on this and have not read the upgrade guide. Please read the upgrade guide; many other things are required for the installation. It all depends on your company's requirements, how much the data will grow per month, etc.
    Regards,
    Anil

  • UFS file system mount options

    I'm doing some performance tuning on a database server. In mounting a particular UFS file system, I need to enable the "forcedirectio" option. However, the "logging" option is already specified. Is there any problem mounting this file system with BOTH "logging" and "forcedirectio" at the same time? I can do it and the system boots just fine but I'm not sure if it's a good idea or not. Anybody know?

    Direct IO bypasses the page cache. Hence the name "direct".
    Thus, for large-block streaming operations that do not access the same data more than once, direct IO will improve performance while reducing memory usage - often significantly.
    IO operations that access data that could otherwise be cached can go MUCH slower with direct IO, especially small ones.
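    For reference, a hedged example of combining both options (device and mount point are examples; forcedirectio for the data files, logging for faster crash recovery):
    # mount -F ufs -o logging,forcedirectio /dev/dsk/c1t1d0s6 /oradata
    or the equivalent /etc/vfstab entry:
    /dev/dsk/c1t1d0s6  /dev/rdsk/c1t1d0s6  /oradata  ufs  2  yes  logging,forcedirectio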

  • File system mount problem

    After starting my server I found that some of the partitions (where Oracle data files are located) were not mounted. What should I do to mount those file systems properly?

    Hi,
    Are you using Linux? You need to add entries for these file systems to /etc/fstab. Suppose you have a hard disk partition /dev/sdc1 which is mounted on the /u01 file system; add the following line at the bottom of your /etc/fstab file (assuming you are using the ext3 file system):
    /dev/sdc1    /u01    ext3    defaults    1 2
    This will auto-mount your file system on every startup.
    Salman
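    As a hedged follow-up, after editing /etc/fstab you can mount the new entries without rebooting:
    # mount -a          # mounts everything listed in /etc/fstab that is not already mounted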

  • How to stop Finder WEBDAVFS from requesting .hidden, ._Directory, ._FileName files after a remote file system mount has happened?

    Hi,
    After I have mounted a remote directory in Finder, Finder requests all file types, including .hidden, ._Directory and ._FileName files, which is creating havoc with the Finder's performance.
    Case 1: In one directory I have 500 files and directories. When the Finder tries to fetch the directory content it sends requests even for the .hidden, ._Directory and ._FileName files, so the request count grows exponentially, causing a breakdown for Finder. It's taking 10 minutes to load the directory. The same directory request, when posted by Transmit by Panic, loads within 30 seconds. Transmit does not send requests for ._* files/directories.
    I am rejecting requests for all hidden files, but they keep coming in the thousands.
    Case 2: Finder tries to refresh the file listing whenever the Finder window is brought to focus.
    Any help is appreciated. When I try to use find to locate such files for deletion, I see no listing!
    Used : ls -1aR /Volumes/InquiraWebDAV
    To demonstrate the problem, please check this example log:
    Response XML1:
    </D:multistatus>
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/.DS_Store
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/.DS_Store
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 3
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 3
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 2
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder 2
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/._untitled folder
    Client IP =17.1.1.1 clientHeaderAgent=WEBDAVFS/1.9.0 (01908000) DARWIN/11.3.0 (X86_64)
    DSID from cache =281950648 and path=/CUSTOMER_RELATIONS_VIEW/untitled folder/
    here4
    Mar 7, 2012 12:51:17 PM org.apache.catalina.core.ApplicationContext log
    INFO: webdav: [PROPFIND] /CUSTOMER_RELATIONS_VIEW/untitled folder/
    inside doProfind() path=/CUSTOMER_RELATIONS_VIEW/untitled folder/
    req.getHeader('Depth')=1
    Request XMl:<?xml version="1.0" encoding="utf-8" standalone="no"?><D:propfind xmlns:D="DAV:">
    <D:prop>
    <D:getlastmodified/>
    <D:getcontentlength/>
    <D:creationdate/>
    <D:resourcetype/>
    </D:prop>
    </D:propfind>
    Element Node=#text |
    Element Node=D:prop | null
    Element Node=#text |
    {281950648=[AOS_VIEW, APPLECARE_ALLGEOS, CUSTOMER_RELATIONS_VIEW, EXECUTIVE_RELATIONS_VIEW]}281950648
    temp=/AOS_VIEW/  ###   temp1=/CUSTOMER_RELATIONS_VIEW/UNTITLED FOLDER/  isAllowed=false
    temp=/APPLECARE_ALLGEOS/  ###   temp1=/CUSTOMER_RELATIONS_VIEW/UNTITLED FOLDER/  isAllowed=false
    temp=/CUSTOMER_RELATIONS_VIEW/  ###   temp1=/CUSTOMER_RELATIONS_VIEW/UNTITLED FOLDER/  isAllowed=true
    href/InquiraWebDAV
    rewriteUrl(href)=/InquiraWebDAV/CUSTOMER_RELATIONS_VIEW/untitled%20folder/
    resourceName=untitled folder  type=0
    properties=java.util.Vector$1@2aa05bc3
    property=getlastmodified
    property=getcontentlength
    property=creationdate
    property=resourcetype
    newPath=/CUSTOMER_RELATIONS_VIEW/untitled folder/untitled folder
    newPath=/CUSTOMER_RELATIONS_VIEW/untitled folder/untitled folder 2
    newPath=/CUSTOMER_RELATIONS_VIEW/untitled folder/untitled folder 3
    Response XML1:
    <?xml version="1.0" encoding="utf-8" ?>
    <D:multistatus xmlns:D="DAV:"><D:response><D:href>/InquiraWebDAV/CUSTOMER_RELATIONS_VIEW/untitled%20folder/</D:href>
    <D:propstat><D:prop><D:creationdate>2012-03-06T22:56:52Z</D:creationdate>
    <D:resourcetype><D:collection/></D:resourcetype>
    </D:prop>
    <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
    <D:propstat><D:prop><D:getlastmodified/><D:getcontentlength/></D:prop>
    <D:status>HTTP/1.1 404 Not Found</D:status>
    </D:propstat>
    </D:response>

    Here is a related discussion but no solution.
    https://discussions.apple.com/message/8216700#8216700
    Does anyone know if Apple Support has a solution for this problem, or how do I get expert help from them?

  • ZFS File System - Need to add space

    Dear All,
    Please help me in the below case.
    I have the df -h output as below.
    Filesystem                  size  used  avail capacity  Mounted on
    rpool/export/home/chaitsri  134G   35K   126G       1%  /export/home/chaitsri
    datapool                    134G   32K   134G       1%  /datapool
    datapool/test1               20G   31K    20G       1%  /datapool/test1
    The request is to add the space of /datapool/test1 to /export/home/chaitsri. Please let me know how we can achieve this.

    It's not clear from your post whether you want to move just /export/home/chaitsri or all of /export/home. The procedure will be similar regardless. I'll assume you want to do the latter. This is untested so please don't blindly copy/paste until you've thought about what you want to achieve.
    1) Make sure no non-root users are logged on to the system
    2) Create the appropriate dataset (filesystem) on datapool, eg:
    # zfs create -p datapool/export/home
    3) For any local users, create the appropriate datasets on datapool and copy the data from rpool/export/home/<username> to datapool/export/home/<username>. Any automounted home directories don't need this because the data will be on the home servers.
    4) Change the mountpoint property for rpool/export/home (and any local users)
    # zfs set mountpoint=/origexport/home rpool/export/home
    5) Change the mountpoint property for datapool/export/home
    # zfs set mountpoint=/export/home datapool/export/home
    6) Once you're happy that things work, then you can delete the rpool/export/home dataset(s).
    HTH
    Steve
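    For step 3, a hedged sketch for one local user (names are examples; the copy happens before the mount points are swapped in steps 4 and 5):
    # zfs create datapool/export/home/chaitsri
    # cd /export/home/chaitsri; tar cf - . | (cd /datapool/export/home/chaitsri; tar xfp -)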

  • /globaldevices on different file system mounts

    IHAC that has a SC3.2 on a pair of V890's Solaris9-U5 and Veritas Foundation Suite 5 for boot disks.
    I have noticed that the paths to /globaldevices are different:
    /dev/vx/dsk/bootdg/rootdg_16vol 487863 5037 434040 2% /global/.devices/node@1
    /dev/vx/dsk/bootdg/node@2 487863 5035 434042 2% /global/.devices/node@2
    Can I just go by the first name and rename rootdg_16vol to node@1 under /dev/vx/dsk/bootdg and /dev/vx/rdsk/bootdg, or is there a different method?

    Yes, you can just rename it as a normal Veritas volume using vxedit. Make sure you modify the /etc/vfstab file for node 1.
    1. umount /global/.devices/node@1
    2. rename the volume
    3. modify vfstab
    4. mount /global/.devices/node@1
    If possible, do a test reboot to verify.
    Actually, it does not make any difference if you leave it that way.
    -LP
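    A hedged sketch of the rename (names taken from the thread; check the disk group with vxprint -g bootdg first):
    # umount /global/.devices/node@1
    # vxedit -g bootdg rename rootdg_16vol node@1     # rename the volume within the boot disk group
    # vi /etc/vfstab                                  # update the /dev/vx/dsk/bootdg/... entry for node@1
    # mount /global/.devices/node@1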
