Making backups of zones created in Zeta File System (ZFS)

Hello,
We have several zones installed in /export/zones. /export/zones is using ZFS as its file system.
We would like to make backups (preferably to tape), but get the following messages when we try:
tcas:root> ufsdump 0uf /dev/rmt/0cn /export/zones/idmtest
DUMP: `/export/zones/idmtest' is not on a locally mounted filesystem
DUMP: The ENTIRE dump is aborted.
What I am thinking of is to somehow take a snapshot of the zone, save it to /export/home (which is UFS), and then run ufsdump on that.
Is this a viable solution, or are there better ways of backing up? I'm sure other people have the same type of setup.
Best!
Frank

> I can't believe that Sun would put a file system on public release without having a viable tape backup strategy.
The choice was to delay, or to release it without one. 'zfs send' is a low-level mechanism that could be part of a real backup system (rather like 'tar' forms part of NetBackup).
> It sounds like, though stable, ZFS is far from being ready for production.
Production means different things to different people. There were some commercial sites for whom the features were so compelling that they were using ZFS in production before it was part of any Solaris release.
Obviously you'll want to (try to) understand all the drawbacks before you use any software or system.
> I will log a TAR with Sun and see what the response is. I will post it here if I get one that works for us.
One what?
Darren
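For reference, a hedged sketch of the snapshot-and-send approach mentioned above. The dataset name export/zones/idmtest, the staging file name, and the dd block size are illustrative assumptions; check the real dataset name with 'zfs list' first.
zfs snapshot export/zones/idmtest@backup1
# stage the stream on the UFS file system, then dump the staging area to tape as usual
zfs send export/zones/idmtest@backup1 > /export/home/idmtest-backup1.zfs
ufsdump 0uf /dev/rmt/0cn /export/home
# ...or skip the staging copy and stream straight to tape
zfs send export/zones/idmtest@backup1 | dd of=/dev/rmt/0cn obs=512k
# restore later by feeding the stream back into zfs receive
dd if=/dev/rmt/0cn ibs=512k | zfs receive export/zones/idmtest_restored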

Similar Messages

  • Problem in create index on file system repository

    I want to create an index on a file system repository, but have run into a problem.
    1) I created a Windows file system repository.
    2) I can see the repository entry in KM Content.
    3) I went into "Index Administration" and created an index, but I cannot find the repository entry when I press the "Add" button on the Data Sources page.

    hello Karsten,
    thanks for your consideration.
    After creating the Windows file system repository, I created the index with the following steps:
    1) Go to System Administration => Knowledge Management => Index Administration and click the Create button to create an index; a form appears.
    2) Fill in the form with the specified values, then press the Create Index button to create the index.
    3) Click the Data Source link; in the Data Source tab, click the Add button. I can then see all of the folders on the screen (such as documents, calendar, etc.), but I cannot find the Windows file system entry that I just created.
    So I cannot create an index for the Windows file system repository.
    What is wrong?
    thanks

  • How to create a network file system on a small Ethernet LAN?

    Hi there,
    Have been wondering about creating a network file system on a small Ethernet-based LAN. I don't know anything about how you do this, but figure you need more than a normal external hard drive. Many thanks for any responses.

    Hi there,
    As far as I understand, you would like to have a central location that all your computers can access and store files on?
    If this is the case, and you want to use your LAN router, there are many products that do such a thing. These devices are known as Network Attached Storage (NAS) devices. They plug into your router like your computers do, and then allow your computers to access the hard drives inside them.
    The following link shows some NAS units on the tech shopping site Newegg.
    http://www.newegg.com/Store/SubCategory.aspx?SubCategory=124&name=Network-Storage-NAS
    Hope that helps, and let me know if you have any more questions!
    Elliott

  • The XML Forms-based document can't be created in the File System Repository

    Hi all
    A document created with the SAP demo news form displays correctly in a folder based on the CM repository, but can't be displayed in a File System Repository.
    What is the reason?
    Edited by: joecui on Apr 9, 2010 7:11 AM

    Hi,
    You should not use the /etc repository for creating and storing XML forms or any other content! This repository is used for configuration purposes and should not be visible to end users; otherwise there would be a very strong possibility that your CM configuration could be modified, deleted, etc.
    If you want you should create your own FS repository pointing to a separate FS location to which your end users have access.
    Regards,
    Lorcan.

  • ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

    Hi,
    I am getting the following error while taking an RMAN backup:
    ORA-01580: error creating control backup file /backup/snapcf_TST.f
    ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
    DB Version:10.2.0.4.0
    While backing up the datafiles I am not getting any errors; I am using the same mount points for both (data & control files).

    [oracle@localhost dbs]$ oerr ora 27054
    27054, 00000, "NFS file system where the file is created or resides is not mounted with correct options"
    // *Cause:  The file was on an NFS partition and either reading the mount tab
    //          file failed or the partition was not mounted with the correct
    //          mount option.
    // *Action: Make sure mount tab file has read access for Oracle user and
    //          the NFS partition where the file resides is mounted correctly.
    //          For the list of mount options to use refer to your platform
    //          specific documentation.
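    One commonly cited set of NFS mount options for Oracle over NFS on Linux is sketched below; the NAS host name and export path are made up, and the authoritative option list is in the platform-specific documentation the error message points to.
    # remount the backup location with Oracle-friendly NFS options (illustrative values)
    umount /backup
    mount -t nfs -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 nas01:/export/backup /backup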

  • SAP/ORACLE File Systems Disappeared in Local Zone

    I created some 20-30 file systems as metadevices for a SAP/Oracle installation on my 6320 SAN for a local zone.
    I did a newfs on all of them and then mounted them in the global zone with their appropriate mount points.
    I then did a zonecfg with the following input.
    zonepath: /export/zones/zsap21
    autoboot: true
    pool:
    inherit-pkg-dir:
    dir: /lib
    inherit-pkg-dir:
    dir: /platform
    inherit-pkg-dir:
    dir: /sbin
    inherit-pkg-dir:
    dir: /opt/sfw
    inherit-pkg-dir:
    dir: /usr
    fs:
    dir: /oracle
    special: /oracle_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/stage/920_64
    special: /oracle/stage/920_64_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /temp
    special: /temp_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/local
    special: /usr/local_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap
    special: /usr/sap_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/install
    special: /usr/sap/install_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /usr/sap/trans
    special: /usr/sap/trans_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /export/home/zsap21
    special: /export/home_zsap21
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1
    special: /oracle/FS1
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/920_64
    special: /oracle/FS1/920_64
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/mirrlogA
    special: /oracle/FS1/mirrlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/oraarch
    special: /oracle/FS1/oraarch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogA
    special: /oracle/FS1/origlogA
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/origlogB
    special: /oracle/FS1/origlogB
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/saparch
    special: /oracle/FS1/saparch
    raw not specified
    type: lofs
    options: []
    fs:
    dir: /oracle/FS1/sapdata1
    special: /oracle/FS1/sapdata1
    raw not specified
    type: lofs
    options: []
    ***********more available but I truncated it here**********************
    I successfully installed and configured the zone.
    I then logged into the local zone and installed a new version of Oracle and a working SAP instance. I know that it worked because I was able to log into it.
    Upon rebooting the server and the local zone I have lost access to all of my file systems in the Oracle/SAP zone.
    It's almost as if the global zone has not mounted its file systems over the local zone's mount points. In the local zone I can see no files in the directories where I would expect Oracle or SAP files.
    I either mis-configured the zonecfg or missed a step somewhere.
    I suspect that my file system contents are still around somewhere waiting to be hooked up with a zone's mount point.
    In the local zone a df -k shows all of the file systems (like the local zone knows about what file systems belong) but they all have the same size and free space (probably from the root zone).
    Any thoughts appreciated.
    Atis

    Do you have a mount point within the zone path for these oradb mounts?
    You have to add a directory entry for /oradb in the zone's root directory, i.e. <zonepath>/root/oradb
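    As a hedged illustration of that suggestion, using the zonepath from the post (treat /oradb as an example mount and substitute the real directories):
    # create the mount point under the zone's root, then reboot the zone
    mkdir -p /export/zones/zsap21/root/oradb
    zonecfg -z zsap21 info fs        # confirm the lofs entries are still configured
    zoneadm -z zsap21 reboot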

  • Years of data gone, backup DMG won't mount "No mountable file systems"

    Could someone help me here, I have this problem.
    The reason for this was that I ran into some complications with my Boot Camp Windows install and wanted to start over. After returning the Boot Camp FAT partition space to Mac OS X, it froze while attempting to create a new Boot Camp partition. "Boot Camp Assistant" suggested I back up my "Macintosh HD" and re-partition my HD because "some files could not be moved." So I backed up my "Macintosh HD" volume to a DMG using Disk Utility, selecting the volume and clicking the New Image icon; it is compressed, and it is Leopard. I then booted with the Leopard install DVD and began the restore process. Everything was working smoothly until the progress bar stopped animating; I waited a while and then forced a restart. When I restarted, my internal HD had only an "Applications" folder, which ended its contents with a half-copied iDVD.app preceded by 73 other apps successfully restored. Now, worst of all, my backup DMG produces the "No mountable file systems" error. Help, oh please help!
    hdiutil imageinfo produces this:
    Format: UDRW
    Backing Store Information:
    Name: Macintosh HD.dmg
    URL: file://localhost/Volumes/Macintosh%20HD/Macintosh%20HD.dmg
    Class Name: CBSDBackingStore
    Format Description: raw read/write
    Checksum Type: none
    partitions:
    appendable: false
    partition-scheme: none
    block-size: 512
    burnable: false
    partitions:
    0:
    partition-length: 92073985
    partition-synthesized: true
    partition-hint: unknown partition
    partition-name: whole disk
    partition-start: 0
    Properties:
    Partitioned: false
    Software License Agreement: false
    Compressed: no
    Kernel Compatible: true
    Encrypted: false
    Checksummed: false
    Checksum Value:
    Size Information:
    Total Bytes: 47141880320
    Compressed Bytes: 47141880320
    Total Non-Empty Bytes: 47141880320
    Sector Count: 92073985
    Total Empty Bytes: 0
    Compressed Ratio: 1
    Class Name: CRawDiskImage
    Segments:
    0: /Volumes/Macintosh HD/Macintosh HD.dmg
    Resize limits (per hdiutil resize -limits):
    92073985 92073985 92073985
    Message was edited by: Marcus S

    Ok, here is the output. It shows there is a problem with the resource fork XML, just like yours, Shad Guy. If only there were some specs available I could try to fix it myself. An Apple engineer could most likely fix this in 20 minutes.
    Also, thanks for your help man; having people tell me to give up is the worst.
    hdiutil udifxmldet
    "Macintosh HD.dmg" has 1365301792551680312 bytes of embedded XML data.
    hdiutil: udifxmldet: unable to read XML data at offset 2841561349885781055 from "Macintosh HD.dmg": 29 (Illegal seek).
    hdiutil: udifxmldet failed - Illegal seek
    hdiutil udifderez
    hdiutil: udifderez: could not get resource fork of "Macintosh HD.dmg": Function not implemented (78)
    hdiutil: udifderez failed - Function not implemented
    ------------------------

  • Create EF DF file

    I am just beginning with Java Card. I want to know how to create DF and EF files with a Java Card applet, and how to get write and read access to them later.

    Hi,
    There is no built-in functionality available on Java Cards for creating files (although for SIM cards there is). Standards are available, and using these you can create your own file system.

  • Best practices for ZFS file systems when using live upgrade?

    I would like feedback on how to lay out the ZFS file system to deal with files that are constantly changing during the Live Upgrade process. For the rest of this post, let's assume I am building a very active FreeRadius server with log files that are constantly updating and must be preserved in any boot environment during the LU process.
    Here is the ZFS layout I have come up with (swap, home, etc omitted):
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              11.0G  52.0G    94K  /rpool
    rpool/ROOT                         4.80G  52.0G    18K  legacy
    rpool/ROOT/boot1                   4.80G  52.0G  4.28G  /
    rpool/ROOT/boot1/zones-root         534M  52.0G    20K  /zones-root
    rpool/ROOT/boot1/zones-root/zone1   534M  52.0G   534M  /zones-root/zone1
    rpool/zone-data                      37K  52.0G    19K  /zones-data
    rpool/zone-data/zone1-runtime        18K  52.0G    18K  /zones-data/zone1-runtime
    There are 2 key components here:
    1) The ROOT file system - This stores the / file systems of the local and global zones.
    2) The zone-data file system - This stores the data that will be changing within the local zones.
    Here is the configuration for the zone itself:
    <zone name="zone1" zonepath="/zones-root/zone1" autoboot="true" bootargs="-m verbose">
      <inherited-pkg-dir directory="/lib"/>
      <inherited-pkg-dir directory="/platform"/>
      <inherited-pkg-dir directory="/sbin"/>
      <inherited-pkg-dir directory="/usr"/>
      <filesystem special="/zones-data/zone1-runtime" directory="/runtime" type="lofs"/>
      <network address="192.168.0.1" physical="e1000g0"/>
    </zone>
    The key components here are:
    1) The local zone / is shared in the same file system as global zone /
    2) The /runtime file system in the local zone is stored outside of the global rpool/ROOT file system in order to maintain data that changes across the live upgrade boot environments.
    The system (local and global zone) will operate like this:
    The global zone is used to manage zones only.
    Application software that has constantly changing data will be installed in the /runtime directory within the local zone. For example, FreeRadius will be installed in: /runtime/freeradius
    During a live upgrade the / file system in both the local and global zones will get updated, while /runtime is mounted untouched in whatever boot environment that is loaded.
    Does this make sense? Is there a better way to accomplish what I am looking for? Is this setup going to cause any problems?
    What I would really like is not to have to worry about any of this and just install the application software wherever the software supplier defaults it to. It would be great if this system somehow magically knew to leave my changing data alone across boot environments.
    Thanks in advance for your feedback!
    --Jason
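    For reference, a minimal sketch of the zfs create commands that would produce the dataset layout shown above; dataset and mount point names are taken from the listing, sizes and other properties omitted.
    zfs create -o mountpoint=/zones-root rpool/ROOT/boot1/zones-root
    zfs create rpool/ROOT/boot1/zones-root/zone1
    zfs create -o mountpoint=/zones-data rpool/zone-data
    zfs create rpool/zone-data/zone1-runtime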

    Hello "jemurray".
    Have you read this document? (page 198)
    http://docs.sun.com/app/docs/doc/820-7013?l=en
    Then the solution is:
    01.- Create an alternate boot enviroment
    a.- In a new rpool
    b.- In the same rpool
    02.- Upgrade this new enviroment
    03.- Then I've seen that you have the "radious-zone" in a sparse zone (it's that right??) so, when you update the alternate boot enviroment you will (at the same time) upgrading the "radious-zone".
    This maybe sound easy but you should be carefull, please try this in a development enviroment
    Good luck
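    A hedged sketch of those steps with the Live Upgrade commands (the BE name and the path to the Solaris install image are placeholders):
    lucreate -n newBE                                  # 01: alternate boot environment in the same rpool
    luupgrade -u -n newBE -s /path/to/solaris_image    # 02: upgrade the new environment (zones included)
    luactivate newBE                                   # switch to it on the next boot
    init 6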

  • RAC 10gr2 using ASM for RMAN a cluster file system or a Local directory

    The environment is composed of a RAC with 2 nodes using ASM. I have to determine what design is better for backup and recovery with RMAN. The backups are going to be saved to disk only. The database is transactional only and small in size.
    I am not sure how to create a cluster file system, or whether it is better to use a local directory. What's the benefit of having a recovery catalog, given that it is optional for the database?
    I very much appreciate your advice and recommendation, Terry

    Arf,
    I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database. Therefore the backup of the database is done from the first instance.
    I do not see where he runs setenv again to change to the second instance and run RMAN to back up the second instance. It looks to me as though the backup is only done from the first instance, not from the second. I may be wrong, but I do not see the second instance backup.
    Kindly, I request your assistance on the steps/connection to back up the second instance. Thank you so much!! Terry

  • Spanned file system with zip

    Hi
    Is there a way of creating a spanned file system with any of the current zip utilities (Native LabView or OpenG etc.)?
    Effectively the result should be z01, z02, etc. This format is something that both WinRAR and WinZIP accomplish, so the output would have to be compatible.
    The main issue is when compressing a whole directory. Files are rearranged in such a way that the output of all z01, z02, etc. files is of the same size (user's choice), and I don't quite know how the files are rearranged in order to achieve that.
    Thanks  

    OpenG ZIP does not support spanned or striped data files.
    zerotolerance wrote:
    First, thanks for your reply.
    I actually have a zip utility that compresses strings rather than files. I believe that this utility can be used to create this spanning, but the trouble is it only gives the actual compressed body and not the required headers.
    I don't know much about the headers or how to create them (i.e. the CRC32, relative path, etc.) in a zip file.
    Most likely you just have the deflate algorithm in that "ZIP" utility. It is the compression algorithm normally used in ZIP, but it has nothing to do with ZIP itself. ZLIB is the most commonly used open source library for the deflate() algorithm and is also used in the OpenG ZIP library to implement the ZIP functionality. The function ZLIB Deflate.vi in OpenG ZIP exposes this functionality directly and does the same.
    zerotolerance wrote:
    First, thanks for your reply.
    But more importantly, how are the files within a directory arranged so that constant-size zip files (i.e. z01, z02, etc.) come out?
    What I mean by this is the concept that WinRAR uses. So, if you have a directory with mixed file sizes in it, and you choose a split size of 10MB, the output of each compressed file (i.e. z01, z02, etc.) is always 10MB, apart from the last one of course.
    Some help with the headers and the rearrangement of files within a nested directory would go a long way.
    As for the command line, I don't like using that feature with LabVIEW because it is very heavily limited. It does not give any intermediate control once a job is started. Just to see simple progress as the exe runs we have to jump through hoops, i.e. writing the output to a file or the clipboard and then reading it, etc., let alone anything else. I'm also looking to add a lot of other small features along the way, i.e. pausing, cancel, resume, progress save, etc. So I definitely cannot take this route.
    So ZIP uses deflate() to compress the individual file streams, but the ZIP format is about creating management around them that allows multiple such streams to be stored in one physical file. It is a little like a file system inside a file.
    Adding spanning and striping to OpenG ZIP would be possible, and most of it could be done in the LabVIEW part, but there would also need to be some modifications to the underlying shared library to support it. The spanning or striping is done in such a way that file streams are cut up when necessary and continued in the next data file. The unzipping algorithm then has to be smart enough to recognize that and combine the streams again when uncompressing. For that, the compression changes various flags in the header for each stream, which is the part where the shared library would need to be modified to allow those changed values to be written (and also recognized on decompression). All in all quite a serious undertaking.
    Refusing to use command line tools because they are ugly is not really a smart decision.
    Under Unix/Linux it is totally normal to do so, and it makes the whole Unix system very powerful, as you don't need to write huge monolithic monsters that do everything from making coffee to cleaning the house and doing the laundry; you simply make a tool for each task and tie them together through command line chaining. Divide and conquer is the magic here, not conquer and swallow!
    Also, you have the choice here to use a command line tool and be done in 5 minutes, or write a tool like OpenG ZIP, which took me probably several man-months of work over time. Or wait until I or someone else changes OpenG to support this feature, which could be 10 years.
    Alternatively you can also try to write a LabVIEW library to use the ZIPArchive shared library from http://www.artpol-software.com/ziparchive/KB/0610051553.aspx. But that is likely going to be a similar effort to writing OpenG ZIP.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Why would anyone want to use ASM Clustered File system?

    DB Version: 11gR2
    OS : Solaris, AIX, HP-UX
    I've read about the new feature ACFS.
    http://www.oracle-base.com/articles/11g/ACFS_11gR2.php
    But why would anyone want to store database binaries in a separate Filesystem created by Oracle?

    Hi Vitamind,
    > how do these binaries interact with the CPU when they want something to be done? ACFS should work with the local OS (Solaris) to communicate with the CPU. Isn't this kind of double work?
    ACFS doesn't work with .... but provides a file system to the local OS.
    There may be extra work, but that's because there are more resources than in a common file system.
    Oracle ACFS executes on operating system platforms as a native file system technology supporting native operating system file system application programming interfaces (APIs).
    ACFS is a general purpose POSIX compliant cluster file system. Being POSIX compliant, all operating system utilities we use with ext3 and other file systems can also be used with Oracle ACFS given it belongs to the same family of related standards.
    ACFS Driver Model
    An Oracle ACFS file system is installed as a dynamically loadable vendor operating system (OS) file system driver and tool set that is developed for each supported operating system platform. The driver is implemented as a Virtual File System (VFS) and processes all file and directory operations directed to a specific file system.
    It makes sense you use the ACFS if you use some of the features below:
    • Oracle RAC / RAC ONE NODE
    • Oracle ACFS Snapshots
    • Oracle ASM Dynamic Volume Manager
    • Cluster Filesystem for regular files
    ACFS Use Cases
    • Shared Oracle DB home
    • Other “file system” data
    • External tables, data loads, data extracts
    • BFILES and other data customer chooses not to store in db
    • Log files (consolidates access)
    • Test environments
    • Copy back a previous snapshot after testing
    • Backups
    • Snapshot file system for point-in-time backups
    • General purpose local or cluster file system
    • Leverage ASM manageability
    Note : Oracle ACFS file systems cannot be used for an Oracle base directory or an Oracle grid infrastructure home that contains the software for Oracle Clusterware, Oracle ASM, Oracle ACFS, and Oracle ADVM components.
    Regards,
    Levi Pereira
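    As a hedged illustration of the snapshot use case listed above (the mount point and snapshot name are made up):
    # create, list and remove a point-in-time snapshot of an ACFS file system
    acfsutil snap create mysnap /u01/acfs_mount
    acfsutil snap info /u01/acfs_mount
    acfsutil snap delete mysnap /u01/acfs_mount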

  • Difference between ASM Disk Group, ADVM Volume and ACFS File system

    Q1. What is the difference between an ASM Disk Group and an ADVM Volume ?
    To my mind, an ASM Disk Group is effectively a logical volume for Database files ( including FRA files ).
    11gR2 seems to have introduced the concepts of ADVM volumes and ACFS File Systems.
    An 11gR2 ASM Disk Group can contain :
    ASM Disks
    ADVM volumes
    ACFS file systems
    Q2. ADVM volumes appear to be dynamic volumes.
    However, is this not effectively layering a logical volume (the ADVM volume) beneath an ASM Disk Group (conceptually a logical volume as well)?
    Worse still, if you have left ASM Disk Group redundancy to the hardware RAID / SAN level (as Oracle recommends), you could effectively have 3 layers of logical disk (ASM on top of ADVM on top of RAID/SAN)?
    Q3. If it is 2 layers of logical disk (i.e. ASM on top of ADVM), what makes this better than 2 layers using a 3rd party volume manager (e.g. ASM on top of a 3rd party LVM), something Oracle encourages against?
    Q4. ACFS file systems seem to be clustered file systems for non-database files including ORACLE_HOMEs, application exes, etc. (but NOT GRID_HOME, OS root, OCRs or voting disks).
    Can you create / modify ACFS file systems using ASM?
    The Oracle topology diagram for ASM in the 11gR2 ASM Admin guide shows ACFS as part of ASM. I am not sure from this whether ACFS is part of ASM or ASM sits on top of ACFS.
    Q5. Connected to Q4. there seems to be a number of different ways, ACFS file systems can be created ? Which of the below are valid methods ?
    through ASM ?
    through native OS file system creation ?
    through OEM ?
    through acfsutil ?
    my head is exploding
    Any help and clarification greatly appreciated
    Jim

    Q1 - ADVM volume is a type of special file created in the ASM DG.  Once created, it creates a block device on the OS itself that can be used just like any other block device.  http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmfilesystem.htm#OSTMG30000
    Q2 - the asm disk group is a disk group, not really a logical volume.  It combines attributes of both when used for database purposes, as the database and certain other applications know how to talk "ASM" protocol.  However, you won't find any general purpose applications that can do so.  In addition, some customers prefer to deal directly with file systems and volume devices, which ADVM is made to do.  In your way of thinking, you could have 3 layers of logical disk, but each of them provides different attributes and characteristics.  This is not a bad thing though, as each has a slightly different focus - os file system\device, database specific, and storage centric.
    Q3 - ADVM is specifically developed to extend the characteristics of ASM for use by general OS applications.  It understands the database performance characteristics and is tuned to work well in that situation.  Because it is developed in house, it takes advantage of the ASM design model.  Additionally, rather than having to contact multiple vendors for support, your support is limited to calling Oracle, a one-stop shop.
    Q4 - You can create and modify ACFS file systems using command line tools and ASMCA.  Creating and modifying logical volumes happens through SQL (ASM), asmcmd, and ASMCA.  EM can also be used for both items.  ACFS sits on top of ADVM, which is a file in an ASM disk group.  ACFS is aware of the characteristics of ASM/ADVM volumes and tunes its IO to make best use of those characteristics.
    Q5 - several ways:
    1) Connect to ASM with SQL, use 'alter diskgroup add volume' as Mihael points out.  This creates an ADVM volume.  Then, format the volume using 'mkfs' (*nix) or acfsformat (windows).
    2) Use ASMCA - A gui to create a volume and format a file system.  Probably the easiest if your head is exploding.
    3) Use 'asmcmd' to create a volume, and 'mkfs' to format the ACFS file system.
    Here is information on ASMCA, with examples:
    http://docs.oracle.com/cd/E16655_01/server.121/e17612/asmca_acfs.htm#OSTMG94348
    Information on command line tools, with examples:
    Basic Steps to Manage Oracle ACFS Systems
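    To make those steps concrete, here is a hedged command-line sketch (disk group DATA, volume vol1, and the mount point are assumptions; the volume device name under /dev/asm will differ on a real system):
    asmcmd volcreate -G DATA -s 10G vol1     # create the ADVM volume inside the DATA disk group
    asmcmd volinfo -G DATA vol1              # note the "Volume Device", e.g. /dev/asm/vol1-123
    mkfs -t acfs /dev/asm/vol1-123           # format the volume as ACFS (Linux syntax)
    mkdir -p /u01/acfs
    mount -t acfs /dev/asm/vol1-123 /u01/acfs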

  • File System Sharing using Sun Cluster 3.1

    Hi,
    I need help on how to set up and configure the system to share a remote file system that is created on a SAN disk (SAN LUN) between two Sun Solaris 10 servers.
    The files in the remote file system should be readable/writable from both Solaris servers concurrently.
    As a security policy, NFS mounts are not allowed. Someone suggested it can be done by using Sun Cluster 3.1 agents on both servers. Any details on how I can do this using Sun Cluster 3.1 are really appreciated.
    thanks
    Suresh

    You could do this by installing Sun Cluster on both systems and then creating a global file system on the shared LUN. However, if there is significant write activity on both nodes, the performance will not necessarily be what you need.
    What is wrong with the security of NFS? If it is set up properly I don't think this should be a problem.
    The other option would be to use shared QFS, but without Sun Cluster.
    Regards,
    Tim
    ---
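    For what it's worth, a hedged sketch of the global (cluster) file system approach with Sun Cluster; the global device name d4 and the mount point are placeholders:
    # from one node: create the file system on the shared device
    newfs /dev/global/rdsk/d4s0
    # on every node: create the mount point and add a vfstab entry with the "global" option
    mkdir -p /global/shared
    # /etc/vfstab line on each node:
    # /dev/global/dsk/d4s0  /dev/global/rdsk/d4s0  /global/shared  ufs  2  yes  global,logging
    mount /global/shared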

  • Java IO on top of a second file system layer

    I have created a second file system on top of an existing file system, and I have the JVM executing Java and Python (Jython) code. How would I get the IO they use to go to my file system and not the OS's file system?

    Dennis56 wrote:
    > I have created a second file system on top of an existing file system, and I have the JVM executing Java and Python (Jython) code. How would I get the IO they use to go to my file system and not the OS's file system?
    Huh?
    Are you suggesting that you have created something like Windows shared drives or Windows compressed drives?
    Because that is the only way you could create an IO system on "top" of an existing system. And if you have successfully done that, then you do not need to do anything to Java/Python except 'point' them to the new location, such as using a drive letter for a Windows shared drive.
    On the other hand, if you have a bunch of Java/Python applications and you want to implement some other sort of file system, then you would need to modify the virtual machines and the APIs of both of those.
    And regardless, all of the above WILL be a lot of work.
    Now if you have some idea of implementing a library that is usable by (not a replacement for) Java/Python apps, then you
    1. Create the library
    2. Use the library
    That solution requires a lot less work, although certainly more than a trivial amount.

Maybe you are looking for

  • Functional module table parameters values not getting displayed in Java sys

    Hi, We are calling the Table parameter through Java code from functional module ZCRM_ICSS_PROJ_CUST_USR is not giving any rows value .If I execute the same functional module with passing the import parameter value User id: MLDL010 its giving value in

  • I have two apple ids and want to merge them.  can someone tell me how to do this

    I have two apple ids and want to merge them.  Can someone tell me how to do this.  What I am trying to do is sync my apps from my computer to my ipad, but can't seem to be able to access one of my apple ids. Thank You

  • Airport Express Audio Port Blocked?

    Hello all, I'm setting up my Airport Express to play music through my speakers. I've previously done this with no problems, but when I've come to set it up again, this time, the audio port isn't deep enough for the audio jack to fit in. The jack is a

  • HTTP Call Service in 2013 Workflow

    How can you configure a workflow (Sharepoint 2013)  to call an HTTP Web Service request and evaluate the response headers to determine if the POST to an MVC Controller end-point was a success or failure

  • Route determination in Rush Order

    Hi All, i have a pick/pack time of 1 day in the shipping point & 1 day of route in my regular sales order, however when i am trying to do the same scenario with a rush order, both these components are not flowing in, the requested delivery is being c