Shared disk devices across zones/containers

Hi there,
To add a disk device to a zone, I think I can do this:
Add a device.
zonecfg:my-zone> add device
Set the device match (/dev/dsk/c1t0d0* in this procedure).
zonecfg:my-zone:device> set match=/dev/dsk/c1t0d0*
End the device specification.
zonecfg:my-zone:device> end
Could I then create another zone and add the same disk device?
I am thinking about installing 10g RAC across two zones for dev/training purposes.
Thanks
Dermot
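For reference, a minimal zonecfg sketch of adding the same device match to a second zone might look like this (the zone name my-zone2 is only an example, and its configuration is assumed to exist already):
zonecfg -z my-zone2
zonecfg:my-zone2> add device
zonecfg:my-zone2:device> set match=/dev/dsk/c1t0d0*
zonecfg:my-zone2:device> end
zonecfg:my-zone2> verify
zonecfg:my-zone2> commit
zonecfg:my-zone2> exit
Whether two zones should be handed the same raw disk at the same time is a separate question; as far as I know nothing in zonecfg itself prevents it, but the software inside the zones (RAC, in this case) is what has to coordinate concurrent access.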

Dermot,
There is a special forum for Solaris 10 Zones.
If you post this question there, David Comay might answer you.
... Eric

Similar Messages

  • How to add a virtual disk device

    I remember I have seen a doc describing how to add a virtual disk device on a Solaris system, but I have forgotten it now. Can somebody help me?

    Not sure what you are trying to achieve...
    However, you should not be referring to LUNs via their logical device names, i.e. c0t0d0, but via the unique DID device or the /global device. Refer to scdidadm -L, which lists all the DID devices across all nodes and maps them to their logical device paths.
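    For orientation, scdidadm -L output has roughly this shape (instance numbers, node names and device paths below are illustrative only):
    # scdidadm -L
    1   node1:/dev/rdsk/c0t0d0   /dev/did/rdsk/d1
    2   node1:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
    2   node2:/dev/rdsk/c1t1d0   /dev/did/rdsk/d2
    A shared LUN appears under the same DID instance (d2 here) from every node that can see it, which is why the DID or /global path is preferred over the per-node cXtYdZ name.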

  • Path to windows shared disk

    Hi,
    what would be a typical path to mount a Windows shared drive as a device? I suppose I have to choose SMB/CIFS, but then what? Should it look like this:
    //SERVERNAME/SHAREDFOLDERNAME, or use the IP like this: 192.168.5.555/SHAREDFOLDERNAME
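    For reference, a typical Linux CIFS mount of such a share would look roughly like this (server name, share name, mount point and credentials are all placeholders):
    # mount -t cifs //SERVERNAME/SHAREDFOLDERNAME /mnt/winshare -o username=winuser,password=secret
    An IP address can be used in place of SERVERNAME if name resolution is not set up.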

    Thank you Jason and Changhai,
    I think I've found the problem: I do not have enough memory for the second node.
    I am running OUL 5 (2.6.9-42.0.0.0.1.ELsmp) with PV and could see the shared disks in vm.cfg on both nodes. If I shut down node1 and start up node2, I can see the shared disk.
    Thanks for your help.

  • SPARC 64bit Clusterware 10.2.0.4 - shared raw device

    Dear group,
    I have been asked to install RAC 10.2.0.4 SE on Solaris 10 (SPARC). I am not a Solaris admin, although I have done many RAC installations on RHEL4/5.
    Does anyone have suggestions/pointers to documentation (apart from the docs on OTN, which I have already consulted) on how to place the OCR and voting disk on shared raw devices? The setup is unlikely to use any multipathing but will rely on a low-end SAN for shared storage. I am especially interested in the proper use of the "format" tool (I know, for instance, that I need to start from cylinder 1 for ASM slices).
    So far, all I have found are documents explaining SFRAC, Sun Cluster, etc., which I either don't have at my disposal or can't use due to license restrictions.
    On a comparable single-pathing Linux setup, we'd simply use fdisk to create partitions on the shared disk(s), which are then made available as raw devices thanks to udev.
    Thanks and regards,
    Martin
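    As a rough outline of the slicing step on Solaris (device name and slice number are only examples, and the interactive format prompts are abbreviated), the point of starting at cylinder 1 is that cylinder 0 holds the disk label/VTOC, which must not be overwritten by ASM:
    # format                          <- select the shared LUN, then enter the partition menu
    partition> 6                      <- pick a slice, e.g. slice 6
        Enter new starting cyl: 1     <- start at cylinder 1, not 0
        Enter partition size: ...     <- whatever size the ASM slice should be
    partition> label
    # prtvtoc /dev/rdsk/c2t0d0s2      <- verify that the slice does not start at cylinder 0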

    Hi Martin,
    Take a look at this doc: http://download.oracle.com/docs/cd/B19306_01/install.102/b14205/storage.htm#sthref675
    Regards,
    Rodrigo Mufalani
    http://mufalani.blogspot.com

  • Problems sharing disks using OpenFiler.

    Hi There
    I am following the link below to prepare shared disks between two Oracle VM nodes. After creating the rules file "/etc/udev/rules.d/55-openiscsi.rules" and the script "/etc/udev/scripts/iscsidev.sh", I restarted the service, but I do not see an iscsi directory under /dev. I have tried creating the rules and iscsidev.sh files several times, but no luck, so I was thinking that if one of you has faced this issue you may be able to help me.
    I do see files under /dev/disk/by-path/ .......... but I should be able to see files under /dev/iscsi
    http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html
    Please help.
    Sorry if this is not the correct category. I posted this to the VM section but did not get any response.
    Cheers
    P

    Did you follow Jeffrey's guide? ... because I did a year ago and got the OpenFiler iSCSI devices working fine. Did you check the OS messages file? Did you find any problem starting the iSCSI daemon?
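    For comparison, the udev rule from that style of setup is a single line along these lines (reconstructed from memory of the article, so treat it as a sketch rather than the exact published text):
    # /etc/udev/rules.d/55-openiscsi.rules
    KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"
    Two things that commonly stop the /dev/iscsi links from appearing are the helper script not being executable (chmod 755 /etc/udev/scripts/iscsidev.sh) and the rule being added after the iSCSI sessions were already logged in, in which case restarting the iscsi service so the devices are rediscovered usually helps.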

  • SVM: migrating to a new disk device

    Hello,
    We have a 70GB LUN from one SAN disk array on a Solaris 10 server. This LUN has been configured into SVM using a single disk device and then carved up into soft partitions. We will need to migrate this LUN to a new disk array. Since SVM only sees this LUN as a disk slice and not as a metavolume, is there any way to mirror/replace the disk without disabling the applications that have been built on the soft partitions? In this case, we have several active Solaris Containers root file systems living on these soft partitions. Some metastat output to give you a better picture:
    # metastat -c
    d112 p 1.5GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d111 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d110 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d109 p 1.5GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d108 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d107 p 1.5GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d106 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d105 p 5.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d104 p 1.5GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d103 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d102 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    d101 p 3.0GB /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    # metastat d101
    d101: Soft Partition
    Device: /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0
    State: Okay
    Size: 6291456 blocks (3.0 GB)
    Device Start Block Dbase Reloc
    /dev/dsk/c4t600507630EFE08ED0000000000000105d0s0 0 No Yes
    Extent Start Block Block count
    0 1 6291456
    Device Relocation Information:
    Device Reloc Device ID
    /dev/dsk/c4t600507630EFE08ED0000000000000105d0 Yes id1,ssd@n600507630efe08ed0000000000000105
    I looked at the manual for "metareplace" and the language indicates that it would only work with a metavolume and not directly with the disk device.
    Thanks.

    Okay, there are two ways to use SPs and mirrors. The easiest way in most cases is to create a large mirror, then carve the SPs out of it. But you really have to start off doing that. Trying to convert this layout to that would be annoying, assuming it's possible at all.
    The other way is to take the SP, toss a concat/stripe on it, then use that as the submirror base. It's a bit more annoying because you have to create the SPs on the other disk in the same manner.
    So you'd take a look at 'metastat -p' output. That shows the commands to recreate the current layout. Use the SP creation items, but run them on the second disk. This sets up a second set of SPs.
    Then take all the SPs and put a concat/stripe on them. So for one of the existing SPs (like d101), you'd do it like this:
    metainit d151 1 1 d101
    Do that for all the SPs (on both disks). Now you can create the one-way mirrors from the currently-used side. As an example:
    metainit d201 -m d151
    Now you've got the mirror. The problem is that the filesystem is still mounted from d101. You've got to unmount d101 and mount d201 before you can sync the mirror to the other side (that's the only step that can't be done online). Once the old side is no longer mounted directly, you can sync up:
    metattach d201 d171 (or whatever you've named it).
    Good luck.
    Darren
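    Pulling Darren's steps together, a condensed sketch for one soft partition might look like this (the new LUN's path and the d131/d151/d171/d201 metadevice names are illustrative; repeat for every SP):
    # recreate the soft partition on the new LUN exactly as 'metastat -p' reports it for the old one
    metainit d131 -p /dev/dsk/c5t...d0s0 3g
    # wrap both the old and the new soft partition in a one-component concat/stripe
    metainit d151 1 1 d101
    metainit d171 1 1 d131
    # build a one-way mirror on the side that currently holds the data
    metainit d201 -m d151
    # unmount the filesystem from d101 and remount it from d201 (the only offline step), then attach the new side
    metattach d201 d171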

  • Shared raw devices not discovered during Oracle 11g R2 RAC/Grid installation

    Dear Gurus
    Our platform: Red Hat Enterprise Linux 5.3, 64-bit.
    We are installing Oracle 11g R2 Grid Infrastructure (Clusterware + ASM) and want to use ASM for the OCR, voting, DATA, and FLASH storage.
    We do not want to use ASMLib.
    Please find the shared raw partition details below.
    RAC Node-1
    [root@xyz-ch-aaadb-01 ~]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6540 52428127+ 83 Linux
    /dev/sda3 6541 11370 38796975 83 Linux
    /dev/sda4 11371 17750 51247350 5 Extended
    /dev/sda5 11371 17358 48098578+ 82 Linux swap / Solaris
    /dev/sda6 17359 17750 3148708+ 83 Linux
    Disk /dev/sdb: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdc: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdd: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 66837 536868171 83 Linux
    Disk /dev/sde: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 66837 536868171 83 Linux
    Disk /dev/sdf: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 66837 536868171 83 Linux
    Disk /dev/sdg: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 1014 3143369 83 Linux
    Disk /dev/sdh: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 66837 536868171 83 Linux
    RAC Node-2
    [root@xyzl-ch-aaadb-02 ~]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6540 52428127+ 83 Linux
    /dev/sda3 6541 11240 37752750 82 Linux swap / Solaris
    /dev/sda4 11241 17750 52291575 5 Extended
    /dev/sda5 11241 15940 37752718+ 83 Linux
    /dev/sda6 15941 16332 3148708+ 83 Linux
    Disk /dev/sdp: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdq: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdr: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdr1 1 66837 536868171 83 Linux
    Disk /dev/sds: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sds1 1 66837 536868171 83 Linux
    Disk /dev/sdt: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdt1 1 66837 536868171 83 Linux
    Disk /dev/sdu: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdu1 1 1014 3143369 83 Linux
    Disk /dev/sdv: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdv1 1 66837 536868171 83 Linux
    Disk /dev/sdw: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdx: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdy: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdy1 1 66837 536868171 83 Linux
    Disk /dev/sdz: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdz1 1 66837 536868171 83 Linux
    Disk /dev/sdaa: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdaa1 1 66837 536868171 83 Linux
    Disk /dev/sdab: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdab1 1 1014 3143369 83 Linux
    Disk /dev/sdac: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdac1 1 66837 536868171 83 Linux
    Please suggest solutions.

    We are still not able to discover the shared raw device partitions; please help us.
    Now my fdisk -l shows consistent raw device partitions on both nodes; please find the details below.
    [root@xyz-ch-aaadb-01 grid]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 13 104391 83 Linux
    /dev/sda2 14 6540 52428127+ 83 Linux
    /dev/sda3 6541 11370 38796975 83 Linux
    /dev/sda4 11371 17750 51247350 5 Extended
    /dev/sda5 11371 17358 48098578+ 82 Linux swap / Solaris
    /dev/sda6 17359 17750 3148708+ 83 Linux
    Disk /dev/sdb: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdc: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdd: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 66837 536868171 83 Linux
    Disk /dev/sde: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 66837 536868171 83 Linux
    Disk /dev/sdf: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 66837 536868171 83 Linux
    Disk /dev/sdg: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 1014 3143369 83 Linux
    Disk /dev/sdh: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 66837 536868171 83 Linux
    RAC Node-2
    [root@xyz-ch-aaadb-02 ~]# fdisk -l
    Disk /dev/sda: 145.9 GB, 145999527936 bytes
    255 heads, 63 sectors/track, 17750 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 4700 37752718+ 83 Linux
    /dev/sda2 4701 11227 52428127+ 83 Linux
    /dev/sda3 11228 11619 3148740 83 Linux
    /dev/sda4 11620 17750 49247257+ 5 Extended
    /dev/sda5 11620 17750 49247226 83 Linux
    Disk /dev/sdb: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdc: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sdd: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdd1 1 66837 536868171 83 Linux
    Disk /dev/sde: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 66837 536868171 83 Linux
    Disk /dev/sdf: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 66837 536868171 83 Linux
    Disk /dev/sdg: 3221 MB, 3221225472 bytes
    100 heads, 62 sectors/track, 1014 cylinders
    Units = cylinders of 6200 * 512 = 3174400 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 1014 3143369 83 Linux
    Disk /dev/sdh: 549.7 GB, 549755813888 bytes
    255 heads, 63 sectors/track, 66837 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdh1 1 66837 536868171 83 Linux
    Please find the following details:
    ===============================================
    RAC Node-1
    [root@xyz-ch-xxxdb-01 grid]# ls -l /dev/sdb
    brw-r----- 1 root disk 8, 16 Aug 19 09:15 /dev/sdb
    [root@xyz-ch-xxxdb-01 grid]#
    [root@xyz-ch-xxxdb-01 grid]# ls -l /dev/sdg
    brw-r----- 1 root disk 8, 96 Aug 19 09:15 /dev/sdg
    RAC Node-2
    [root@xyz-ch-xxxdb-02 ~]# ls -l /dev/sdb
    brw-r----- 1 root disk 8, 16 Aug 19 18:41 /dev/sdb
    [root@xyz-ch-xxxdb-02 ~]#
    [root@xyz-ch-xxxdb-02 ~]#
    [root@xyz-ch-xxxdb-02 ~]# ls -l /dev/sdg
    brw-r----- 1 root disk 8, 96 Aug 19 18:41 /dev/sdg
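    The thread does not record a resolution. When ASMLib is not used, one common approach is a udev rule that gives the shared partitions stable ownership and permissions so the installer's disk discovery string (for example /dev/sd?1) can actually open them. The sketch below is not from the thread; the partition names, owner and group are assumptions that must match your own shared LUNs and grid software owner:
    # /etc/udev/rules.d/99-oracle-asmdevices.rules
    KERNEL=="sdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sde1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sdf1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sdh1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    # RHEL 5 syntax: reload the rules, recreate the device nodes, then check ownership
    /sbin/udevcontrol reload_rules
    /sbin/start_udev
    ls -l /dev/sd?1
    Note also that the first listing shows different kernel names on the two nodes (sdb..sdh on node 1, sdp..sdac on node 2); kernel names are not guaranteed to be stable, so matching on something persistent such as the SCSI ID returned by /sbin/scsi_id, rather than on KERNEL names, is generally safer.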

  • ASM Spfile on shared raw device

    Hi,
    I am building a two-node cluster on Linux 4 Update 5. I have successfully created the CRS. Now I am trying to create the ASM instance using ./dbca. When I select both nodes to manage the disk group during instance creation, and after I enter the SPFILE location, I receive this message:
    {The /dev/raw/raw10/spfile.asm.ora location is not valid. The directory "[/dev/raw/raw10/] is not a shared system partition across nodes1 and 2.}
    The oracle:dba user has ownership and read/write permission on this partition.
    Your help is highly appreciated.
    Majid

    1) Check to make sure that the partition /dev/raw/raw10 is a shared raw device between the nodes (you can do this by running dd if=/dev/zero of=/dev/raw/raw10 bs=1M count=x from both nodes while logged in as oracle).
    2) Since /dev/raw/raw10 is a raw device, you cannot specify /dev/raw/raw10/spfile.asm.ora as an SPFILE location. Instead, under $ORACLE_HOME/dbs create a link to /dev/raw/raw10 with the name spfile$ORACLE_SID.ora on both nodes.
    ln -s /dev/raw/raw10 $ORACLE_HOME/dbs/spfile$ORACLE_SID.ora (both the nodes)
    HTH
    Thanks
    Chandra

  • Shared Disks For RAC

    Hi,
    I plan to use shared disks to create Oracle RAC using ASM. What options do I have? OCFS2, or any other option?
    Can someone point me to a document on how I can use the shared disks for RAC?
    Thanks.

    javed555 wrote:
    I plan to use shared disks to create Oracle RAC using ASM. What options do I have?
    You have two options:
    1. Create shared virtual, i.e. file-backed, disks. These files are stored in /OVS/sharedDisk/ and made available to each guest.
    2. Expose physical devices directly to each guest, e.g. an LVM partition or a multipath LUN.
    With both options, the disks show up as devices in the guests, and you would then provision them with ASM exactly the same way as if your RAC nodes were physical.
    OCFS2 or NFS is required to create shared storage for the Oracle VM Servers themselves; this is to ensure the /OVS mount point is shared between multiple Oracle VM Servers.
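    As an illustration of the two options in a guest's vm.cfg (the file names, device names and LUN path here are assumptions, not taken from the thread):
    # vm.cfg for each RAC guest
    disk = ['file:/OVS/running_pool/rac1/System.img,xvda,w',
            'file:/OVS/sharedDisk/asm_disk1.img,xvdb,w!',
            'phy:/dev/mapper/mpath1,xvdc,w!']
    # 'w!' marks a disk as writable and shareable between guests; plain 'w' is exclusive to one guest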

  • Windows sharing disk problem

    Hello,
    I have a LAN network at home with Windows 7, and I have a shared disk. When I turn on the iMac I cannot find my shared disk; only after 5 minutes can I find it. After that the disk works fine. Why does this happen?
    Thanks

    Sometimes it takes a while before devices on a network become visible. If you can't wait, note the IP address of the device you're looking for and, next time, connect to it manually.

  • Doubts about shared disk for RAC

    Hi All,
    I am really new to RAC. Even after reading various documents, I still have many doubts regarding the shared storage and file systems needed for RAC.
    1. Clusterware has to be installed on a shared file system like OCFS2. Which type of hard drive is required to install OCFS2 so that it can be accessed from all nodes?
    Does it have to be an external hard drive, or can we use any simple hard disk for shared storage?
    If we use an external hard drive, does it need to be connected to a separate server altogether, or can it be connected to any one of the nodes in the cluster?
    Apart from these shared drives, approximately what size of hard disk is required for all nodes (for just a testing environment)?
    I sincerely appreciate a reply!
    Thanks in advance.

    Clusterware has to be installed on shared storage. RAC also requires shared storage for the database.
    Shared storage can be managed via many methods.
    1. Some sites using Linux or UNIX-based OSes choose to use RAW disk devices. This method is not frequently used due to the unpleasant management overhead and long-term manageability for RAW devices.
    2. Many sites use cluster filesystems. On Linux and Windows, Oracle offers OCFS2 as one (free) cluster filesystem. Other vendors also offer add-on products for some OSes that provide supported cluster filesystems (like GFS, GPFS, VxFS, and others). Supported cluster filesystems may be used for Clusterware files (OCR and voting disks) as well as database files. Check Metalink for a list of supported cluster filesystems.
    3. ASM can be used to manage shared storage used for database files. Unfortunately, due to architecture decisions made by Oracle, ASM cannot currently be used for Clusterware files (OCR and voting disks). It is relatively common to see ASM used for DB files and either RAW or a cluster filesystem used for Clusterware files. In other words, ASM and cluster filesystems and RAW are not mutually exclusive.
    As for hardware: I have not seen any hardware capable of easily connecting multiple servers to internal storage, so shared storage is always (in my experience) housed externally. You can find some articles on OTN and other sites (search Google for them) that use FireWire drives or a third computer running OpenFiler to provide the shared storage in test environments. In production environments, SAN devices are commonly employed to provide concurrent access to storage from multiple servers.
    Hope this helps!

  • Changing Shared Disks Password

    Hi All
    I want to change the setting of my disk for shared disks from "With a disk password" to "With accounts", and when I do, none of the files that were set up with a disk password show up under accounts. I disconnect and then reconnect, but no joy.
    Has anyone else come across this problem and, if so, how did you resolve it?

    If you want to work with accounts you need to do it from the original setup; you cannot apply accounts to an already loaded TC and expect to access the files.
    Dump all the files to USB, set up accounts, and then copy all the files back.
    But you should realise you have no improvement in security whatsoever if anyone has physical access to the TC: they can just press the reset button, all the accounts are gone, and any files can be accessed. It is totally inadequate as a security measure. You are better off setting up an encrypted disk image and adding files to it, and using encrypted Time Machine backups if you are trying to stop other people getting access to your files.

  • Shared Disks on AEBS - Secure Passwords - Multiple Users

    I posted this on the Digital Life forum. No joy. Not sure if this is an AEBS issue or a Leopard issue.
    I need to have different access rights to disks, or to folders on disks, connected to the AEBS. One disk (or folder) is the shared-by-everyone disk (or folder) for Time Machine backups and shared files.
    And, I need to have a separate disk (or folder) for personal information that I can access but other users cannot access.
    I already tried setting permissions on the shared disks, but it will not allow me to add users. I already tried changing the AEBS from a Disk Access password to an Accounts schema, but it will not allow me to change folder permissions based on the named accounts. (...and I don't want to buy the Leopard Server version.)
    Anyone got a scheme to make this work?

    Yep. You solved my problem. Thank you!
    I did not go far enough.
    Just in case anyone else wants to try this, some notes.
    1.) I had to reformat (Partition and Erase) the disk. When making the conversion from a shared disk to accounts, it loses access to the previous data on the disk. In order to avoid losing that space, it was necessary to physically connect the drive to the computer, reformat, then reattach to the AEBSn; then set up the accounts.
    2.) I had to restart the computers to connect to the new shared drive.
    3.) This setup uses a separate drive (LaCie Quadra 300 GB) connected via USB to an AEBSn.
    I am a happy camper. Thank you for your assistance!

  • RAC with ASM and shared disks?

    Hi all,
    Can someone clarify this little point, please? If I use ASM as my storage with a RAC database, I have to configure these nodes to use shared disks. At least this is what the UG says ...
    When you create a disk group for a cluster or add new disks to an existing clustered disk group, you only need to prepare the underlying physical storage on shared disks. The shared disk requirement is the only substantial difference between using ASM in a RAC database compared to using it in a single-instance Oracle database. ASM automatically re-balances the storage load after you add or delete a disk or disk group.
    With my 9i databases, I used HACMP to allow for concurrent VG access among the nodes. My questions are ...
    1) How can I share this storage as stated above without using HACMP? My understanding is that with 10g I no longer have to use it.
    2) Can Oracle's clusterware be used to share storage? I have not seen any indication that it does.
    3) Does this mean I still have to use HACMP with 10g CRS to allow shared storage?
    Thank you

    "...meaning visible to all the participating nodes, which you don't need HACMP..."
    This is one step forward, but still not clear. On UNIX, storage is presented to ASM as raw volumes. As such, how can these volumes be visible on all nodes without using HACMP (or whatever third-party clusterware you are using)? Presenting raw volumes on several nodes is something that is not done at the OS level without using some clusterware functionality.
    I do understand that storage or LUNs can be shared at the SAN fabric level. But then these LUNs are carved in big chunks, and I would like to be able to allocate storage at a much more granular level using raw partitions.
    So, all in all, here are my questions ...
    1) On UNIX platforms, can ASM disks be LUNs, raw volumes, or maybe both?
    2) If raw volumes, how are these shared (or made visible) without using third-party clusterware? Having managed 9i RAC, it was the function of HACMP to make these volumes visible on all nodes; otherwise, we had to import/export VGs on all nodes to make them visible.
    Thank you
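    As an aside on question 2: on AIX, a LUN zoned to both nodes shows up on each node without any clusterware involvement, and the PVID is what ties the two hdisk names together (hdisk numbers and the PVID below are illustrative):
    node1 # lspv | grep 00c1234567890abc
    hdisk4          00c1234567890abc                    None
    node2 # lspv | grep 00c1234567890abc
    hdisk7          00c1234567890abc                    None
    ASM can then be pointed at the character devices (/dev/rhdisk4 on node 1, /dev/rhdisk7 on node 2) once their ownership and permissions are set; this is the usual approach as I understand it, not something stated in the thread.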

  • Extreme with shared disk and two users on same mac issue

    Hi,
    just got my AirPort Extreme + a 1TB hard drive for disk sharing, and everything worked great until my wife came home (not her fault).
    My setup: I have one Mac which we both share with two user accounts, plus I have a Windows 7 machine. I set up the AirPort Extreme drive sharing with 'accounts', as I thought/knew this would work with Windows (SMB, I think), which it sure enough did; I could connect to my share from both my Windows machine and my MacBook Air and was rather pleased how easy it was.
    My wife then came home and logged on. She could see the drive and even see the root folder in Finder, but saw no files whatsoever, and instead of the 'sharing icon' (not sure what to call it; it looks like a hard disk with wifi signals) she gets a circle with a red bar in it in the Finder title... So I disconnected the shared disk in Finder and reconnected, entering the account details again, and things worked for her. I then signed into my account and had the same issue... I could see the disk but see nothing on it, which means I have to disconnect and reconnect, etc.
    I even set up two separate accounts in the AirPort Utility for disk sharing and use one for my account and one for my wife's, but the issue remains; by the way, there is no issue on the Windows 7 box.
    HELP. The whole reason I got this is so we can share our pictures and videos etc. across two computers and two user accounts.

    Yes, I'd like a solution to this one as well! Did you ever find the answer?
