11gR2 Verification of shared storage accessibility

Friends,
I do not understand how this is possible. I am trying to apply the 11.2.0.2.5 PSU to a 2-node cluster running on RHEL 5.5 VMs. I followed the ORACLE-BASE examples when I installed this laptop RAC.
I am not using ACFS, and neither the GI home nor the DB home is shared. But on node 2, cluvfy THINKS the database home is shared.
[oracle@rac1 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle/product/11.2.0.2/db_1 -n rac1,rac2 -display_status
Verifying shared storage accessibility
Checking shared storage accessibility...
"/oracle/product/11.2.0.2/db_1" is not shared
Shared storage check failed on nodes "rac2,rac1"
Verification of shared storage accessibility was unsuccessful on all the specified nodes.
NODE_STATUS::rac2:VFAIL
NODE_STATUS::rac1:VFAIL
OVERALL_STATUS::VFAIL
[oracle@rac1 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle_grid/product/11.2.0.2 -n rac1,rac2 -display_status
Verifying shared storage accessibility
Checking shared storage accessibility...
"/oracle_grid/product/11.2.0.2" is not shared
Shared storage check failed on nodes "rac2,rac1"
Verification of shared storage accessibility was unsuccessful on all the specified nodes.
NODE_STATUS::rac2:VFAIL
NODE_STATUS::rac1:VFAIL
OVERALL_STATUS::VFAIL
[oracle@rac1 trace]$ hostname
rac1
[oracle@rac1 trace]$ echo $ORACLE_HOSTNAME
rac1
[oracle@rac1 trace]$
[oracle@rac2 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle/product/11.2.0.2/db_1 -n rac1,rac2 -display_status
Verifying shared storage accessibility
Checking shared storage accessibility...
"/oracle/product/11.2.0.2/db_1" is shared
Shared storage check was successful on nodes "rac2,rac1"
Verification of shared storage accessibility was successful.
NODE_STATUS::rac2:SUCC
NODE_STATUS::rac1:SUCC
OVERALL_STATUS::SUCC
[oracle@rac2 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle_grid/product/11.2.0.2/ -n rac1,rac2 -display_status
Verifying shared storage accessibility
Checking shared storage accessibility...
"/oracle_grid/product/11.2.0.2/" is not shared
Shared storage check failed on nodes "rac2,rac1"
Verification of shared storage accessibility was unsuccessful on all the specified nodes.
NODE_STATUS::rac2:VFAIL
NODE_STATUS::rac1:VFAIL
OVERALL_STATUS::VFAIL
[oracle@rac2 trace]$ hostname
rac2
[oracle@rac2 trace]$ echo $ORACLE_HOSTNAME
rac2
[oracle@rac2 trace]$
I cannot determine the reason and do not know how to fix it.
Any help?
Thank you.

Hi,
CLUVFY COMP SSA checks whether the given storage location is shared.
If you run "cluvfy comp ssa -t software", cluvfy checks whether your software home is shared.
It tells you it is not, hence the check fails (which is correct, because you said the DB home is not shared).
So where is the problem?
CLUVFY COMP SSA only makes sense for checking sharedness; if the location is not shared, there is no point in testing it.
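If you want to confirm the sharedness by hand (a rough manual check, not an official procedure), create a marker file in the home on one node and look for it from the other, using the same paths as in your output:
[oracle@rac1 ~]$ touch /oracle/product/11.2.0.2/db_1/shared_check_rac1
[oracle@rac1 ~]$ ssh rac2 ls /oracle/product/11.2.0.2/db_1/shared_check_rac1
[oracle@rac1 ~]$ rm /oracle/product/11.2.0.2/db_1/shared_check_rac1
If the ls on rac2 does not find the file, the home really is local to each node and the VFAIL result is correct; the SUCC you got from rac2 is then just cluvfy mis-detecting the storage type and can be ignored for a non-shared home.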
Regards
Sebastian

Similar Messages

  • Runcluvfy.bat comp ssa - Shared storage check failed

    I've run cluvfy on Windows 2003 64-bit on a SAN with 3 nodes and found that the shared storage check is unsuccessful (whereas it succeeds on Windows 2003 32-bit).
    C:\Documents and Settings\Administrator>C:\_Ly1\102010_win64_x64_clusterware\clusterware\cluvfy\runcluvfy.bat comp ssa -n rac1,rac2,rac3
    The system cannot find the file specified.
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    Shared storage check failed on nodes "rac2,rac1,rac3".
    Verification of shared storage accessibility was unsuccessful on all the nodes.
    C:\Documents and Settings\Administrator>
    I'm not sure whether this will cause the Clusterware installation to fail or not.
    Here, I captured the failure screen :
    http://alkaspace.com/is.php?i=30223&img=clusterware-fai.jpg
    Please help me out. Thank you!!

    I just ran into this myself while building an enterprise system on Windows Server 2003. The answers here did not sit well with me, and I needed to be sure of the shared storage before proceeding further. Researching Metalink did not uncover any relevant information either. I then opened an SR with Oracle and got back a satisfactory response that allowed me to verify my shared storage; the full text of their solution can be found at http://www.webofwood.com/rac/oracle-response-to-shared-storage-check-failed-on-nodes/. Basically, it is a method of using another utility to identify the storage device names used by Windows and then writing to and reading from them on each node, to verify that each node 'sees' what is written by the other node(s). If this check succeeds, you can proceed.
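    For reference, the same write-and-read idea on a Linux cluster might look like the sketch below (illustrative only: /dev/sdX stands for whichever device the nodes are supposed to share, and writing to it destroys whatever is in that block, so only try this on a disk that holds no data yet):
    [root@rac1 ~]# echo "written by rac1" | dd of=/dev/sdX bs=512 count=1
    [root@rac2 ~]# dd if=/dev/sdX bs=512 count=1 | strings
    If rac2 reads back the text that rac1 wrote, the device really is shared by both nodes; the Windows procedure from the SR applies the same principle using Windows-native device names and utilities.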

  • Shared storage check failed on nodes

    hi friends,
    I am installing RAC 10g on VMware and the OS is OEL4. I completed all the prerequisites, but when I run the command below,
    ./runcluvfy.sh stage -post hwos -n rac1,rac2, I get the error below.
    node connectivity check failed.
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sde on nodes:
    rac2,rac2,rac2,rac2,rac2,rac1,rac1,rac1,rac1,rac1
    Shared storage check failed on nodes "rac2,rac1"
    Please help me, anyone; it's urgent.
    Thanks,
    poorna.
    Edited by: 958010 on 3 Oct, 2012 9:47 PM

    Hello,
    It seems that your storage is not accessible from both nodes. If you want, you can follow these steps to configure 10g RAC on VMware.
    Steps to configure a two-node 10g RAC on RHEL-4
    Remark-1: H/W requirement for RAC
    a) 4 Machines
    1. Node1
    2. Node2
    3. storage
    4. Grid Control
    b) 2 switches
    c) 6 straight cables
    Remark-2: S/W requirement for RAC
    a) 10g clusterware
    b) 10g database
    Both must be the same version, e.g. 10.2.0.1.0
    Remark-3: RPMs requirement for RAC
    a) All 10g RPMs (it is easiest to install RHEL-4 with the 'Everything' option so that all the RPMs are present)
    b) 4 additional RPMs are required for the installation
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
    ------------ Start Machine Preparation --------------------
    1. Prepare 3 machines
    i. node1.oracle.com
    eth0 (192.9.201.183) - for public network
    eth1 (10.0.0.1) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    ii. node2.oracle.com
    eth0 (192.9.201.187) - for public network
    eth1 (10.0.0.2) - for private n/w
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    iii. openfiler.oracle.com
    eth0 (192.9.201.182) - for public network
    gateway (192.9.201.1)
    subnet (255.255.255.0)
    NOTE:-
    -- Here eth0 of all the nodes should be connected by Public N/W using SWITCH-1
    -- eth1 of all the nodes should be connected by Private N/W using SWITCH-2
    2. network Configuration
    #vim /etc/hosts
    192.9.201.183 node1.oracle.com node1
    192.9.201.187 node2.oracle.com node2
    192.9.201.182 openfiler.oracle.com openfiler
    10.0.0.1 node1-priv.oracle.com node1-priv
    10.0.0.2 node2-priv.oracle.com node2-priv
    192.9.201.184 node1-vip.oracle.com node1-vip
    192.9.201.188 node2-vip.oracle.com node2-vip
    3. Prepare both nodes for installation
    a. Set Kernel Parameters (/etc/sysctl.conf)
    kernel.shmall = 2097152
    kernel.shmmax = 2147483648
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 65536
    net.ipv4.ip_local_port_range = 1024 65000
    net.core.rmem_default = 262144
    net.core.rmem_max = 262144
    net.core.wmem_default = 262144
    net.core.wmem_max = 262144
    b. Configure /etc/security/limits.conf file
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    c. Configure /etc/pam.d/login file
    session required /lib/security/pam_limits.so
    d. Create user and groups on both nodes
    # groupadd oinstall
    # groupadd dba
    # groupadd oper
    # useradd -g oinstall -G dba oracle
    # passwd oracle
    e. Create required directories and set the ownership and permission.
    # mkdir -p /u01/crs1020
    # mkdir -p /u01/app/oracle/product/10.2.0/asm
    # mkdir -p /u01/app/oracle/product/10.2.0/db_1
    # chown -R oracle:oinstall /u01/
    # chmod -R 755 /u01/
    f. Set the environment variables
    $ vi .bash_profile
    ORACLE_BASE=/u01/app/oracle/; export ORACLE_BASE
    ORA_CRS_HOME=/u01/crs1020; export ORA_CRS_HOME
    #LD_ASSUME_KERNEL=2.4.19; export LD_ASSUME_KERNEL
    #LANG="en_US"; export LANG
    4. Storage configuration
    PART-A Open-filer Set-up
    Install openfiler on a machine (Leave 60GB free space on the hdd)
    a) Login to root user
    b) Start iSCSI target service
    # service iscsi-target start
    # chkconfig --level 345 iscsi-target on
    PART-B Configuring Storage on openfiler
    a) From any client machine, open a browser and access the openfiler console (port 446).
    https://192.9.201.182:446/
    b) Open system tab and update the local N/W configuration for both nodes with netmask (255.255.255.255).
    c) From the Volume tab click "create a new physical volume group".
    d) From "Block Device Management" click the "/dev/sda" option under 'edit disk'.
    e) Under "Create a partition in /dev/sda" section create physical Volume with full size and then click on 'CREATE'.
    f) Then go to the "Volume Section" on the right hand side tab and then click on "Volume groups"
    g) Then under the "Create a new Volume Group" specify the name of the volume group (ex- racvgrp) and click on the check box and then click on "Add Volume Group".
    h) Then go to the "Volume Section" on the right-hand-side tab, click on "Add Volumes", specify the volume name (e.g. racvol1), use all space, specify the "Filesystem/Volume type" as iSCSI and then click on CREATE.
    i) Then go to the "Volume Section" on the right hand side tab and then click on "iSCSI Targets" and then click on ADD button to add your Target IQN.
    j) Then go to "LUN Mapping" and click on "MAP".
    k) Then go to "Network ACL", allow both nodes from there, and click on UPDATE.
    Note: To create multiple volumes with openfiler we would need to use multipathing, which is quite complex; that is why we are going with a single volume here. Edit the properties of each volume and change access to 'allow'.
    f) Install the iscsi-initiator RPM on both nodes to access the iSCSI disk
    #rpm -ivh iscsi-initiator-utils-----------
    g) Make entry in iscsi.conf file about openfiler on both nodes.
    #vim /etc/iscsi.conf (in RHEL-4)
    In this file you will find the line "#DiscoveryAddress=192.168.1.2"; remove the comment and specify your storage IP address there.
    OR
    #vim /etc/iscsi/iscsi.conf (in RHEL-5)
    In this file you will find the line "#ins.address = 192.168.1.2"; remove the comment and specify your storage IP address there.
    g) #service iscsi restart (on both nodes)
    h) From both nodes, run this command to discover the openfiler volume:
    # iscsiadm -m discovery -t sendtargets -p 192.9.201.182
    i) #service iscsi restart (on both nodes)
    j) #chkconfig --level 345 iscsi on (on both nodes)
    k) Partition the disk: 3 primary partitions and 1 extended partition, and within the extended partition create 11 logical partitions
    A. Prepare partitions
    1. #fdisk /dev/sdb
    :e (extended)
    Part No. 1
    First Cylinder:
    Last Cylinder:
    :p
    :n
    :l
    First Cylinder:
    Last Cylinder: +1024M
    2. Note the /dev/sdb* names.
    3. #partprobe
    4. Login as root user on node2 and run partprobe
    B. On node1 login as root user and create following raw devices
    # raw /dev/raw/raw5 /dev/sdb5
    # raw /dev/raw/raw6 /dev/sdb6
    # raw /dev/raw/raw12 /dev/sdb12
    Run ls -l /dev/sdb* and ls -l /dev/raw/raw* to confirm the above
    -Repeat the same thing on node2
    C. On node1 as root user
    # vi /etc/sysconfig/rawdevices
    /dev/raw/raw5 /dev/sdb5
    /dev/raw/raw6 /dev/sdb6
    /dev/raw/raw7 /dev/sdb7
    /dev/raw/raw8 /dev/sdb8
    /dev/raw/raw9 /dev/sdb9
    /dev/raw/raw10 /dev/sdb10
    /dev/raw/raw11 /dev/sdb11
    /dev/raw/raw12 /dev/sdb12
    /dev/raw/raw13 /dev/sdb13
    /dev/raw/raw14 /dev/sdb14
    /dev/raw/raw15 /dev/sdb15
    D. Restart the raw service (# service rawdevices restart)
    #service rawdevices restart
    Assigning devices:
    /dev/raw/raw5 --> /dev/sdb5
    /dev/raw/raw5: bound to major 8, minor 21
    /dev/raw/raw6 --> /dev/sdb6
    /dev/raw/raw6: bound to major 8, minor 22
    /dev/raw/raw7 --> /dev/sdb7
    /dev/raw/raw7: bound to major 8, minor 23
    /dev/raw/raw8 --> /dev/sdb8
    /dev/raw/raw8: bound to major 8, minor 24
    /dev/raw/raw9 --> /dev/sdb9
    /dev/raw/raw9: bound to major 8, minor 25
    /dev/raw/raw10 --> /dev/sdb10
    /dev/raw/raw10: bound to major 8, minor 26
    /dev/raw/raw11 --> /dev/sdb11
    /dev/raw/raw11: bound to major 8, minor 27
    /dev/raw/raw12 --> /dev/sdb12
    /dev/raw/raw12: bound to major 8, minor 28
    /dev/raw/raw13 --> /dev/sdb13
    /dev/raw/raw13: bound to major 8, minor 29
    /dev/raw/raw14 --> /dev/sdb14
    /dev/raw/raw14: bound to major 8, minor 30
    /dev/raw/raw15 --> /dev/sdb15
    /dev/raw/raw15: bound to major 8, minor 31
    done
    E. Repeat the same thing on node2 also
    F. To make these partitions accessible to the oracle user, run these commands on both nodes.
    # chown -R oracle:oinstall /dev/raw/raw*
    # chmod -R 755 /dev/raw/raw*
    G. To keep these partitions accessible after a restart, make these entries on both nodes
    # vi /etc/rc.local
    chown -R oracle:oinstall /dev/raw/raw*
    chmod -R 755 /dev/raw/raw*
    5. SSH configuration (user equivalence)
    On node1:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node2:- $ssh-keygen -t rsa
    $ssh-keygen -t dsa
    On node1:- $cd .ssh
    $cat *.pub>>node1
    On node2:- $cd .ssh
    $cat *.pub>>node2
    On node1:- $scp node1 node2:/home/oracle/.ssh
    On node2:- $scp node2 node1:/home/oracle/.ssh
    On node1:- $cat node*>>authorized_keys
    On node2:- $cat node*>>authorized_keys
    Now test the ssh configuration from both nodes
    $ vim a.sh
    ssh node1 hostname
    ssh node2 hostname
    ssh node1-priv hostname
    ssh node2-priv hostname
    $ chmod +x a.sh
    $./a.sh
    The first time you will have to give the password; after that it should never ask for a password again.
    6. To run the cluster verifier
    On node1:- $cd /…/stage…/cluster…/cluvfy
    $./runcluvfy.sh stage -pre crsinst -n node1,node2
    The first time it will report four missing RPMs; install them keeping their dependencies in mind, i.e. in this order (rpm-3, rpm-4, rpm-1, rpm-2):
    1. compat-gcc-7.3-2.96.128.i386.rpm
    2. compat-gcc-c++-7.3-2.96.128.i386.rpm
    3. compat-libstdc++-7.3-2.96.128.i386.rpm
    4. compat-libstdc++-devel-7.3-2.96.128.i386.rpm
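    Alternatively, installing all four in a single rpm transaction lets rpm resolve the dependencies between them itself (a sketch; the file names are the ones listed above and are assumed to be in the current directory):
    # rpm -ivh compat-libstdc++-7.3-2.96.128.i386.rpm compat-libstdc++-devel-7.3-2.96.128.i386.rpm compat-gcc-7.3-2.96.128.i386.rpm compat-gcc-c++-7.3-2.96.128.i386.rpm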
    Then run cluvfy again and check that it comes back clean, and then start the clusterware installation.

  • CVU report for shared storage

    During Configuration RAC
    I am running CVU and getting this error message.
    Verification of shared storage accessibility was unsuccessful on all the nodes.
    Need suggestion

    I don't know. I don't know whether my knowledge is useful in your situation.
    I have really no idea what kind of environment you are using. After pulling teeth, I finally found out you are using EMC of some sort. I think you might be running Linux, but I'm not quite sure because you don't tell us. I think you have a logical volume visible, and hope that volume is visible as raw devices in a shared environment or through a commercial cluster file system.
    Perhaps you might open a service request with Oracle. Or just assume it is and proceed.
    However - I, as a volunteer, hereby walk away from this question.

  • My wife and I share one iTunes account but have separate Apple IDs for our devices - prior to 8.1 we shared storage on iCloud, now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign

    My wife and I share one iTunes account but have separate Apple IDs for our devices. Prior to 8.1 we shared storage on iCloud; now we can't, and when I try to buy more storage on her phone, instead of accessing the shared iTunes account it tries to sign in to iTunes using her Apple ID. I checked the iTunes ID and password on both devices. Can anyone help?

    Have a look here...
    http://macmost.com/setting-up-multiple-ios-devices-for-messages-and-facetime.html

  • The best option to create shared storage for Oracle 11gR2 RAC on OEL 5?

    Hello,
    Could you please tell me the best option for creating shared storage for Oracle 11gR2 RAC on Oracle Enterprise Linux 5 in a production environment? And could you help me create the shared storage? There is no additional step in the Oracle installation guide; there are only steps for ASM disk creation.
    Thank you.

    Here are the partition names and permissions. The partitions with 146 GB, 438 GB and 438 GB of capacity are my storage. Two of the three disks (the 438 GB ones) were configured as RAID 5 and the remaining disk as RAID 0. My storage is a Dell MD3000i connected to the nodes through Ethernet.
    Node 1
    [root@rac1 home]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:39 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:40 /dev/sda1
    brw-r----- 1 root disk 8, 16 Aug 8 17:39 /dev/sdb
    brw-r----- 1 root disk 8, 17 Aug 8 17:39 /dev/sdb1
    brw-r----- 1 root disk 8, 32 Aug 8 17:40 /dev/sdc
    brw-r----- 1 root disk 8, 48 Aug 8 17:41 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 18:26 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:43 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 18:34 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:43 /dev/sdf1
    brw-r----- 1 root disk 8, 96 Aug 8 18:34 /dev/sdg
    brw-r----- 1 root disk 8, 97 Aug 8 18:43 /dev/sdg1
    [root@rac1 home]# fdisk -l
    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8844 71039398+ 83 Linux
    Disk /dev/sdb: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 4079 32764536 82 Linux swap / Solaris
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 17784 142849948+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    Disk /dev/sdg: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdg1 1 53352 428549908+ 83 Linux
    Node 2
    [root@rac2 ~]# ll /dev/sd*
    brw-r----- 1 root disk 8, 0 Aug 8 17:50 /dev/sda
    brw-r----- 1 root disk 8, 1 Aug 8 17:51 /dev/sda1
    brw-r----- 1 root disk 8, 2 Aug 8 17:50 /dev/sda2
    brw-r----- 1 root disk 8, 16 Aug 8 17:51 /dev/sdb
    brw-r----- 1 root disk 8, 32 Aug 8 17:52 /dev/sdc
    brw-r----- 1 root disk 8, 33 Aug 8 18:54 /dev/sdc1
    brw-r----- 1 root disk 8, 48 Aug 8 17:52 /dev/sdd
    brw-r----- 1 root disk 8, 64 Aug 8 17:52 /dev/sde
    brw-r----- 1 root disk 8, 65 Aug 8 18:54 /dev/sde1
    brw-r----- 1 root disk 8, 80 Aug 8 17:52 /dev/sdf
    brw-r----- 1 root disk 8, 81 Aug 8 18:54 /dev/sdf1
    [root@rac2 ~]# fdisk -l
    Disk /dev/sda: 145.4 GB, 145492017152 bytes
    255 heads, 63 sectors/track, 17688 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 8796 70653838+ 83 Linux
    /dev/sda2 8797 12875 32764567+ 82 Linux swap / Solaris
    Disk /dev/sdc: 146.2 GB, 146278449152 bytes
    255 heads, 63 sectors/track, 17784 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdc1 1 17784 142849948+ 83 Linux
    Disk /dev/sdd: 20 MB, 20971520 bytes
    1 heads, 40 sectors/track, 1024 cylinders
    Units = cylinders of 40 * 512 = 20480 bytes
    Device Boot Start End Blocks Id System
    Disk /dev/sde: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sde1 1 53352 428549908+ 83 Linux
    Disk /dev/sdf: 438.8 GB, 438835347456 bytes
    255 heads, 63 sectors/track, 53352 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sdf1 1 53352 428549908+ 83 Linux
    [root@rac2 ~]#
    Thank you.
    Edited by: user12144220 on Aug 10, 2011 1:10 AM
    Edited by: user12144220 on Aug 10, 2011 1:11 AM
    Edited by: user12144220 on Aug 10, 2011 1:13 AM
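    As a rough sketch of the next step, stamping these partitions as ASM disks with ASMLib, assuming the oracleasm kernel module and oracleasm-support packages are installed and configured on both nodes, and that /dev/sde1, /dev/sdf1 and /dev/sdg1 are the MD3000i partitions on node 1 (the disk names below are illustrative):
    On node 1, as root:
    # /usr/sbin/oracleasm createdisk DATA1 /dev/sde1
    # /usr/sbin/oracleasm createdisk DATA2 /dev/sdf1
    # /usr/sbin/oracleasm createdisk FRA1 /dev/sdg1
    On node 2, as root, rescan so it picks up the stamps:
    # /usr/sbin/oracleasm scandisks
    # /usr/sbin/oracleasm listdisks
    Note that the device names differ between the nodes (the 146 GB LUN is /dev/sde on node 1 but /dev/sdc on node 2), which is exactly why ASM disk stamps or udev rules are preferred over hard-coded /dev/sd* names.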

  • Access iphoto 08 file on shared storage device from multiple machines

    I recently installed iLife 08 on both an iMac and a MacBook. Previously (iPhoto 06), both devices accessed the iPhoto library on a shared storage device without any problems. After the upgrade, my iMac is able to view the library but my MacBook (the second machine to be upgraded) no longer has access. 'Sharing' is too slow over the wireless network and doesn't represent a reasonable option.
    Is anyone else experiencing this issue? Any suggestions?

    Actually, neither repairing permissions nor changing them with Get Info worked for me. What did work was deleting the empty iPhoto Library in the home folder of the user who couldn't access the shared library, and putting an alias of the shared library in that user's Pictures folder. Everything then worked as it did prior to upgrading. Thanks.

  • How to Reorganize CSM200 Shared Storage in Solaris 10 x86 Oracle 10gR2

    I could use some guidance from those who are more experienced in RAC administration in a Solaris environment with ASM. I have a three-node RAC with Oracle 10gR2 instances on top of Solaris 10 x86 where the shared storage is a Sun CSM200 disk array which looks like a single disk to the rest of the world. I'm not very familiar with the CSM200 Common Array Manager but I do have access to use it.
    During initial setup, I followed the Oracle cookbook and defined a storage slice for each of the following: OCR, OCR mirror, three voting disks, and +DATA, for a total of six slices. I brought up the RAC and we've used it for a couple of weeks.
    This is a Dev and QA environment, so it changes pretty fast. The new requirement is to add a +FRA and to add a mount point for a file system on the shared storage, so that all three Oracle instances can refer to the same external table(s).
    However, I've already used all the available slices in the VTOC on the shared logical drive. I'm not sure how to proceed.
    1) Is it necessary to use the CAM to create two logical disks out of the single existing logical disk?
    2) If so, how destructive is that? I don't need to keep the contents of the database, but I do not want to reinstall CRS or ASM or the DB instances.
    3) Is it possible to combine the OCR and its mirror on the same slice, thus freeing a slice for reuse?
    4) Is it possible to combine all three voting disks on the same slice, thus freeing two slices for reuse?
    Edited by: user12006221 on Mar 29, 2011 3:30 PM
    Another question: Under 10.2.0.4, is it possible for the OCR and voting disks to be managed by ASM? I know it would be possible under 11g, but that's not an option as I am trying to match a customer's environment and they aren't going to 11g any time real soon.

    What you see is what happens when the Java runtime running on Solaris 10 x86 tries to load a library which is compiled for SPARC.
    Because of the native parts in SAP GUI for Java, compilations and installers are required for each OS - HW combination.
    The supported platforms can be seen in SAP note 954572. For Solaris only SPARC is currently supported.
    Because of the effort needed for compiling, testing, support etc. it is required to focus on OS - HW combinations widely used on desktop machines and Solaris 10 on x86 currently does not seem to be one of those.

  • Maxtor Shared Storage (NAS) + Airport Express problems

    Dear forum readers and network pundits,
    I have just recently bought a 500Gb Maxtor Shared Storage (MSS) ethernet drive, to which I'm planning to make regular backup of files from my wirelessly connected MacBook Pro and Mac Mini. Both computers are connected to the Internet via Airport Express, which, in turn, is connected to a router/modem. The MSS is attached to an RJ45 socket on the modem, next to the Airport Express RJ45 cable.
    The problem is that I cannot seem to make a proper connection to the MSS. I can access the drive's web interface through Safari, but I cannot mount it as a network drive in the Finder. When hooking up my MacBook Pro to the router/modem directly using a RJ45 cable everything works like a charm, so I know the drive works fine. The problem seems to be the transition from the wired router to Airport Express.
    I'm no network wizard in any way. When things don't work from the start I'm left scratching my beard, which is why I turn to the vast bank of knowledge contained here. Please help me. Is there a button in the Airport Express settings I can push to make it work, or do I need to by more hardware?
    Looking forward to your help.
    Cheers,
    Andreas
    P.S. I'll churn out brownie points by the bucketload for any helpful answers
    MacBook Pro & MacMini   Mac OS X (10.4.7)  

    This seems to me related to this problem
    http://discussions.apple.com/thread.jspa?threadID=795814&tstart=30
    I think it's the mixture of wired and wireless in combination with certain routers.

  • Shared storage is not showing in the server pool physical disks

    Hi, I have added a couple of images to give you an exact picture of what I am doing; please have a look.
    OVM can see Shared storage from Openfiler
    http://www.picpanda.com/viewer.php?file=u1pf8hre1oqufkx6vyr3.gif
    OVM server pool (physical disk) can see the shared storage name
    http://www.picpanda.com/viewer.php?file=0h2dn48x6gh5avr6fob.gif
    OVM does not show the physical disk from Shared storage
    http://www.picpanda.com/viewer.php?file=c3njd7bg4exyjpymsit.gif
    If I remember correctly, when I tried with an NFS share from openfiler it worked before, but this time I am trying with an iSCSI target.
    thanks
    Is there anything I have to do to make it work?

    Fosiul wrote:
    If I remember correctly, when I tried with an NFS share from openfiler it worked before, but this time I am trying with an iSCSI target. Is there anything I have to do to make it work?
    Check to see if the physical disk appears under the default volume group for your SAN. If not, refresh the SAN and wait until it does. Also, make sure you have an admin server configured for that SAN and that your Oracle VM servers are in the default access group.

  • Concurrent users unable to open a shared Microsoft Access MDB database

    I have a share on a Cisco NSS2000, the NSS is in Workgroup mode (Firmware 1.13).
    Every user have read/write rights on this share.
    When I try to open an Access MDB database from this share, it turns out that only a single user at a time can open the database.
    The second user that try to open the database get a "Cannot lock file" error message.
    So, when a user has the Access DB open, no one else can open (or connect to) the same DB.
    It's impossible to open the DB using the access IDE ("Cannot lock file" error message).
    It's impossible to make multiple connection to the DB using OleDB ("Cannot lock file" error message).
    I've tried give full access to the mdb file.
    I've tried give full access to the mdb folder.
    I've even tried to give full access to the temporary LDB file.
    I always get the same error message.
    Is there a workaround?
    Thanks, Max

    I've finally been able to update the firmware to the latest 1.16.
    Now everything works fine, problem solved! :)
    Thanks,
    Max
    catung wrote:
    Max, please update your firmware to the latest posted version, 1.16.
    Thanks,
    Carl Tung, SBTG - PE, Storage

  • How do I move files from iPad to shared storage

    I have run out of room on my 64GB iPad.  It contains two series of a TV program and I want to keep the episodes.  However I would like to be able to get at these episodes wherever I am located be it in Australia, Europe or the USA so would like to have them located in shared storage.  What is the best way to achieve this.  I am somewhat confused with my choices and my last update experience of losing a number of movies makes me nervous.  Any suggestions would be great.  Thanks

    All of your purchases are available free again from the stores. You can delete them and when you want to access them again just go to the store under purchased and select the ones you want. I do this all the time on my 16 gig ipad.

  • Shared Storage RAC

    Hello,
    This is a real-world Oracle RAC 11gR2 installation; I need to configure the shared storage for a 2-node RAC on Red Hat Enterprise Linux 5.
    Could you please send me a step-by-step guide on how to do it? I want to use Device Mapper Multipath for that and ASM for storage.
    Thank you

    899660 wrote:
    Hello,
    This is a real-world Oracle RAC 11gR2 installation; I need to configure the shared storage for a 2-node RAC on Red Hat Enterprise Linux 5.
    Could you please send me a step-by-step guide on how to do it? I want to use Device Mapper Multipath for that and ASM for storage.
    Thank you
    Hi,
    Shared storage is a hardware device, which cannot be created by you; of course you can use that shared device to configure ASM.
    Check the links below:
    http://martincarstenbach.wordpress.com/2010/11/16/configuration-device-mapper-multipath-on-oel5-update-5/
    http://www.oracle.com/technetwork/database/device-mapper-udev-crs-asm.pdf
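    For the multipath part, a minimal /etc/multipath.conf sketch is shown below (illustrative only: the wwid values must be replaced with the real ones reported by multipath -ll or scsi_id for your LUNs, and the aliases are made-up names):
    defaults {
        user_friendly_names yes
    }
    multipaths {
        multipath {
            wwid   360000000000000000000000000000001
            alias  asm_data01
        }
        multipath {
            wwid   360000000000000000000000000000002
            alias  asm_fra01
        }
    }
    After editing the file, restart multipathd (service multipathd restart) and check the maps with multipath -ll; the resulting /dev/mapper/asm_data01 style devices are what you then hand to ASM, with ownership and permissions set via udev as the second link above describes.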

  • Problem of using OCFS2 as shared storage to install RAC 10g on VMware

    Hi, all
    I am installing a RAC 10g cluster with two Linux nodes on VMware. I created a shared 5 GB disk for the two nodes as the shared storage partition. Using the OCFS2 tools, I formatted this shared storage partition and successfully auto-mounted it on both nodes.
    Before installing, I used the command "runcluvfy.sh stage -pre crsinst -n node1,node2" to check the installation prerequisites. Everything is OK except for the error "Could not find a suitable set of interfaces for VIPs." By searching the web, I found this error can be safely ignored.
    OCFS2 works well on both nodes: I formatted the shared partition as an ocfs2 file system and configured o2cb to auto-start the OCFS2 service. I mounted the shared disk on both nodes at the /ocfs directory. With an entry in both nodes' /etc/fstab, the partition is auto-mounted at system boot. I can access files in the shared partition on both nodes.
    My problem is that, when installing clusterware, at the stage "Specify Oracle Cluster Registry", I enter "/ocfs/OCRFILE" for Specify OCR Location and "/ocfs/OCRFILE_Mirror" for Specify OCR Mirror Location. But got an error as following:
    ----- Error Message ----
    The location /ocfs/OCRFILE, entered for the Oracle Cluster Registry(OCR) is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system that is visible by the same name on all nodes of the cluster.
    ------ Error Message ---
    I don't know why the OUI can't recognize /ocfs as a shared partition. On both nodes, the command "mounted.ocfs2 -f" gives:
    Device FS Nodes
    /dev/sdb1 ocfs2 node1, node2
    What could be wrong? Any help is appreciated!
    Addition information:
    1) uname -r
    2.6.9-42.0.0.0.1.EL
    2) Permission of shared partition
    $ls -ld /ocfs/
    drwxrwxr-x 6 oracle dba 4096 Aug 3 18:22 /ocfs/

    Hello
    I am not sure how relevant the following solution is to your problem (regardless of when it was originally posted, it may help someone reading this thread). Here is what I faced and how I fixed it:
    I was setting up RAC using VMware. I prepared rac1 [installed the OS, configured disks, users, etc.] and then made a copy of it as rac2. So far so good. When, following the guide I was using for RAC configuration, I started the OCFS2 configuration, I faced the following error on rac2 when I tried to mount /dev/sdb1:
    ===================================================
    [root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
    ocfs2_hb_ctl: OCFS2 directory corrupted while reading uuid
    mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    ===================================================
    After a lot of "googling around", I finally bumped into a page; the kind person who posted the solution said (in my words below, and in more detail):
    o shutdown both rac1 and rac2
    o in VMWare, "edit virtual machine settings" for rac1
    o remove the disk [make sure you drop the correct one]
    o recreate it and select *"allocate all disk space now"* [with same name and in the same directory where it was before]
    o start rac1, log in as *"root"* and run *"fdisk /dev/sdb"* [or whichever is/was the disk where you are installing ocfs2]
    Once done, repeat the steps for configuring OCFS2. I was successfully able to mount the disk on both machines.
    All this problem was apparently caused by not choosing "allocate all disk space now" option while creating the disk to be used for OCFS2.
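    If you prefer to fix the disk from the command line instead of clicking through the VM settings, here is a hedged sketch (assuming VMware Workstation/Server with vmware-vdiskmanager available and shared.vmdk as the shared disk file; check the exact options against your VMware version):
    # vmware-vdiskmanager -c -s 5GB -a lsilogic -t 2 /vmware/shared/shared.vmdk
    Here -t 2 requests a preallocated single-file disk, which is what "allocate all disk space now" does in the GUI. Shared RAC disks on VMware also normally need disk.locking = "false" in each VM's .vmx file so both nodes can open the disk at the same time.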
    If you still have any questions or problem, email me at [email protected] and I'll try to get back to you at my earliest.
    Good luck!
    Muhammad Amer
    [email protected]

  • WRT610N Shared Storage

    I recently purchased a WRT610N and have been having some problems setting up the USB shared storage feature. I have a 1.5 terabyte Seagate drive on which I have created two partitions (I had read elsewhere in the forums that the WRT610N only handles partitions/drives up to 1 TB). Both partitions are NTFS, the first one being 976,561 GB and the second one being 420,700 GB. Both drives show up in the "Disk" section of the admin console, and I can create/define a share for the larger of the two partitions without any problems.
    The first of my problems comes when I try to create/define a share for the smaller partition. I can create a share, but the admin console does not save the access privileges that I assign to it. Despite setting them up in the admin console, they don't show up when I go back and look; in both the detail and summary views the access rights show as blank. I do not have this issue with the larger partition, where I can add and later view groups in the Access section.
    The second problem comes when I try to attach to the larger share from a network client. I can look at the shares if I use Start - Run and type \\192.168.1.1. If I enter my admin user ID and password, I can see the new share on the WRT610N. When I try to double-click on it, I am then prompted again for a username & password. When I try to re-enter the admin user ID and password, the logon comes right back to me with "WRT610n\admin" populated in the user ID field. From there it won't accept the admin password. There are no error messages.
    Help with either problem would be appreciated.

    When you select your storage partition and open it, if it asks you for a username and password, that is the username and password of your storage drive; you may have set a password on your storage drives.
    Log in to your router's GUI, click on the Storage tab, and below you will find the "Administration" sub-tab; click on it. If you wish, you can modify the "admin" rights, such as changing the password, or you can create your own user and password. Then, whenever you log in to your storage partition and it asks for a username and password, enter that username and password and click OK. This way you will be able to access your storage drive.
