Running probes on shared storage?

Hi. I'm trying to avoid the hassle of manually shutting down the probes every time I need to put my hosts into Maintenance Mode. As long as the probes are deployed on local datastores, this seems to be mandatory.
What if I put the probes on shared storage instead? Would that allow the hosts to enter Maintenance Mode?
Would the AppSpeed server somehow make sure the probes return to their respective hosts when the hosts come out of Maintenance Mode? If not, could I create DRS host-VM affinity rules to keep each probe on its host?
Thanks in advance!

Hmm, apparently the probes are automatically shut down when the host is put in Maintenance Mode. I thought they had to be shut down manually. How is this done?
Well, you learn something new every day.

Similar Messages

  • Is shared storage provided by VirtualBox as good as or better than Openfiler?

    Grid version : 11.2.0.3
    Guest OS     : Solaris 10 (64-bit)
    Host OS      : Windows 7 (64-bit)
    Hypervisor   : VirtualBox 4.1.18
    In the past, I have created a 2-node RAC in a virtual environment (11.2.0.2) in which the shared storage was hosted on OpenFiler.
    Now that VirtualBox supports shared LUNs, I want to try it out. If VirtualBox's shared storage is as good as OpenFiler, I would definitely go for VirtualBox, since OpenFiler requires a third VM (Linux) to be created just for hosting the storage.
    For pre-RAC testing, I created a VirtualBox VM and created a standalone DB in it. The test below was done on VirtualBox's LOCAL storage (I have yet to learn how to create shared LUNs in VirtualBox; see the sketch after the test output below).
    I know that datafile creation is not a definitive test of I/O throughput, but I did a quick test by creating a 6 GB tablespace.
    Is a duration of 2 minutes and 42 seconds acceptable for a 6 GB datafile?
    SQL> set timing on
    SQL> create tablespace MHDATA datafile '/u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf' SIZE 6G AUTOEXTEND off ;
    Tablespace created.
    Elapsed: 00:02:42.47
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    $
    $ du -sh /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    6.0G   /u01/app/hldat1/oradata/hcmbuat/mhdata01.dbf
    $ df -h /u01/app/hldat1/oradata/hcmbuat
    Filesystem             size   used  avail capacity  Mounted on
    /dev/dsk/c0t0d0s6       14G    12G   2.0G    86%    /u01
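
    Not part of the original post, but since the question of how to create shared LUNs in VirtualBox came up: the usual approach on VirtualBox 4.x is to create a fixed-size disk, mark it shareable, and attach it to both VMs. This is only a sketch; the file path and the controller name "SATA" are assumptions and should be adjusted to your VMs.
    {code}
    # create a fixed-size 10 GB disk (shareable disks must be fixed-size, not dynamically allocated)
    VBoxManage createhd --filename /vbox/shared/asm1.vdi --size 10240 --variant Fixed

    # mark it shareable so more than one VM may attach it at the same time
    VBoxManage modifyhd /vbox/shared/asm1.vdi --type shareable

    # attach it to both RAC VMs (controller name "SATA" is an assumption)
    VBoxManage storageattach rac1 --storagectl SATA --port 1 --device 0 --type hdd --medium /vbox/shared/asm1.vdi
    VBoxManage storageattach rac2 --storagectl SATA --port 1 --device 0 --type hdd --medium /vbox/shared/asm1.vdi
    {code}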

    Well, I once experimented with OpenFiler and built a 2-node 11.2 RAC on Oracle Linux 5 using iSCSI storage (3 VirtualBox VMs in total, all 3 on a desktop PC: Intel i7 2600K, 16 GB memory).
    CPU/memory wasn't a problem, but since all 3 VMs were on a single HDD, performance was awful.
    I didn't really run any benchmarks, but a compressed full database backup with RMAN of an empty database (<1 GB) took about 15 minutes...
    2 VMs + a VirtualBox shared disk on the same single HDD gave much better performance; I'm still using this kind of setup for my sandbox RAC databases.
    Edit: 6 GB in 2'42" is about 37 MB/s.
    With the above setup using OpenFiler, it was nowhere near that.
    Edit 2: I made a little test.
    Host: Windows 7
    Guests: 2 x Oracle Linux 6.3, 11.2.0.3
    Hypervisor: VirtualBox 4.2
    PC is the same as above
    2 virtual cores + 4 GB memory for each VM
    2 VMs + VirtualBox shared storage (a single file) on a single HDD (Seagate Barracuda 3TB ST3000DM001)
    Created a 4 GB datafile (not enough space for 6 GB):
    {code}SQL> create tablespace test datafile '+DATA' size 4G;
    Tablespace created.
    Elapsed: 00:00:31.88
    {code}
    {code}RMAN> backup as compressed backupset database format '+DATA';
    Starting backup at 02-OCT-12
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=22 instance=RDB1 device type=DISK
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=+DATA/rdb/datafile/system.262.790034147
    input datafile file number=00002 name=+DATA/rdb/datafile/sysaux.263.790034149
    input datafile file number=00003 name=+DATA/rdb/datafile/undotbs1.264.790034151
    input datafile file number=00004 name=+DATA/rdb/datafile/undotbs2.266.790034163
    input datafile file number=00005 name=+DATA/rdb/datafile/users.267.790034163
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/nnndf0_tag20121002t192133_0.389.795640895 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
    channel ORA_DISK_1: starting compressed full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current control file in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 02-OCT-12
    channel ORA_DISK_1: finished piece 1 at 02-OCT-12
    piece handle=+DATA/rdb/backupset/2012_10_02/ncsnf0_tag20121002t192133_0.388.795640919 tag=TAG20121002T192133 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 02-OCT-12
    {code}
    Now, I don't know much about OpenFiler and maybe I messed something up, but I think this is quite good, so I wouldn't use a 3rd VM just for the storage.

  • How to Reorganize CSM200 Shared Storage in Solaris 10 x86 Oracle 10gR2

    I could use some guidance from those who are more experienced in RAC administration in a Solaris environment with ASM. I have a three-node RAC with Oracle 10gR2 instances on top of Solaris 10 x86 where the shared storage is a Sun CSM200 disk array which looks like a single disk to the rest of the world. I'm not very familiar with the CSM200 Common Array Manager but I do have access to use it.
    During initial setup, I followed the Oracle cookbook and defined a storage slice for each of the following: OCR, OCR mirror, three voting disks, and +DATA, for a total of six slices. I brought up the RAC and we've used it for a couple of weeks.
    This is a Dev and QA environment, so it changes pretty fast. The new requirement is to add a +FRA and to add a mount point for a file system on the shared storage, so that all three Oracle instances can refer to the same external table(s).
    However, I've already used all the available slices in the VTOC on the shared logical drive. I'm not sure how to proceed.
    1) Is it necessary to use the CAM to create two logical disks out of the single existing logical disk?
    2) If so, how destructive is that? I don't need to keep the contents of the database, but I do not want to reinstall CRS or ASM or the DB instances.
    3) Is it possible to combine the OCR and its mirror on the same slice, thus freeing a slice for reuse?
    4) Is it possible to combine all three voting disks on the same slice, thus freeing two slices for reuse?
    Edited by: user12006221 on Mar 29, 2011 3:30 PM
    Another question: Under 10.2.0.4, is it possible for the OCR and voting disks to be managed by ASM? I know it would be possible under 11g, but that's not an option as I am trying to match a customer's environment and they aren't going to 11g any time real soon.
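
    Not from the original thread, but one way to see how the existing slices are laid out before deciding whether any can be consolidated is to print the VTOC of the shared logical drive on Solaris. The device name below is an assumption; use the device reported by format(1M).
    {code}
    # print the current partition map (slice layout) of the shared logical drive
    # c6t0d0 is an assumed device name; slice 2 conventionally covers the whole disk
    prtvtoc /dev/rdsk/c6t0d0s2
    {code}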

    What you see is what happens when the Java runtime running on Solaris 10 x86 tries to load a library compiled for SPARC.
    Because of the native parts in SAP GUI for Java, separate compilation and installers are required for each OS/hardware combination.
    The supported platforms are listed in SAP note 954572. For Solaris, only SPARC is currently supported.
    Because of the effort needed for compiling, testing, support, etc., it is necessary to focus on OS/hardware combinations widely used on desktop machines, and Solaris 10 on x86 currently does not seem to be one of those.

  • How can I use all vCenter features such as HA and DRS without shared storage?

    Hi,
    I have 2 ESXi hosts (DL380 G8), but I don't have any shared storage or SAN. I want to use vCenter features like HA and DRS. To use these features, do I have to run VSAN?
    I read that to use VSAN we have to buy AT LEAST ONE SSD disk. Is this true?
    Is an SSD disk necessary?
    At least one SSD disk for each host?
    Can I use all vCenter features with VSAN?
    Please, can someone help me?

    Hi,
    For VSAN you need at least 3 hosts (4 recommended).
    Each host must have 1 SSD for the read cache/write buffer.
    Do you have any storage at all? How about running a NAS such as OpenFiler or FreeNAS? That way you can present shared storage to the hosts (I'd use this in a lab only). These NAS OSes also run as virtual machines, so you can turn any disk into shared storage.
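
    Not part of the original reply, but as an illustration of the NAS route: once a FreeNAS/OpenFiler box (physical or virtual) exports an NFS share, it can be mounted as a datastore on each ESXi host from the ESXi shell. The IP address, export path and datastore name below are assumptions.
    {code}
    # mount an NFS export as a shared datastore on an ESXi 5.x host
    # 192.168.1.50 and /mnt/tank/ds1 are assumed values for the NAS
    esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/ds1 --volume-name=nfs_shared01

    # verify the datastore is mounted
    esxcli storage nfs list
    {code}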

  • DFSr supported cluster configurations - replication between shared storage

    I have a very specific configuration for DFSr that appears to be suffering severe performance issues when hosted on a cluster, as part of a DFS replication group.
    My configuration:
    3 Physical machines (blades) within a physical quadrant.
    3 Physical machines (blades) hosted within a separate physical quadrant
    Both quadrants are extremely well connected, local, 10GBit/s fibre.
    There is local storage in each quadrant, no storage replication takes place.
    The 3 machines in the first quadrant are MS clustered with shared storage LUNs on a 3PAR filer.
    The 3 machines in the second quadrant are also clustered with shared storage, but on a separate 3PAR device.
    8 shared LUNs are presented to the cluster in the first quadrant, and an identical storage layout is connected in the second quadrant. Each LUN has an HAFS application associated with it which can fail over onto any machine in the local cluster.
    DFS replication groups have been set up for each LUN, and data is replicated from an "Active" cluster node entry point to a "Passive" cluster node that provides no entry point to the data via DFSn and holds a Read-Only copy on its shared cluster storage.
    For the sake of argument, assume that all HAFS application instances in the first quadrant are "Active" in a read/write configuration, and all "Passive" instances of the HAFS applications in the other quadrant are Read-Only.
    This guide: http://blogs.technet.com/b/filecab/archive/2009/06/29/deploying-dfs-replication-on-a-windows-failover-cluster-part-i.aspx defines how to add a clustered service to a replication group. It clearly shows using "shared storage" for the cluster, which is common sense; otherwise no application fail-over is possible, which removes the entire point of using a resilient cluster.
    This article: http://technet.microsoft.com/en-us/library/cc773238(v=ws.10).aspx#BKMK_061 defines the following:
    DFS Replication in Windows Server 2012 and Windows Server 2008 R2 includes the ability to add a failover cluster
    as a member of a replication group. The DFS Replication service on versions of Windows prior to Windows Server 2008 R2
    is not designed to coordinate with a failover cluster, and the service will not fail over to another node.
    It then goes on to state, quite incredibly: DFS Replication does not support replicating files on Cluster Shared Volumes.
    Stating that DFSr does not support Cluster Shared Volumes makes no sense at all after stating that clusters are supported in replication groups and providing a TechNet guide to set up and configure exactly this configuration. What possible use is a clustered HAFS solution that has no shared storage between the clustered nodes? None at all.
    My question: I need some clarification. Is the text meant to read "between" Cluster Shared Volumes?
    The storage configuration must be shared in order to form a clustered service in the first place. What we are seeing from experience is a serious degradation of performance when attempting to replicate/write data between two clusters running a HAFS configuration in a DFS replication group.
    As a test, if local/logical storage is mounted on a physical machine, the performance of a DFS replication group between the unshared, logical storage on the physical nodes approaches 15k small files per minute on the initial write, and is even higher for file amendments. When replicating between two nodes in a cluster with shared clustered storage, the solution manages a weak 2,500 files per minute on the initial write and only 260 files per minute when attempting to update data/amend files.
    By testing various configurations we have effectively ruled out the SAN, the storage, drivers, firmware, the DFSr configuration and the replication group configuration; the only factor left that makes any difference is replicating from one shared clustered storage LUN to another.
    So in summary:
    Logical Volume ---> Logical Volume = Fast
    Logical Volume ---> Clustered Shared Volume = ??
    Clustered Shared Volume ---> Clustered Shared Volume = Pitifully slow
    Can anyone explain why this might be?
    The guidance in the article is in clear conflict with all the other evidence around DFSr and clustering; however, it may explain why we are seeing a real issue with replication performance.
    Many thanks for your time and any help/replies that may be received.
    Paul

    Hello Shaon Shan,
    I am also facing the same scenario at one of my customer sites.
    We have two file servers running as guest VMs on Hyper-V 2012 R2 using Cluster Shared Volumes. Even the data partition drive is part of a CSV.
    It's really confusing whether DFS replication on CSV is supported or not, and what the consequences would be of using it.
    To my knowledge, we have some customers who have been running Hyper-V 2008 R2 with DFS configured on CSV for more than 4 years without any issue.
    I would appreciate it if you could elaborate and explain in detail the limitations of using CSV.
    Thanks in advance,
    Abul

  • Qmaster error: shared storage client timed out while subscribing to...

    Here's my Qmaster setup:
    computer 1: CONTROLLER, no nodes
    - 8TB RAID hooked up via Fiber
    - connected to the GigE network switch via a 6-port bond
    - cluster storage set to a path on the RAID
    computers 2, 3, 4, 5: RENDER NODES
    - each computer has a 2-port bonded connection with the GigE switch
    computer 6: Client, with FCS2 installed.
    - connected with a single GigE link
    I have set up this cluster primarily for command-line renders, and it works great. I submit command-line renders from the client computer, which get distributed and executed on each node. The command line renders specify a source file on the RAID, and a destination path on the RAID. Everything works great.
    I run into trouble when trying to use Compressor with this same setup. The files are on the RAID, and all my computers have an NFS automount that puts the RAID in the /Volumes folder on each computer.
    I set up my Compressor job and submit it to the cluster. It submits successfully and distributes the work. After a few seconds, each node gives me a timeout error:
    "Shared storage client timed out while subscribing to [computer1.local/path to cluster storage]"
    Is this a bandwidth issue? Command-line renders work fine; I can render 16 simultaneous QuickTimes to the RAID over NFS. I don't see much network activity on any of the computers when it's trying to start the Compressor render; it's as if it's not even trying to connect.
    If I submit the SAME Compressor job to a cluster with nodes ONLY on the controller computer, it renders fine. Clearly the networked nodes are having trouble connecting to the share for some reason.
    Does anybody have any ideas? I have tried almost everything to get this to work. Hooking each node up to the RAID locally is unfortunately NOT an option.
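
    Not from the original thread, and not the one-line fix the poster eventually received, but a quick way to check whether the render nodes can actually see and use the controller's NFS export is to query it from one of the nodes. The hostname and path below are assumptions based on the post.
    {code}
    # from a render node: list the exports the controller is advertising
    showmount -e computer1.local

    # check that the automounted path is reachable and writable from the node
    ls /Volumes/RAID
    touch /Volumes/RAID/.qmaster_write_test && rm /Volumes/RAID/.qmaster_write_test
    {code}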

    WELL I DO NOW!
    Thanks. It's taken 6 months and several paid 'professionals', and then you come in here... swinging your minimalist genius. One line. One single line. And it's done.
    If you are in London, let's lift a beer or five together.
    Thank you, sir. Thank you!

  • BWA OS Upgrade Error Shared Storage

    We have recently upgraded our IBM BWA to SUSE Linux 11.1, upgraded GPFS to 3.4.0-7 and recompiled RDAC. Everything seemed to work fine and the queries execute, but when I run checkBIA, I get these warning/error messages:
    OK: ====== Shared Storage ======
    OK: Storage: /usr/sap/BXQ/TRX00/index
    OK: Logdir: /usr/sap/BXQ/SYS/global/trex/checks/report_2012-02-28_102556
    OK: Local avg directory creation/deletion: 1330.9 dirs/s, exp: 700.0 dirs/s
    ERROR: Parallel remote avg 4 hosts: 24.4 MB/s, exp: 31.0 MB/s
    ERROR: 1 hosts parallel iterative remote avg: 57.9 MB/s, exp: 90.0 MB/s
    WARNING: 2 hosts parallel iterative remote avg: 42.8 MB/s, exp: 45.0 MB/s
    WARNING: 4 hosts parallel iterative remote avg: 26.4 MB/s, exp: 31.0 MB/s
    ERROR: Serial remote avg: 59.4 MB/s, exp: 90.0 MB/s
    OK: ====== Landscape Reorganization ======
    Anyone seen this before, or have any idea what may be the cause? Thank you
    Karl

    Hello Karl,
    First, BWA is not supported on all SUSE versions. Please check http://service.sap.com/pam > "BW Accelerator 7.20". It's certainly not supported for BWA 7.0.
    Secondly, you should not upgrade the OS unless specifically instructed to do so by SAP Support or unless security patches are required. Moving from SUSE 9 or 10 to 11 in particular is not something that is recommended.
    Finally, because of #1, please contact IBM to make sure you have a supported OS version for your specific BWA hardware. IBM also needs to make sure that your hardware fulfills the minimum specifications for network and storage throughput (which is what the script checks).
    Thanks,
    Marc Bernard
    SAP Customer Solution Adoption (CSA)

  • How do I move files from iPad to shared storage?

    I have run out of room on my 64 GB iPad. It contains two series of a TV program and I want to keep the episodes. However, I would like to be able to get at these episodes wherever I am located, be it Australia, Europe or the USA, so I would like to have them in shared storage. What is the best way to achieve this? I am somewhat confused by my choices, and my last update experience of losing a number of movies makes me nervous. Any suggestions would be great. Thanks.

    All of your purchases are available again for free from the stores. You can delete them, and when you want to access them again just go to the store, look under Purchased and select the ones you want. I do this all the time on my 16 GB iPad.

  • Advantages of Shared Storage in SOA Cluster

    Hi,
    The Enterprise Deployment Guide (http://download.oracle.com/docs/cd/E15523_01/core.1111/e12036/toc.htm) describes installing the binaries on shared storage.
    We have NAS as shared storage.
    My question is: what are the advantages/disadvantages of installing the binaries on shared storage?
    One advantage I know of, as mentioned in the guide, is that we can create multiple SOA servers from a single installation.
    Thanks
    Manish

    It has always been my understanding that shared storage is a prerequisite, not a recommendation, meaning that if you want a cluster configuration you must have shared storage. I have had a quick look through the EDG and can't see any reference to installing the binaries on non-shared storage.
    I'm not 100% sure on this, but I don't believe the WLS and SOA homes are used at run time. The run-time files are in the managed server location, e.g. user_projects; by default this sits in the WLS home.
    I also don't know much about the shared storage options, e.g. NAS versus SAN, but if you already have NAS, it seems to be the logical choice.
    cheers
    James

  • Choice of shared storage for the Oracle VM clustering feature

    Hi,
    I would like to experiment with the Oracle VM clustering feature across multiple OVM servers. One requirement is shared storage, which can be provided by an iSCSI/FC SAN or by NFS. These types of external storage are usually very expensive. For testing purposes, what other options for shared storage can be used? Can someone share their experience?

    You don't need to purchase an expensive SAN storage array for this. A regular PC running Linux or Solaris will do just fine to act as an iSCSI target or to provide NFS shares via TCP/IP. Googling for "linux iscsi target howto" reveals a number of hits like this one: "RHEL5 iSCSI Target/Initiator" - http://blog.hamzahkhan.com/?p=55
    For Solaris, this book might be useful: "Configuring Oracle Solaris iSCSI Targets and Initiators (Tasks)" - http://download.oracle.com/docs/cd/E18752_01/html/817-5093/fmvcd.html
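
    Not part of the original reply, but as a minimal sketch of the NFS route on a spare Linux box (the directory, subnet, IP address and mount point below are assumptions):
    {code}
    # on the Linux storage box: export a directory to the OVM servers' subnet
    mkdir -p /srv/ovm_shared
    echo '/srv/ovm_shared 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra
    service nfs restart

    # on each Oracle VM server: mount the export to verify it is reachable
    mkdir -p /mnt/ovm_shared
    mount -t nfs 192.168.1.10:/srv/ovm_shared /mnt/ovm_shared
    {code}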

  • Problem of using OCFS2 as shared storage to install RAC 10g on VMware

    Hi, all
    I am installing a 10g RAC cluster with two Linux nodes on VMware. I created a shared 5 GB disk for the two nodes as the shared storage partition. Using the OCFS2 tools, I formatted this shared partition and successfully auto-mounted it on both nodes.
    Before installing, I used the command "runcluvfy.sh stage -pre crsinst -n node1,node2" to check the installation prerequisites. Everything is OK except for an error, "Could not find a suitable set of interfaces for VIPs." By searching the web, I found that this error can be safely ignored.
    OCFS2 works well on both nodes: I formatted the shared partition as an ocfs2 file system and configured o2cb to auto-start the OCFS2 service. I mounted the shared disk on both nodes at the /ocfs directory. By adding an entry to both nodes' /etc/fstab, the partition is auto-mounted at system boot. I can access files on the shared partition from both nodes.
    My problem is that, when installing Clusterware, at the "Specify Oracle Cluster Registry" stage I enter "/ocfs/OCRFILE" for the OCR location and "/ocfs/OCRFILE_Mirror" for the OCR mirror location, but I get the following error:
    ----- Error Message ----
    The location /ocfs/OCRFILE, entered for the Oracle Cluster Registry(OCR) is not shared across all the nodes in the cluster. Specify a shared raw partition or cluster file system that is visible by the same name on all nodes of the cluster.
    ------ Error Message ---
    I don't know why the OUI can't recognize /ocfs as a shared partition. On both nodes, running the command "mounted.ocfs2 -f", I get the result:
    Device FS Nodes
    /dev/sdb1 ocfs2 node1, node2
    What could be wrong? Any help is appreciated!
    Additional information:
    1) uname -r
    2.6.9-42.0.0.0.1.EL
    2) Permission of shared partition
    $ls -ld /ocfs/
    drwxrwxr-x 6 oracle dba 4096 Aug 3 18:22 /ocfs/

    Hello
    I am not sure how relevant the following solution is to your problem (regardless of when it was originally posted, it may help someone reading this thread), but here is what I faced and how I fixed it:
    I was setting up RAC using VMware. I prepared rac1 [installed the OS, configured disks, users, etc.] and then made a copy of it as rac2. So far so good. When, as per the guide I was following for the RAC configuration, I started the OCFS2 configuration, I faced the following error on rac2 when I tried to mount /dev/sdb1:
    ===================================================
    [root@rac2 ~]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs
    ocfs2_hb_ctl: OCFS2 directory corrupted while reading uuid
    mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
    ===================================================
    After a lot of googling around, I finally bumped into a page where a kind person had posted the solution [in my words below, and in more detail]:
    o shut down both rac1 and rac2
    o in VMware, "edit virtual machine settings" for rac1
    o remove the disk [make sure you drop the correct one]
    o recreate it and select "allocate all disk space now" [with the same name and in the same directory as before]
    o start rac1, log in as "root" and run "fdisk /dev/sdb" [or whichever is/was the disk where you are installing OCFS2]
    Once done, repeat the steps for configuring OCFS2. I was then able to mount the disk on both machines.
    All of this was apparently caused by not choosing the "allocate all disk space now" option when creating the disk to be used for OCFS2.
    If you still have any questions or problems, email me at [email protected] and I'll get back to you as soon as I can.
    Good luck!
    Muhammad Amer
    [email protected]
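
    Not from the original thread, but for anyone building a similar VMware-hosted OCFS2/RAC sandbox: a preallocated shared disk can also be created from the command line and flagged as shareable in each VM's .vmx file. This is only a sketch; the paths, controller numbers and the exact .vmx option names are assumptions taken from common RAC-on-VMware guides, so verify them against your VMware product's documentation.
    {code}
    # create a preallocated 5 GB virtual disk; -t 2 = preallocated, single file
    vmware-vdiskmanager -c -s 5GB -a lsilogic -t 2 /vmware/shared/ocfs2_disk.vmdk

    # then, in each VM's .vmx file, attach the disk on a dedicated SCSI controller
    # and allow concurrent access (option names assumed from common RAC guides):
    #   scsi1.present    = "TRUE"
    #   scsi1.sharedBus  = "virtual"
    #   scsi1:0.present  = "TRUE"
    #   scsi1:0.fileName = "/vmware/shared/ocfs2_disk.vmdk"
    #   disk.locking     = "FALSE"
    {code}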

  • 10g RAC on Veritas cluster software and shared storage

    1. Install oracle binaries and patches (RAC install)
    2. Configure Cluster control interface (shared storage) for Oracle
    3. Create instances of oracle
    These are the 3 things I am wondering how to handle. I have done all of these with Oracle Clusterware, but never with Veritas cluster software. Are these 3 steps the same or different? If someone can help...

    How can we do this while using Veritas cluster software?
    1. Install Oracle binaries and patches (RAC install)
    2. Configure the cluster control interface (shared storage) for Oracle
    3. Create the Oracle instances
    If we install RDBMS 10.2.0.1 with the standard installer, will it detect VCS, and when we run DBCA will it offer the RAC database option?
    What is "configure the cluster control interface (shared storage) for Oracle"?

  • Shared Storage Check

    Hi all,
    We are planning to add a node to our existing RAC deployment (database 10gR2, Sun Solaris 5.9). Currently the shared storage is an IBM SAN.
    When I run the shared storage check using cluvfy, it fails to detect any shared storage. Given that I can ignore this error message (since cluvfy doesn't work with this SAN, I believe), how can I check whether the storage is shared or not?
    Note
    When I look at the partition table from both servers, it looks the same (for the SAN drive, of course), but the names/labels of the storage are different (for example, the existing node shows c6t0d0 but the new node, which is to be added, shows something different). Is that OK?
    regards,
    Muhammad Riaz

    Never mind. I found a solution at http://www.idevelopment.info.
    (1) Create the following structure on the second node (the same as on the first node), with the same permissions as on the existing node:
    /asmdisks
        crs
        disk1
        disk2
        vote
    (2) Use ls -lL /dev/rdsk/<disk> to find the major and minor IDs of the shared disk and attach those IDs to the relevant entries above using the mknod command:
    # ls -lL /dev/rdsk/c4t0d0*
    crw-r-----   1 root     sys       32,256 Aug  1 11:16 /dev/rdsk/c4t0d0s0
    crw-r-----   1 root     sys       32,257 Aug  1 11:16 /dev/rdsk/c4t0d0s1
    crw-r-----   1 root     sys       32,258 Aug  1 11:16 /dev/rdsk/c4t0d0s2
    crw-r-----   1 root     sys       32,259 Aug  1 11:16 /dev/rdsk/c4t0d0s3
    crw-r-----   1 root     sys       32,260 Aug  1 11:16 /dev/rdsk/c4t0d0s4
    crw-r-----   1 root     sys       32,261 Aug  1 11:16 /dev/rdsk/c4t0d0s5
    crw-r-----   1 root     sys       32,262 Aug  1 11:16 /dev/rdsk/c4t0d0s6
    crw-r-----   1 root     sys       32,263 Aug  1 11:16 /dev/rdsk/c4t0d0s7
    mknod /asmdisks/crs      c 32 257
    mknod /asmdisks/disk1      c 32 260
    mknod /asmdisks/disk2      c 32 261
    mknod /asmdisks/vote      c 32 259
    # ls -lL /asmdisks
    total 0
    crw-r--r--   1 root     oinstall  32,257 Aug  3 09:07 crs
    crw-r--r--   1 oracle   dba       32,260 Aug  3 09:08 disk1
    crw-r--r--   1 oracle   dba       32,261 Aug  3 09:08 disk2
    crw-r--r--   1 oracle   oinstall  32,259 Aug  3 09:08 vote
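
    Not part of the original answer, but to come back to the original question of confirming that both nodes really see the same physical LUN even though the c#t#d# names differ: one low-risk approach on Solaris is to compare the device's vendor/product/serial details on each node. The device names below are assumptions.
    {code}
    # on the existing node (device name assumed)
    iostat -En c6t0d0 | egrep 'Vendor|Serial'

    # on the new node, run the same against its candidate device and compare the output
    iostat -En c4t0d0 | egrep 'Vendor|Serial'
    {code}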

  • 11gR2 Verification of shared storage accessibility

    Friends,
    I do not understand how this is possible. I am trying to apply the 11.2.0.2.5 PSU on a 2-node cluster running on RHEL 5.5 VMs. I followed the ORACLE-BASE examples when I installed this laptop RAC.
    I am not using ACFS, and neither the GI home nor the DB home is shared. But on node 2, cluvfy THINKS the database home is shared.
    [oracle@rac1 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle/product/11.2.0.2/db_1 -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle/product/11.2.0.2/db_1" is not shared
    Shared storage check failed on nodes "rac2,rac1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    NODE_STATUS::rac2:VFAIL
    NODE_STATUS::rac1:VFAIL
    OVERALL_STATUS::VFAIL
    [oracle@rac1 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle_grid/product/11.2.0.2 -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle_grid/product/11.2.0.2" is not shared
    Shared storage check failed on nodes "rac2,rac1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    NODE_STATUS::rac2:VFAIL
    NODE_STATUS::rac1:VFAIL
    OVERALL_STATUS::VFAIL
    [oracle@rac1 trace]$ hostname
    rac1
    [oracle@rac1 trace]$ echo $ORACLE_HOSTNAME
    rac1
    [oracle@rac1 trace]$
    [oracle@rac2 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle/product/11.2.0.2/db_1 -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle/product/11.2.0.2/db_1" is shared
    Shared storage check was successful on nodes "rac2,rac1"
    Verification of shared storage accessibility was successful.
    NODE_STATUS::rac2:SUCC
    NODE_STATUS::rac1:SUCC
    OVERALL_STATUS::SUCC
    [oracle@rac2 trace]$ /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t software -s /oracle_grid/product/11.2.0.2/ -n rac1,rac2 -display_status
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    "/oracle_grid/product/11.2.0.2/" is not shared
    Shared storage check failed on nodes "rac2,rac1"
    Verification of shared storage accessibility was unsuccessful on all the specified nodes.
    NODE_STATUS::rac2:VFAIL
    NODE_STATUS::rac1:VFAIL
    OVERALL_STATUS::VFAIL
    [oracle@rac2 trace]$ hostname
    rac2
    [oracle@rac2 trace]$ echo $ORACLE_HOSTNAME
    rac2
    [oracle@rac2 trace]$
    I cannot determine the reason for this and do not know how to fix it.
    Any help?
    Thank you.

    Hi,
    CLUVFY COMP SSA checks whether the given storage/location is shared.
    If you run "cluvfy comp ssa -t software", cluvfy checks whether your software home is shared.
    It tells you it is not, hence the check fails (which is correct, because you said the DB home is not shared).
    So where is the problem?
    CLUVFY COMP SSA only makes sense for checking sharedness. If the location is not shared, there is no point in testing it.
    Regards
    Sebastian
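
    Not part of the original reply, but as an illustration: where the sharedness check is actually useful is against storage that is supposed to be shared, for example a candidate ASM/OCR device, using the -t data variant in the same syntax as the commands above. The -t data option and the device name are assumptions; check "cluvfy comp ssa -help" on your release.
    {code}
    # check a device that is meant to be shared, rather than a local software home
    /oracle_grid/product/11.2.0.2/bin/cluvfy comp ssa -t data -s /dev/sdb1 -n rac1,rac2 -display_status
    {code}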

  • WRT610N Shared Storage

    I recently purchased a WRT610N and have been having some problems setting up the USB shared storage feature. I have a 1.5 TB Seagate drive on which I have created two partitions (I had read elsewhere in the forums that the WRT610N only handles partitions/drives up to 1 TB). Both partitions are NTFS, the first one being 976,561 MB and the second one 420,700 MB. Both show up in the "Disk" section of the admin console, and I can create/define a share for the larger of the two partitions without any problems.
    The first of my problems comes when I try to create/define a share for the smaller partition. I can create a share, but the admin console does not save the access privileges that I assign to it. Despite setting them up in the admin console, they don't show up when I go back and look: in both the detail and summary views the access rights show as blank. I do not have this issue with the larger partition, where I can add and later view groups in the Access section.
    The second problem comes when I try to attach to the larger share from a network client. I can look at the shares if I use Start - Run and type \\192.168.1.1. If I enter my admin user ID and password, I can see the new share on the WRT610N. When I try to double-click on it, I am then prompted again for a username and password. When I try to re-enter the admin user ID and password, the logon comes right back to me with "WRT610n\admin" populated in the user ID field. From there it won't accept the admin password. There are no error messages.
    Help with either problem would be appreciated.

    When you select your storage partition and open it, and it asks you for a username and password, that is the username and password for your storage drive; you may have set a password on the storage drive itself.
    Log in to your router's GUI and click on the Storage tab; below it you will find the sub-tab "Administration". Click on it. If you wish, you can modify the "admin" rights, e.g. change the password, or you can create your own user and password. Then, whenever you log in to your storage partition and it asks for a username and password, enter that username and password and click OK. This way you will be able to access your storage drive.
