RAC CLUSTER FILES

Hi,
I am planning to create a RAC using ASM, without HACMP/GPFS, and I plan to store the OCR and voting disk on raw disk devices. For this case, do I have to follow all of the sections below, or only 3.3.2, for storing the OCR and voting disk on raw disk devices? Kindly help.
AIX version 5.3 and Oracle 10.2.0.4
ORACLE DOC LINK
http://download.oracle.com/docs/cd/B19306_01/install.102/b14201/storage.htm#sthref587
3.3.2 Configuring Raw Disk Devices for Oracle Clusterware Without HACMP or GPFS
3.3.3 Configuring Raw Logical Volumes for Oracle Clusterware
3.3.4 Creating a Volume Group for Oracle Clusterware
3.3.5 Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group
3.3.6 Importing the Volume Group on the Other Cluster Nodes
3.3.7 Activating the Volume Group in Concurrent Mode on All Cluster Nodes
Regards

Hi,
"i have to follow all the below doc or only 3.3.2 for storing OCR and voting disk on raw disk devices. Kindly help"
You are right in your understanding. You only need to follow 3.3.2, because the other sections cover the configuration of raw disks > raw volume groups > raw logical volumes, which is in turn related to HACMP, which you are not using.
Salman
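For reference, section 3.3.2 essentially has you change the ownership, permissions, and reservation attributes of the shared character devices on each node. A rough sketch of the kind of commands involved, assuming hdisk2 holds the OCR and hdisk3 the voting disk (device names are hypothetical; follow the doc for the exact attributes your storage needs):
# clear any PVID and disable SCSI reservations so every node can open the disk
/usr/sbin/chdev -l hdisk2 -a pv=clear
/usr/sbin/chdev -l hdisk2 -a reserve_policy=no_reserve   # some drivers use reserve_lock=no instead
/usr/sbin/chdev -l hdisk3 -a pv=clear
/usr/sbin/chdev -l hdisk3 -a reserve_policy=no_reserve
# OCR device owned by root, voting disk by the oracle user
chown root:oinstall /dev/rhdisk2; chmod 640 /dev/rhdisk2
chown oracle:oinstall /dev/rhdisk3; chmod 660 /dev/rhdisk3
Repeat on every node, since the device attributes and permissions are per-node.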

Similar Messages

  • Does /sapmnt need to be in a cluster file system? (SAP ECC 6.0 with Oracle RAC)

    We are going to be installing SAP with Oracle 10.2.0.4 RAC on Linux SuSE 10 and OCFS2. The Oracle RAC documentation states:
    You must store the following components in the cluster file system when you use RAC
    in the SAP environment:
    - Oracle Clusterware (CRS) Home
    - Oracle RDBMS Home
    - SAP Home (also /sapmnt)
    - Voting Disks
    - OCR
    - Database
    What I want to ask is: do I really need to put SAP Home (also /sapmnt) on a cluster file system? I will build a two-node Oracle 10g RAC, and I also have another two nodes to install the SAP CI and DI. My original thinking is that /sapmnt is an NFS share mounted on all four nodes (RAC nodes and CI/DI), with all the Oracle files on OCFS2 (only the two RAC nodes use OCFS2). Can anybody tell me if SAP Home (also /sapmnt) can be an NFS mount rather than OCFS2? Thanks.
    Best regards,
    Peter

    Hi Peter,
    I don't think you need to keep /sapmnt on OCFS2. The reason a file system needs to be clustered is that, in a RAC environment, data stored in the cache of one Oracle instance must be accessible to any other instance; the cluster transfers it across the private network, and preserves data integrity and cache coherency by transmitting locking and other synchronization information across the cluster nodes.
    As this applies only to redo files, datafiles, and control files, you should be fine with an NFS mount of /sapmnt shared across the nodes rather than OCFS2.
    -SV
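    If it helps, a minimal sketch of what the shared /sapmnt could look like as an NFS mount in /etc/fstab on each of the four nodes (the server name nfssrv and the export path are hypothetical):
    # /sapmnt served over NFS to every RAC and CI/DI node
    nfssrv:/export/sapmnt  /sapmnt  nfs  rw,bg,hard,intr,tcp,vers=3,rsize=32768,wsize=32768  0 0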

  • Linux Cluster File System partitions for 10g RAC

    Hi Friends,
    I plan to install a 2-node Oracle 10g RAC on RHEL, and I plan to use the Linux file system itself for the OCR, voting disk, and datafiles (no OCFS2/raw/ASM).
    I have SAN storage.
    I would like to know how to create shared/cluster partitions for the OCR, voting disk, and datafiles (common storage on the SAN).
    Do I need to install a Linux cluster file system to create these shared partitions (as we have Sun Cluster on Solaris)?
    If so, let me know which versions are supported and provide the necessary note/link.
    Regards,
    DB

    Hi,
    The link below may be useful to you:
    ORACLE-BASE - Oracle 10g RAC On Linux Using NFS
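    Note that when datafiles, OCR, and voting disks live on NFS, as in that article, Oracle needs much stricter mount options than a general-purpose share; a sketch of the kind of /etc/fstab entry the article uses (server and paths hypothetical, check the article and your platform certification for the exact options):
    nas1:/shared/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
    The actimeo=0 setting is the important one: it disables attribute caching so all nodes see a consistent view of the shared files.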

  • RAC 10gR2 using ASM: for RMAN, a cluster file system or a local directory?

    The environment is a two-node RAC using ASM. I have to determine which design is better for backup and recovery with RMAN. The backups are going to be saved to disk only. The database is purely transactional and small in size.
    I am not sure how to create a cluster file system, or whether it is better to use a local directory. Also, what is the benefit of a recovery catalog, given that it is optional for the database?
    I very much appreciate your advice and recommendations. Terry

    Arf,
    I am new to RAC. I analyzed Alejandro's script. His main connection is to the first instance; then, through SQL*Plus, he connects to the second instance. He exits the second instance and starts the RMAN backup of the database, so the backup of the database is done from the first instance.
    I do not see where he sets the environment again to change to the second instance and run RMAN against it. It looks to me like the backup is only done from the first instance, not the second. I may be wrong, but I do not see a second-instance backup.
    Kindly, I request your assistance on the steps/connection to back up the second instance. Thank you so much!! Terry
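    For what it's worth, an RMAN backup protects the database as a whole, not one instance at a time: datafiles, control files, and archived logs belong to the database, so a backup run while connected to the first instance covers both. A minimal sketch of spreading the work across both instances by allocating one channel per instance (the RACDB1/RACDB2 connect strings and /backup path are hypothetical):
    run {
      allocate channel c1 device type disk connect 'sys/***@RACDB1' format '/backup/%U';
      allocate channel c2 device type disk connect 'sys/***@RACDB2' format '/backup/%U';
      backup database plus archivelog;
    }
    If /backup is a local directory, each node writes its pieces to its own disk, which complicates restores; that is the argument for pointing backups at a shared location (a cluster file system, NFS, or an ASM flash recovery area) instead.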

  • Raw devices versus Cluster File Systems in RAC 10gR2

    Hi,
    Does anyone use cluster file systems in a RAC 10gR2 installation, specifically IBM's GPFS?
    I've visited a company that is running RAC 10gR2 on AIX over raw devices. Why would someone choose raw devices, with all the problems of administering them, when all the modern file systems are so powerful? Are there any issues when using cluster file systems with RAC? Are there considerable performance benefits to using raw devices with RAC?
    I've always used Oracle stand-alone instances over file systems (since version 7), and performance was always very good. I tested raw devices almost 10 years ago, and even then (today's hardware is much better - SAN, 15K rpm disks, huge caches - and today's file system software is much better) the cost of administering them did not compensate for the benefits (only 5% faster than file systems on Oracle 7).
    So, besides any limitations imposed by RAC, why use raw devices nowadays?
    Regards,
    Antonio Belloni

  • Oracle RAC Cluster Health Monitor on Windows 2008 R2 64Bit

    Hello colleagues,
    I run a 2-node RAC cluster 11.2.0.2 64-bit on Windows 2008 R2 64-bit. I installed Berkeley DB version 4.6.21 successfully.
    After that I installed crfpack.zip (CHM) as described in the README.txt.
    F:\Software\ClusterHealthcheck\install>perl crfinst.pl -i eirac201,eirac202 -b F:\BerkeleyDB -m eirac201
    Performing checks on nodes: "eirac201 eirac202" ...
    Assigning eirac202 as replica
    Installing on nodes "eirac202 eirac201" ...
    Generating cluster wide configuration file for eirac202...
    Generating cluster wide configuration file for eirac201...
    Configuration complete on nodes "eirac202 eirac201" ...
    Please run "perl C:\"program files"\oracrf\install\crfinst.pl -f, optionally specifying BDB location with -b <bdb location> as Admin user on each node to complete the install process.
    F:\Software\ClusterHealthcheck\install>c:
    C:\Users\etmtst>cd \
    C:\>cd "Program Files"
    C:\Program Files>cd oracrf
    C:\Program Files\oracrf>cd install (on both nodes as described in the README.txt)
    C:\Program Files\oracrf\install>perl crfinst.pl -f -b F:\BerkeleyDB
    01/30/12 16:42:21 OracleOsToolSysmService service installed
    Installation completed successfully at C:\"program files"\oracrf...
    C:\Program Files\oracrf\install>runcrf
    01/30/12 16:44:03 StartService(OracleOsToolSysmService) failed:(1053) The service did not respond to the start or control request in a timely fashion.
    01/30/12 16:44:03 OracleOsToolSysmService service started
    It says here that OracleOsToolSysmService was started, but it was not!
    Starting it manually gives the same error.
    Has anybody had the same problem?
    regards and greetings, Abraham

    There will be a new version of the standalone CHM/OS for Windows that works with 11.2.0.2 and earlier versions available on OTN in the near future. The older version that you are using has not been tested, and due to the infrastructure changes in 11.2 it is not expected to work. The integrated CHM/OS that is included as part of the 11.2.0.3 GI installation does work, as does the new GUI (CHMOSG) now available for download.
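    In the meantime, the standard Windows service-controller queries can at least confirm the service state and the executable it is configured to run (a troubleshooting sketch, run from an elevated prompt):
    C:\> sc query OracleOsToolSysmService
    C:\> sc qc OracleOsToolSysmService
    Error 1053 generally means the service binary never reported back to the service control manager, so the Windows event log (eventvwr) is the next place to look.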

  • Asmca has grayed out Volumes and ASM Cluster File Systems 11.2.0.3

    I've got a two-node cluster which is up and running with the latest 11.2.0.3 grid install on Oracle Linux 6.3.
    I need a shared storage location I can use for file I/O testing; ASM looks like the solution, with an ASM Cluster File System.
    When I run asmca I do not have the ability to create these volumes or file systems, as they are grayed out.
    I found some instructions on how to get it to work, which said to use acfsload to start up the required daemons:
    [root@oracleA bin]# ./acfsload start -s
    ACFS-9459: ADVM/ACFS is not supported on this OS version: '2.6.39-300.17.3.el6uek.x86_64'
    I installed patches 13146560 and 14596051, which I thought would fix the problem. I rebooted after successfully applying the patches, but asmca still shows the options grayed out,
    and the "not supported on this OS" error persists.
    I see some posts online saying to edit osds_acfslib.pm and update it to allow the installed Oracle Linux version.
    Right now it shows: ($release =~ /^oraclelinux-release/))) # Oracle Linux
    Under /etc there is only oracle-release - could that have something to do with it not passing the check?
    uname -r
    2.6.39-300.17.3.el6uek.x86_64
    From what I can tell this kernel should support ASM.
    Any help in getting these shared-storage ASM disks set up would be very helpful; oracleasm creates them and sees them fine for databases. Thanks.

    It turns out that kernel version 2.6.39 does not have support for the ASM drivers needed for ACFS mounting.
    I'm going to have to use Oracle Linux 6.2 (instead of Oracle Linux 6.3) and rebuild my RAC to get a supported version of the drivers -> kernel version 2.6.32.
    http://docs.oracle.com/cd/E11882_01/install.112/e16763/oraclerestart.htm#BGBGEDGA
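    Before rebuilding, you can confirm what Grid Infrastructure thinks about ADVM/ACFS driver support with the acfsdriverstate tool shipped in the Grid home (a sketch; the Grid home path is hypothetical):
    # as root, from the Grid Infrastructure home
    /u01/app/11.2.0/grid/bin/acfsdriverstate supported
    /u01/app/11.2.0/grid/bin/acfsdriverstate version
    If "supported" comes back false on the 2.6.39 UEK kernel, that confirms the rebuild on the 2.6.32 kernel (or waiting for a patch that adds UEK2 support) is the way to go.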

  • Oracle10g RAC Cluster Interconnect issues

    Hello Everybody,
    Just a brief overview of what I am currently doing: I have installed an Oracle 10g RAC database on a cluster of two Windows 2000 AS nodes. These two nodes access an external SCSI hard disk. I have used Oracle Cluster File System.
    Currently I am facing some performance issues when it comes to balancing the workload across both nodes (a single-instance database load is faster than a parallel load using two database instances).
    I feel the performance issues could be due to IPC using the public Ethernet IP instead of the private interconnect
    (during a parallel load, large numbers of data packets are sent over the public IP and not the private interconnect).
    How can I be sure that the private interconnect is used for transferring cluster traffic and not the public IP? (Oracle states that for an Oracle 10g RAC database, the private IP should be used for the heartbeat as well as for transferring cluster traffic.)
    Thanks in advance,
    Regards,
    Salil

    You will find the answers here:
    RAC: Frequently Asked Questions
    Doc ID: NOTE:220970.1
    At the very least, a crossover-cable interconnect is completely unsupported.
    Werner
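    To verify which network the instances actually registered for cluster IPC, you can query the interconnect views (available in 10g) or dump the IPC details; a short sketch:
    SQL> SELECT inst_id, name, ip_address, is_public FROM gv$cluster_interconnects;
    SQL> -- or dump what IPC is really using to a trace file:
    SQL> oradebug setmypid
    SQL> oradebug ipc
    If the public network shows up there, the cluster_interconnects init parameter can pin the private one, though fixing the cluster configuration is the cleaner route.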

  • JVM patch required for DST on 10.2.0.2 RAC cluster

    I have looked all over the internet and Metalink for information regarding JVM patching on a RAC cluster and haven't found anything useful, so I apologize if this question has already been asked multiple times. Also, if there is a forum dedicated to DST issues, please point me in that direction.
    I have a 10.2.0.2 RAC cluster, so I know I have to do the JVM patching required by the DST changes. The README for 5075470 says to follow the post-implementation steps in the fix5075470README.txt file. Step 3 of those instructions says to bounce the database, and then not allow the use of Java until step 4 is complete (which is to run the fix5075470b.sql script).
    Here's my question: since this is a RAC database, does that mean I have to shut down both instances, start them back up, run the script, and then let users log back in? In other words, is an outage required?
    Is there a way around taking an outage? Can I bounce each instance separately (in a rolling fashion) so there's no outage, and then run the script even though users are logged on, if I think Java isn't being used by the application? Is there a way to confirm whether or not it's being used? If I confirm the application isn't using Java, is it OK to run the script while users are logged on?
    Any insight would be greatly appreciated.
    Thanks,
    Susan

    According to Note 414309.1, "USA 2007 DST Changes: Frequently Asked Questions and Problems for Oracle JVM Patches", question 4 ("Does the database need to be down before the OJVM patch is applied?"), the bounce is necessary. It says nothing about a rolling upgrade in RAC.
    You might file an SR asking whether a rolling upgrade is possible.
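    Before deciding, it may help to check whether anything actually uses the JVM; a hedged sketch of the kind of queries often used for this:
    -- is the JVM installed and valid?
    SELECT comp_name, version, status FROM dba_registry WHERE comp_id = 'JAVAVM';
    -- who owns Java objects besides the Oracle-maintained schemas?
    SELECT owner, status, COUNT(*) FROM dba_objects
    WHERE object_type LIKE 'JAVA%' GROUP BY owner, status;
    This shows whether Java is present and who owns Java code, but only an application-level check (or auditing) proves nothing calls it while the fix script runs.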

  • Sun QFS cluster file system with Veritas Volume Manager

    Hi,
    Can someone confirm whether it is possible to create a Sun QFS cluster file system (for Oracle RAC datafiles) using a VxVM volume?
    Or must we use Solaris Volume Manager with QFS?
    We are thinking of storing the static part of the Oracle RAC DB on VxVM raw devices, and the dynamic part on a QFS file system, to avoid the overhead of constantly adding new raw devices when we want to create datafiles.
    Thanks,
    Steve

    Steve,
    No, shared QFS is only supported on Solaris Volume Manager. I've not heard of any plans to test it on VxVM.
    Why not keep the static parts of the DB on raw SVM devices? Why keep them on raw devices at all?
    Tim

  • Routing all connections through one node in a 2-node RAC cluster

    Hi everyone
    My client has the following requirement: an active/active RAC cluster (e.g. node1/node2), but with only one of the nodes being used (node1) and the other sitting there just in case.
    For things like services, I'm sure this is straightforward enough - just set them to preferred on node1 and available on node2.
    For connections, I imagine I would just list the VIPs in order in the TNS file, but with LOAD_BALANCING=OFF, so connections go through the TNS entries in order (i.e. node1, then node2); this would still allow the VIP to fail over if node1 is down.
    Does that sound about right? Have I missed anything?
    Many thanks
    Rup

    user573914 wrote:
    "My client has the following requirement: an active/active RAC cluster (eg node1/node2), but with only one of the nodes being used (node1) and the other sitting there just in case."
    Why? What is the reason for a "just in case" node - and when and how is it "enabled" when that just-in-case situation occurs?
    This does not make any kind of sense from a high availability or redundancy view.
    "For connections, I imagine I would just have the vips in order in the tns file, but with LOAD_BALANCING=OFF ... Does that sound about right? Have I missed anything?"
    Won't work on 10g - and may not work on 11g. The listener can and does hand off connections, depending on what the TNS connect string says. If you do not connect via a SID entry but via a SERVICE entry, and that service is available on multiple nodes, you may not (and often will not) be connected to the instance on the single IP that you used in your TNS connection.
    Basic example:
    // note that this TEST-RAC alias refers to a single specific IP of a cluster, and use
    // SERVICE_NAME as the request
    /home/billy> tnsping test-rac
    TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 18-JAN-2011 09:06:33
    Copyright (c) 1997, 2005, Oracle.  All rights reserved.
    Used parameter files:
    /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS=(PROTOCOL=TCP)(HOST= 196.1.83.116)(PORT=1521)) (LOAD_BALANCE=no) (CONNECT_DATA=(SERVER=shared)(SERVICE_NAME=myservicename)))
    OK (50 msec)
    // now connecting to the cluster using this TEST-RAC TNS alias - and despite we listing a single
    // IP in our TNS connection, we are handed off to a different RAC node (as the service is available
    // on all nodes)
    // and this also happens despite our TNS connection explicitly requesting no load balancing
    /home/billy> sqlplus scott/tiger@test-rac
    SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jan 18 09:06:38 2011
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Real Application Clusters, Data Mining and Real Application Testing options
    SQL> !lsof -n -p $PPID | grep TCP
    sqlplus 5432 billy    8u  IPv4 2199967      0t0     TCP 10.251.93.58:33220->196.1.83.127:37031 (ESTABLISHED)
    SQL>
    So we connected to RAC node 196.1.83.116 - and that listener handed us off to RAC node 196.1.83.127. The 11gR2 listener seems to behave differently - it does not do a handoff (from a quick test I did on an 11.2.0.1 RAC) in the above scenario.
    This issue aside - how do you deal with the just-in-case situation? How do you get clients to connect to node2 when node1 is down? Do you rely on the virtual IP of node1 being switched to node2? Is this a 100% safe and guaranteed method?
    It can take some time (minutes, perhaps more) for a virtual IP address to fail over to another node. During that time, any client connection using that virtual IP will fail. Is this acceptable?
    I dunno - I dislike this concept of your client treating one RAC node as some kind of standby database for a just-in-case situation. I fail to see any logic in that approach.
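    For completeness, the kind of tnsnames.ora entry Rup describes would look something like this (VIP host names and service name hypothetical):
    MYAPP =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (LOAD_BALANCE = OFF)
          (FAILOVER = ON)
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521)))
        (CONNECT_DATA =
          (SERVER = dedicated)
          (SERVICE_NAME = myservicename)))
    As the reply shows, though, address order only controls which listener is contacted first; to keep sessions on node1 the service itself must be preferred on node1 only (srvctl add service ... -r instance1 -a instance2), otherwise the listener is free to hand the connection off.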

  • Cluster file system performance issues

    Hi all,
    I've been running a 3-node 10gR2 RAC cluster on Linux using the OCFS2 filesystem for some time as a test environment which is due to go into production.
    Recently I noticed some performance issues when reading from disk, so I did some comparisons, and the results don't seem to make any sense.
    For the purposes of my tests I created a single node instance and created the following tablespaces:
    i) a local filesystem using ext3
    ii) an ext3 filesystem on the SAN
    iii) an OCFS2 filesystem on the SAN
    iv) and a raw device on the SAN.
    I created a similar table with the exact same data in each tablespace, containing 900,000 rows, and created the same index on each table.
    (I was trying to generate an I/O-intensive select statement, but also one which is realistic for our application.)
    I then ran the same query against each table (making sure to flush the buffer cache between each query execution).
    I checked that the explain plan were the same for all queries (they were) and the physical reads (from an autotrace) were also comparable.
    The results from the ext3 filesystems (both local and SAN) were approx 1 second, whilst the results from OCFS2 and the raw device were between 11 and 19 seconds.
    I have tried this test every day for the past 5 days and the results are always in this ballpark.
    We currently cannot put this environment into production, as queries which read from disk are cripplingly slow...
    I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db.
    Judging from this and many other forums, OCFS2 is in quite wide use, so this cannot be an inherent problem with this type of filesystem.
    Also, given the results of my raw device test, I am not sure that moving to ASM would provide any benefit either...
    If anyone has any advice, I'd be very grateful.

    Hi,
    Spontaneously, my question would be: how did you eliminate the influence of the Linux file system cache on ext3? OCFS2 is accessed with the O_DIRECT flag - there will be no caching. The same holds true for raw devices. This could have an influence on your test, and I did not see a configuration step to avoid it.
    What I saw, though, is the "counter test": "I have tried comparing simple file copies from an OS level and the speed differences are not apparent - so the issue only manifests itself when the data is read via an oracle db." - and I have no good answer to that one.
    Maybe this paper has one: http://www.oracle.com/technology/tech/linux/pdf/Linux-FS-Performance-Comparison.pdf - it's a bit older, but explains some of the interdependencies.
    Last question: while you spent a lot of effort proving that this one query is slower on OCFS2 or raw than on ext3 for the initial read (that's why you flushed the buffer cache before each run), how realistic is this scenario once the system goes into production? I mean, how many times will this query be read completely from disk, as opposed to using blocks from the buffer cache? If you consider that, what impact does the "IO read time from disk" have on the overall performance of the system? And if you do not isolate the test to just reads, how do writes compare?
    Just some questions. Thanks.
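    One way to level the playing field in such a test is to force Oracle to bypass the OS page cache on ext3 too, and to drop the Linux page cache between runs; a sketch (the parameter change needs a restart):
    SQL> ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
    SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
    # at the OS level, on 2.6.16+ kernels, drop the page cache between runs:
    sync; echo 3 > /proc/sys/vm/drop_caches
    If the ext3 numbers then fall back toward the OCFS2/raw numbers, the original gap was the file system cache, not a problem with OCFS2.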

  • Change system time in RAC cluster

    Hi,
    We need to change the system time on the two nodes involved in a RAC cluster; both nodes are almost 15 minutes ahead of real time but are in sync with each other.
    Does the Oracle database need to be bounced to pick up the changed time?
    Or do the whole cluster services need to be stopped for 15 minutes after changing the time and then started, just so as not to confuse them with transactions that happened while the system was 15 minutes ahead?
    Metalink note 368539.1 explains that the PMON process does service registration when the database is started. Will PMON not pick up the changed time dynamically?
    We have an Oracle 10.2.0.2 database on HP-UX 11.31.
    Please clear up this confusion.
    Thanks.

    Your RAC Cluster would be using either of
    a. NTP --- external to Oracle
    b. Oracle Cluster Time Synchronization Service -- part of the Oracle Grid Infrastructure
    It seems that you are not using NTP - otherwise the two nodes would have the same time as "the rest of the world".
    You should see "Cluster Time Synchronization Service" in your clusterware logs.
    Since you plan to set the time backwards, I suggest that you shut everything down and reset the time manually.
    Remember that timestamps written to various log files, and the "last update" times of the files themselves, will duplicate or overlap with existing files.
    Note: PMON service registration doesn't come into play here.
    Hemant K Chitale
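    A sketch of the usual sequence for setting the clock back on 10.2, done with the whole stack down on each node (the exact date syntax varies by platform):
    # as root, on each node
    crsctl stop crs          # stops CRS, ASM and the database instances on this node
    date [mmddHHMM[yyyy]]    # set the corrected time, or let NTP step the clock
    crsctl start crs
    Running NTP on both nodes afterwards avoids having to do this again.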

  • OWB Paris install on 10g Release 1 RAC cluster

    We are trying to install OWB Paris on our 10.1.0.4 RAC cluster using OCFS. When launching the OUI without setting ORACLE_HOME, the OUI prompts to install on both nodes; the install fails because it is trying to use the database oraInventory,
    as below:
    Error in invoking target 'isqlldr' of makefile
    '/u01/app/oracle/OraHome_2/rdbms/lib/ins_rdbms.mk'. See
    '/u01/app/oracle/oraInventory/logs/installActions2005-07-11_10-28-24AM.log' for details.
    Metalink suggested the following:
    Note:317510.1
    Create a separate oraInventory folder and oraInst.loc file for OWB, and specify the oraInst.loc file as a parameter when invoking the installer.
    Re-install OWB in a new ORACLE_HOME, i.e. in an altogether new location; don't install on top of the database ORACLE_HOME.
    In addition to the new location (ORACLE_HOME), create a new inventory and inventory location file as follows:
    1. Set the new location for installing OWB:
    export ORACLE_HOME=/u01/app/oracle/product/owb
    Ensure that the location /u01/app/oracle/product/ exists.
    2. Create a directory oraInventory inside the ORACLE_HOME.
    3. Create a file oraInst.loc pointing to oraInventory in $ORACLE_HOME by doing:
    echo "inventory_loc=$ORACLE_HOME/oraInventory" > oraInst.loc
    Note: the oraInventory directory and oraInst.loc file will be present under ORACLE_HOME.
    4. Invoke the installer, providing the parameter as:
    ./runInstaller -invPtrLoc $ORACLE_HOME/oraInst.loc
    When we do this, the OUI does not prompt to install on both nodes. How can we work around this? Do we have to install OWB on each node individually?

    Hello.
    I had the same problem as you did, but I managed to install OWB on both nodes, with user equivalence set up, by slightly modifying the steps from Metalink as follows:
    unset ORACLE_HOME (if it is set)
    create owb directory
    create oraInst.loc in owb
    create oraInventory directory in owb
    export ORACLE_HOME=/u01........./owb_1
    ./runInstaller
    (note: no command-line parameters, so it picks up the other node)
    Basically, if you specify the oraInst.loc, the installer goes to the oraInventory directly instead of doing its checks (one of which is for the cluster).
    Hope this helps.
    Thanks
    Vix
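    Vix's sequence, consolidated into one hedged sketch (paths hypothetical):
    unset ORACLE_HOME
    mkdir -p /u01/app/oracle/product/owb_1/oraInventory
    export ORACLE_HOME=/u01/app/oracle/product/owb_1
    echo "inventory_loc=$ORACLE_HOME/oraInventory" > $ORACLE_HOME/oraInst.loc
    ./runInstaller    # no -invPtrLoc, so the OUI still runs its cluster check and offers both nodes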

  • RAC: data file access issue from one node

    Dear All,
    We have a two-node RAC (10.2.0.3) running on HP-UX. Since yesterday, accessing data in one specific data file from one instance has been showing the error below, whereas accessing the same datafile from the other node works properly.
    Errors in file /oracle/product/admin/tap3plus/bdump/tap3plus4_dbw0_24950.trc:
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    ORA-27041: unable to open file
    HPUX-ia64 Error: 19: No such device
    Additional information: 2
    Tue Jan 31 08:52:09 2012
    Errors in file /oracle/product/admin/tap3plus/bdump/tap3plus4_dbw0_24950.trc:
    ORA-01186: file 75 failed verification tests
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    Tue Jan 31 08:52:09 2012
    File 75 not verified due to error ORA-01157
    Tue Jan 31 08:52:09 2012
    Thanks in Advance

    user585870 wrote:
    "We have a two node RAC (10.2.0.3) running on HP-UX. From yesterday onwards, accessing data in a specific data file from one instance shows the below error, whereas accessing the same datafile from the other node works properly."
    That would be due to some kind of failure in the shared storage layer.
    RAC needs the very same storage layer to be visible and available on each RAC node - thus this needs to be some form of shared cluster storage.
    Should a piece of it fail on one node, that node will not be able to access the RAC database files on that shared storage layer - and will throw the type of errors you are seeing.
    So what does this shared storage layer look like? Fibre Channel adapters (HBAs) connected to a Fibre Channel switch and SAN, making SAN LUNs available as shared storage devices?
    Typically a shared storage failure throws errors in the kernel log, because the error is not an Oracle error but a kernel error - as it is in your case. The bottom error on the error stack points to the root cause:
    ORA-01157: cannot identify/lock data file 75 - see DBWR trace file
    ORA-01110: data file 75: '/dev/vg_rac/rraw_tap3plus_temp_live05'
    ORA-27041: unable to open file
    HPUX-ia64 Error: 19: No such device
    So HP-UX on that node is not seeing a specific shared storage device.
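    On the failing node, device visibility can be checked at the OS level before touching the database; a sketch of the usual HP-UX commands:
    # does the OS still see the disk hardware?
    ioscan -fnC disk
    # does the raw logical volume still exist, with the right permissions?
    ls -l /dev/vg_rac/rraw_tap3plus_temp_live05
    # is the volume group active and are all its physical volumes available here?
    vgdisplay -v /dev/vg_rac
    If the device or a physical volume is missing, the fix is at the storage/LVM layer (re-establishing the SAN path, reactivating the volume group), not in Oracle.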
