SC 3.0 file system failover for Oracle 8i/9i

I'm an Oracle DBA for our company, and we have been using shared NFS mounts successfully for the archivelog space on our production 8i two-node OPS databases. From each node, both archivelog areas are always available. This is the setup recommended by Oracle for OPS and RAC.
Our SA team now wants to change this to a file system failover configuration instead, and I cannot find any information from Oracle about it.
The SA request states:
"The current global filesystem configuration on (the OPS production databases) provides poor performance, especially when writing files over 100MB. To prevent an impact to performance on the production servers, we would like to change the configuration ... to use failover filesystems as opposed to the globally available filesystems we are currently using. ... The failover filesystems would be available on only one node at a time, arca on the "A" node and arcb on the "B" node. in the event of a node failure, the remaining node would host both filesystems."
My question is: does anyone have experience with this kind of configuration with 8i OPS or 9i RAC? Are there any issues with the automatic move of the archivelog space from the failed node to the remaining node, in particular when the failure occurs during a transaction?
Thanks for your help ...
-j

The problem with your setup of NFS cross-mounting a filesystem (which could have been a recommended solution in SC 2.x, for instance, versus SC 3.x, where you'd want to choose a global filesystem) is the inherent "instability" of using NFS for a portion of your database (whether it's redo or archivelog files).
Before this goes up in flames, let me speak from real world experience.
Having run HA-OPS clusters in the SC 2.x days, we used either private archive log space or HA archive log space. If you use NFS to cross-mount it (whether hard, soft or auto), you can run into issues if the machine hosting the NFS share goes out to lunch (either from RPC errors or because the machine goes down unexpectedly due to a panic, etc.). At that point we had only two options: bring the original machine hosting the share back up if possible, or force a reboot of the remaining cluster node to clear the stale NFS mounts so it could resume DB activities. In either case, any attempt at failover will fail, because you're trying to mount an actual physical filesystem over a stale NFS mount on the surviving node.
We tried to work around this using many different NFS options; we tried automount; we tried local mount points with automount to the correct home (e.g. /filesystem_local would be the physical filesystem, /filesystem the NFS mount where the activity occurred). Any time the node hosting the NFS share went down unexpectedly, you'd get a temporary hang due to the conditions listed above.
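For reference, the kind of cross-mount we fought with would look something like this in /etc/vfstab (hostnames, paths and options are illustrative, not a recommendation):

    # /etc/vfstab on node A: hard NFS mount of node B's archive area
    nodeb:/u01/arch/b  -  /u01/arch/b  nfs  -  yes  hard,bg,intr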
If you're implementing SC 3.x, use HAStoragePlus (HASP) and global filesystems to accomplish this if you must use a single common archive log area. Isn't it possible to use local/private storage for the archive logs instead? Is there a sequence-numbering issue if you run private archive logs on both sides, or is sequencing only an issue with redo logs? In either case, if you're using RMAN, you'd have to back up the redo logs and archive log files on both nodes, if memory serves me correctly...
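To illustrate the private archive log variant: a minimal sketch, assuming hypothetical instance names (PROD1/PROD2), paths and net service aliases. Each instance archives to its own local destination, and RMAN allocates a channel per node so one run can reach both sets of logs:

    # init.ora for instance PROD1 on node A
    log_archive_dest_1 = 'location=/u01/arch/PROD1'
    # init.ora for instance PROD2 on node B
    log_archive_dest_1 = 'location=/u01/arch/PROD2'

    # RMAN: back up the archive logs from both nodes in one run
    rman target / <<'EOF'
    run {
      allocate channel a1 type disk connect 'sys/***@prod1';
      allocate channel b1 type disk connect 'sys/***@prod2';
      backup archivelog all delete input;
    }
    EOF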

Similar Messages

  • Linux Cluster File System partitions for 10g RAC

    Hi Friends,
    I plan to install a two-node Oracle 10g RAC on RHEL, and I plan to use the Linux file system itself for the OCR, voting disk and datafiles (no OCFS2/raw/ASM).
    I have SAN storage.
    I would like to know how to create shared/cluster partitions for the OCR, voting disk and datafiles (common storage on the SAN).
    Do I need to install a Linux cluster file system to create these shared partitions (as we have Sun Cluster on Solaris)?
    If so, let me know which versions are supported and provide the necessary note/link.
    Regards,
    DB

    Hi,
    The link below may be useful to you:
    ORACLE-BASE - Oracle 10g RAC On Linux Using NFS
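    If you do go the NFS route, the mount options matter; the article uses something along these lines (server name, export and mount point are placeholders):

        # /etc/fstab on each RAC node
        nas1:/shared  /u01/shared  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0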

  • Unable to get the file system information for: \\****servername\E$\; error = 64; unable to distribute content to DP

    One of our DPs has stopped loading content.
    I've researched this for quite a bit and cannot find a clear-cut reason. This server only has a DP role; I verified the sharing permissions and all looked good. This DP has been running just fine for the last year or so, and all of a sudden it will no longer load packages. The disk drive is still present, and I can still reach the hidden share \\servername.com\E$.
    Verified that the SMSSIG$ folder is there and the last entry is from 4/23/2015 
    SCCM 2012 R2 
    OS 2008 R2 Standard
    Any help is greatly appreciated!
    Here's a snippet from distmgr.log:
    Start updating the package on server ["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\...
    Attempting to add or update a package on a distribution point.
    Will wait for 1 threads to end.
    Thread Handle = 0000000000001E48
    STATMSG: ID=2342 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=***.com SITE=1AB PID=2472 TID=8252 GMTDATE=Thu Apr 30 19:12:01.972 2015 ISTR0="SYSMGMT Source" ISTR1="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="CAS00087" AID1=404 AVAL1="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    SMS_DISTRIBUTION_MANAGER 4/30/2015 2:12:01 PM
    8252 (0x203C)
    The current user context will be used for connecting to ["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\.
    Successfully made a network connection to \\*****.com\ADMIN$.
    Ignoring drive \\*****.com\C$\.  File \\*****.com\C$\NO_SMS_ON_DRIVE.SMS exists.
    Unable to get the file system information for: \\*****.com\E$\; error = 64.
    Failed to find a valid drive on the distribution point ["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\
    Cannot find or create the signature share.
    STATMSG: ID=2324 SEV=E LEV=M SOURCE="SMS Server" COMP="SMS_DISTRIBUTION_MANAGER" SYS=sccmprdpr1sec2.mmm.com SITE=1AB PID=2472 TID=8252 GMTDATE=Thu Apr 30 19:12:55.206 2015 ISTR0="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    ISTR1="CAS00087" ISTR2="" ISTR3="30" ISTR4="94" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=2 AID0=400 AVAL0="CAS00087" AID1=404 AVAL1="["Display=\\*****.com\"]MSWNET:["SMS_SITE=1AB"]\\*****.com\"
    Error occurred. Performing error cleanup prior to returning.
    Cancelling network connection to \\*****.com\ADMIN$.

    Error 64 is being returned, which is simply "the network name is no longer available".
    There can be a number of reasons for this, from SMB compatibility issues (2003 servers won't support SMB2) to the expected and actual computer names of the boxes not matching (it tries to authenticate with server.tld.com when the actual name is srv-01.tld.com and you just put a CNAME in). I'd start from the top: try opening said share from the Primary Site server, as that's the box doing the work. Verify the IP and computer name are legitimate and that no one has played ACL games between the two systems (remember, RPC only initiates/listens on port 135, but established connections are up in the dynamic port range).
    At the end of the day it's an issue "underneath" SCCM, and not an SCCM problem specifically.

  • Warming up File System Cache for BDB Performance

    Hi,
    We are using the BDB DPL (JE) package for our application.
    With our current machine configuration, we have
    1) 64 GB RAM
    2) 40-50 GB -- Berkeley DB data size
    To warm up the file system cache, we cat the .jdb files to /dev/null (to minimize disk access), e.g.:
         // Read all .jdb files; Runtime.exec() does not invoke a shell, so the
         // glob and the redirection need an explicit "sh -c" to be interpreted.
         p = Runtime.getRuntime().exec(
                 new String[] { "/bin/sh", "-c", "cat " + dirPath + "*.jdb > /dev/null 2>&1" });
    Our application checks whether new data is available every 15 minutes. If new data is available, it clears all the old references and loads the new data, along with running cat *.jdb > /dev/null again.
    I would like to know whether something like this can be done to improve BDB read performance and, if not, whether there is a better way to warm up the file system cache.
    Thanks,

    We've done a lot of performance testing with how to best utilize memory to maximize BDB performance.
    You'll get the best and most predictable performance by having everything in the DB cache. If the on-disk size of 40-50GB that you mention includes the default 50% utilization, then it should be able to fit. I probably wouldn't use a JVM larger than 56GB and a database cache percentage larger than 80%. But this depends a lot on the size of the keys and values in the database. The larger the keys and values, the closer the DB cache size will be to the on disk size. The preload option that Charles points out can pull everything into the cache to get to peak performance as soon as possible, but depending on your disk subsystem this still might take 30+ minutes.
    If everything does not fit in the DB cache, then your best bet is to devote as much memory as possible to the file system cache. You'll still need a large enough database cache to store the internal nodes of the btree databases. For our application and a dataset of this size, this would mean a JVM of about 5GB and a database cache percentage around 50%.
    I would also experiment with using CacheMode.EVICT_LN or even CacheMode.EVICT_BIN to reduce the pressure on the garbage collector. If you have something in the file system cache, you'll get reasonably fast access to it (maybe 25-50% as fast as if it's in the database cache, whereas pulling it from disk is 1-5% as fast), so unless you have very high locality between requests you might not want to put it into the database cache. What we found was that data was pulled in from disk, put into the DB cache, stayed there long enough to be promoted during GC to the old generation, and then was evicted from the DB cache. This long-lived garbage put a lot of strain on the garbage collector and led to very high stop-the-world GC times. If your application doesn't have latency requirements, then this might not matter as much to you. By setting the cache mode for a database to CacheMode.EVICT_LN, you effectively tell BDB not to put the value (the leaf node, or LN) into the cache.
    Relying on the file system cache is more unpredictable unless you control everything else that happens on the system since it's easy for parts of the BDB database to get evicted. To keep this from happening, I would recommend reading the files more frequently than every 15 minutes. If the files are in the file system cache, then cat'ing them should be fast. (During one test we ran, "cat *.jdb > /dev/null" took 1 minute when the files were on disk, but only 8 seconds when they were in the file system cache.) And if the files are not all in the file system cache, then you want to get them there sooner rather than later. By the way, if you're using Linux, then you can use "echo 1 > /proc/sys/vm/drop_caches" to clear out the file system cache. This might come in handy during testing. Something else to watch out for with ZFS on Solaris is that sequentially reading a large file might not pull it into the file system cache. To prevent the cache from being polluted, it assumes that sequentially reading through a large file doesn't imply that you're going to do a lot of random reads in that file later, so "cat *.jdb > /dev/null" might not pull the files into the ZFS cache.
    That sums up our experience with using the file system cache for BDB data, but I don't know how much of it will translate to your application.
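    A minimal sketch of that more-frequent re-read, assuming a hypothetical data directory and a 10-minute interval:

        # crontab entry: re-read the .jdb files so they stay in the FS cache
        */10 * * * * cat /data/bdb/*.jdb > /dev/null 2>&1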

  • Link for JAVA homogeneous system copy for Oracle

    hi
    I am unable to find the document for the Java homogeneous system copy for Oracle for NW04 & NW04s.
    Kindly send me the link.
    thx
    regards
    Shoeb

    Hi
    Here you go:
    [JAVA Sytem Copy for ORACLE |https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cd76e07d-0c01-0010-298d-f699863f2ce4]
    Regards
    Jhony

  • Run queries against system tables for oracle BPEL processes

    I want to run queries against the system tables for Oracle BPEL processes. It is becoming very difficult for me to use EM, as it is very slow
    and, at times, not sufficient for our needs.
    We are doing load testing and we want to find out things like how many requests came in, how many faulted, and what time was taken by each request...
    So do any of you have the queries I can use and the tables I need to go against?

    Use the BPEL dehydration store table "cube_instance".
    There should be plenty of examples in the forum regarding this table.
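    For instance, a minimal sketch of a load-test query, assuming the 10g dehydration schema owner orabpel and the usual 10g state codes (6 = closed.faulted; verify both against your version):

        sqlplus orabpel@orcl <<'EOF'
        -- Requests per hour, and how many of them faulted
        SELECT TRUNC(creation_date, 'HH24') AS hr,
               COUNT(*) AS requests,
               SUM(CASE WHEN state = 6 THEN 1 ELSE 0 END) AS faulted
        FROM   cube_instance
        GROUP  BY TRUNC(creation_date, 'HH24')
        ORDER  BY 1;
        EOF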

  • Using old file system backup for Cloning

    I took an offline backup of Oracle 11i (11.5.10.2) 15 days ago. Before taking the file system backup, I verified that all the latest Rapid Clone patches were applied. No changes or patch work in the APPL_TOP or DB have been done since that backup. Now I need to clone this instance; how can I use this backup for the cloning?
    The Rapid Clone scripts create and generate some files/directories, so I am not sure whether my old file system backup will work or not. What is the best way to use an old backup for cloning, and what are the files and directories, in addition to the old file system backup, that I need to copy to the target system?
    Thanks for reviewing and suggestions.
    Samar

    Samar,
    If you ran preclone before backing it up, your backup should be valid for cloning.
    The steps in section 2.1 of the cloning doc have to be reflected in the backup.
    These docs should clear up your doubts about cloning:
    Cloning Oracle Applications Release 11i with Rapid Clone
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=230672.1
    FAQ: Cloning Oracle Applications Release 11i
    https://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216664.1
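    For reference, the preparation pass is adpreclone.pl on both tiers before the backup, and adcfgclone.pl on the target after the restore; a rough sketch, with context names and paths left as placeholders:

        # Source system, before the file system backup:
        cd $ORACLE_HOME/appsutil/scripts/<CONTEXT_NAME>   # as the oracle user
        perl adpreclone.pl dbTier
        cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>       # as the applmgr user
        perl adpreclone.pl appsTier

        # Target system, after restoring the backup:
        cd <RDBMS_ORACLE_HOME>/appsutil/clone/bin
        perl adcfgclone.pl dbTier
        cd <COMMON_TOP>/clone/bin
        perl adcfgclone.pl appsTier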

  • Decision on File system management in Oracle+SAP

    Hi All,
    In my production system we have /oracle/SID/sapdata1 and /oracle/SID/sapdata2. Initially there were many datafiles assigned to the tablespace PSAPSR3, a few with autoextend on and a few with autoextend off. As I understand it, DB02 shows the information only tablespace-wise: it reports AUTOEXTEND ON as soon as at least one of the datafiles has AUTOEXTEND ON. In PSAPSR3, all the datafiles with autoextend ON are on sapdata1, which has only 50 GB left. All the files with autoextend OFF are on sapdata2, which has 900 GB of space left.
    Now the question is :
    1. Do I need to request additional space for sapdata1, as some of the tablespaces are at the edge of autoextending and that much space is not left in the FS (sapdata1)? If not, how will they extend? DB growth is 100 GB per month.
    2. We usually added a 10 GB datafile to the tablespace with 30 GB as the autoextend limit.
    Can we add another datafile, from sapdata2 this time, with autoextend ON, so that the rest is taken care of automatically?
    Please suggest.
    Regards,
    VIcky

    Hi Vicky,
    As you have 100 GB/month growth, the suggestions here would be:
    1) Add 2 more mount points, sapdata3 and sapdata4, with around 1 TB of space.
       This distributes the data across 4 partitions for better performance.
    2) As sapdata1 has datafiles with autoextend ON, you need to extend that file system to at least 500 GB, so that whenever data is written to the datafiles under sapdata1 they have space to grow using the autoextend feature. Without sufficient disk space this may lead to space problems, and transactions may end in a dump.
    3) No need to change anything on sapdata2, as you already have 900 GB of free space there.
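    To check the current layout and add the suggested datafile on sapdata2, something like this would do (the file name and sizes are illustrative):

        sqlplus "/ as sysdba" <<'EOF'
        -- Which PSAPSR3 datafiles can autoextend, and how large may they grow
        SELECT file_name, bytes/1024/1024 AS mb,
               autoextensible, maxbytes/1024/1024 AS max_mb
        FROM   dba_data_files
        WHERE  tablespace_name = 'PSAPSR3';

        -- Add a datafile on sapdata2 with autoextend ON
        ALTER TABLESPACE PSAPSR3
          ADD DATAFILE '/oracle/SID/sapdata2/sr3_99/sr3.data99'
          SIZE 10G AUTOEXTEND ON NEXT 100M MAXSIZE 30G;
        EOF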
    Hope this helps.
    Regards,
    Deepak Kori

  • Shared file system recommended for OCR and voting disk in 10g R2

    Dear Friends,
    For Oracle 10g R2 (10.2.0.5) 64-bit, which shared file system is recommended for the OCR and voting disk (NFS / raw devices / OCFS2)?
    For the datafiles and FRA I plan to use ASM.
    Regards,
    DB

    Hi,
    If you're using Standard Edition then you have no choice but raw devices:
    http://docs.oracle.com/cd/B19306_01/license.102/b14199/options.htm#CJAHAGJE
    For OCFS2 you need to take extra care:
    Heartbeat/Voting/Quorum Related Timeout Configuration for Linux, OCFS2, RAC Stack to Avoid Unnecessary Node Fencing, Panic and Reboot [ID 395878.1]
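    As a rough illustration, the timeouts that note covers live in /etc/sysconfig/o2cb on each node (the values below are placeholders, not recommendations; take the real ones from the note for your storage/multipath setup):

        # /etc/sysconfig/o2cb (fragment)
        O2CB_ENABLED=true
        O2CB_HEARTBEAT_THRESHOLD=31   # disk heartbeat: (threshold - 1) * 2 seconds
        O2CB_IDLE_TIMEOUT_MS=30000    # network idle timeout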

  • Need help with file system creation for Oracle DB installation

    Hello,
    I am new to the Solaris/Unix system landscape. I have a Sun Enterprise 450 with an 18GB hard drive. It has Solaris 9 on it and no other software at this time. I am planning on adding 2 more hard drives, 18GB and 36GB, to accommodate an Oracle DB.
    Recently I went through the Solaris intermediate sysadmin training, so I know the basic stuff but am not fully confident to carry out the task on my own.
    I would appreciate it if someone could help me with the sequence of steps that I need to perform to
    1. recognize the new hard drives in the system,
    2. format them,
    3. partition them. What is the normal strategy for partitioning? My current thinking is to have the 36+18GB drives as data drives. This is where I am a little bit lost. Can I make an entire 36GB drive one slice for data? I am not quite sure how this is done in real life, and need your help.
    4. create the file systems to store the database files.
    Any help would be appreciated; a rough sketch of these steps follows below.
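    A minimal sketch of those four steps, assuming hypothetical device names (c1t2d0 for the new 36GB disk) and mount points:

        # 1. Make Solaris see the new disks without a reconfiguration reboot
        devfsadm -c disk              # alternatively: boot -r from the ok prompt
        # 2./3. Label and partition; e.g. one slice 0 spanning the whole disk
        format                        # pick the new disk, then partition + label
        # 4. Create and mount a UFS file system for the database files
        newfs /dev/rdsk/c1t2d0s0
        mkdir -p /u02/oradata
        mount /dev/dsk/c1t2d0s0 /u02/oradata
        # add a matching /etc/vfstab entry so it mounts at boot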

    Hello,
    Here is the rough idea for HA from my experience.
    The important thing is that the binaries required to run SAP
    are accessible both before and after a switchover.
    In that respect, the choice of file system doesn't matter.
    But SAP may recommend certain file systems on Linux;
    please refer to the SAP installation guide.
    I always use reiserfs or ext3fs.
    For the soft links, I recommend you refer to the SAP installation guide.
    In your configuration, the files related to the SCS and the DB are the key.
    Again, those files have to be accessible both from hostA and from hostB.
    The easiest way is to share these files via NFS or another shared file system
    so that both nodes can access them,
    and let the clustering software mount and unmount those directories.
    DB binaries, data and log are to be placed in shared storage subsystem.
    (ex. /oracle/*)
    SAP binaries, profiles and so on to be placed in shared storage as well.
    (ex. /sapmnt/*)
    You may want to place the binaries on local disk to make sure they
    are always accessible at the OS level, even if the connection to the
    storage subsystem is lost.
    In that case you have to sync the binaries on both nodes manually;
    otherwise, the easiest way is just to put them on shared storage and mount them!
    Furthermore, you can use the sapcpe function to sync the necessary binaries
    from /sapmnt to /usr/sap/<SID>.
    As for your last question: /sapmnt should be located on the storage
    subsystem, so don't let the storage go down!

  • Set up the best file system partitioning for a Solaris machine

    I have an Ultra 10 running Solaris 8. It's a small machine, but good enough to get me started. I want to use this machine as a sole web application server. I am newly installing the Solaris environment and want the best possible file system layout.
    I have 256MB of RAM and a 9GB hard drive.
    I am currently thinking of setting swap to 768MB; I think I may need it. I will run Java apps on this server, and it will double as a mail server. Soon I'll have an extra NT box where I will install NT Server and run an Oracle 8i database on it. This, then, is my question: what is the best way to set this machine up, considering its power limitations and that it's all I've got?
    I would greatly appreciate your assistance ASAP.
    I could use some serious advice.
    Thank you.

    Welcome to the Discussions.
    I am not exactly sure what it is that you seek to do. However, I believe you could give this a try: call up the Info window for a folder and make it available to 'everyone'; there is a small gear-like icon at the bottom. Click on it and select 'Apply to enclosed items'. If this is greyed out, look on the right of the window; if there is a lock in the locked state, click on it to unlock, enter your admin password, and you can now make the change. Hope this helps, and I hope this is what you are looking to do. Try it and post back here if it solves your problem.

  • Creating File System Repository for a remote system

    Hi Experts,
    My requirement is that I need to create a KM repository in EP from which I can upload documents into the BW system. Is a File System Repository the right type of repository to create for this purpose?
    If yes, in what format do I have to specify the value of the Root Directory property of the repository? I have a folder /data/docs created in the BW system into which I want to upload documents using this repository. But since this folder is located on the BW system, which is a remote system from EP's point of view, I am not sure how to enter the path to this folder.
    Can anyone give me any hints on this?
    Warm Regards,
    Saurabh

    Hello Saurabh,
    I don't think an FS repository is what you are looking for in this scenario; you could instead use a BI Repository Manager. For more information, see:
    http://help.sap.com/saphelp_nw70/helpdata/en/43/c1475079642ec5e10000000a11466f/frameset.htm
    Kind regards,
    Lorcan.

  • Export a 500GB database into 100GB of file system space in Oracle 10g

    Hi All,
    Please let me know the procedure to export a 500GB database into 100GB of file system space.

    user533548 wrote:
    Hi Linda,
    The database version is 10g and the OS is Linux. Can we use the FILESIZE parameter for the export? Please advise on this.

    FILESIZE will limit the size of a file in case you specify multiple dump files. You can also specify multiple dump directories (on different file systems) for the dump files.
    For instance:
    dumpfile=dump_dir1:file1,dump_dir2:file2,dump_dir3:file3...
    Nicolas.
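    Putting that together, a minimal Data Pump sketch, assuming hypothetical directory paths and a 20GB per-file cap; each %U file grows to at most FILESIZE, so the export can be spread over several file systems, none of which needs to hold the whole 500GB:

        sqlplus "/ as sysdba" <<'EOF'
        CREATE DIRECTORY dump_dir1 AS '/u01/exp';
        CREATE DIRECTORY dump_dir2 AS '/u02/exp';
        EOF

        expdp system FULL=y \
          DUMPFILE=dump_dir1:full_%U.dmp,dump_dir2:full_%U.dmp \
          FILESIZE=20G \
          LOGFILE=dump_dir1:full_exp.log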

  • Which jar file to include for oracle.jdeveloper.webservices.runtime.Wrapped

    I am trying to build a web service client on my Jakarta Tomcat server...
    I used JDeveloper to generate my client .java file, but I don't know which jar to include for oracle.jdeveloper.webservices.runtime.WrappedDocLiteralStub.
    Could somebody please tell me which jar file I need to copy for this thing to run successfully... thx

    Hi,
    After compatibility problems between the stub generated with JDeveloper 10.1.3 and Oracle Application Server 10g (9.0.4), I decided to downgrade the project to JDeveloper 10.1.2. Then more problems: the WSDL which JDeveloper 10.1.3 accepted perfectly was not so well interpreted by JDev 10.1.2, and I had to make some adjustments to the WSDL; then the generated code didn't format the SOAP message (namespaces) well... and I ended up writing some code myself in the stub class... uffff.
    Now I get java.lang.NoClassDefFoundError: oracle/jdeveloper/webservices/runtime/WrappedDocLiteralStub when the application runs on the Application Server, and yes, I included jdev-rt.jar (part of the 'JDeveloper Runtime' library).
    Any clues (changing the technology is not an option... for now)?
    Thanks
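    One way to find which jar actually ships the class is to grep every jar under the JDeveloper install (the install path here is hypothetical); whichever jar turns up then goes into the web app's WEB-INF/lib on Tomcat:

        # Print each jar that contains the missing class
        find /opt/jdev1013 -name '*.jar' | while read -r j; do
          unzip -l "$j" 2>/dev/null | grep -q 'WrappedDocLiteralStub' && echo "$j"
        done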

  • Maown - file system monitor for shared group directories

    Maown Info Page
    I needed a way to manage ownership and permissions of files in a shared directory. ACLs and "chmod g+s" alone were not enough, so I wrote maown.
    Maown is a file system monitor written in C. It uses inotify to recursively watch a directory tree for file creation and attribute modification. It automatically chowns files to user:group and adjusts group permissions to match user permissions.
    The package includes a daemon with a simple configuration file. Each line in the configuration file specifies a user, a group and a list of directories to monitor:
    <user> <group> <directory> [<directory>...]
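    For example, a line like the following (user, group and directories are hypothetical) keeps everything created under the two directories owned by alice:media, with group permissions mirroring the user's:
        alice media /srv/music /srv/video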

    Maown has been replaced with Autochown.
