Multiplexing/Mirroring a datafile

Is there any way to multiplex or mirror a datafile in a database?
Explain in detail with an example.
DB: Oracle 10g
OS: Redhat 5
ASM: No ASM is used

Dinesh S wrote:
Actually, I read in a book that if you mirror the datafiles in the database, you don't have to perform media recovery for simple disk failures. That's why I asked this question.

What if your ASM mirror is on the same disk that failed? You have to use two disks to mirror with ASM to guard against disk failure, and if you are using two disks, why are you not RAIDing them and using hardware redundancy when configuring ASM? Much simpler file management.

Similar Messages

  • What EBS performance gains can I expect moving non-x86 (sun?) to x86?

    Hi,
    I was hoping some of you would please share any general performance gains you encountered by moving your EBS from non-x86 to x86. I'm familiar with the benchmarks from tpc.org and spec.org. The users, however, measure performance by how long it takes for a request to complete. For example, when we moved our EBS from a two-node Sun E3500 (4*450 SPARC II, 8GB memory) to a two-node V440 (4*1.28GHz SPARC IIIi, 8GB memory), performance doubled across the board, with a three-year payback.
    I am trying to 'guesstimate' what performance increase we might encounter, if any, moving from sun sparc to x86. We'll be doing our first dev/test migration the first half of '08, but I thought I'd get a reading from all of you about what to expect.
    Right now we're planning on going with a single-node, 6 cpu dual core 3Ghz x86, 16GB ram. The storage is external RAID 10. We process approximately 1000 payroll checks bi-weekly. Our 'Payroll Process' takes 30min to complete. Similarly, 'Deposit Advice' takes about 30min to complete. Our EBS database is a tiny 200GB, we have a mere 80 concurrent users, and we run HRMS, PAY, PA, GL, FA, AP, AR, PO, OTL, Discoverer.
    Thanks for your feedback. These forums are great.
    L5

    Markus and David,
    First let me thank you for your posts. :-).
    Markus:
    Thank you for the tip. However, I usually do installations with a domain adm user. It does a lot of user switching, yes, but it only switches to users created by SAPINST; that is, most of the time it switches to <sid>adm, which sounds perfect. At the time of my post I had been setting some environment variables to get the procedure to distribute the various pieces and bits (saparch, sapbackup, saptrace, origlogs and mirror logs, datafiles, etc.) exactly where I wanted them and not where the procedure wants them, so I ended up using <sid>adm to perform the DB instance installation rather than the domain adm user I had installed the CI with (I forgot to change back). When I noticed, I figured it wouldn't make a difference since it usually switches to <sid>adm anyway. However, for the next attempts I settled on my initially created domain adm user, with no change in the results. OracleService<SID> usually logs on as a system account, so that issue doesn't arise, I think.
    and
    David:
    The brackets did it. Thank you so much. It went further and only crashed later, I don't usually potter around sdn, so I'm not familiar with the workings of this, I don't know how to reply separately to the posts and I don't know how to include a properly formatted post (I've seen the Plain Text help but I hate to bother with sidetrack details) so I apologize to all for the probably-too-compact jumble that will come out when I post this. I am now looking at the following problem (same migration to 64) so I fear I may have to close this post and get back with a new one if I can't solve this next issue.

  • How to multiplex datafiles, controlfiles on different disks?

    Hi, Sir...
    I want my database to have two control files, data files on different disks..
    In XE, after installation, data files, control files as follows (you know)..
    D:\Oracle\XE\oradata\XE\Control.DBF, SYSAUX.DBF, SYSTEM.DBF, TEMP.DBF, UNDO.DBF, USERS.DBF
    How to multiplex datafiles, controlfiles on different disks(such as E:\Oracle\XE\oradata\XE\......)
    Thank you...
    Regards,
    Cusco

    Shut down the database, copy the control files to the new directory, and modify the CONTROL_FILES parameter.
    Then start up.
    You cannot multiplex datafiles; you would have to do that at the OS level with mirroring etc.
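    A minimal sketch of those steps, assuming an spfile is in use (the E:\ location is illustrative, taken from the question):

    ```sql
    -- Shut down cleanly so the control file copies are consistent
    SHUTDOWN IMMEDIATE

    -- Register both locations; the change takes effect at next startup
    STARTUP NOMOUNT
    ALTER SYSTEM SET control_files =
      'D:\Oracle\XE\oradata\XE\CONTROL.DBF',
      'E:\Oracle\XE\oradata\XE\CONTROL.DBF'
      SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE

    -- Copy the existing control file to the E:\ location at the OS level, then:
    STARTUP
    ```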

  • Multiplexing redo logs and control files to a separate diskgroup

    General question this one...
    I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
    +DATA - for datafiles, control files, redo logs
    +FRA - for archive logs, flash recovery, RMAN backups
    Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
    My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
    In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with dual-write overheads that are not necessary)?
    Thoughts?

    Some of the decision will probably depend on your specific environment's data activity, volume, and throughput.
    Something to remember is that redo logs are sequential write, which benefit from a lower RAID overhead (RAID-10, 2 writes per IOP vs RAID-5, 4 writes per IOP). RAID-10 is often not cost-effective for the data portion of a database. If your database is OLTP with a high volume of random reads/writes, you're potentially hurting redo throughput by creating contention on the disks sharing data and redo. Again, that depends entirely on what you're seeing in terms of wait events. A low volume database would probably not experience any noticeable degraded performance.
    In my environment, I have RAID-5 and RAID-10 available, and since the RAID-10 requirement from a capacity perspective for redo is very low, it makes sense to create 2 diskgroups for online redo, separate from DATA, and separate from each other. This way, we don't need to be concerned with DATA transactions impacting REDO performance, and vice versa, and we still maintain redo redundancy.
    In my opinion, you can't be too paranoid. :)
    Good luck!
    K
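    As a rough sketch of the +ONLINE layout being discussed (diskgroup name and group numbers taken from the post; the controlfile side is abbreviated, since physically moving a controlfile into ASM is normally done via an RMAN restore):

    ```sql
    -- Add a second member for each online redo group in +ONLINE
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 1;
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 2;
    ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 3;

    -- Register a multiplexed control file location as well
    ALTER SYSTEM SET control_files = '+DATA', '+ONLINE' SCOPE = SPFILE;
    ```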

  • Proper layout of datafiles, archivelogs, flash recovery area

    Hi all,
    Sorry a bit of a newb,
    Have a question about how to layout my db files.
    My understanding is that control and archivelog files should be multiplexed to a separate disk than the datafiles.
    Is this also true for the flash_recovery_area?
    Thanks for any help.

    geeter wrote:
    Hi all,
    Sorry a bit of a newb,
    Have a question about how to layout my db files.
    My understanding is that control and archivelog files should be multiplexed to a separate disk than the datafiles.
    Is this also true for the flash_recovery_area?
    Thanks for any help.

    Lose your control file or your redo file, and you have a problem. Therefore, in a professional environment, the control file and redo logs need to be protected by multiplexing to different physical drives (not different partitions on the same disk). One of those locations can be the same DASD as the database files.
    Archivelogs should be bounced from online to nearline to offline storage. These days 'online' is inside flash recovery area. If you are paranoid, you can send archivelogs to multiple destinations (multiplex), although some form of disk protection (RAID 1 ... or 5 - ugh!) should be sufficient.
    Flash Recovery Area can be inside ASM or on a regular file system. If in ASM, consider using either RAID 1 under ASM with external redundancy, or normal redundancy mode to allow the system to do its own RAID 1 equivalent. If on a regular file system, use standard disk protection (RAID 1 or 5) to do the protection.
    Oracle uses the SAME (Stripe and Mirror Everything) philosophy, so separating Fast Recovery Area (new name - used to be Flash Recovery Area in the ancient days of Oracle 10g) from database files is not needed.
    Highly recommend you read the first chapter of the Database Concepts manual and the first chapter of the Database Administrator's Guide to get used to the docs, and then use that as a springboard (via the table of contents) to the sections that answer your question. Start at http://tahiti.oracle.com to get to those docs - they are on the front page of the doc set for your favorite version.

  • Automatic datafile offline due to write error

    Hi,
    Our SAP system is down. In the alert.log file, I found that one of the files is being locked by third-party backup software.
    I'm new to both Oracle and Basis; please advise on the steps to recover the database. Thank you.
    The error in the alert log file:
    Errors in file f:\oracle\p02\saptrace\background\p02_lgwr_3896.trc:
    ORA-00345: redo log write error block 8404 count 2
    ORA-00312: online log 3 thread 1: 'D:\ORACLE\P02\ORIGLOGA\LOG_G13M1.DBF'
    ORA-27072: File I/O error
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    Sat Oct 25 00:23:12 2008
    Errors in file f:\oracle\p02\saptrace\background\p02_lgwr_3896.trc:
    ORA-00343: too many errors, log member closed
    ORA-00346: log member marked as STALE
    ORA-00312: online log 3 thread 1: 'D:\ORACLE\P02\ORIGLOGA\LOG_G13M1.DBF'
    Sat Oct 25 00:26:04 2008
    Incremental checkpoint up to RBA [0x1c1b.2079.0], current log tail at RBA [0x1c1b.20dc.0]
    Sat Oct 25 00:35:18 2008
    KCF: write/open error block=0x3f7c6 online=1
         file=5 G:\ORACLE\P02\SAPDATA1\SR3_2\SR3.DATA2
         error=27072 txt: 'OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.'
    Automatic datafile offline due to write error on
    file 5: G:\ORACLE\P02\SAPDATA1\SR3_2\SR3.DATA2
    Sat Oct 25 00:35:19 2008
    KCF: write/open error block=0x3f7c4 online=1
         file=5 G:\ORACLE\P02\SAPDATA1\SR3_2\SR3.DATA2
         error=27070 txt: 'OSD-04016: Error queuing an asynchronous I/O request.
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.'
    Automatic datafile offline due to write error on
    file 5: G:\ORACLE\P02\SAPDATA1\SR3_2\SR3.DATA2
    Sat Oct 25 00:36:00 2008
    KCF: write/open error block=0x37f74 online=1
         file=7 G:\ORACLE\P02\SAPDATA1\SR3_4\SR3.DATA4
         error=27072 txt: 'OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.'
    Automatic datafile offline due to write error on
    file 7: G:\ORACLE\P02\SAPDATA1\SR3_4\SR3.DATA4
    Sat Oct 25 00:45:49 2008
    Errors in file f:\oracle\p02\saptrace\usertrace\p02_ora_3876.trc:
    ORA-00600: internal error code, arguments: [kdtdelrow-2], [2], [2], [], [], [], [], []
    ORA-00376: file 5 cannot be read at this time
    ORA-01110: data file 5: 'G:\ORACLE\P02\SAPDATA1\SR3_2\SR3.DATA2'

    Hi
    As always, use this information for research purposes only.
    I presume that Oracle and SAP application servers are down.
    Now would be a good time to make a backup image of your crippled system.
    This way you can always get back to this state if needed.
    So do an offline backup if possible.
    It looks like a log file is damaged or deleted. You may find that you have a
    mirror image of it so this might not be the end of the world.
    If you have oracle up to the mount point e.g.
    startup mount
    then you should be able to access v$logfile:
    select GROUP#, STATUS, MEMBER from v$logfile;
    This will show you whether you have a mirror set up for the log file you need.
    Make sure all the files listed in this output exist,
    e.g. do a DIR in the parent directory.
    Post all the output of v$logfile and whether all the files exist.
    regards
    Stephen
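    If the v$logfile output shows that the damaged member (LOG_G13M1.DBF in the alert log above) has an intact mirror, one possible follow-up - sketched here only; verify with support before running anything against a production system - is to drop and re-create just that member:

    ```sql
    STARTUP MOUNT
    SELECT group#, status, member FROM v$logfile;

    -- Only if group 3 has a healthy second member:
    ALTER DATABASE DROP LOGFILE MEMBER 'D:\ORACLE\P02\ORIGLOGA\LOG_G13M1.DBF';
    ALTER DATABASE ADD LOGFILE MEMBER 'D:\ORACLE\P02\ORIGLOGA\LOG_G13M1.DBF'
      TO GROUP 3;
    ```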

  • Unable to resize asm datafile even though I resized the (logical) datafile

    I have a bigfile that went above 16tb - this is causing me grief in a restore to a netapp filer that has a 16tb limit.
    So we went through the hassle of moving data around. I issued the following:
    Mon Dec 10 21:15:06 2012
    alter database datafile '+DATA/pcinf/datafile/users1.303.777062961' resize 15900000000000
    Completed: alter database datafile '+DATA/pcinf/datafile/users1.303.777062961' resize 15900000000000
    Mon Dec 10 21:40:10 2012
    The datafile itself from v$datafile shows 15tb - BUT the asm file is still 18tb in size.
    Should it not be the same? Is this something others have faced, where the ASM file size doesn't match?
    Name     USERS1.303.777062961
    Type     DATAFILE
    Redundancy     MIRROR
    Block Size (Bytes)     8192
    Blocks     2281701377
    Logical Size (KB)     18253611016
    Linux, Exadata, 11.2.0.2 + psu
    SR created but not getting anywhere - why such a large file, are you sure it's really not 18tb, etc., etc.
    Daryl

    So I just ran another test of my real datafile issue and it appears to have corrected itself.
    OEM Shows this: (Correct)
    Block Size (Bytes)     8192
    Blocks     1934814455
    Logical Size (KB)     15478515640
    select file_id, bytes from dba_data_files where tablespace_name = 'USERS1';
    select file#, bytes from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
    select file#, trunc(bytes/1024/1024/1024) G from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
    alter database datafile '+DATA/pcinf/datafile/users1.303.777062961' resize 15850000000000;
    select file_id, bytes from dba_data_files where tablespace_name = 'USERS1';
    select file#, bytes from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
    select file#, trunc(bytes/1024/1024/1024) G from v$datafile where file# = (select file_id from dba_data_files where tablespace_name = 'USERS1');
       FILE_ID                BYTES
            12   15,900,000,002,048
    1 row selected.
         FILE#                BYTES
            12   18,691,697,672,192    <<<< CAUSING ME MUCH MUCH GRIEF!!
    1 row selected.
         FILE#          G
            12      17408
    1 row selected.
    Database altered.
       FILE_ID                BYTES
            12   15,850,000,007,168
    1 row selected.
         FILE#                BYTES
            12   15,850,000,007,168
    1 row selected.
         FILE#          G
            12      14761
    1 row selected.

  • Datafiles and control file corrupted

    I have oracle 9i installed on windows 2003 server. A couple of days before our windows crashed and the system administrator took the backup of all files and reinstalled windows. After reinstallation of windows I restored the files backed up by system administrator and tried to start the database. I got the following error:
    ORA-00227: corrupt block detected in controlfile: (block 1, # blocks 1)
    ORA-00202: controlfile: 'D:\ORACLE\ORADATA\ORCL\CONTROL03.CTL'
    All the multiplexed copies of control files are also corrupted. We do not have the backup since last few days and the database is in noarchivelog mode. Simple restoration to the old backup would be a great loss. Kindly help me to recover the control file and datafiles.

    All the multiplexed copies of control files are also corrupted. We do not have a backup from the last few days and the database is in noarchivelog mode. Simple restoration to the old backup would be a great loss.

    You could try to re-create the control files, assuming the database itself was closed properly.
    However you should only do this if you know what you are doing.
    In any case you should backup the database in its current state in case things getting worse by trying to recover. CHECK this backup twice!
    In any case: Open a support ticket at Oracle. You will most probably need their help.
    In addition to that - it looks quite bad for your data. You should prepare for data loss (just to be sure; I don't want to scare you)....
    Ronny Egner
    My Blog: http://blog.ronnyegner-consulting.de
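    For reference, re-creating a control file roughly follows the pattern below. The database name, logfile, and datafile entries are illustrative placeholders - the real lists must match your database exactly, which is why a trace backup (ALTER DATABASE BACKUP CONTROLFILE TO TRACE on a healthy system) is the usual starting point, and why this should only be attempted with Oracle Support involved:

    ```sql
    STARTUP NOMOUNT
    CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS NOARCHIVELOG
      LOGFILE
        GROUP 1 'D:\ORACLE\ORADATA\ORCL\REDO01.LOG' SIZE 100M
      DATAFILE
        'D:\ORACLE\ORADATA\ORCL\SYSTEM01.DBF'
        -- ... every remaining datafile must be listed here ...
      ;
    ALTER DATABASE OPEN;
    ```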

  • Restore single datafile from source database to target database.

    Here's my issue:
    Database Release : 11.2.0.3 across both the source and targets. (ARCHIVELOG mode)
    O/S: RHEL 5 (Tikanga)
    Database Storage: Using ASM on a stand-alone server (NOT RAC)
    Using Oracle GG to replicate changes on the Source to the Targets.
    My scenario:
    We utilize sequences to keep the primary key intact, and these are replicated using GG. All of my schema tables are located in one tablespace and datafile, and all of my indexes are in a separate tablespace (nothing is being partitioned).
    In the event of media failure on the Target or my target schema being completely out of whack, is there a method where I can copy the datafile/tablespace from my source (which is intact) to my target?
    I know there are possibilites of
    1) restore/recover the tablespace to a SCN or timestamp in the past and then I could use GoldenGate to run the transactions in (but this could take time depending on how far back I need to recover the tablespace and how many transactions have processed with GG) (This is not fool-proof).
    2) Could use DataPump to move the data from the Source schema to the Target schema (but the sequences are usually out of order if they haven't fired on the source; you get that 'sequence is defined for this session' message). I've tried this scenario.
    3) I could alter the sequences to get them to proper number using the start and increment by feature (again this could take time depending on how many sequences are out of order).
    I would think you could
    1) back up the datafile/tablespace on the source,
    2)then copy the datafile to the target.
    3) startup mount;
    4) Newname the new file copied from the source (this is ASM)
    5) Restore the datafile/tablespace
    6) Recover the datafile/tablespace
    7) alter database open;
    Question 1: Do I need to also copy the backup piece from the source when I execute the backup tablespace on the source as indicated in my step 1?
    Question 2: Do I need to include "plus archivelog" when I execute the backup tablespace on the source as indicated in my step 1?
    Question 3: Do I need to execute an 'alter system switch logfile' on the Target when the recover in step 6 is completed?
    My scenario sounds like a Cold Backup but running with Archivelog mode, so the source could be online while the database is running.
    Just looking for alternate methods of recovery.
    Thanks,
    Jason
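    The numbered steps above can be sketched in RMAN roughly as follows. File paths, the datafile number, and the tablespace name are illustrative; note also that restoring a backup taken on a different database normally runs into DBID/incarnation checks, so treat this as an outline of the flow, not a verified recipe:

    ```sql
    -- On the source: back up just the affected tablespace
    BACKUP TABLESPACE users FORMAT '/backup/users_%U.bkp';

    -- Copy the backup piece to the target host, then on the target:
    STARTUP MOUNT;
    CATALOG START WITH '/backup/';
    RUN {
      SET NEWNAME FOR DATAFILE 5 TO '+DATA';
      RESTORE TABLESPACE users;
      SWITCH DATAFILE ALL;
      RECOVER TABLESPACE users;
    }
    ALTER DATABASE OPEN;
    ```

    This also bears on question 1: the backup piece itself has to be transported (or otherwise made visible) to the target before it can be cataloged and restored.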

    Let me take another stab at sticking a fork into this myth about separating tables and indexes.
    Let's assume you have a production Oracle database environment with multiple users making multiple requests and the exact same time. This assumption mirrors reality everywhere except in a classroom where a student is running a simple demo.
    Let's further assume that the system looks anything like a real Oracle database system where the operating system has caching, the SAN has caching, and the blocks you are trying to read are split between memory and disk.
    Now you want to do some simple piece of work and assume there is an index on the ename column...
    SELECT * FROM emp WHERE ename = 'KING';
    The myth is that Oracle is going to, in parallel, read the index and the table segments better, faster, whatever, if they are in separate physical files mapped by separate logical tablespaces somehow to separate physical spindles.
    Apply some synapses to this myth and it falls apart.
    You issue your SQL statement and Oracle does what? It looks for those index blocks where? In memory. If it finds them it never goes to disk. If it does not it goes to disk.
    While all this is happening the hundreds or thousands of other users on the database are also making requests. Oracle is not going to stop doing work while it tries to find your index blocks.
    Now it finds the index block and decides to use the ROWID value to read the block containing the row with KING's data. Did it freeze the system? Did it lock out everyone else while it did this? Of course not. It puts your read request into the queue and, again, first checks memory to see if it needs to go to disk.
    Where in here is there anything that indicates an advantage to having separate physical files?
    And even if there was some theoretical reason why separate files might be better ... are they separate in the SAN's cache? No. Are they definitely located on separate stripes or separate physical disks? Of course not.
    Oracle uses logical mappings (tables and tablespaces) and SANS use logical mappings so you, the DBA or developer, have no clue as to where anything physically is located.
    PS: Ouija Boards don't work either.

  • How to recover database SYSTEM datafile get corrupt ?

    Database is in ARCHIVELOG mode, and the datafile belonging to SYSTEM tablespace gets corrupted. Up to what point can I recover the database ?
    A. Until last commit.
    B. Until the time you perform recovery.
    C. Until the time the datafile got corrupted.
    D. You cannot recover the SYSTEM tablespace and must re-create the database.
    and 1 more doubt :
    If redologfiles are not multiplexed and redolog blocks get corrupt in group 2, and archiving stops. All redolog files are filled and database activity is halted.
    DBWR has written everything to disk. What command can be used to proceed further ?
    A. recover logfile block group 2;
    B. alter database drop logfile group 2;
    C. alter database clear logfile group 2;
    D. alter database recover logfile group 2;
    E. alter database clear unarchived logfile group 2;
    Edited by: user642367 on Sep 18, 2008 8:45 PM

    1. A. Since the DB is in archivelog mode, you can always restore and recover the whole DB, including the SYSTEM tablespace datafile, up to the last SCN generated, provided the redo record is available in archived/online logfiles.
    2. E. Since only the redo log is corrupted, the archiver won't proceed, and a DB hang is inevitable. To proceed, you need to clear the online (unarchived) log file, and then the DB will work as usual. Take care to make a full backup of the DB (a cold backup where feasible) as soon as possible after issuing this command, as you then no longer have the redo in the archivelog files to recover the DB in case of a crash.
    Please go through Oracle 10g DB Administrators guide (available on OTN) for more details.
    Thanks
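    A sketch of the answer to the second question (group number taken from the question; the redo in that group is discarded, hence the immediate full backup):

    ```sql
    -- Clear the corrupt, unarchived group so log switching can resume
    ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
    -- Then take a full database backup as soon as possible
    ```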

  • Best RAID configuration for storing Datafiles and Redo log files

    Database version:10gR2
    OS version: Solaris
    Which is the best RAID level for storing datafiles and redo log files?

    Oracle recommends SAME - Stripe And Mirror Everything.
    In the RAC Starter Kit documentation, they specifically recommend not using RAID5 for things like voting disk and so on.
    SAN vendors, on the other hand, claim that their RAID5 implementations are as fast as RAID10. They do have these massive memory caches...
    But I would rather err on the safer side. I usually insist on RAID10 - and for those databases that I do not have a vested interest in (other than as a DBA), and owners, developers and management accept RAID5, I put the lead pipe away and do not insist on having it my way. :-)

  • Split-mirror backup: Attempt to mount unknown instance

    Hi experts,
    I am backing up the Oracle database of an SAP system online via the split-mirror approach with a BRBACKUP -t online_split command. After having copied all datafiles, BRBACKUP terminates with the following error message (excerpt):
    BR0330I Starting and mounting database instance X86/SPLIT ...
    BR0278E Command output of '/oracle/X86/102_64/bin/sqlplus':
    SQL*Plus: Release 10.2.0.1.0 - Production on Fri Aug 1 00:28:15 2008
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    SQL> ERROR:
    ORA-12545: Connect failed because target host or object does not exist
    SQL>
    SQL> ORA-12545: Connect failed because target host or object does not exist
    SQL>
    BR0280I BRBACKUP time stamp: 2008-08-01 00.28.15
    BR0279E Return code from '/oracle/X86/102_64/bin/sqlplus': 0
    BR0302E SQLPLUS call for database instance X86/SPLIT failed
    BR0332E Start and mount of database instance X86/SPLIT failed
    At this point,
    - all data files have been copied successfully;
    - the control files have been copied;
    - a copy of the backup summary .fnd file has been written to the sapbackup directory of both database host and backup host;
    - I verified that the tablespaces had been put into backup mode before the split command was executed.
    I wonder why BRBACKUP is trying to mount an instance "X86/SPLIT" that is not defined in tnsnames.ora?
    What is the function of this step, and how can I suppress it as this flags the entry of the backup run in DB12 as failed?
    OS version: Solaris 10 x64 (database and backup host)
    Database: Oracle 10.2.0.2.0
    SAP Kernel: 700
    BRTOOLS Version: 7.00 (34)
    Best regards,
    Rainer

    Michael,
    the command
    more /oracle/X86/.dbenv.sh | grep X86/SPLIT
    did not return any matching line. I tried with .dbenv.sh, .dbenv.csh, .dbenv_<db_hostname>.sh, .dbenv_<db_hostname>.csh.
    A sample is the following:
    # @(#) $Id: //bc/700-1_REL/src/ins/SAPINST/impl/tpls/ora/ind/DBENV.CSH#7 $
    # Oracle RDBMS Environment
    setenv THREAD NOPS
    if ( $THREAD == NOPS ) then
        set    DBSID = X86
    else
        if ( $THREAD != "001" ) then
           set DBSID = X86_${THREAD}
        else
           set DBSID = X86
        endif
    endif
    setenv dbms_type       ORA
    setenv dbs_ora_tnsname $DBSID
    setenv dbs_ora_schema  SAPSR3
    setenv ORACLE_SID      $DBSID
    setenv DB_SID          X86
    setenv ORACLE_BASE     /oracle
    # check for running user and set for orasid ORA_NLS10
    set USER = `id | awk -F\( '{print $2}' | awk -F\) '{print $1}'`
    set TRUL = 'tr "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz"'
    set ORASID = "ora`echo $DB_SID | $TRUL`"
    if ( $USER != $ORASID ) then
      setenv TNS_ADMIN       /usr/sap/X86/SYS/profile/oracle
      set ADD=/oracle/client/10x_64/instantclient
      set _t=/oracle/X86/102_64/bin/sqlplus
      set _f=/sapmnt/X86/profile/DEFAULT.PFL
      set SAPDBHOST=""
      if ( -r "$_f" ) then
        set SAPDBHOST=`awk -F= '/^[      ]*SAPDBHOST[      ]*=/ {print $2; exit}' $_f | awk '{print $1}'`
      endif
      if ( -r "$_t" || `uname -n` == "$SAPDBHOST"  ) then
        setenv ORACLE_HOME /oracle/X86/102_64
      endif
    else
      setenv ORACLE_HOME     /oracle/X86/102_64
      set ADD=/oracle/X86/102_64/lib
    endif
    setenv NLS_LANG        AMERICAN_AMERICA.UTF8
    setenv SAPDATA_HOME    /oracle/X86
    setenv DIR_LIBRARY     /usr/sap/X86/SYS/exe/run
    if ( $?ORACLE_HOME ) then
      foreach d ( $ORACLE_HOME/bin )
        set i=0
        foreach p ( $path )
            if ( "$p" == "$d" ) then
                set i=1
                break
            endif
        end
        if ( $i == 0 ) then
            set path = ( $d $path )
        endif
      end
    endif
    switch (`uname`)
        case AIX*:
            if ( ! $?LIBPATH ) then
                setenv LIBPATH /usr/lib:/lib:${ADD}:/usr/sap/X86/SYS/exe/run
            else
                foreach d ( /usr/sap/X86/SYS/exe/run ${ADD} )
                    set i=0
                    foreach p ( `echo $LIBPATH | sed 's/:/ /g'` )
                        if ( "$p" == "$d" ) then
                            set i=1
                            break
                        endif
                    end
                    if ( $i == 0 ) then
                        setenv LIBPATH ${LIBPATH}:$d
                    endif
                end
            endif
            breaksw
        case HP*:
            if ( ! $?SHLIB_PATH ) then
                setenv SHLIB_PATH ${ADD}:/usr/sap/X86/SYS/exe/run
            else
                foreach d ( /usr/sap/X86/SYS/exe/run ${ADD} )
                    set i=0
                    foreach p ( `echo $SHLIB_PATH | sed 's/:/ /g'` )
                        if ( "$p" == "$d" ) then
                            set i=1
                            break
                        endif
                    end
                    if ( $i == 0 ) then
                        setenv SHLIB_PATH ${SHLIB_PATH}:$d
                    endif
                end
            endif
            breaksw
        case SIN*:
        case Reliant*:
        case Linux*:
            if ( ! $?LD_LIBRARY_PATH ) then
                setenv LD_LIBRARY_PATH ${ADD}:/usr/sap/X86/SYS/exe/run
            else
                foreach d ( /usr/sap/X86/SYS/exe/run ${ADD} )
                    set i=0
                    foreach p ( `echo $LD_LIBRARY_PATH | sed 's/:/ /g'` )
                        if ( "$p" == "$d" ) then
                            set i=1
                            break
                        endif
                    end
                    if ( $i == 0 ) then
                        setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:$d
                    endif
                end
            endif
            breaksw
        case OSF*:
            if ( ! $?LD_LIBRARY_PATH ) then
                setenv LD_LIBRARY_PATH ${ADD}:/usr/sap/X86/SYS/exe/run
            else
                foreach d ( /usr/sap/X86/SYS/exe/run  ${ADD} )
                    set i=0
                    foreach p ( `echo $LD_LIBRARY_PATH | sed 's/:/ /g'` )
                        if ( "$p" == "$d" ) then
                            set i=1
                            break
                        endif
                    end
                    if ( $i == 0 ) then
                        setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:$d
                    endif
                end
            endif
            breaksw
        default:
            if ( ! $?LD_LIBRARY_PATH ) then
                setenv LD_LIBRARY_PATH ${ADD}:/usr/sap/X86/SYS/exe/run
            else
                foreach d ( /usr/sap/X86/SYS/exe/run ${ADD} )
                    set i=0
                    foreach p ( `echo $LD_LIBRARY_PATH | sed 's/:/ /g'` )
                        if ( "$p" == "$d" ) then
                            set i=1
                            break
                        endif
                    end
                    if ( $i == 0 ) then
                        setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:$d
                    endif
                end
            endif
            breaksw
    endsw
    # define some nice aliases
    alias cdora 'cd /usr/sap/$SAPSYSTEMNAME/SYS/profile/oracle'
    # end Oracle RDBMS Environment
    Thank you for your time,
    Rainer

  • Multiplexing Redo Log Files question

    If you are running RAC on ASM on a RAID system, is this required?  We are using an HP autoraid which mirrors at the block level and in the documentation about Multiplexing Redo Log Files it says that you do it to protect against media failure.  The autoraid that we are using gives us multiple levels of redundancy against media failure so I was wondering if Multiplexing would be adding more overhead than is needed.  Thanks for your input.

    ASM is quite complex and I'm not going to outline all of its advantages here, but under ASM you can drop and add devices online to match your capacity needs without losing data, which you cannot do with plain RAID: resizing a RAID set generally requires re-initializing it, regardless of the redundancy level. Please see the documentation. ASM, like pretty much everything in Oracle, adds complexity, so you will have to check it against your requirements; it is, however, pretty much the standard. If you use external RAID, make sure your storage is not using RAID 5 or RAID 0. Also note that redundancy does not protect against logical errors: if you overwrite or delete a file by mistake, mirroring simply mirrors the mistake. If you are looking for reasons or ways not to use ASM, I'm sure you will find them, but what's the point?
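    The online add/drop of devices mentioned above can be sketched as follows (the disk group, disk path, and disk names are hypothetical; syntax as in Oracle 10g ASM):

    ```sql
    -- Add a new disk to an existing disk group; ASM rebalances online.
    ALTER DISKGROUP data ADD DISK '/dev/raw/raw5' NAME data_0005;

    -- Drop a disk; ASM migrates its extents to the remaining disks first.
    ALTER DISKGROUP data DROP DISK data_0002;

    -- Monitor the rebalance operation.
    SELECT group_number, operation, est_minutes FROM v$asm_operation;
    ```

    The point is that both operations happen while the database stays open, which is the file-management advantage over re-building a RAID set.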

  • Can we multiplex the backup set of an RMAN backup on disk?

    Hi,
    Yes, RMAN can duplex backup sets on disk. These settings control how many identical copies of each backup piece RMAN writes (shown here at their default of 1):
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    Regards
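    A minimal sketch of duplexing each backup piece to two disks (the destination paths are hypothetical):

    ```sql
    -- Produce two identical copies of each backup piece.
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 2;

    -- Give one FORMAT per copy so the copies land on separate mount points.
    BACKUP DATABASE
      FORMAT '/disk1/backup/%U', '/disk2/backup/%U';
    ```

    With COPIES set higher than 1, RMAN cycles through the FORMAT destinations, so the two copies of each piece end up on different disks.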

  • Does Oracle share i/o load across datafiles?

    Hi,
    I am working with a database that has multiple datafiles for each tablespace. The datafiles for each tablespace are located on a single disk, and I believe no striping is present in this design.
    It is now recommended that the datafiles for each tablespace be spread over multiple disks. Would this improve I/O performance for the affected tablespaces?
    Do I need to run any ALTER TABLESPACE or datafile commands to get the I/O improvement?
    Please advise. I am using Oracle 8i.
    Best Regards,
    Flintz

    There are several possibilities for striping a tablespace.
    You can create
    a) an Oracle tablespace with several datafiles
    b) an OS mount point that is striped across several disks
    If you have an existing tablespace and want to stripe it at the Oracle level, create an additional tablespace with several datafiles placed on different disks. After this you can use
    - exp/imp
    - CTAS (CREATE TABLE ... AS SELECT)
    - online redefinition
    to copy the tables into the new tablespace.
    You should also consider that the system is less safe if you spread data over several disks that are neither mirrored nor on a RAID 5 system: more disks mean more chances of a media failure taking the tablespace offline.
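    The approach above can be sketched like this (the mount points, tablespace, and table names are hypothetical; the syntax is valid in 8i):

    ```sql
    -- New tablespace spread across two disks via multiple datafiles.
    CREATE TABLESPACE striped_ts
      DATAFILE '/disk1/oradata/striped_ts01.dbf' SIZE 500M,
               '/disk2/oradata/striped_ts02.dbf' SIZE 500M;

    -- Copy an existing table into it with CTAS, then rename to swap.
    CREATE TABLE orders_new TABLESPACE striped_ts
      AS SELECT * FROM orders;
    ```

    Note that Oracle fills the datafiles extent by extent rather than striping at the block level, so the I/O spreading is coarser than an OS or hardware stripe.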

Maybe you are looking for