Tuning RMAN Performance

We have R12.1.1 with database 11.1.0.7 on Red Hat 5.3 (everything 64-bit).
I have configured a daily RMAN full hot backup on my production system.
The backup goes to disk. While it runs, the application is extremely slow (effectively dead) and users are not able to do data entry at that time.
The backup is scheduled at 3 a.m., and I think only about five terminals are doing data entry at that time.
Please suggest what I can do to overcome this issue.

Highly speculative, but something you might investigate: if you have a lot of dirty blocks built up over the course of the day, and a large SGA, you may have a checkpointing issue set off by the backup. If you have a very large controlfile, you may have an issue with the controlfile being locked while RMAN snapshots it. See what waits you have.
But it's also possible you have a configuration issue, like not using asynchronous I/O, or something memory-wasting like improperly configured hugepages.
Don't guess: get a Statspack or AWR report.
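If you want a quick first look before pulling a full Statspack/AWR report, the system-wide wait events can point you in the right direction. A minimal sketch (assuming SYSDBA access to the v$ views; sample before and after the backup window and diff the numbers):

```sql
-- top non-idle waits accumulated since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;
```

Large jumps in checkpoint- or controlfile-related events during the backup window would support the theories above.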

Similar Messages

  • RMAN performance query

    Hi experts,
Could you please recommend some resources for better RMAN performance?
What I know so far is:
1. using the DURATION parameter
2. sizing the large pool
3. I/O slaves
Please mention any other points I have missed.
    regards,
    shaan

    Check Tuning Backup and Recovery.
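For the DURATION point in the list above, the RMAN syntax looks roughly like this (a sketch; the two-hour window is illustrative):

```
RMAN> BACKUP DURATION 02:00 MINIMIZE LOAD DATABASE;
```

MINIMIZE LOAD spreads the backup I/O evenly over the whole window instead of running flat out; MINIMIZE TIME does the opposite.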

  • Tuning query performance

    Dear experts,
    I have a question regarding as the performance of a BW query.
It takes 10 minutes to display about 23 thousand lines.
    This query read the data from an ODS object.
    According to the "where" clause in the "select" statement monitored via Oracle session when the query was running, I created an index for this ODS object.
    After rerunning the query, I found that the index had been taken by Oracle in reading this table (estimated cost is reduced to 2 from about 3000).
    However, it takes the same time as before.
Is there any other reason or other factors that I should consider in tuning the performance of this query?
    Thanks in advance

    Hi David,
Query performance when reporting on an ODS object is slower than on InfoCubes, InfoSets, MultiProviders, etc., because a DSO has no aggregates or other performance techniques.
Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which is again an overhead for query execution and affects performance.
To improve performance when reporting on an ODS, you can create secondary indexes from the BW workbench.
    Please check the below links.
[Re: performance issues of ODS]
[Which criteria to follow to pick InfoObj. as secondary index of ODS?]
    Hope this helps.
    Regards,
    Haritha.

  • How to guide for Fine tuning the performance of SAP EP

    All,
Could anyone please let me know where to locate the how-to guide for fine-tuning the performance of the SAP Enterprise Portal?
    thanks,
    ~L

    hi leena,
    Look into these threads...
    https://www.sdn.sap.com/irj/sdn/thread?threadID=119605
    https://www.sdn.sap.com/irj/sdn/thread?threadID=101325
    Also see,
    <a href="https://websmp103.sap-ag.de/~form/sapnet?_SHORTKEY=00200797470000073623&_OBJECT=011000358700001480992005E">this link</a>
    https://media.sdn.sap.com/public/eclasses/bwp04/PortalTuning_files/Default.htm
    Regs,
    jaga
    Message was edited by: jagadeep sampath

  • WebLogic Server Tuning and Performance

We are having a very strange problem. We have a domain called "ABC". Under that domain, we have three sites (applications): "x", "y", and "z".
"x" takes about 80-85% of the load while the other two share the remaining 15%.
When we have high traffic, "x" stops responding with a thread-stuck error while the other two sites are still functional.
Any suggestions for tuning? Here is the heap section of the thread dump:
    Heap
    PSYoungGen total 85056K, used 14066K [0x71800000, 0x77000000, 0x7c400000)
    eden space 80320K, 15% used [0x71800000,0x7241fb90,0x76670000)
    from space 4736K, 34% used [0x76670000,0x7680cdb0,0x76b10000)
    to space 4480K, 0% used [0x76ba0000,0x76ba0000,0x77000000)
    PSOldGen total 176128K, used 55761K [0x5c000000, 0x66c00000, 0x71800000)
    object space 176128K, 31% used [0x5c000000,0x5f674520,0x66c00000)
    PSPermGen total 102400K, used 100797K [0x54000000, 0x5a400000, 0x5c000000)
    object space 102400K, 98% used [0x54000000,0x5a26f400,0x5a400000)
    Thanks.

In weblogic.xml, specify the 'precompile' parameter:
          <jsp-descriptor>
            <jsp-param>
              <param-name>precompile</param-name>
              <param-value>true</param-value>
            </jsp-param>
          </jsp-descriptor>
              Christopher wrote:
              > I have followed the instructions to mark the Precompile setting in the WebLogic
              > Server Console via the document, "Tuning JSP Performance" and after restarting
              > the server as required by the instructions I have noticed that the setting for
              > the precompile JSPs option is no longer checked. Does anyone know what I need
              > to do in order to keep the Precompile setting?
              

  • RMAN performance parameters

    Hello,
I have to back up big databases (more than 10 TB) with RMAN in Oracle 10gR2 in a Networker environment. I have to do full, incremental, and archivelog backups.
Could you tell me, please, which parameters have an impact on the performance and speed of the backup?
I am thinking of the number of channels, FILESPERSET, ...
Could you explain the relevance of each RMAN performance parameter with an example, for backup and also for restore?
Thank you very much

Simple example using a catalog stored on the test1 database.
BACKUP
rman <<EOF
connect target rman/rman_oracledba@test2
connect catalog rman/rman_oracledba@test1
run { allocate channel d1 type disk;
backup full tag backup_1 filesperset 2
format '/data/oracle9/BACKUP/rman_BACKUP_%d_%t.%p.%s.%c.%n.%u.bus'
database; }
EOF
    RESTORE
    export ORACLE_SID=TEST2
    sqlplus /nolog <<EOF
    connect / as sysdba
    shutdown immediate
    startup mount
    exit
    EOF
rman <<EOF
connect target rman/rman_oracledba@test2
connect catalog rman/rman_oracledba@test1
run { allocate channel d1 type disk;
restore database;
recover database;
alter database open resetlogs; }
EOF

  • Tuning RMAN backups

    Hello,
Would someone help me with the RMAN configuration for backup to disk? I am new to RMAN, so I have no idea. Below is my database configuration.
Windows 2000 SP4
Oracle9i binaries on C:\ drive
Database (datafiles) on D:\ drive
Logs on E:\ drive
I have a total of 5 datafiles, approx. 400 GB in total. I would like to use RMAN to back up my database to disk on a SAN. The D: and E: drives use RAID 5. If I just run BACKUP DATABASE in RMAN, the defaults are used. My question is: how do I list all the default settings in RMAN, how do I optimize RMAN for backing up my database (i.e., multiplexing, number of channels, MAXOPENFILES, FILESPERSET, disk buffers, etc.), and what are the actual RMAN commands to perform these tasks?
Thank you very much.

Once in RMAN, you can execute the following to see all the default settings:
RMAN> show all;
You may want to read over the Recovery Manager Quick Reference and Recovery Manager User's Guide to get details on the parameters available to you for configuring the various backup options.
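A hedged starting point for the options mentioned in the question (the values and the G:\ destination are illustrative only; measure and adjust for your own hardware):

```
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 8;
RMAN> BACKUP DATABASE FILESPERSET 4 FORMAT 'G:\RMAN\%U';
```

With only 5 datafiles on a single RAID 5 volume, allocating more channels than independent spindle groups can actually slow things down, so start low and measure.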

  • Query Tuning for Performance

    Dear Experts,
Please find my code below. The problem is with the KONV table: this report depends on many conditions on KONV, so that table has to be read many times with all its data. How can I fine-tune it? The report takes a long time to execute. Any commands or ways to optimize the report?
    SELECT
            vbeln_i
            vkorg
            vtweg
            spart
            fkart
            land1
            fkdat
            kunrg
            knumv
            waerk
            kurrf
            vkbur_i
            vkgrp_i
            matnr_i
            vrkme_i
            fkimg_i
            matkl_i
            posnr_i
            netwr_i
            augru_auft_i
            aubel_i
            vgbel_i
            pstyv_i
            vstel_i
            prctr_i
            FROM wb2_v_vbrk_vbrp2 INTO corresponding fields of TABLE it_vbrk
            WHERE
            vkorg = p_vkorg AND
            vtweg = '30' AND
            spart IN so_spart AND
            kunrg IN so_kunrg AND
            fkart IN so_fkart AND
            vbeln_i IN so_vbeln AND
            fkdat IN so_fkdat AND
            vkbur_i IN so_vkbur AND
            vkgrp_i IN so_vkgrp AND
            matnr_i IN so_matnr AND
            fkimg_i <> '0'.
    sort it_vbrk by vbeln_i knumv posnr_i.
      IF it_vbrk[] IS NOT INITIAL.
        SELECT
              knumv
              kposn
              stunr
              kschl
              krech
              kbetr
              kpein
              waers
              kmein
              kkurs
              kinak
              FROM konv INTO TABLE it_konv
              FOR ALL ENTRIES IN it_vbrk
              WHERE knumv = it_vbrk-knumv AND
              kschl IN ('ZPR0','Z305','ZCCM','ZFCM').
    sort it_konv by knumv.
        SELECT
              belnr
              hkont
              prctr
              dmbtr
              matnr
              werks
              FROM bseg INTO TABLE it_fin
              FOR ALL ENTRIES IN it_vbrk
              WHERE belnr = it_vbrk-vbeln_i AND
              matnr = it_vbrk-matnr_i AND
              werks = it_vbrk-vstel_i AND
              prctr = it_vbrk-prctr_i.
    sort it_fin by belnr.
        LOOP AT it_vbrk INTO wa_vbrk.
          wa_data-vbeln = wa_vbrk-vbeln_i.
          wa_data-vkbur = wa_vbrk-vkbur_i.
          wa_data-vkgrp = wa_vbrk-vkgrp_i.
          wa_data-matnr = wa_vbrk-matnr_i.
          wa_data-vrkme = wa_vbrk-vrkme_i.
          wa_data-matkl = wa_vbrk-matkl_i.
          wa_data-posnr = wa_vbrk-posnr_i.
          wa_data-netwr = wa_vbrk-netwr_i.
          wa_data-reas = wa_vbrk-reas_i.
          wa_data-aubel = wa_vbrk-aubel_i.
          wa_data-vgbel = wa_vbrk-vgbel_i.
          wa_data-pstyv = wa_vbrk-pstyv_i.
          wa_data-vkorg = wa_vbrk-vkorg.
          wa_data-vtweg = wa_vbrk-vtweg.
          wa_data-spart = wa_vbrk-spart.
          wa_data-fkart = wa_vbrk-fkart.
          wa_data-land1 = wa_vbrk-land1.
          wa_data-fkdat = wa_vbrk-fkdat.
          wa_data-kunrg = wa_vbrk-kunrg.
          wa_data-knumv = wa_vbrk-knumv.
          wa_data-waerk = wa_vbrk-waerk.
          wa_data-kurrf = wa_vbrk-kurrf.
          wa_data-prctr = wa_vbrk-prctr_i.
          wa_data-vstel = wa_vbrk-vstel_i.
    *    move-corresponding wa_vbrk to wa_data.
          READ TABLE it_fin WITH KEY belnr = wa_data-vbeln matnr = wa_data-matnr.
          if it_fin-belnr = wa_data-vbeln.
          wa_data-hkont = it_fin-hkont.
          SELECT SINGLE txt20 INTO wa_data-gltxt FROM skat WHERE spras = 'EN' AND saknr = wa_data-hkont AND ktopl = '1000'.
          endif.
          SELECT SINGLE maktx FROM makt INTO wa_data-maktx WHERE matnr = wa_data-matnr.
          SELECT SINGLE zeinr matkl FROM mara INTO (wa_data-zeinr,wa_data-matkl) WHERE matnr = wa_data-matnr.
          SELECT SINGLE bezei FROM tvkbt INTO wa_data-soff WHERE vkbur = wa_data-vkbur AND spras = 'E'.
          SELECT SINGLE bezei FROM tvgrt INTO wa_data-sgrp WHERE vkgrp = wa_data-vkgrp AND spras = 'E'.
          SELECT SINGLE vtext INTO wa_data-division FROM tspat WHERE spart = wa_data-spart AND spras = 'EN'.
    * added on 23.06.2008
          READ TABLE it_konv INTO wa_konv WITH KEY knumv = wa_data-knumv kposn = wa_data-posnr kschl = 'ZPR0' kinak = ''.
          wa_data-kschl = wa_konv-kschl.
          wa_data-kposn = wa_konv-kposn.
          wa_data-stunr = wa_konv-stunr.
          wa_data-kbetr = wa_konv-kbetr.
          wa_data-kpein = wa_konv-kpein.
          wa_data-kkurs = wa_konv-kkurs.
    * This is calculating per unit rate
          wa_data-rate = ( wa_data-kbetr * wa_data-kpein ) / wa_data-kpein .
          wa_data-drate = wa_data-rate.
          CLEAR wa_konv.
          READ TABLE it_konv INTO wa_konv WITH KEY knumv = wa_data-knumv kposn = wa_data-posnr kschl = 'Z305' krech = 'A'.
          IF wa_konv-kbetr <> 0.
            wa_data-kbetr1 = ABS( wa_konv-kbetr ).
            wa_data-drate =  wa_data-rate  - ( wa_data-rate * ( ( wa_data-kbetr1 / 10 ) / 100 ) ).
            wa_data-dperc = ( wa_data-kbetr1 / 10 ).
            IF  wa_data-fkart = 'S1' OR wa_data-fkart = 'RE' OR wa_data-fkart = 'ZRES' OR wa_data-fkart = 'Z2RE' OR wa_data-fkart = 'G2' .
              IF wa_data-pstyv = 'TANN' OR wa_data-pstyv = 'ZMOH' OR wa_data-pstyv = 'ZPHY' OR wa_data-pstyv = 'ZSAM' OR wa_data-pstyv = 'ZZNN' OR wa_data-pstyv = 'ZREN1'.
    *    IF WA_DATA-PSTYV = 'TANN' OR WA_DATA-PSTYV = 'ZMOH' OR WA_DATA-PSTYV = 'ZPHY' OR WA_DATA-PSTYV = 'ZSAM' or wa_data-pstyv = 'ZZNN'.
                wa_data-total = '0'.
                wa_data-fkimg = wa_vbrk-fkimg_i * -1.
              ELSE.
                wa_data-fkimg = wa_vbrk-fkimg_i * -1.
                wa_data-total = ( ( wa_data-fkimg * wa_data-drate ) ) / wa_data-kpein.
                wa_data-inrvalue = wa_data-total * wa_data-kurrf.
              ENDIF.
            ELSE.
              IF wa_data-pstyv = 'TANN' OR wa_data-pstyv = 'ZMOH' OR wa_data-pstyv = 'ZPHY' OR wa_data-pstyv = 'ZSAM' OR wa_data-pstyv = 'ZZNN'.
                wa_data-fkimg = wa_vbrk-fkimg_i.
                wa_data-total = '0'.
                wa_data-inrvalue = wa_data-total * wa_data-kurrf.
              ELSE.
                wa_data-fkimg = wa_vbrk-fkimg_i.
                wa_data-total =   ( wa_data-fkimg * wa_data-drate )  / wa_data-kpein.
                wa_data-inrvalue = wa_data-total * wa_data-kurrf.
              ENDIF.
            ENDIF.
          ELSE.
            READ TABLE it_konv INTO wa_konv WITH KEY knumv = wa_data-knumv kposn = wa_data-posnr kschl = 'Z305' krech = 'B'.
            IF wa_konv-kbetr <> 0.
               wa_data-kbetr1 = ABS( wa_konv-kbetr ).
              wa_data-drate = wa_data-rate.
              wa_data-dvalue = wa_data-kbetr1.
              IF  wa_data-fkart = 'S1' OR wa_data-fkart = 'RE' OR wa_data-fkart = 'ZRES' OR wa_data-fkart = 'Z2RE' OR wa_data-fkart = 'G2' .
                IF wa_data-pstyv = 'TANN' OR wa_data-pstyv = 'ZMOH' OR wa_data-pstyv = 'ZPHY' OR wa_data-pstyv = 'ZSAM' OR wa_data-pstyv = 'ZZNN' OR wa_data-pstyv = 'ZREN1'.
    *    IF WA_DATA-PSTYV = 'TANN' OR WA_DATA-PSTYV = 'ZMOH' OR WA_DATA-PSTYV = 'ZPHY' OR WA_DATA-PSTYV = 'ZSAM' or wa_data-pstyv = 'ZZNN'.
                  wa_data-total = '0'.
                  wa_data-fkimg = wa_vbrk-fkimg_i * -1.
                ELSE.
                  wa_data-fkimg = wa_vbrk-fkimg_i * -1.
                  wa_data-total = ( ( wa_data-fkimg * wa_data-rate ) - wa_data-kbetr1 ) * -1.
                  wa_data-inrvalue = wa_data-total * wa_data-kurrf.
                ENDIF.
              ELSE.
                IF wa_data-pstyv = 'TANN' OR wa_data-pstyv = 'ZMOH' OR wa_data-pstyv = 'ZPHY' OR wa_data-pstyv = 'ZSAM' OR wa_data-pstyv = 'ZZNN'.
                  wa_data-fkimg = wa_vbrk-fkimg_i.
                  wa_data-total = '0'.
                  wa_data-inrvalue = wa_data-total * wa_data-kurrf.
                ELSE.
                  wa_data-fkimg = wa_vbrk-fkimg_i.
                  wa_data-total = ( ( wa_data-fkimg * wa_data-rate ) - wa_data-kbetr1 ).
                  wa_data-inrvalue = wa_data-total * wa_data-kurrf.
                ENDIF.
              ENDIF.
            ENDIF.
          ENDIF.
          CLEAR wa_konv.
          IF wa_data-kbetr1 EQ '0'.
            IF  wa_data-fkart = 'S1' OR wa_data-fkart = 'RE' OR wa_data-fkart = 'ZRES' OR wa_data-fkart = 'Z2RE' OR wa_data-fkart = 'G2' .
              IF wa_data-pstyv = 'TANN' OR wa_data-pstyv = 'ZMOH' OR wa_data-pstyv = 'ZPHY' OR wa_data-pstyv = 'ZSAM' OR wa_data-pstyv = 'ZZNN' OR wa_data-pstyv = 'ZREN1'.
    *    IF WA_DATA-PSTYV = 'TANN' OR WA_DATA-PSTYV = 'ZMOH' OR WA_DATA-PSTYV = 'ZPHY' OR WA_DATA-PSTYV = 'ZSAM' or wa_data-pstyv = 'ZZNN'.
                wa_data-total = '0'.
                wa_data-fkimg = wa_vbrk-fkimg_i * -1.
              ELSE.
                wa_data-fkimg = wa_vbrk-fkimg_i * -1.
                wa_data-total = ( ( wa_data-fkimg * wa_data-drate ) ) / wa_data-kpein.
                wa_data-inrvalue = wa_data-total * wa_data-kurrf.
              ENDIF.
            ELSE.
              IF wa_data-pstyv = 'TANN' OR wa_data-pstyv = 'ZMOH' OR wa_data-pstyv = 'ZPHY' OR wa_data-pstyv = 'ZSAM' OR wa_data-pstyv = 'ZZNN'.
                wa_data-fkimg = wa_vbrk-fkimg_i.
                wa_data-total = '0'.
                wa_data-inrvalue = wa_data-total * wa_data-kurrf.
              ELSE.
                wa_data-fkimg = wa_vbrk-fkimg_i.
                wa_data-total =   ( wa_data-fkimg * wa_data-drate )  / wa_data-kpein.
                wa_data-inrvalue = wa_data-total * wa_data-kurrf.
              ENDIF.
            ENDIF.
          ENDIF.
          SELECT SINGLE name1 land1 FROM kna1 INTO (wa_data-name1,wa_data-land1) WHERE kunnr = wa_data-kunrg.
          SELECT SINGLE landx FROM t005t INTO wa_data-landx WHERE land1 = wa_data-land1 AND spras = 'EN'.
          SELECT SINGLE wgbez FROM t023t INTO wa_data-wgbez WHERE matkl = wa_data-matkl.
          SELECT SINGLE bezei FROM tvm1t INTO wa_data-bezei WHERE mvgr1 = wa_data-mvgr3.
          SELECT SINGLE bezei FROM tvm1t INTO wa_data-bezei1 WHERE mvgr1 = wa_data-mvgr4.
          APPEND wa_data TO it_data.
        ENDLOOP.
      ENDIF.
    Please reply...
    Thanks,
    Regards,
    Jitesh

    Hi,
Don't put SELECT queries inside a loop; it hurts performance. First collect all the data into internal tables,
then read those tables when building the final table.
Also use ranges in the SELECT on KONV.
Also sort the tables and use index access, like this.
For example:
  w_index = sy-index + 1.
  READ TABLE it_vbrk INTO wa_vbrk INDEX w_index.
  IF sy-subrc EQ 0.
    w_tabix = sy-tabix + w_tabix.
    LOOP AT it_vbrk INTO wa_vbrk FROM w_tabix.
    ENDLOOP.
  ENDIF.
Or you can use a hashed table with unique keys.
Also check in SE30 whether the time is spent in your code or in the database fetches.
    Regards,
    Anagha Deshmukh
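Applying the "no SELECT inside a loop" advice to one of the lookups above might look like this sketch (it_makt and wa_makt are hypothetical names, not from the original code):

```
* fetch all material descriptions once, before the loop
SELECT matnr maktx FROM makt INTO TABLE it_makt
  FOR ALL ENTRIES IN it_vbrk
  WHERE matnr = it_vbrk-matnr_i
    AND spras = 'EN'.
SORT it_makt BY matnr.

LOOP AT it_vbrk INTO wa_vbrk.
* replaces SELECT SINGLE maktx FROM makt ... with an indexed read
  READ TABLE it_makt INTO wa_makt
       WITH KEY matnr = wa_vbrk-matnr_i BINARY SEARCH.
  IF sy-subrc = 0.
    wa_data-maktx = wa_makt-maktx.
  ENDIF.
ENDLOOP.
```

The same pattern applies to the skat, mara, tvkbt, tvgrt, and kna1 lookups in the loop.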

  • Tuning the performance of ObjectInputStream.readObject()

I'm using JWorks, which roughly corresponds to JDK 1.1.8, on my client (VxWorks) and JDK 1.4.2 on the server side (Windows, Tomcat).
I'm serializing a big Vector and compressing it using GZIPOutputStream on the server side, and doing the reverse on the client side to recreate the objects.
Server side:
Vector v = new Vector(50000); // filled with 50k different MyObject instances
ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(socket.getOutputStream()));
oos.writeObject(v);
Client side:
ObjectInputStream ois = new ObjectInputStream(new GZIPInputStream(socket.getInputStream()));
Vector v = (Vector) ois.readObject();
ObjectInputStream.readObject() at the client takes a long time (50 seconds) to complete, which is understandable as the client is a PIII-700MHz and the uncompressed vector is around 10 MB. The problem is that my Java thread runs with real-time priority (the default on VxWorks) and deprives other threads, including non-Java ones, for this whole 50-second period. This causes a watchdog thread to reset the system. I guess most of this delay is in GZIPInputStream, as part of the un-gzipping process.
My question is: is there a way to make ObjectInputStream.readObject() or any of the other connected streams (gzip and socket) yield the CPU to other threads once in a while? Or is there any other way to deal with the situation?
    Is the following a good solution?
class MyGZipInputStream extends GZIPInputStream {
    public int count = 0;
    public MyGZipInputStream(InputStream in) throws IOException {
        super(in); // GZIPInputStream has no no-arg constructor
    }
    public int read(byte[] b, int off, int len) throws IOException {
        if (++count % 10 == 0) // to avoid too many yields
            Thread.yield();
        return super.read(b, off, len);
    }
}
Thanks
--kannan

I'd be inclined to put the yielding input stream as close to the incoming data as possible, thus avoiding any risk that the time taken to read in and buffer the data causes the watchdog to trip.
I could do that. But as I'm only doing Thread.yield() once every x calls, would it help much?
Also, as I've now found out, Thread.yield() wouldn't give other low-priority tasks a chance to run. So I've switched to Thread.sleep(100) now, even though it could mean a performance hit.
Another related question: MyGZipInputStream.read(byte[], int, int) is called about 3 million times during the readObject() of my 10 MB vector. This would mean that ObjectInputStream is using a very small buffer size. Is there a way to increase it, other than overriding read() to call super.read() with a bigger buffer and then doing a System.arraycopy()?
Thanks
--kannan
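On the buffer-size question: ObjectInputStream issues many small reads, each of which goes straight to the GZIP stream. A common remedy, sketched here against a modern JDK rather than the thread's JDK 1.1.8, and using an in-memory byte array in place of the socket, is to put a BufferedInputStream between the GZIP stream and the ObjectInputStream so those small reads are served from one larger buffer:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Vector;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class BufferedGzipDemo {

    // Serialize a vector through gzip into a byte array (stands in for the socket).
    static byte[] write(Vector<String> v) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bytes));
        oos.writeObject(v);
        oos.close(); // also finishes the gzip trailer
        return bytes.toByteArray();
    }

    // Deserialize, with a 64 KB buffer between the gzip stream and
    // ObjectInputStream so readObject()'s many small reads hit the buffer.
    @SuppressWarnings("unchecked")
    static Vector<String> read(byte[] data) throws IOException, ClassNotFoundException {
        InputStream in = new BufferedInputStream(
                new GZIPInputStream(new ByteArrayInputStream(data)), 64 * 1024);
        return (Vector<String>) new ObjectInputStream(in).readObject();
    }

    public static void main(String[] args) throws Exception {
        Vector<String> v = new Vector<>();
        for (int i = 0; i < 1000; i++) {
            v.add("item-" + i);
        }
        Vector<String> copy = read(write(v));
        System.out.println(copy.size() + " " + copy.get(999)); // prints: 1000 item-999
    }
}
```

A yielding or sleeping wrapper stream can still be layered underneath the BufferedInputStream; the buffer simply cuts the number of calls that reach it.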

  • Tuning the Performance of Knowledge Management

    Hi
How can I tune the performance of Knowledge Management?

    Hi,
Please go through the following articles; they will help you.
    <a href="https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/2eeebb86-0c01-0010-10af-9ac7101da81c">How to tune the Performance of Knowledge Management1</a>
    <a href="https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3bcdedb3-0801-0010-a3af-cf9292d181b4">How to tune the Performance of Knowledge Management2</a>
    <a href="https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/002fb447-d670-2910-67a5-bfbbf0bb287d">How to tune the Performance of Knowledge Management3</a>
    Regards,
    Venkatesh. K
    /* Points are Welcome */

  • Tuning RMAN - duplicate active database with size 1.7TB

    Dear Oracle experts!
    We are going to have a project where we will migrate an ERP standalone database to RAC cluster database.
    The source standalone db version is 11gR3 with the latest PSU patch.
    The target database would be on the same version on a RAC cluster with ASM.
    Of course we want to use RMAN for duplication. The source box and the target cluster nodes are different physical servers.
Given that the database is around 1.7 TB, I would like to ask for best practices: which RMAN options can we leverage to reduce the time needed for the actual data migration? In the past I did several RAC migrations with 4 allocated channels. Here is a sample RMAN script:
run {
allocate channel dev1 device type disk;
allocate channel dev2 device type disk;
allocate channel dev3 device type disk;
allocate channel dev4 device type disk;
allocate auxiliary channel dev5 device type disk;
allocate auxiliary channel dev6 device type disk;
allocate auxiliary channel dev7 device type disk;
allocate auxiliary channel dev8 device type disk;
duplicate target database to NEW_RAC
from active database
password file
spfile
set control_files='+FRA_DG/NEW_RAC/controlfile/control01.ctl','+DATA_DG/NEW_RAC/controlfile/control02.ctl'
set db_create_file_dest='+DATA_DG'
set db_create_online_log_dest_1='+FRA_DG'
set db_recovery_file_dest='+FRA_DG'
set db_recovery_file_dest_size='1G';
}
    Of course there are many pre and post steps like configuring tnsnames for rman then creating undo tablespaces, rac services etc. I would like to ask you to focus on the main data migration part and let me know your suggestions.
    Thank you in advance,
    Laszlo

We are going to have a project where we will migrate an ERP standalone database to RAC cluster database.
This is what we do in similar situations:
    1. Full RMAN backup of the source database, and subsequent archivelogs backup every 2 hours
    2. Restore the full backup on the new server, changing file names (if necessary, as in your case)
    3. Apply all archivelogs generated in the meantime
    4. Archive the last log and stop the source database and Application Servers
    5. Apply the last log on the new server, and start the database
    6. Change what needed on Application Servers and start applications.
    The above requires a few minutes downtime. I use the same procedure for database upgrades, and in this case the downtime is, more or less, half an hour.
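The backup/restore/roll-forward procedure above might be sketched in RMAN roughly like this (paths are illustrative, and details such as SET NEWNAME for the file renames, SET UNTIL, and catalog registration are omitted):

```
# on the source, step 1: full backup, then archivelog backups every 2 hours
RMAN> BACKUP DATABASE FORMAT '/backup/db_%U';
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/backup/arch_%U';

# on the target, steps 2-3: restore and apply the archivelogs so far
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;

# steps 4-5: after the final archivelog from the stopped source is applied
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

The downtime is only the window between stopping the source and applying its last archivelog on the target.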

  • Can Managed Server self-tuning its performance without Admin Server?

    I have an web application deployed on a managed server.
    My observation is:
- If the Admin Server is not running, the application keeps accepting incoming external HTTP requests but delays processing them.
- If the Admin Server is running, the application starts processing soon after the incoming requests arrive.
Can anybody explain the reason behind this? Only the default work manager is used in the above case.
    Thanks

This seems like strange behaviour.
How is the application structured (app/webApp/EJBs/...)?
How is it targeted? Is the target only the managed instance?
    Bye
    Mariano

  • RMAN Backup is very slow

Recently we enabled RMAN on an Oracle Database 11gR2.
We are using Oracle Secure Backup 10.4, Solaris 10, and an SL24 tape autoloader.
We performed a full database backup of 610 GB using RMAN to tape, and it took 7 hours.
Could you please advise me how to fine-tune the performance of RMAN?
    Thanks in advance,
    veijar
    Edited by: veijar on Sep 6, 2012 11:03 PM

    Hello;
    I like this chapter :
    Tuning RMAN Performance
    http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmtunin.htm
    Additional information :
    http://www.oracle.com/technetwork/database/features/availability/s316928-1-175928.pdf
    http://juliandontcheff.wordpress.com/category/rman/
    http://levipereira.wordpress.com/2011/01/28/rman-performance-tuning-notes-my-oracle-support-metalink/
    http://levipereira.files.wordpress.com/2011/01/tuning_rman_buffer.pdf
    Best Regards
    mseberg
    Edited by: mseberg on Sep 7, 2012 11:28 AM

  • Alter system flushed shared pool in RMAN backup

    Hi,
I am trying to take an RMAN backup of an 11.2.0.1 database on an IBM AIX 6.1 server.
RMAN hangs: though the backup completes, the allocated channels don't get released and RMAN just sits there.
In earlier RMAN backup scripts,
the DBA was using ALTER SYSTEM FLUSH SHARED_POOL in the backup script and the backup succeeded.
Now my question is: does using ALTER SYSTEM FLUSH SHARED_POOL have any performance impact on the database?
    Regards,
    TEJAS

    TEJAS_DBA wrote:
    Hi,
I am trying to take an RMAN backup of an 11.2.0.1 database on an IBM AIX 6.1 server.
RMAN hangs: though the backup completes, the allocated channels don't get released.
Are you setting the large pool? If you don't, then RMAN uses the shared pool. Read about tuning RMAN performance in the docs.
>
In earlier RMAN backup scripts, the DBA was using ALTER SYSTEM FLUSH SHARED_POOL in the backup script and the backup succeeded.
Now my question is: does using ALTER SYSTEM FLUSH SHARED_POOL have any performance impact on the database?
Yes, you are allowing the components in there to be loaded in the random order of whatever is called first. This may have a good impact if you had some fragmentation in there, or it could be mildly bad if everything was well sorted, or it could be very bad if you are unlucky, have some pattern of invalidations, should be pinning something, or who-knows-what. It is generally considered not a good habit. You wind up with rainy Monday scenarios (http://tkyte.blogspot.com/2012/05/another-debugging-story.html).
Edit: I notice there are some bugs, including very slow performance when using a catalog. When you say hang, how long are you waiting? Have you considered current patches?
    Edited by: jgarry on Aug 8, 2012 11:09 AM
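The large-pool advice in the reply above can be applied with something like this (64M is an illustrative value; size it for your channel count and I/O buffer usage):

```
SQL> ALTER SYSTEM SET large_pool_size = 64M SCOPE = BOTH;
```

When I/O slaves are in use, RMAN takes its buffers from the large pool if one is configured, falling back to the shared pool otherwise; a properly sized large pool is often the real cure behind the flush-shared-pool habit.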

  • RMAN backup performance tuning

    Hi All,
We have an Oracle 9iR2 two-node RAC on Solaris 9. I am looking for some helpful material on RMAN backup performance tuning. Though I went through Oracle's official RMAN performance tuning guide, if there is some other really good material please let me know!
    TIA

Oracle's material is the best.
Anyway, the one given below is also a good performance-and-tuning book.
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm
I am looking for some helpful material on RMAN backup performance tuning
Tip: Give a good value to the large_pool_size buffer for better RMAN performance. For RAC, configure channels on multiple nodes for a parallel backup process through RMAN:
    CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
    CONFIGURE DEFAULT DEVICE TYPE TO sbt;
    CONFIGURE CHANNEL 1 DEVICE TYPE sbt CONNECT 'usr1/pwd1@n1';
    CONFIGURE CHANNEL 2 DEVICE TYPE sbt CONNECT 'usr2/pwd2@n2';
    CONFIGURE CHANNEL 3 DEVICE TYPE sbt CONNECT 'usr3/pwd3@n3';
    BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
