RMAN & hot backup cloning taking a long time

Dear DBAs,
We are doing both RMAN and hot backup cloning, and restoring the datafiles to the TEST instance takes a long time. scp takes too long to copy the files to the target server because of the data volume (1.5 TB), and mounting the RMAN backup drive on the test server and restoring from it takes more than 8 hours.
I have heard that some companies use a storage device to move large hot backup copies from source to destination so that an instance can be cloned within minutes. What is that device, which takes hot backup copies from the source and restores them on the target server?
Test server specs:
Processor: 8 cores, 32 GB memory.
Thanks
DBA

Hot backup cloning is not recommended for databases larger than 1 TB.
RMAN backup and restore is the main option available to a DBA.
Another option is to use EMC Symmetrix BCVs/snapshots. EMC uses a pair of disks to keep the production data in sync with the test server. Read more about EMC Symmetrix or ask your storage team.
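If the network link between the two servers is reasonable, an RMAN active duplicate avoids staging and copying backup pieces altogether. A minimal sketch (11g+ syntax; the connect strings, instance name TESTDB and channel names are placeholders for your environment, and the auxiliary instance must already have a password file and init.ora in place):

rman TARGET sys@PROD AUXILIARY sys@TESTDB
RUN {
  ALLOCATE CHANNEL prod1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL test1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO TESTDB
    FROM ACTIVE DATABASE
    NOFILENAMECHECK;
}

For 1.5 TB the copy is still bound by network and disk throughput, so the storage-level BCV/snapshot approach is what brings a clone down to minutes.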

Similar Messages

  • RMAN hot backup cloning problem..need a way out

    I took a hot backup from server A using RMAN.
    I prepared server A by running preclone on the apps and db tiers.
    I copied the apps tier and dbTier (tech_st) from server A to server B. The apps and db tiers were up and running during the copy. I also copied the RMAN backup sets.
    Now, on server B, what should I do? The RMAN backup is in backup sets and I need to restore it on server B (the target server).
    I am following note 406982.1 for the cloning steps.
    In the advanced cloning options, it doesn't specify whether to run adcfgclone on the apps tier too.
    Please guide me.
    Help appreciated.

    Hi,
    The document assumes that you are familiar with RMAN backups and that you know how to duplicate the database.
    Oracle Database Documentation
    http://www.oracle.com/technology/documentation/database.html
    Search this forum for "RMAN Duplicate"; many documents are referenced there, as this topic has been discussed many times before.
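    Since the backup sets have already been copied to server B, a targetless duplicate from that location is one way to do it. A minimal sketch (11gR2 syntax; the instance name TESTDB and the backup path are placeholders, and the auxiliary instance must already be started NOMOUNT with its own init.ora and password file):

    rman AUXILIARY /
    RUN {
      DUPLICATE DATABASE TO TESTDB
        BACKUP LOCATION '/u05/rman_backups'
        NOFILENAMECHECK;
    }

    After the duplicate completes, continue with the post-clone configuration steps from the note.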
    Regards,
    Hussein

  • HT201250: My Time Capsule is taking too much time indexing the backup, and the backup itself then says it will take 207 days or longer. What shall I do?

    My Time Capsule is taking too much time indexing the backup, and the backup itself then says it will take 207 days or longer. What shall I do?

    Try the 10.7.5 supplemental update.
    This update seems to have solved this problem for many.
    Best.

  • Level1 backup is taking more time than Level0

    The level 1 backup is taking more time than the level 0, and I am really frustrated; how could this happen? I have a 6.5 GB database. Level 0 took 8 hours, but level 1 is taking more than 8 hours. Please help me in this regard.

    Ogan Ozdogan wrote:
    Charles,
    Enabling block change tracking will indeed make it faster than what he currently has. However, I think this does not address the OP's question, unless you are saying that an incremental backup without block change tracking is slower than a level 0 (full) backup?
    Thank you in anticipation.
    Ogan

    Ogan,
    I can't explain why a 6.5 GB level 0 RMAN backup would require 8 hours to complete (maybe a very slow destination device connected by 10 Mb/s Ethernet); I would expect it to complete in a couple of minutes.
    An incremental level 1 backup without a block change tracking file can take longer than a level 0 backup. I once read a well-written description of why that can happen, but I can't locate the source at the moment. The longer run time may be related to the additional code paths required to constantly compare the SCN of each block, and to the variable write rate, which may affect some devices, such as tape drives.
    A paraphrase from the book "Oracle Database 10g RMAN Backup & Recovery":
    "Incremental backups must check the header of each block to discover if it has changed since the last incremental backup - that means an incremental backup may not complete much faster than a full backup."
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Rman hot backup

    Hello,
    I am using an RMAN hot backup script to back up the database every day. The problem is that the script works, but it does not delete backups older than 2 days.
    I also have a question: my database is in archivelog mode, and every day about 6-7 .arch files are generated in my archive directory.
    The old files are not deleted, but new files are generated every day, so they keep adding up and consuming space.
    SQL> show parameter archive
    NAME TYPE VALUE
    archive_lag_target integer 0
    log_archive_config string
    log_archive_dest string /u03/archive_logs/DEVL
    log_archive_dest_1 string
    Also, should I set log_archive_dest_1 as the archive location, or just log_archive_dest? What is the difference?
    My RMAN hot backup script (rman_backup.sh) is:
    #!/bin/bash
    # Declare your environment variables
    export ORACLE_SID=DEVL
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
    export PATH=$PATH:${ORACLE_HOME}/bin
    # Start the rman commands
    rman target=/ << EOF
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u03/backup/autobackup_control_file%F';
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;
    run {
    allocate channel d1 type disk;
    allocate channel d2 type disk;
    allocate channel d3 type disk;
    allocate channel d4 type disk;
    ALLOCATE CHANNEL RMAN_BACK_CH01 TYPE DISK;
    CROSSCHECK BACKUP;
    BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/u03/backup/databasefiles_%d_%u_%s_%T';
    sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
    BACKUP AS COMPRESSED BACKUPSET ARCHIVELOG ALL FORMAT '/u03/backup/archivelogs_%d_%u_%s_%T' DELETE INPUT;
    BACKUP AS COMPRESSED BACKUPSET CURRENT CONTROLFILE FORMAT '/u03/backup/controlfile_%d_%u_%s_%T';
    CROSSCHECK BACKUP;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT EXPIRED BACKUP;
    RELEASE CHANNEL RMAN_BACK_CH01;
    }
    EXIT;
    EOF
    thanks

    Ahmer Mansoor wrote:
    RMAN never deletes the backups unless there is space pressure in the recovery area. Instead it marks backups as OBSOLETE based on the retention policy (in your case, 2 days). To confirm it, set DB_RECOVERY_FILE_DEST_SIZE to some smaller value and RMAN will remove the obsolete backups automatically to reclaim space.
    Be very careful with this. If you generate a LOT of archivelog files and you exceed this size, on the next archivelog switch your database will hang with "cannot continue until archiver freed". RMAN will not automatically remove anything; RMAN only removes things when you program it in your script.
    See:
    http://docs.oracle.com/cd/E14072_01/backup.112/e10642/rcmconfb.htm#insertedID4 Retention Policy (recovery window or redundancy)
    things like:
    set retention window and number of copies
    crosscheck backup
    delete obsolete <-- delete old, redundant, no longer necessary backups/archivelogs
    delete expired <-- NOTE: If you manually delete files and do not execute delete expired (missing file), the DB_RECOVERY_FILE_DEST_SIZE remains the same. So, you can clean out the space and oracle will still say the location is "full".
    Understand that if you also set this parameter too small and your backup recovery window/redundancy are incorrectly set, you can also exhaust the "logical" space of this location again, putting your database at risk. Your parameter could be set to 100G on a 400G file system and even though you have 300G available, Oracle will see the limit of this parameter.
    My suggestion, get in a DEV/TEST environment and test to see how to best configure your environment for RMAN database backups/control file, archivelog backups also taking into consideration OS tape backup solutions. I always configure DISK for RMAN backups, then have some other tape backup utility sweep those locations to tape ensuring that I have sufficient backups to reconstitute my database - I also include a copy of the init.ora file, password file as well as the spfile backup in this location.
    Ahmer also suggested that, in the case of archivelogs, it is better to create and run a purge job to remove the archivelogs after backing them up to tape. I almost agree. I try to keep all the archivelogs necessary for recovery from the last full backup online, and I try to keep a full backup online as well; it is much faster to restore from disk than to locate the pieces on tape.
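    The sequence described above, as a minimal sketch (the 2-day window matches the script in the question):

    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;
    CROSSCHECK BACKUP;
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT EXPIRED BACKUP;
    DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;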

  • Hot Backup Cloning vs Cold Backup Cloning

    Hi,
    When we do cold backup cloning, we get the same data as the production database.
    When we do hot backup cloning, I think we will lose the current transactions that are recorded in the redo log and not yet archived. Am I right?
    When we do hot backup cloning, all the datafile headers are frozen, meaning the latest changes are not copied while we copy the datafiles to the test machine; but those changes are logged in the archived log files, so we will get them from the archived logs.
    Am I right? Please correct me if I am wrong.

    When all is said and done, there is really no difference between a clone database made from a cold backup and one made from a hot backup, as both are point-in-time recoveries.
    In the case of the cold backup, the source database was made unavailable to users while the backup was taken. If you have that luxury then that is fine, but many sites need or want to keep the source available while the backup runs.
    The backup type should not be an issue unless you really need to capture the source database with the data in a specific known state where no changes were taking place.
    IMHO -- Mark D Powell --

  • User-managed hot backup and RMAN hot backup

    During a user-managed hot backup, a lot of redo is generated because the entire block is written to the redo log when any change is made to a block while it is in hot backup mode. But during an RMAN hot backup, less redo is generated. Why is this so, and what is the logic involved? And how does Oracle recover a file that has been backed up through RMAN?
    Could you please explain this in detail? It would be really helpful.
    kumaresh

    From Note 76736.1, "RMAN FAQ: Recovery Manager -- Frequently Asked Questions":
    To understand why RMAN does not require extra logging or backup mode,
    you must first understand why those features are required for non-RMAN
    online backups.
    A non-RMAN online backup consists of a non-Oracle tool, such as cp or
    dd, backing up a datafile at the same time that DBWR is updating the
    file. We can't prevent the tool from reading a particular block at the
    exact same time that DBWR is updating that block. When that happens,
    the non-Oracle tool might read a block in a half-updated state, so that
    the block which is copied to the backup media might only have been
    updated in its first half, while the second half contains older data.
    This is called a "fractured block". If this backup needs to be restored
    later, and that block needs to be recovered, recovery will fail because
    that block is not usable.
    The 'alter tablespace begin backup' command is our solution for the
    fractured block problem. When a tablespace is in backup mode, and a
    change is made to a data block, instead of logging just the changed
    bytes to the redo log, we also log a copy of the entire block image
    before the change, so that we can reconstruct this block if media
    recovery finds that this block was fractured. That block image logging
    is what causes extra redo to be generated while files are in backup
    mode.
    The reason that RMAN does not require extra logging is that it
    guarantees that it will never back up a fractured block. We can make
    that guarantee because we know the format of Oracle data blocks, and we
    verify that each block that we read is complete before we copy it to the
    backup. If we read a fractured block, we read the block again to obtain
    a complete block before backing it up. non-Oracle tools are not able to
    do the same thing because they do not know how to verify the contents of
    an Oracle data block.
    Backup mode has another effect, which is to 'freeze' the checkpoint in
    the header of the file until the file is removed from backup mode.
    We do this because we cannot guarantee that the third-party backup
    tool will copy the file header prior to copying the data blocks.
    RMAN does not need to freeze the file header checkpoint because we
    know the order in which we will read the blocks, which enables us to
    capture a known good checkpoint for the file.
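    For comparison, a user-managed hot backup of a single tablespace follows this pattern (the tablespace name is just an example):

    ALTER TABLESPACE users BEGIN BACKUP;
    -- copy the tablespace's datafiles with an OS tool (cp, scp, ...)
    ALTER TABLESPACE users END BACKUP;
    ALTER SYSTEM ARCHIVE LOG CURRENT;

    The window between BEGIN BACKUP and END BACKUP is when the extra whole-block redo described above is generated.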

  • SSRS Reports taking long time to load

    Hello,
    Problem : SSRS Reports taking long time to load
    My System environment : Visual Studio 2008 SP1  and SQL Server 2008 R2
    Production Environment : Visual Studio 2008 SP1  and SQL Server 2008 R2
    I have created a parameterized report (6 parameters); it fetches data from one table. The table holds 1 year and 6 months of data, and I am selecting parameters for only 1 month (about 2,500 records). It is taking almost 2 minutes and 30 seconds to load the report.
    This report runs efficiently on my system (the report loads in only 5 to 6 seconds), but in production it takes 2 minutes 30 seconds.
    I checked the execution log from production and found these approximate timings:
    Data retrieval: ~10 seconds; Processing: ~15 seconds; Rendering: ~2 minutes 5 seconds.
    The confusing point is that if I run the same report at a different time, the overall time is the same (approximately 2 min 30 sec), but the split is:
    Data retrieval: more than 1 minute; Processing: ~15 seconds; Rendering: more than 1 minute.
    So one question: why are the timings different?
    My doubts are:
    1) If the query (the procedure that retrieves the data) is the problem, then it should always take more time.
    2) If the report structure is the problem, then rendering should also always take the same (long) time.
    For the second point, I read on a blog that rendering depends on the environment, e.g. network bandwidth, RAM, CPU usage, and the number of users accessing the same report at a time.
    So I tested the report when no other user was working on any report, but that failed too (same result: 2 min 30 sec).
    From the network team I learned that there is no issue or overload in CPU usage or RAM, and no issue with network bandwidth.
    The production database server and report server are different machines (but on the same network).
    I checked that, on the database server, SQL Server is using almost all the RAM (23 GB out of 24 GB).
    I tried reducing the allocated memory to 2 GB (a trial solution I found on blogs), but this also failed.
    One hint I got from a colleague was to change the allocated memory setting for SQL Server from static to dynamic (I guess this is the same as the point above), but I could not find a static/dynamic memory option.
    I did the following steps:
    Connected to the SQL Server instance.
    Right-clicked the instance, went to Properties, then the Memory tab.
    I found three options: 1) Server memory, 2) Other memory, 3) a section for "Configured values and running values".
    Then I tried to reduce the maximum server memory to 2 GB (as mentioned above).
    All trials failed, and I could not find the root cause of this issue.
    Can anyone please help (it's a bit urgent)?

    Hi UdayKGR,
    According to your description, your report takes too long to load in your production environment. Right?
    In this scenario, since the report runs quickly in the development environment, we initially suspected an issue with data retrieval. However, based on the information in the execution log, the rendering part takes the longest time. So we suggest you optimize the report itself to reduce rendering time. Please refer to the link below:
    My report takes too long to render
    Here is another article about overall performance optimization for Reporting Services:
    Reporting Services Performance and Optimization
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • Connecting to the database server is taking a long time

    Hi,
    When I execute a procedure I get the message "Connecting to the database" at the bottom of Oracle SQL Developer.
    It is taking more than 10 minutes. Please guide me.

    Hi,
    Have you installed a normal Oracle Client on your host as well? Yes, a normal Oracle Client.
    Did you connect with host:port:sid or with an Oracle naming service? Through a TNS service.
    Can you test tnsping <alias>? Yes, it works fine.
    Do other users have the same problem? Yes.
    Did you connect through a WAN or LAN connection? LAN (intranet).
    Can you tell us more about your client/database setup?
    Database setup:
    OS: Windows 2008 Server
    Version: 11.1.0
    Client: 11.1.0
    OS: Windows 2008 Server
    Now I am not able to execute even a single SELECT query; the table contains 6 records and 15 columns, yet it takes a long time. I have waited 30 minutes and still get no results.
    Only one table is behaving like this; the rest are working fine.
    Edited by: user9235224 on Oct 6, 2012 7:06 PM

  • The ODS activation is taking long time

    Hi,
    We are on SAP NetWeaver BI 701 (Support Package 5).
    We created a Z ODS; it will contain a lot of data (180,000,000 records at month-end) and we want to generate specific reports on it.
    The activation is taking a long time. I assume it is because we checked the flag "SIDs Generation upon Activation". I am confused about this flag. Do I really need it? Is this flag the only problem?
    Thanks for your help.
    Victoria

    Hi Victoria:
       If your Z DSO is used only for staging purposes (you don't have queries based on this DSO and you send the data to another DSO or to an InfoCube) then you don't need to check the "SIDs Generation Upon Activation" box.
    Moreover, to achieve better performance during data loads in this scenario, you might consider using a write-optimized DSO instead of a standard DSO. If you take this alternative, don't forget to select the "Do Not Check Uniqueness of Data" box if you need to write several records with the same semantic key.
    Regards,
    Francisco Milán.

  • F4 Help is taking long time

    Hi All,
    We are working on BI version 7.0.
    In the variable pop-up screen we have two InfoObjects:
    1. Fiscal Year Period
    2. JOA (Joint Operating Agreement)
    If you press F4 on JOA, it takes a long time to execute and finally the application closes. The situation is the same in RSRT.
    If I run the query without JOA it produces output, but here I have to restrict the query by JOA.
    I have changed the JOA properties in the query designer:
    Query execution for filter value selection = Values in master data table
    but the situation is still the same.
    Could you please suggest a solution?
    Thanks & Regards,
    PK

    Hi Kamal,
    You can set that at the query level in the query designer for each query.
    1. Select the corresponding characteristic in the query designer.
    2. Goto to the "Extended tab" in the properties
    3. Select "Values in the master data table" in "Query execution for filter value selection".
    Also see these recommendations:
    Note 748623 - Input help (F4) has a very long runtime - recommendations
    Hope this helps.
    CK

  • Update ztable is taking long time

    Hi All,
    I ran 5 jobs with the same program at the same time, but when we check the DB trace,
    ZS01 is taking a long time, as shown below, even though ZS01 holds only a small amount of data.
    In the DB trace below, the fetch on ZS01 shows a duration of 2,315,485. How can I reduce this?
    HH:MM:SS.MS Duration     Program   ObjectName  Op.   Curs   Array   Rec     RC     Conn     
    2:36:15 AM     2,315,485     SAPLZS01  ZS01       FETCH  294     1     1     0     R/3     
    The code is shown below; you can find it in program SAPLZS01, include LZS01F01.
    FORM UPDATE_ZS01.
    IF ZS02-STATUS = '3'.
        IF Z_ZS02_STATUS = '3'.            "previous status is ERROR
          EXIT.
        ELSE.
          SELECT SINGLE FOR UPDATE * FROM  ZS01
                 WHERE  PROC_NUM    = ZS02-PROC_NUM.
          CHECK SY-SUBRC = 0.
          ADD ZS02-MF_AMT TO ZS01-ERR_AMT.
          ADD 1           TO ZS01-ERR_INVOI.
          UPDATE ZS01.
        ENDIF.
      ENDIF.
    My question is: why does updating this Z table take such a long time, and how can I reduce the time or make the update faster?
    Thanks in advance,
    regards
    Suni

    Try the code like this:
    DATA: wa_zs01 TYPE zs01.
    FORM UPDATE_ZS01.
      IF ZS02-STATUS = '3'.
        IF Z_ZS02_STATUS = '3'.            "previous status is ERROR
          EXIT.
        ELSE.
          "Change: read into an explicit work area and update from it
          SELECT SINGLE FOR UPDATE * FROM ZS01 INTO wa_zs01
                 WHERE PROC_NUM = ZS02-PROC_NUM.
          CHECK SY-SUBRC = 0.
          ADD ZS02-MF_AMT TO wa_zs01-ERR_AMT.
          ADD 1           TO wa_zs01-ERR_INVOI.
          UPDATE ZS01 FROM wa_zs01.
        ENDIF.
      ENDIF.
    ENDFORM.
    I also think this SELECT on ZS01 sits inside the SELECT loop on ZS02, which might slow the process down as well.
    When you access the database, always fetch the data into a work area or internal table and work with that.
    Accessing the database like this, or with SELECT ... ENDSELECT, is inefficient programming.

  • Simple query is taking long time

    Hi Experts,
    The below query is taking long time.
    [code]SELECT   FS.*
      FROM   ORL.FAX_STAGE FS
             INNER JOIN
                   ORL.FAX_SOURCE FSRC
                INNER JOIN
                   GLOBAL_BU_MAPPING GBM
                ON GBM.BU_ID = FSRC.BUID
             ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
    WHERE       FSRC.IS_DELETED = 'N'
             AND GBM.BU_ID IS NOT NULL
             AND UPPER (FS.FAX_STATUS) ='COMPLETED';[/code]
    This query returns 1,645,457 records.
    [code]PLAN_TABLE_OUTPUT
    | Id  | Operation           | Name                   | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT    |                        |   625K|   341M| 45113   (1)|
    |   1 |  HASH JOIN          |                        |   625K|   341M| 45113   (1)|
    |   2 |   NESTED LOOPS      |                        |   611 | 14664 |    22   (0)|
    |   3 |    TABLE ACCESS FULL| FAX_SOURCE             |  2290 | 48090 |    22   (0)|
    |   4 |    INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID |     1 |     3 |     0   (0)|
    |   5 |   TABLE ACCESS FULL | FAX_STAGE              |  2324K|  1214M| 45076   (1)|
    PLAN_TABLE_OUTPUT
    Note
       - 'PLAN_TABLE' is old version
    15 rows selected.[/code]
    The distinct number of records in each table.
    [code]SELECT FAX_STATUS,count(*)
    FROM fax_STAGE
    GROUP BY FAX_STATUS;
    FAX_STATUS    COUNT(*)
    BROKEN          10
    Broken - New    9
    Completed    2324493
    New             20
    SELECT is_deleted,COUNT(*)
    FROM  FAX_SOURCE
    GROUP BY IS_DELETED;
    IS_DELETED COUNT(*)
    N         2290
    Y         78[/code]
    Total number of records in each table.
    [code]SELECT COUNT(*) FROM ORL.FAX_SOURCE FSRC-- 2368
    SELECT COUNT(*) FROM ORL.FAX_STAGE--2324532
    SELECT COUNT(*) FROM APPS_GLOBAL.GLOBAL_BU_MAPPING--9
    [/code]
    To improve the performance of this query I have created the following indexes.
    [code]Functional based index on UPPER (FSRC.FAX_NUMBER) ,UPPER (FS.DESTINATION) and UPPER (FS.FAX_STATUS).
    Bitmap index on FSRC.IS_DELETED.
    Normal Index on GBM.BU_ID and FSRC.BUID.
    [/code]
    But the performance of this query is still bad.
    What else can I do to improve the performance of this query?
    Please help me.
    Thanks in advance.

    I have created the following indexes:
    CREATE INDEX ORL.IDX_DESTINATION_RAM ON ORL.FAX_STAGE(UPPER("DESTINATION"))
    CREATE INDEX ORL.IDX_FAX_STATUS_RAM ON ORL.FAX_STAGE(LOWER("FAX_STATUS"))
    CREATE INDEX ORL.IDX_UPPER_FAX_STATUS_RAM ON ORL.FAX_STAGE(UPPER("FAX_STATUS"))
    CREATE INDEX ORL.IDX_BUID_RAM ON ORL.FAX_SOURCE(BUID)
    CREATE INDEX ORL.IDX_FAX_NUMBER_RAM ON ORL.FAX_SOURCE(UPPER("FAX_NUMBER"))
    CREATE BITMAP INDEX ORL.IDX_IS_DELETED_RAM ON ORL.FAX_SOURCE(IS_DELETED)
    After creating these indexes, performance improved.
    But our DBA said that the new bitmap index on the FAX_SOURCE table (ORL.IDX_IS_DELETED_RAM) can cause locks
    on multiple rows if the IS_DELETED column is being modified, and asked us to proceed with detailed tests.
    I am sending the explain plans from before and after the indexes were created.
    SELECT  FS.*
    FROM  ORL.FAX_STAGE FS
                    INNER JOIN
                    ORL.FAX_SOURCE FSRC
                  INNER JOIN
                      GLOBAL_BU_MAPPING GBM
                    ON GBM.BU_ID = FSRC.BUID
                ON UPPER (FSRC.FAX_NUMBER) = UPPER (FS.DESTINATION)
    WHERE      FSRC.IS_DELETED = 'N'
              AND GBM.BU_ID IS NOT NULL
              AND UPPER (FS.FAX_STATUS) =:B1;
    --OLD without indexes
    PLAN_TABLE_OUTPUT
    Plan hash value: 3076973749
    | Id  | Operation          | Name                  | Rows  | Bytes | Cost (%CPU)| Time    |
    |  0 | SELECT STATEMENT    |                        |  141K|    85M| 45130  (1)| 00:09:02 |
    |*  1 |  HASH JOIN          |                        |  141K|    85M| 45130  (1)| 00:09:02 |
    |  2 |  NESTED LOOPS      |                        |  611 | 18330 |    22  (0)| 00:00:01 |
    |*  3 |    TABLE ACCESS FULL| FAX_SOURCE            |  2290 | 59540 |    22  (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN | GLOBAL_BU_MAPPING_BUID |    1 |    4 |    0  (0)| 00:00:01 |
    |*  5 |  TABLE ACCESS FULL | FAX_STAGE              | 23245 |    13M| 45106  (1)| 00:09:02 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
      1 - access(UPPER("FSRC"."FAX_NUMBER")=UPPER("FS"."DESTINATION"))
      3 - filter("FSRC"."IS_DELETED"='N')
      4 - access("GBM"."BU_ID"="FSRC"."BUID")
          filter("GBM"."BU_ID" IS NOT NULL)
      5 - filter(UPPER("FS"."FAX_STATUS")=SYS_OP_C2C(:B1))
    21 rows selected.
    --NEW with indexes.
    PLAN_TABLE_OUTPUT
    Plan hash value: 665032407
    | Id  | Operation                        | Name                    | Rows  | Bytes | Cost (%CPU)| Time    |
    |  0 | SELECT STATEMENT                |                          |  5995 |  3986K|  3117  (1)| 00:00:38 |
    |*  1 |  HASH JOIN                      |                          |  5995 |  3986K|  3117  (1)| 00:00:38 |
    |  2 |  NESTED LOOPS                  |                          |  611 | 47658 |    20  (5)| 00:00:01 |
    |*  3 |    VIEW                          | index$_join$_002        |  2290 |  165K|    20  (5)| 00:00:01 |
    |*  4 |    HASH JOIN                    |                          |      |      |            |      |
    |*  5 |      HASH JOIN                  |                          |      |      |            |      |
    PLAN_TABLE_OUTPUT
    |  6 |      BITMAP CONVERSION TO ROWIDS|                          |  2290 |  165K|    1  (0)| 00:00:01 |
    |*  7 |        BITMAP INDEX SINGLE VALUE | IDX_IS_DELETED_RAM      |      |      |            |      |
    |  8 |      INDEX FAST FULL SCAN      | IDX_BUID_RAM            |  2290 |  165K|    8  (0)| 00:00:01 |
    |  9 |      INDEX FAST FULL SCAN        | IDX_FAX_NUMBER_RAM      |  2290 |  165K|    14  (0)| 00:00:01 |
    |* 10 |    INDEX RANGE SCAN              | GLOBAL_BU_MAPPING_BUID  |    1 |    4 |    0  (0)| 00:00:01 |
    |  11 |  TABLE ACCESS BY INDEX ROWID    | FAX_STAGE                | 23245 |    13M|  3096  (1)| 00:00:38 |
    |* 12 |    INDEX RANGE SCAN              | IDX_UPPER_FAX_STATUS_RAM |  9298 |      |  2434  (1)| 00:00:30 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
      1 - access(UPPER("DESTINATION")="FSRC"."SYS_NC00035$")
      3 - filter("FSRC"."IS_DELETED"='N')
      4 - access(ROWID=ROWID)
      5 - access(ROWID=ROWID)
      7 - access("FSRC"."IS_DELETED"='N')
      10 - access("GBM"."BU_ID"="FSRC"."BUID")
          filter("GBM"."BU_ID" IS NOT NULL)
      12 - access(UPPER("FAX_STATUS")=SYS_OP_C2C(:B1))
    31 rows selected
    Please confirm the DBA's comment: does this bitmap index lock rows in my case?
    Thanks.

  • fwrite() and fread() on a shared FAT32-formatted file are taking a long time in a Mac OS X Lion C program

    Hi,
    Is there any provision or API on the Mac to open a file in shared mode, the same as on Windows:
       hUSBdrive = CreateFile(pDriveName, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    We have the following scenario, where a file is shared between two processes for read/write: one process runs on Linux and the other on a Mac, and both read/write the same location in the file, say "X".
    The FAT32-formatted raw data file, which is located on the device, is shared between the two processes.
    One process runs on a Linux device that is connected to the MacBook through USB. In this Linux process the file is opened using fopen(), and we have used fcntl() with the O_DIRECT flag. This process continuously reads/writes data at location "X" in the shared file.
    The other process runs on the Mac and is a simple C program that opens the file on the connected device (i.e. from the USB drive) and reads/writes data using fread()/fwrite(). fopen() is used to open the file and the FILE_NOCACHE flag is used to avoid caching.
    When the value at location "X" is updated by the Mac using fwrite(), the Linux process takes around 30 seconds to see the updated value with fread().
    If the value at location "X" is updated by the Linux process using fwrite(), the Mac process also takes a long time, more than a minute, to read the updated value using fread().
    fwrite()/fread() on the Mac take a long time, whereas a Windows application using the same APIs takes milliseconds.
    Do we need to use other APIs or flags to open the file?
    Thanks in advance.

    Does anyone else face this kind of problem, where fwrite() and fread() take a long time?
    Is there any problem with reading/writing a FAT32 file on the Mac?

  • RSPCM is taking long time

    Hi,
    RSPCM is taking a long time to open, while RSPC is working fine.
    When I try to change the process chain status through RSPC_PROCESS_FINISH, it executes for a long time with no response. I also tried executing it in the background; it has been running for the past 2 days with no progress.
    Our Basis team created indexes on all the backend tables of RSPCM, but the issue persists. Please suggest something to get rid of this.
    Br,
    Harish

    Hi,
    Please check the thread below:
    RSPCM T-Code was executing very slow
    Also check note 1372931.
    Hope it helps!
    Edited by: Lavanya J on Nov 4, 2011 1:32 PM
