Table Comparison Performance

Does anyone have experience using the Table Comparison transform for large data volumes, e.g. 10M records?
How is the performance?
Any experience and suggestions are welcome.

Performance will depend on a lot of factors, but the two big ones are:
1. how many columns you are using in your compare (more columns will be slower)
2. how you do the compare (sorted input is WAY faster than row comparison)
Without knowing any details about your data or the process you are trying to implement, it's difficult to make additional recommendations. But keep in mind that there are other options, especially for really big data sets.
For example, you can always use a two-step process that first deletes existing records matching the incoming set of primary keys, and then does a straight insert of the rows. This avoids the whole comparison step, and avoids doing updates, which are much slower than inserts.
However, this only works if you are replacing the existing records.  It wouldn't work if your table compare is part of a type 2 dimension load or something that requires you to track history.
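A rough sketch of the two-step approach in plain SQL (the table and column names here are placeholders, not from the post):

    -- Step 1: delete existing rows that match the incoming primary keys
    DELETE FROM target_table t
     WHERE EXISTS (SELECT 1
                     FROM staging_table s
                    WHERE s.pk_id = t.pk_id);
    -- Step 2: straight insert of the incoming rows (no compare, no updates)
    INSERT INTO target_table (pk_id, col_a, col_b)
    SELECT pk_id, col_a, col_b
      FROM staging_table;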

Similar Messages

  • Erratic behaviour of Map Operation after Table Comparison

    BODI XIR2 11.7.3.6
    We want to detect and store changes in source data and store these changes in the target table.
    After the table comparison the row has got an update operation code flag and goes to a Map operation that converts Update row types to Normal and discards all other row types.
    normal -> discard
    update -> normal
    insert -> discard
    delete -> discard
    The map operation behaviour is erratic: sometimes the update row is mapped to normal and sometimes the update is discarded.

    I would be surprised if that is the case. You could run the dataflow in debug mode, because there you can see the data and the opcode flag (insert/update/delete) after the Table Comparison transform.

  • Table Comparisons and Deadlocks

    Post Author: Thang Nguyen
    CA Forum: Data Integration
    Hi ,
    Pretty new to this DI stuff, but I've got a dataflow where I'm using a Table Comparison Transform to work out my updates and inserts. My database is SQL server 2000.
    When it runs the Table Comparison I get SQL errors about a deadlock victim and the insert fails. I've run a trace on SQL Server and the insert statement is being blocked by a select statement, so it looks like some sort of issue with the Table Comparison looking for the differences and inserting new rows at the same time.
    I've tried to split the operation into two dataflows using Map Operation, where one does the updates and the other does the inserts, but I still get the deadlock issue.
    Has anyone else experienced this problem?
    Thanks
    Thang

    Post Author: Thang Nguyen
    CA Forum: Data Integration
    If anyone is interested, the solution I got from BO is:
    "Can you put the following parameter in your DSConfig / al_engine section: SQLServerReadUncommitted=1"
    Beware that this changes the SQL Server transaction isolation level to allow dirty reads, which isn't ideal.
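    For reference, in plain T-SQL terms that parameter amounts to reading with dirty reads, along these lines (an illustration only, not the DI internals; the table name is a placeholder):

        -- Session-level equivalent of what the parameter allows:
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        -- Or per query, via a table hint:
        SELECT * FROM target_table WITH (NOLOCK);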

  • Using table comparison transform can you point to multiple tables as target

    Using table comparison transform can you point to multiple tables as target tables?
    Thank you very much for the helpful info.

    If you want to feed the output to multiple tables you can do so, but you have to be careful about which table is used as the comparison table in this case. The comparison table set inside Table Comparison is compared against the input data set to generate the opcodes (Insert / Update / Delete) for the input rows.

  • Using table comparison can we use multiple tables as source?

    Using table comparison can we use multiple tables as source?
    Thank you very much for the helpful info.

    Table Comparison
    1) Input Data coming in
    2) Comparison table (table to which the data is compared)
    3) Output (input rows with respective opcodes based on the comparison result of input dataset with the comparison table)
    If your question is whether Table Comparison can accept a union/join of multiple table sources, you can achieve this by using Merge/Query transforms and then feeding the result into Table Comparison. Here you have to be careful about choosing the primary keys inside Table Comparison; see the sketch below.
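    In SQL terms, what a Merge transform feeds into Table Comparison is roughly a UNION ALL of the sources (the names here are illustrative):

        -- Combined input data set; the chosen primary key (cust_id) must
        -- remain unique across both sources or the opcodes will be unreliable.
        SELECT cust_id, cust_name, city FROM source_a
        UNION ALL
        SELECT cust_id, cust_name, city FROM source_b;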

  • I have a source table with 10 records and a target table with 15 records. Using the Table Comparison transform, can I delete unmatched records from the target table?

    I have a source table with 10 records and a target table with 15 records. My question is: using the Table Comparison transform, how can I delete the unmatched records from the target table?

    Hi Kishore,
    First identify the deleted records by selecting the "Detect deleted rows from comparison table" option in Table Comparison.
    Then use a Map Operation with input row type "delete" mapped to output row type "delete" to delete those records from the target table.
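    In SQL terms, the net effect of that flow is the classic anti-join delete (table and key names are illustrative):

        -- Delete target rows whose key no longer exists in the source
        DELETE FROM target_table t
         WHERE NOT EXISTS (SELECT 1
                             FROM source_table s
                            WHERE s.pk_id = t.pk_id);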

  • COLLATE Error on Table Comparison

    Hi,
    We have just upgraded from version 11.7.3 to version 12.2.2 and we are getting this error when running the Table Comparison transform with sorted input:
    "Expression type int is invalid for COLLATE clause. "
    I asked our DBA to see what BODS was passing through, and it is sending this:
    WHERE ( "TCRdr_1"."VEHICLE_ID_NK"  >= @P1  COLLATE Latin1_General_BIN)
    Why is it using COLLATE, and more importantly, is there any way to control this?
    The database is SQL Server 2005.
    Thanks

    This issue is fixed in 12.2.3. There were some issues prior to 12.2.3 related to an Access Violation with SQL Server 2008 as the target, for DATETIME and DATETIME2 datatypes in the target table.
    Since you mentioned you are using SQL Server 2005: do you get the Access Violation in a particular case, and is the issue consistently reproducible? Please file a support case for the Access Violation issue, or give me a reproducible scenario so that I can file a bug for it.
    If you don't want to apply 12.2.3, you will see this issue for every table compare with sorted input whose key columns have a datatype other than CHAR, VARCHAR, NCHAR or NVARCHAR.
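    As a side note, the error itself is just SQL Server rejecting COLLATE on a non-character expression; a minimal repro in plain T-SQL (nothing to do with BODS internals):

        SELECT 1 WHERE 5 >= 3 COLLATE Latin1_General_BIN;
        -- Fails: "Expression type int is invalid for COLLATE clause."
        SELECT 1 WHERE 'b' >= 'a' COLLATE Latin1_General_BIN;
        -- Works: COLLATE applies only to character data.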

  • SQLDeveloper 1.5.4 Table browsing performance issue

    Hi all,
    I had read previous posts regarding SQLDeveloper 1.5.3 table browsing performance issues. I downloaded and installed version 1.5.4 and it appears the problem has gotten worse!
    It takes ages to display rows of this particular table (the structure is shown below), and much longer to view it in Single Record format. Attempting to Export the data is another frustrating exercise. By the way, TOAD does not seem to have this problem, so I guess it is a SQLDeveloper bug.
    Can someone help with any workarounds?
    Thanks
    Chiedu
    Here is the table structure:
    create table EMAIL_SETUP (
    APPL_ID VARCHAR2(10) not null,
    EML_ID VARCHAR2(10) not null,
    EML_DESC VARCHAR2(80) not null,
    PRIORITY_NO_DM NUMBER(1) default 3 not null
    constraint CC_EMAIL_SETUP_4 check (
    PRIORITY_NO_DM in (1,2,3,4,5)),
    DTLS_YN VARCHAR2(1) default '0' not null
    constraint CC_EMAIL_SETUP_5 check (
    DTLS_YN in ('0','1')),
    ATT_YN VARCHAR2(1) default '0' not null
    constraint CC_EMAIL_SETUP_6 check (
    ATT_YN in ('0','1')),
    MSG_FMT VARCHAR2(5) default 'TEXT' not null
    constraint CC_EMAIL_SETUP_7 check (
    MSG_FMT in ('TEXT','HTML')),
    MSG_TMPLT VARCHAR2(4000) not null,
    MSG_MIME_TYPE VARCHAR2(500) not null,
    PARAM_NO NUMBER(2) default 0 not null
    constraint CC_EMAIL_SETUP_10 check (
    PARAM_NO between 0 and 99),
    IN_USE_YN VARCHAR2(1) not null
    constraint CC_EMAIL_SETUP_11 check (
    IN_USE_YN in ('0','1')),
    DFLT_USE_YN VARCHAR2(1) default '0' not null
    constraint CC_EMAIL_SETUP_12 check (
    DFLT_USE_YN in ('0','1')),
    TAB_NM VARCHAR2(30) null ,
    FROM_ADDR VARCHAR2(80) null ,
    RPLY_ADDR VARCHAR2(80) null ,
    MSG_SBJ VARCHAR2(100) null ,
    MSG_HDR VARCHAR2(2000) null ,
    MSG_FTR VARCHAR2(2000) null ,
    ATT_TYPE_DM VARCHAR2(4) null
    constraint CC_EMAIL_SETUP_19 check (
    ATT_TYPE_DM is null or (ATT_TYPE_DM in ('RAW','TEXT'))),
    ATT_INLINE_YN VARCHAR2(1) null
    constraint CC_EMAIL_SETUP_20 check (
    ATT_INLINE_YN is null or (ATT_INLINE_YN in ('0','1'))),
    ATT_MIME_TYPE VARCHAR2(500) null ,
    constraint PK_EMAIL_SETUP primary key (EML_ID)
    )

    Check Tools | Preferences | Database | Advanced Parameters and post the value you have there.
    Try setting it to a small number and report if you see any improvement.
    -Raghu

  • 5 tables Used - Performance Improvement Help Req...

    Hello Experts,
    I am using a 5-table join to fetch data in a query; will this degrade performance?
    Please find the query used below. In the WHERE clause: 1) le.locker_entry_id is the PRIMARY KEY column;
    2) dc.object_type has an index created on it; 3) a function-based index was created on upper(dt.partner_device_type).
    Can any other code improvement be done, or should a view be created?
    Please suggest.
    SELECT count(*) FROM locker_entry le;
    COUNT(*)              
    1762                  
    SELECT count(*) FROM digital_compatibility dc;
    COUNT(*)              
    227757    
    SELECT count(*) FROM digital_encode_profile dep;
    COUNT(*)              
    48                    
    SELECT count(*) FROM device_type dt;
    COUNT(*)              
    421                   
    SELECT count(*) FROM digital_sku dsku;
    COUNT(*)              
    26037     
    EXPLAIN PLAN FOR
    SELECT
      /*+ INDEX(dep, DIGITAL_ENCODE_PROFILE_ID_PK) */
      DISTINCT le.locker_entry_id AS locker_entry_id,
      dep.delivery_type
    FROM locker_entry le,
      digital_compatibility dc,
      digital_encode_profile dep,
      device_type dt,
      digital_sku dsku
    WHERE le.sku_id                   = dsku.legacy_id
    AND le.digital_package_id         = dsku.digital_package_id
    AND dsku.digital_sku_id           = dc.object_id
    AND dc.encode_profile_id          =dep.digital_encode_profile_id
    AND dt.capability_set_id          =dc.capability_set_id
    AND le.locker_entry_id           IN (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32)
    AND dc.object_type                =:"SYS_B_0"
    AND upper(dt.partner_device_type) =:33;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    | Id  | Operation                             | Name                         | Rows  | Bytes | Cost (%CPU)|
    |   0 | SELECT STATEMENT                      |                              |     1 |   370 |   481   (3)|
    |   1 |  HASH UNIQUE                          |                              |     1 |   370 |   481   (3)|
    |   2 |   NESTED LOOPS                        |                              |     1 |   370 |   480   (3)|
    |   3 |    NESTED LOOPS                       |                              |     1 |   336 |   479   (3)|
    |*  4 |     HASH JOIN                         |                              |     1 |   193 |   478   (3)|
    |   5 |      MAT_VIEW ACCESS BY INDEX ROWID   | DIGITAL_SKU                  |     1 |    48 |     5   (0)|
    |   6 |       NESTED LOOPS                    |                              |    16 |  1392 |     5   (0)|
    |   7 |        INLIST ITERATOR                |                              |       |       |            |
    |   8 |         TABLE ACCESS BY INDEX ROWID   | LOCKER_ENTRY                 |    32 |  1248 |     1   (0)|
    |*  9 |          INDEX RANGE SCAN             | LOCKER_ENTRY_ID_PK           |    32 |       |     1   (0)|
    |  10 |        BITMAP CONVERSION TO ROWIDS    |                              |       |       |            |
    |  11 |         BITMAP AND                    |                              |       |       |            |
    |  12 |          BITMAP CONVERSION FROM ROWIDS|                              |       |       |            |
    |* 13 |           INDEX RANGE SCAN            | IDX_DIGITAL_SKU_LEGACY_ID    |     1 |       |     1   (0)|
    |  14 |          BITMAP CONVERSION FROM ROWIDS|                              |       |       |            |
    |* 15 |           INDEX RANGE SCAN            | IDX_DIGITAL_PACKAGE_ID       |     1 |       |     1   (0)|
    |* 16 |      MAT_VIEW ACCESS FULL             | DIGITAL_COMPATIBILITY        |  2098 |   217K|   472   (3)|
    |* 17 |     INDEX RANGE SCAN                  | DEVICE_TYPE_IDX              |     1 |   143 |     1   (0)|
    |  18 |    MAT_VIEW ACCESS BY INDEX ROWID     | DIGITAL_ENCODE_PROFILE       |     1 |    34 |     1   (0)|
    |* 19 |     INDEX UNIQUE SCAN                 | DIGITAL_ENCODE_PROFILE_ID_PK |     1 |       |     1   (0)|
    Predicate Information (identified by operation id):
       4 - access("DSKU"."DIGITAL_SKU_ID"=TO_NUMBER("DC"."OBJECT_ID"))
       9 - access("LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:1) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:2) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:3) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:4) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:5) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:6) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:7) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:8) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:9) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:10) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:11) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:12) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:13) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:14) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:15) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:16) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:17) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:18) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:19) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:20) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:21) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:22) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:23) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:24) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:25) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:26) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:27) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:28) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:29) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:30) OR
                  "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:31) OR "LE"."LOCKER_ENTRY_ID"=TO_NUMBER(:32))
      13 - access("LE"."SKU_ID"="DSKU"."LEGACY_ID")
      15 - access("LE"."DIGITAL_PACKAGE_ID"="DSKU"."DIGITAL_PACKAGE_ID")
      16 - filter("DC"."OBJECT_TYPE"=:SYS_B_0)
      17 - access("DT"."CAPABILITY_SET_ID"="DC"."CAPABILITY_SET_ID" AND
                  UPPER("PARTNER_DEVICE_TYPE")=:33)
      19 - access("DC"."ENCODE_PROFILE_ID"="DEP"."DIGITAL_ENCODE_PROFILE_ID")
    Note
       - 'PLAN_TABLE' is old version
    Trace information
    =================  
    recursive calls     17
    db block gets     0
    consistent gets     239
    physical reads     61
    redo size     0
    bytes sent via SQL*Net to client     742
    bytes received via SQL*Net from client     1361
    SQL*Net roundtrips to/from client     2
    sorts (memory)     5
    sorts (disk)     0

    Linus wrote:
    Yes Bravid, I got the point about removing the hint from the query, unless there is a change in the execution plan with the index hint.
    My concern is: are there any issues with keeping 5 tables in a single query, and will this degrade performance?
    During certification practice I was told somewhere "NOT to use many table joins".
    That is the reason I am asking you; sorry if I am wrong in any way.
    Thanks...

    There's nothing inherently wrong with joining lots of tables and there isn't one specific thing that will degrade performance. You could have a query that joins 2 tables performing very badly and a query with 10 tables that performs very well. A lot of it is down to the quality of the decisions the optimiser can make. To get the best decisions out of it you need to make sure it has enough information to work with. That means making sure stats are available and relevant for the data you're querying, and removing any hints unless you have a specific reason to use them.
    Earlier this year I had the "joy" of working with a query that had 65 tables in it. It was an INSERT INTO...SELECT FROM and it was a clear example of where a single statement really should have been about 6 or 7 separate statements. The reason in this case was because there were 6 or 7 different sections to the query that had essentially no relationship with each other and they were all outer joined and used lots of analytic functions and case statements to categorise each row and populate the same columns with different values depending on which query block it had originated from.
    This is an extreme example, but the point is you have to look at the statement and decide whether it does its job well. My personal preference is to try to avoid big "generic" SQL statements that cater for lots of different scenarios, because they can easily become overcomplicated and difficult to maintain and tune.
    HTH
    David
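    To illustrate the point above about statistics, a minimal sketch of refreshing optimizer stats (the owner/table names are placeholders, not from this thread):

        BEGIN
          -- Give the CBO current row counts and histograms to work with
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => USER,
            tabname => 'DIGITAL_COMPATIBILITY',
            cascade => TRUE);  -- also gathers stats on the table's indexes
        END;
        /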

  • Web Dynpro ABAP - Table Scroll Performance

    Hi,
    In the standard ESS Timesheet application, we have created a custom pop-up screen to select the cost assignment for the time entry. The pop-up screen is a search help developed by implementing the IWD_VALUE_HELP interface and is a complex screen with multiple view containers/views and multiple tables and tabs.
    In the pop-up screen, the default number of visible rows in the table is set to 5 and the user needs to scroll if there are too many entries. The performance during scrolling is very bad: each scroll takes about 4-5 seconds. All the traces in the backend show that there is hardly any processing being done that would take so much time. An HTTPWatch trace shows a 0.4 second response time, so I'm wondering why the browser is taking 4-5 seconds to render. Is it because the timesheet page itself is complex and, since the pop-up screen is also heavy, the browser takes so much time to lay out the whole page for every scroll?
    Any other ideas?
    Any inputs will be greatly appreciated.
    Thanks,
    Manish

    I have noticed that pagination is relatively faster when compared to scrolling. Try making the same performance measurements by removing the scrollbar (by changing the setting in the WD application parameters) and introducing pagination instead.

  • Inserting multiple records in multiple tables - High Performance

    I have an input form in the table and the user can input any number of rows. Every row has around 25 columns, and when a single row is saved each of these columns is saved in its own table, i.e. there are 25 inserts (for 25 columns) for a single row. Now if a user pastes even 100 rows this takes a lot of time (because 100*25 inserts happen). I want to support 10000 inserts and obviously it should be very fast. Can someone tell me how to maximize the performance? I am using MySQL 4.1 as my DB.

    javanewbie80 wrote:
    I have an input form in the table and the user can input any number of rows. Every row has around 25 columns and when a single row is saved, each of these columns is saved in its own table.

    Why is each value in its own table? Are they not related in some way?
    This sounds like a bad RDBMS design.

    i.e. there are 25 inserts happening (for 25 columns) for a single row. Now if a user pastes even 100 rows this takes a lot of time (because of 100*25 inserts happening). Now I want to support 10000 inserts and obviously it should be very fast. Can someone tell me how to maximize the performance. I am using mysql 4.1 as my DB.
    Rethink your design.
    If you must persist in this folly, I'd recommend batching your INSERTs to try and minimize network traffic.
    But do consult with somebody who actually knows SQL and relational design. This smells bad.
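    On the batching point: MySQL supports multi-row INSERT, which cuts the statement count and the network round trips dramatically. A sketch with placeholder names:

        -- One statement per batch of rows instead of one statement per value
        INSERT INTO form_values (row_id, col_name, col_value)
        VALUES (1, 'col_a', 'x'),
               (1, 'col_b', 'y'),
               (2, 'col_a', 'z');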

  • Database feature Derived Table and performance

    We recently migrated our data warehouse from DB2 to Oracle Exadata. Since the migration, I have noticed that some of the reports have become extremely slow, e.g. a report that was running in 7 secs before now runs for over 6 minutes. In the database features tab of the physical layer for the database connection, I have the Derived Table feature turned on. If I turn this setting off, the same report runs in 3 secs. The Derived Table feature is supposed to help report performance, but in this case it seems to be hurting rather than helping.
    Should this setting be turned off? What are the side effects of turning it off? We cannot do full testing by changing this setting, so I want to reach out to someone who has run into similar issues and hear what they did to remedy them.
    Any help will be appreciated.
    Thanks!

    So the answer is "yes" but not quite in the way you might expect.
    You have created the object from which you can "borrow" the LOV for your derived table prompt. What you need to do is this: first, create another object in your universe (I put it in the same class as the "code" object) that contains your object description. Then do this:
    1. Double-click on the code object
    2. Select the properties tab
    3. Click on the Edit button. This does not edit the object definition itself, it edits the LOV definition.
    4. On the query panel, add the Description object created earlier. Make sure it is the second object in the query panel.
    5. You can opt to sort either by code or description, whichever makes sense to your users
    6. Click "OK" to save the query definition, or click "Run" if you want to populate the LOV with values.
    7. Make sure you click "Export with Universe" on the properties tab once you have customized the LOV, else your computer is the only one that will include the new LOV definition
    8. The Hierarchical Display box may also be checked; for this case you have a code + description which are not hierarchical, so clear that box
    That's it. When you export your universe, the LOV will go with it. When someone asks for a list of values for the code, the list will show both codes and descriptions, but only the code will be selected.
    You do not need to make any changes to your current derived table prompt once the LOV has been customized.

  • CREATE TABLE AS - PERFORMANCE ISSUE

    Hi All,
    I am creating a table CONTROLDATA from existing tables PF_CONTROLDATA & ICDSV2_AutoCodeDetails as per the below query.
    CREATE TABLE CONTROLDATA AS
    SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, STRVALUE
    FROM PF_CONTROLDATA CD1
    JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL)
    AND CD1.AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID);

    The above statement takes around 10 minutes to create the table CONTROLDATA, which is not acceptable in our environment. Can anyone please suggest a way to improve the performance of the above query so that CONTROLDATA is created in under a minute?
    PF_CONTROLDATA has 1,50,00,000 (15 million) rows and has a composite index (XIF16PF_CONTROLDATA) on the CONTEXTID and AUDITORDER columns, plus one more index (XIE1PF_CONTROLDATA) on the CONTROLID column.
    ICDSV2_AutoCodeDetails has only 6 rows and no indexes.
    After the create table statement, CONTROLDATA will have around 10,00,000 (1 million) records.
    Can someone give any suggestion to improve the performance of the above query?
    oracle version is : 10.2.0.3
    Tkprof output is:
    create table CONTROLDATA2 as
    SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, DATATYPE, NUMVALUE, FLOATVALUE, STRVALUE, PFDATETIME, MONTH, DAY, YEAR, HOUR, MINUTE, SECOND, UNITID, NORMALIZEDVALUE, NORMALIZEDUNITID, PARENTCONTROLVALUEID, PARENTVALUEORDER
    FROM PF_CONTROLDATA CD1
         JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL OR CD1.CONTROLID=AC.SYNONYM_CTRL)
         AND AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.03          2          2          0           0
    Execute      1     15.25     593.43     211688    4990786       6617     1095856
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     15.25     593.47     211690    4990788       6617     1095856
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 40 
    ********************************************************************************
    Explain plan output is:
    Plan hash value: 2771048406
    | Id  | Operation                           | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | CREATE TABLE STATEMENT              |                        |     1 |   105 |  3609K  (1)| 14:02:20 |
    |   1 |  LOAD AS SELECT                     | CONTROLDATA2           |       |       |            |          |
    |*  2 |   FILTER                            |                        |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID      | PF_CONTROLDATA         |   178K|  9228K| 55344   (1)| 00:12:55 |
    |   4 |     NESTED LOOPS                    |                        |   891K|    89M| 55344   (1)| 00:12:55 |
    |   5 |      TABLE ACCESS FULL              | ICDSV2_AUTOCODEDETAILS |     5 |   260 |     4   (0)| 00:00:01 |
    |   6 |      BITMAP CONVERSION TO ROWIDS    |                        |       |       |            |          |
    |   7 |       BITMAP OR                     |                        |       |       |            |          |
    |   8 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |*  9 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  10 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |* 11 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  12 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |* 13 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  14 |        BITMAP CONVERSION FROM ROWIDS|                        |       |       |            |          |
    |* 15 |         INDEX RANGE SCAN            | XIE1PF_CONTROLDATA     |       |       |    48   (3)| 00:00:01 |
    |  16 |    SORT AGGREGATE                   |                        |     1 |    16 |            |          |
    |  17 |     FIRST ROW                       |                        |     1 |    16 |     3   (0)| 00:00:01 |
    |* 18 |      INDEX RANGE SCAN (MIN/MAX)     | XIF16PF_CONTROLDATA    |     1 |    16 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter("AUDITORDER"= (SELECT MAX("AUDITORDER") FROM "PF_CONTROLDATA" "CD2" WHERE
                  "CD2"."CONTEXTID"=:B1))
       9 - access("CD1"."CONTROLID"="AC"."MODTERM_CONTROL")
      11 - access("CD1"."CONTROLID"="AC"."FAILED_CTRL")
      13 - access("CD1"."CONTROLID"="AC"."CODE_CTRL")
      15 - access("CD1"."CONTROLID"="AC"."SYNONYM_CTRL")
      18 - access("CD2"."CONTEXTID"=:B1)
    Note
       - dynamic sampling used for this statement
    ********************************************************************************
    I tried changing the above logic to use an INSERT statement with the APPEND hint, but it still takes the same time.
    Please suggest.

    Hi user2361373,
    I tried using NOLOGGING also, but it still takes the same amount of time. Please find the tkprof output below.
    create table CONTROLDATA2 NOLOGGING as
    SELECT CONTROLVALUEID, VALUEORDER, CONTEXTID, AUDITORDER, INVALIDATINGREVISIONNUMBER, CONTROLID, DATATYPE, NUMVALUE, FLOATVALUE, STRVALUE, PFDATETIME, MONTH, DAY, YEAR, HOUR, MINUTE, SECOND, UNITID, NORMALIZEDVALUE, NORMALIZEDUNITID, PARENTCONTROLVALUEID, PARENTVALUEORDER
    FROM PF_CONTROLDATA CD1
         JOIN ICDSV2_AutoCodeDetails AC ON (CD1.CONTROLID=AC.MODTERM_CONTROL OR CD1.CONTROLID=AC.FAILED_CTRL OR CD1.CONTROLID=AC.CODE_CTRL OR CD1.CONTROLID=AC.SYNONYM_CTRL)
    AND AUDITORDER=(SELECT MAX(AUDITORDER) FROM PF_CONTROLDATA CD2 WHERE CD1.CONTEXTID=CD2.CONTEXTID)
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.03       0.03          2          2          0           0
    Execute      1     13.98     598.54     211963    4990776       6271     1095856
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     14.01     598.57     211965    4990778       6271     1095856
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 40 
    ********************************************************************************
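    As an aside (this was not suggested in the thread), a common rewrite for this kind of "latest row per key" filter replaces the correlated MAX subquery with an analytic function, so PF_CONTROLDATA is not probed once per candidate row. A sketch only, with the column list abbreviated:

        CREATE TABLE CONTROLDATA NOLOGGING AS
        SELECT controlvalueid, valueorder, contextid, auditorder,
               invalidatingrevisionnumber, controlid, strvalue
        FROM (
          SELECT cd.*,
                 MAX(auditorder) OVER (PARTITION BY contextid) AS max_auditorder
          FROM pf_controldata cd
        ) cd1
        JOIN icdsv2_autocodedetails ac
          ON cd1.controlid IN (ac.modterm_control, ac.failed_ctrl,
                               ac.code_ctrl, ac.synonym_ctrl)
        WHERE cd1.auditorder = cd1.max_auditorder;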

  • Unexplained poor table query performance

    Hi All
    I am really open to any advice as I have hit a kind of brick wall. A developer came to me asking why a procedure was performing so slowly in beta as opposed to dev, and after looking at exactly what it did I identified the offending select statement.
    The query was basically passing some ids into a user-defined table and using those ids to filter.
    SELECT gc.id
    FROM temperatures AS gcm
    LEFT OUTER JOIN gauges gc
      ON (gc.id = gcm.id OR gc.id IS NULL)
     AND (gc.countryid = gcm.countryid OR gcm.countryid IS NULL)
    WHERE sourceid = 3
    The gauges table has around 90K rows, whereas temperatures has around 3 million.
    I ran the test on the development server and the above returns in under 3 seconds, whereas on beta it takes just over 1 minute.
    The beta server is much faster in terms of processing power, and both have the same version of SQL Server 2012 SP1 (11.0.3128 (x64)).
    Having run a quick query on index fragmentation, I found a few indexes within the temperatures table that were reasonably highly fragmented. I rebuilt them and saw they were pretty much back to an acceptable level. Again I tried the select a few times and got a range of times.
    I then tried a restore from the weekend just to see if there was anything that may have changed, wondering if I was beginning to clutch at straws.
    Lo and behold, the restore was not only quick but, from an index fragmentation point of view, not in as great shape.
    I've compared the two tables, which are identical, with the only difference being the data, which I copied over to the restore, and got the same 2-second result.
    Any help on what to do next would be great, as I could replace the table with the restored one but I would like to know why this is happening.
    Many Thanks
    Robert

    The query is a bit strange with the NULL checks on gc.id and gcm.countryid.
    Since temperatures is the retained (outer) table, you can remove the part "or gcm.countryid is null".
    Also, if table gauges does not allow NULLs (or does not have NULLs) in column id, you should remove the part "OR gc.id IS NULL".
    If the query can be simplified as stated above, then all you need is a compound index on (id, countryid) or on (countryid, id) on both tables.
    If the problem still persists, you can check the query plan to see what is different, and that should give you a clue about the issue.
    Please note that for performance related queries, it is essential to show the exact query you are using. For example, if you are using a local variable or a parameter instead of "3" in your query, that makes a big difference.
    If you need more help, then please post DDL for the tables and indexes that are involved.
    Gert-Jan
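    A minimal sketch of the suggested compound indexes (the index names are made up):

        CREATE INDEX ix_gauges_id_country ON gauges (id, countryid);
        CREATE INDEX ix_temperatures_id_country ON temperatures (id, countryid);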

  • Schema Table Comparison

    Hi All,
    I've got 2 schemas with identical tables.
    I want to do a MINUS on the tables, but would like to do this with a procedure that then reports the differences into a <table_name>_diff table for each - this table should show records that are in schema1 but not in schema2, and records that are in schema2 but not in schema1.
    There are about 40 tables in total, so a proc rather than doing it all manually would be superb...
    Any ideas ?
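    For a single table, the diff-table shape being described looks roughly like this (EMP is a placeholder table name):

        CREATE TABLE emp_diff AS
        (SELECT 'SCHEMA1' AS side, t1.* FROM schema1.emp t1
         MINUS
         SELECT 'SCHEMA1', t2.* FROM schema2.emp t2)
        UNION ALL
        (SELECT 'SCHEMA2', t2.* FROM schema2.emp t2
         MINUS
         SELECT 'SCHEMA2', t1.* FROM schema1.emp t1);

    The script in the reply below automates the comparison across all tables, though it reports row counts and a ready-made query rather than creating _diff tables.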

    Hi ,
    I found the following code somewhere on the net:
    REM
    REM Edit the following three DEFINE statements to customize this script
    REM to suit your needs.
    REM
    REM Tables to be compared:
    DEFINE table_criteria = "table_name = table_name" -- all tables
    REM DEFINE table_criteria = "table_name != 'TEST'"
    REM DEFINE table_criteria = "table_name LIKE 'LOOKUP%' OR table_name LIKE 'C%'"
    REM Columns to be compared:
    DEFINE column_criteria = "column_name = column_name" -- all columns
    REM DEFINE column_criteria = "column_name NOT IN ('CREATED', 'MODIFIED')"
    REM DEFINE column_criteria = "column_name NOT LIKE '%_ID'"
    REM Database link to be used to access the remote schema:
    DEFINE dblink = "remote_db"
    SET SERVEROUTPUT ON SIZE 1000000
    SET VERIFY OFF
    DECLARE
      CURSOR c_tables IS
        SELECT   table_name
        FROM     user_tables
        WHERE    &table_criteria
        ORDER BY table_name;
      CURSOR c_columns (cp_table_name IN VARCHAR2) IS
        SELECT   column_name, data_type
        FROM     user_tab_columns
        WHERE    table_name = cp_table_name
        AND      &column_criteria
        ORDER BY column_id;
      TYPE t_char80array IS TABLE OF VARCHAR2(80) INDEX BY BINARY_INTEGER;
      v_column_list     VARCHAR2(32767);
      v_total_columns   INTEGER;
      v_skipped_columns INTEGER;
      v_count1          INTEGER;
      v_count2          INTEGER;
      v_rows_fetched    INTEGER;
      v_column_pieces   t_char80array;
      v_piece_count     INTEGER;
      v_pos             INTEGER;
      v_length          INTEGER;
      v_next_break      INTEGER;
      v_same_count      INTEGER := 0;
      v_diff_count      INTEGER := 0;
      v_error_count     INTEGER := 0;
      v_warning_count   INTEGER := 0;
      -- Use dbms_sql instead of native dynamic SQL so that Oracle 7 and Oracle 8
      -- folks can use this script.
      v_cursor          INTEGER := dbms_sql.open_cursor;
    BEGIN
      -- Iterate through all tables in the local database that match the
      -- specified table criteria.
      FOR r1 IN c_tables LOOP
        -- Build a list of columns that we will compare (those columns
        -- that match the specified column criteria). We will skip columns
        -- that are of a data type not supported (LOBs and LONGs).
        v_column_list := NULL;
        v_total_columns := 0;
        v_skipped_columns := 0;
        FOR r2 IN c_columns (r1.table_name) LOOP
          v_total_columns := v_total_columns + 1;
          IF r2.data_type IN ('BLOB', 'CLOB', 'NCLOB', 'LONG', 'LONG RAW') THEN
            -- The column's data type is one not supported by this script (a LOB
            -- or a LONG). We'll enclose the column name in comment delimiters in
            -- the column list so that the column is not used in the query.
            v_skipped_columns := v_skipped_columns + 1;
            IF v_column_list LIKE '%,' THEN
              v_column_list := RTRIM (v_column_list, ',') ||
                               ' /*, "' || r2.column_name || '" */,';
            ELSE
              v_column_list := v_column_list || ' /* "' || r2.column_name ||'" */ ';
            END IF;
          ELSE
            -- The column's data type is supported by this script. Add the column
            -- name to the column list for use in the data comparison query.
            v_column_list := v_column_list || '"' || r2.column_name || '",';
          END IF;
        END LOOP;
        -- Compare the data in this table only if it contains at least one column
        -- whose data type is supported by this script.
        IF v_total_columns > v_skipped_columns THEN
          -- Trim off the last comma from the column list.
          v_column_list := RTRIM (v_column_list, ',');
          BEGIN
            -- Get a count of rows in the local table missing from the remote table.
            dbms_sql.parse (
              v_cursor,
              'SELECT COUNT(*) FROM (' ||
              'SELECT ' || v_column_list || ' FROM "' || r1.table_name || '"' ||
              ' MINUS ' ||
              'SELECT ' || v_column_list || ' FROM "' || r1.table_name ||'"@&dblink)',
              dbms_sql.native);
            dbms_sql.define_column (v_cursor, 1, v_count1);
            v_rows_fetched := dbms_sql.execute_and_fetch (v_cursor);
            IF v_rows_fetched = 0 THEN
              RAISE NO_DATA_FOUND;
            END IF;
            dbms_sql.column_value (v_cursor, 1, v_count1);
            -- Get a count of rows in the remote table missing from the local table.
            dbms_sql.parse (
              v_cursor,
              'SELECT COUNT(*) FROM (' ||
              'SELECT ' || v_column_list || ' FROM "' || r1.table_name ||'"@&dblink'||
              ' MINUS ' ||
              'SELECT ' || v_column_list || ' FROM "' || r1.table_name || '")',
              dbms_sql.native);
            dbms_sql.define_column (v_cursor, 1, v_count2);
            v_rows_fetched := dbms_sql.execute_and_fetch (v_cursor);
            IF v_rows_fetched = 0 THEN
              RAISE NO_DATA_FOUND;
            END IF;
            dbms_sql.column_value (v_cursor, 1, v_count2);
            -- Display our findings.
            IF v_count1 = 0 AND v_count2 = 0 THEN
              -- No data discrepancies were found. Report the good news.
              dbms_output.put_line (
                r1.table_name || ' - Local and remote table contain the same data');
              v_same_count := v_same_count + 1;
              IF v_skipped_columns = 1 THEN
                dbms_output.put_line (
                  r1.table_name || ' - Warning: 1 LOB or LONG column was omitted ' ||
                  'from the comparison');
                v_warning_count := v_warning_count + 1;
              ELSIF v_skipped_columns > 1 THEN
                dbms_output.put_line (
                  r1.table_name || ' - Warning: ' || TO_CHAR (v_skipped_columns) ||
                  ' LOB or LONG columns were omitted from the comparison');
                v_warning_count := v_warning_count + 1;
              END IF;
            ELSE
              -- There is a discrepancy between the data in the local table and
              -- the remote table. First, give a count of rows missing from each.
              IF v_count1 > 0 THEN
                dbms_output.put_line (
                  r1.table_name || ' - ' ||
                  LTRIM (TO_CHAR (v_count1, '999,999,990')) ||
                  ' rows on local database missing from remote');
              END IF;
              IF v_count2 > 0 THEN
                dbms_output.put_line (
                  r1.table_name || ' - ' ||
                  LTRIM (TO_CHAR (v_count2, '999,999,990')) ||
                  ' rows on remote database missing from local');
              END IF;
              IF v_skipped_columns = 1 THEN
                dbms_output.put_line (
                  r1.table_name || ' - Warning: 1 LOB or LONG column was omitted ' ||
                  'from the comparison');
                v_warning_count := v_warning_count + 1;
              ELSIF v_skipped_columns > 1 THEN
                dbms_output.put_line (
                  r1.table_name || ' - Warning: ' || TO_CHAR (v_skipped_columns) ||
                  ' LOB or LONG columns were omitted from the comparison');
                v_warning_count := v_warning_count + 1;
              END IF;
              -- Next give the user a query they could run to see all of the
              -- differing data between the two tables. To prepare the query,
              -- first we'll break the list of columns in the table into smaller
              -- chunks, each short enough to fit on one line of a telnet window
              -- without wrapping.
              v_pos := 1;
              v_piece_count := 0;
              v_length := LENGTH (v_column_list);
              LOOP
                EXIT WHEN v_pos = v_length;
                v_piece_count := v_piece_count + 1;
                IF v_length - v_pos < 72 THEN
                  v_column_pieces(v_piece_count) := SUBSTR (v_column_list, v_pos);
                  v_pos := v_length;
                ELSE
                  v_next_break :=
                    GREATEST (INSTR (SUBSTR (v_column_list, 1, v_pos + 72),
                                     ',"', -1),
                              INSTR (SUBSTR (v_column_list, 1, v_pos + 72),
                                     ',/* "', -1),
                              INSTR (SUBSTR (v_column_list, 1, v_pos + 72),
                                     ' /* "', -1));
                  v_column_pieces(v_piece_count) :=
                    SUBSTR (v_column_list, v_pos, v_next_break - v_pos + 1);
                  v_pos := v_next_break + 1;
                END IF;
              END LOOP;
              dbms_output.put_line ('Use the following query to view the data ' ||
                                    'discrepancies:');
              dbms_output.put_line ('(');
              dbms_output.put_line ('SELECT ''Local'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"');
              dbms_output.put_line ('MINUS');
              dbms_output.put_line ('SELECT ''Local'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"@&dblink');
              dbms_output.put_line (') UNION ALL (');
              dbms_output.put_line ('SELECT ''Remote'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"@&dblink');
              dbms_output.put_line ('MINUS');
              dbms_output.put_line ('SELECT ''Remote'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"');
              dbms_output.put_line (');');
              v_diff_count := v_diff_count + 1;
            END IF;
          EXCEPTION
            WHEN OTHERS THEN
              -- An error occurred while processing this table. (Most likely it
              -- doesn't exist or has fewer columns on the remote database.)
              -- Show the error we encountered on the report.
              dbms_output.put_line (r1.table_name || ' - ' || SQLERRM);
              v_error_count := v_error_count + 1;
          END;
        END IF;
      END LOOP;
      -- Print summary information.
      dbms_output.put_line ('-------------------------------------------------');
      dbms_output.put_line (
        'Tables examined: ' || TO_CHAR (v_same_count + v_diff_count + v_error_count));
      dbms_output.put_line (
        'Tables with data discrepancies: ' || TO_CHAR (v_diff_count));
      IF v_warning_count > 0 THEN
        dbms_output.put_line (
          'Tables with warnings: ' || TO_CHAR (v_warning_count));
      END IF;
      IF v_error_count > 0 THEN
        dbms_output.put_line (
          'Tables that could not be checked due to errors: ' || TO_CHAR (v_error_count));
      END IF;
      dbms_sql.close_cursor (v_cursor);
    END;
    /
    I hope it'll help you!
    Regards,
    Simon
