Reduce the execution time of an insert

Hello,
I have a table called LINK_TABLE with a field called LINK_ID that identifies each link of a network, and two other fields, START_NODE_ID and END_NODE_ID, that contain the ID of the start node and the ID of the end node of each link. This table has 7 million records.
Then I have a table called NODE_TABLE with a field called NODE_ID that identifies all the nodes of the network (i.e. all the START_NODE_ID and END_NODE_ID values, without duplicates). This table has 6 million records.
Now, I have created a new table of links called LINK_TABLE2, derived from LINK_TABLE with these conditions:
INSERT INTO LINK_TABLE2 SELECT * FROM LINK_TABLE WHERE (FRC>=0 AND FRC<=6) AND FEATTYP<>4130;
LINK_TABLE2 now has 3 million records.
NODE_TABLE2 must contain only the NODE_ID values that appear as START_NODE_ID or END_NODE_ID in LINK_TABLE2 (without duplicates). So I'm doing this insert:
INSERT INTO NODE_TABLE2
SELECT * FROM NODE_TABLE n
WHERE n.node_id IN (SELECT start_node_id FROM LINK_TABLE2)
   OR n.node_id IN (SELECT end_node_id FROM LINK_TABLE2);
This second insert has now been running for about 15 hours.
I have 4 GB of RAM (Oracle is using about 30% of it) and a quad-core CPU (Oracle is using 100% of it).
Is that possible?
Is there a faster method to do this?
Thank you very much in advance.

example:
LINK_TABLE
LINK_ID: 50, 51, 52
START_NODE_ID: 10, 11, 12
END_NODE_ID: 20, 21, 22
where:
10 and 20 are the start point and the end point of the link (or street) 50;
11 and 21 are the start point and the end point of the link (or street) 51;
12 and 22 are the start point and the end point of the link (or street) 52;
NODE_TABLE
NODE_ID: 10, 11, 12, 20, 21, 22
LINK_TABLE2
LINK_ID: 50, 52
START_NODE_ID: 10, 12
END_NODE_ID: 20, 22
now, NODE_TABLE2 must have only:
NODE_TABLE2
NODE_ID: 10, 12, 20, 22.
Is it clear?
thank you very much.
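One rewrite that is often much faster for this kind of lookup (just a sketch, using the table and column names described above) is to build the distinct set of node IDs with a UNION and join it back to NODE_TABLE, instead of probing LINK_TABLE2 twice for every node with IN subqueries:
INSERT INTO NODE_TABLE2
SELECT n.*
FROM NODE_TABLE n
JOIN (SELECT start_node_id AS node_id FROM LINK_TABLE2
      UNION
      SELECT end_node_id FROM LINK_TABLE2) l
  ON n.node_id = l.node_id;
Whether this helps depends on the plan Oracle chooses; the idea is to let a single hash join against the distinct node-ID set replace the repeated subquery probes.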

Similar Messages

  • Reduce the execution time for the below query

    Hi,
    Please help me to reduce the execution time of the following query, if any tuning is possible.
    I have a table A with the columns:
    ID, ORG_LINEAGE, INCLUDE_IND (ORG_LINEAGE is a string of IDs: if ID 5 reports to 4 and 4 reports to 1, the lineage for 5 will be stored as the string -1-4-5)
    Below is the query ..
    select ID
    from A a
    where INCLUDE_IND = '1' and
    exists (
    select 1
    from A b
    where b.ID = '5'
    and b.ORG_LINEAGE like '%-'||a.ID||'-%'
    )
    order by ORG_LINEAGE;
    The only constraint on the table A is the primary key on the ID column.
    Following will be the execution plan :
    Execution Plan
    0       SELECT STATEMENT Optimizer=CHOOSE (Cost=406 Card=379 Bytes=2653)
    1  0      SORT (ORDER BY) (Cost=27 Card=379 Bytes=2653)
    2  1        FILTER
    3  2          TABLE ACCESS (FULL) OF 'A' (Cost=24 Card=379 Bytes=2653)
    4  2          TABLE ACCESS (BY INDEX ROWID) OF 'A' (Cost=1 Card=1 Bytes=6)
    5  4            INDEX (RANGE SCAN) OF 'ORG_LINEAGE' (NON-UNIQUE)

    "I order it by the org_lineage to get the first person. So is it a result problem?" The ORDER BY doesn't give you the first person; it gives you a sorted result set (of which there may be zero, one, or thousands).
    If you only want one row from that, then you're spending a lot of time tuning the wrong query.
    How do you know which ORG_LINEAGE row you want?
    Maybe it would help if you posted some sample data.

  • To reduce the execution time of a report

    Hi,
    Can anyone tell me how I can reduce the execution time of the report? Are there any ideas to improve the performance of the report?

    Hi Santosh,
    Good check out the following documentation
    <b>Performance tuning</b>
    For all entries
    Nested selects
    Select using JOINS
    Use the selection criteria
    Use the aggregated functions
    Select with view
    Select with index support
    Select … Into table
    Select with selection list
    Key access to multiple lines
    Copying internal tables
    Modifying a set of lines
    Deleting a sequence of lines
    Linear search vs. binary
    Comparison of internal tables
    Modify selected components
    Appending two internal tables
    Deleting a set of lines
    Tools available in SAP to pin-point a performance problem
    <b>Optimizing the load of the database</b>
    For all entries
    FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
    The plus
    Large amount of data
    Mixing processing and reading of data
    Fast internal reprocessing of data
    Fast
    The Minus
    Difficult to program/understand
    Memory could be critical (use FREE or PACKAGE size)
    Some steps that might make FOR ALL ENTRIES more efficient:
    Removing duplicates from the driver table
    Sorting the driver table
    If possible, convert the data in the driver table to ranges so that a BETWEEN statement is used instead of an OR statement:
    FOR ALL ENTRIES IN i_tab
      WHERE mykey >= i_tab-low and
            mykey <= i_tab-high.
    Nested selects
    The plus:
    Small amount of data
    Mixing processing and reading of data
    Easy to code - and understand
    The minus:
    Large amount of data
    when mixed processing isn’t needed
    Performance killer no. 1
    Select using JOINS
    The plus
    Very large amount of data
    Similar to Nested selects - when the accesses are planned by the programmer
    In some cases the fastest
    Not so memory critical
    The minus
    Very difficult to program/understand
    Mixing processing and reading of data not possible
    (In each of the following pairs of code snippets, the first is the slower variant and the second the faster equivalent.)
    Use the selection criteria
    SELECT * FROM SBOOK.                   
      CHECK: SBOOK-CARRID = 'LH' AND       
                      SBOOK-CONNID = '0400'.        
    ENDSELECT.                             
    SELECT * FROM SBOOK                     
      WHERE CARRID = 'LH' AND               
            CONNID = '0400'.                
    ENDSELECT.                              
    Use the aggregated functions
    C4A = '000'.              
    SELECT * FROM T100        
      WHERE SPRSL = 'D' AND   
            ARBGB = '00'.     
      CHECK: T100-MSGNR > C4A.
      C4A = T100-MSGNR.       
    ENDSELECT.                
    SELECT MAX( MSGNR ) FROM T100 INTO C4A 
    WHERE SPRSL = 'D' AND                
           ARBGB = '00'.                  
    Select with view
    SELECT * FROM DD01L                    
      WHERE DOMNAME LIKE 'CHAR%'           
            AND AS4LOCAL = 'A'.            
      SELECT SINGLE * FROM DD01T           
        WHERE   DOMNAME    = DD01L-DOMNAME 
            AND AS4LOCAL   = 'A'           
            AND AS4VERS    = DD01L-AS4VERS 
            AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    SELECT * FROM DD01V                    
    WHERE DOMNAME LIKE 'CHAR%'           
           AND DDLANGUAGE = SY-LANGU.     
    ENDSELECT.                             
    Select with index support
    SELECT * FROM T100            
    WHERE     ARBGB = '00'      
           AND MSGNR = '999'.    
    ENDSELECT.                    
    SELECT * FROM T002.             
      SELECT * FROM T100            
        WHERE     SPRSL = T002-SPRAS
              AND ARBGB = '00'      
              AND MSGNR = '999'.    
      ENDSELECT.                    
    ENDSELECT.                      
    Select … Into table
    REFRESH X006.                 
    SELECT * FROM T006 INTO X006. 
      APPEND X006.                
    ENDSELECT
    SELECT * FROM T006 INTO TABLE X006.
    Select with selection list
    SELECT * FROM DD01L              
      WHERE DOMNAME LIKE 'CHAR%'     
            AND AS4LOCAL = 'A'.      
    ENDSELECT
    SELECT DOMNAME FROM DD01L    
    INTO DD01L-DOMNAME         
    WHERE DOMNAME LIKE 'CHAR%' 
           AND AS4LOCAL = 'A'.  
    ENDSELECT
    Key access to multiple lines
    LOOP AT TAB.          
    CHECK TAB-K = KVAL. 
    ENDLOOP.              
    LOOP AT TAB WHERE K = KVAL.     
    ENDLOOP.                        
    Copying internal tables
    REFRESH TAB_DEST.              
    LOOP AT TAB_SRC INTO TAB_DEST. 
      APPEND TAB_DEST.             
    ENDLOOP.                       
    TAB_DEST[] = TAB_SRC[].
    Modifying a set of lines
    LOOP AT TAB.             
      IF TAB-FLAG IS INITIAL.
        TAB-FLAG = 'X'.      
      ENDIF.                 
      MODIFY TAB.            
    ENDLOOP.                 
    TAB-FLAG = 'X'.                  
    MODIFY TAB TRANSPORTING FLAG     
               WHERE FLAG IS INITIAL.
    Deleting a sequence of lines
    DO 101 TIMES.               
      DELETE TAB_DEST INDEX 450.
    ENDDO.                      
    DELETE TAB_DEST FROM 450 TO 550.
    Linear search vs. binary
    READ TABLE TAB WITH KEY K = 'X'.
    READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
    Comparison of internal tables
    DESCRIBE TABLE: TAB1 LINES L1,      
                    TAB2 LINES L2.      
    IF L1 <> L2.                        
      TAB_DIFFERENT = 'X'.              
    ELSE.                               
      TAB_DIFFERENT = SPACE.            
      LOOP AT TAB1.                     
        READ TABLE TAB2 INDEX SY-TABIX. 
        IF TAB1 <> TAB2.                
          TAB_DIFFERENT = 'X'. EXIT.    
        ENDIF.                          
      ENDLOOP.                          
    ENDIF.                              
    IF TAB_DIFFERENT = SPACE.           
    ENDIF.                              
    IF TAB1[] = TAB2[].  
    ENDIF.               
    Modify selected components
    LOOP AT TAB.           
    TAB-DATE = SY-DATUM. 
    MODIFY TAB.          
    ENDLOOP.               
    WA-DATE = SY-DATUM.                    
    LOOP AT TAB.                           
    MODIFY TAB FROM WA TRANSPORTING DATE.
    ENDLOOP.                               
    Appending two internal tables
    LOOP AT TAB_SRC.              
      APPEND TAB_SRC TO TAB_DEST. 
    ENDLOOP
    APPEND LINES OF TAB_SRC TO TAB_DEST.
    Deleting a set of lines
    LOOP AT TAB_DEST WHERE K = KVAL. 
      DELETE TAB_DEST.               
    ENDLOOP
    DELETE TAB_DEST WHERE K = KVAL.
    Tools available in SAP to pin-point a performance problem
    The runtime analysis (SE30)
    SQL Trace (ST05)
    Tips and Tricks tool
    The performance database
    Optimizing the load of the database
    Using table buffering
    Using buffered tables improves the performance considerably. Note that in some cases a statement cannot be used with a buffered table, so when using these statements the buffer will be bypassed. These statements are:
    Select DISTINCT
    ORDER BY / GROUP BY / HAVING clause
    Any WHERE clause that contains a subquery or an IS NULL expression
    JOINs
    A SELECT... FOR UPDATE
    If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition to the SELECT clause.
    Use the ABAP SORT Clause Instead of ORDER BY
    The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
    If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but are sorting by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note however that for very large result sets it might not be a feasible solution and you would want to let the database server sort it.
    Avoid the SELECT DISTINCT Statement
    As with the ORDER BY clause, it could be better to avoid using SELECT DISTINCT if some of the fields are not part of an index. Instead use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
    Good Luck and thanks
    AK

  • Can I reduce the execution time for a step in TestStand?

    Hi,
    I calculated the single-step execution time for TestStand Ver 2.0. It comes to around 20 milliseconds/step. Can I reduce this execution time?
    Are there any settings available for configuring execution time parameters, other than result logging and exception handling, to reduce the execution time?

    It's difficult to tell what time you are reporting for your step. Clearly we don't have control of the time it takes your code to execute. However, we are constantly working on reducing the overhead of calling the code. In addition, you don't mention the type of step you are calling. One way to have a common reference is to use the example \Examples\Benchmarks\Benchmarks.seq. Below I have posted the results of running this sequence with both tracing and result collection enabled and then disabled. I have a 700 MHz, 128 MB RAM, Dell PIII laptop. In this example there is no code within the code modules. You will notice that calling a DLL has the least overhead, with a minimum of 7.459 ms with tracing and results enabled and 0.092 ms with tracing and results disabled. Although not included below, if I enable results but disable tracing I get a minimum time of 0.201 ms, a 100x improvement on your time.
    With Results and Tracing enabled.
    7.578 milliseconds per step for CVI Standard Prototype - Object File
    7.579 milliseconds per step for CVI Standard Prototype - DLL
    7.459 milliseconds per step for DLL Flexible Prototype
    8.589 milliseconds per step for DLL Flexible Prototype Numeric Limit
    9.563 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
    10.015 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
    7.868 milliseconds per step for ActiveX Automation
    8.892 milliseconds per step for LabVIEW Standard Prototype
    With tracing and results disabled.
    0.180 milliseconds per step for CVI Standard Prototype - Object File
    0.182 milliseconds per step for CVI Standard Prototype - DLL
    0.092 milliseconds per step for DLL Flexible Prototype
    0.178 milliseconds per step for DLL Flexible Prototype Numeric Limit
    0.277 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
    0.400 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
    0.270 milliseconds per step for ActiveX Automation
    1.235 milliseconds per step for LabVIEW Standard Prototype

  • Reduce the execution time of catproc.sql

    How can I reduce the execution time of catproc.sql? I have increased the size of the redo log files from 5 MB to 40 MB, but performance still hasn't improved much.
    Any suggestions?
    regards
    asif

    Is it still running? Almost half an hour now.
    By the time we find a solution maybe it's already done :)
    I don't think you can do much after it has already started. As long as it's not hung, it will finish sooner or later. Make sure there are no other applications running on the server competing with Oracle for CPU and disk I/O.

  • Decrease the execution time of a package

    Hi All,
    In our data warehousing environment (complete ETL in PL/SQL) we have different packages to populate different mart tables.
    One of our packages populates a table and runs several times to load the data for different locations. It executes very fast for all locations except one particular location; for that location the amount of data is much higher than for the others, which is one reason it takes a lot of time.
    Can we do something to reduce the execution time of this package for this particular location despite the huge amount of data?
    Any kind of help will be highly appreciated.
    Thanks
    Dikshit

    Since I do not know what your queries are doing inside the package, I really cannot come up with a solid solution. I would also recommend you to use parallel DML in order to reduce the execution time, but you also need to make sure you have enough hardware resources to play with.
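    For example, a minimal sketch of parallel, direct-path DML (with made-up table and column names, assuming an edition and configuration that supports parallel execution) could look like this:
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(tgt, 4) */ INTO mart_table tgt
    SELECT /*+ PARALLEL(src, 4) */ *
    FROM   staging_table src
    WHERE  src.location_id = :p_location;  -- hypothetical names, substitute the real mart/staging tables
    COMMIT;  -- a direct-path insert must be committed before the session can query the table again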
    hare krishna
    Alok

  • Does the execution time of the insert command depend upon the number of indexes?

    hi,
    Does the execution time of the INSERT, UPDATE and DELETE commands depend upon the number of indexes created for a table?

    Sure.
    An index is a structure which contains entries pointing to the actual data in the table.
    When you insert a record into a table, the data which should also be indexed is inserted in the index structure. This index data needs to be in a specific place, not just anywhere (as opposed to e.g. a heap table).
    So this might lead to an update and insert in the index structure.
    This is just to give you an idea. More on the subject in Tom Kyte's Expert Oracle Database Architecture and of course Oracle's documentation.
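    A simple way to see this for yourself (just a sketch, using ALL_OBJECTS as a convenient data source and SQL*Plus timing; the table and index names are made up) is to load the same rows into a table with no extra indexes and into one with a couple of indexes, and compare the timings:
    SET TIMING ON
    CREATE TABLE t_noidx AS SELECT * FROM all_objects WHERE 1 = 0;
    INSERT INTO t_noidx SELECT * FROM all_objects;   -- baseline: only the table segment is maintained
    CREATE TABLE t_idx AS SELECT * FROM all_objects WHERE 1 = 0;
    CREATE INDEX t_idx_name  ON t_idx (object_name);
    CREATE INDEX t_idx_owner ON t_idx (owner, object_type);
    INSERT INTO t_idx SELECT * FROM all_objects;     -- same rows, but every index must be maintained as well
    COMMIT;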

  • How to reduce the fetch time of this sql?

    Here is the SQL; it is a three-table join and the join conditions are:
    ims_ets_cntrl.ims_cntrt_oid=ims_alctn.ims_alctn_oid
    ims_alctn.ims_trde_oid=ims_trde.ims_trde_oid
    SELECT 'MCH' Type, ims_ets_cntrl.STTS tp_stts, count(*) Count
    FROM ims_ets_cntrl
    WHERE ims_ets_cntrl.ims_cntrt_oid IN
      (SELECT ims_alctn.ims_alctn_oid
       FROM ims_alctn,
            (SELECT ims_trde.ims_trde_oid
             FROM ims_trde
             WHERE (IMS_TRDE.IMS_TRDE_RCPT_DTTM >= TO_DATE('10/29/2009 00:00', 'MM/DD/YYYY HH24:MI')
                    AND IMS_TRDE.IMS_TRDE_RCPT_DTTM <= TO_DATE('11/5/2009 23:59', 'MM/DD/YYYY HH24:MI'))
               AND (IMS_TRDE.GRS_TRX_TYPE IN ('INJECTION','WITHDRAWAL','PAYMENT')
                    OR IMS_TRDE.SSC_INVST_TYPE = 'FC' AND 1=1 AND IMS_TRDE.SERVICE_TYPE='FS')) TRDE
       WHERE IMS_ALCTN.IMS_TRDE_OID=TRDE.IMS_TRDE_OID)
      AND ims_ets_cntrl.outbnd_dest = 'ETD'
    GROUP BY ims_ets_cntrl.STTS
    Optimizer and related parameter info:
    SQL> show parameter optimizer
    NAME                                 TYPE        VALUE
    optimizer_dynamic_sampling           integer     1
    optimizer_features_enable            string      9.2.0
    optimizer_index_caching              integer     0
    optimizer_index_cost_adj             integer     100
    optimizer_max_permutations           integer     2000
    optimizer_mode                       string      CHOOSE
    SQL>select pname, pval1, pval2 from sys.aux_stats$ where sname='SYSSTATS_INFO';
    DSTART          11-16-2009 10:23
    DSTOP          11-16-2009 10:23
    FLAGS     1     
    STATUS          NOWORKLOAD
    Here is the autotrace output:
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE
       1    0   SORT (GROUP BY)
       2    1     VIEW
       3    2       SORT (UNIQUE)
       4    3         TABLE ACCESS (BY INDEX ROWID) OF 'IMS_ETS_CNTRL'
       5    4           NESTED LOOPS
       6    5             NESTED LOOPS
       7    6               TABLE ACCESS (BY INDEX ROWID) OF 'IMS_TRDE'
       8    7                 INDEX (RANGE SCAN) OF 'IMS_TRDE_INDX4' (NON- UNIQUE)
       9    6               TABLE ACCESS (BY INDEX ROWID) OF 'IMS_ALCTN'
      10    9                 INDEX (RANGE SCAN) OF 'IMS_ALCTN_INDX1' (NON  -UNIQUE)
      11    5             INDEX (RANGE SCAN) OF 'IMS_ETS_CNTRL_INDX1' (NON  -UNIQUE)
    Statistics
              0  recursive calls
              0  db block gets
         244608  consistent gets
          58856  physical reads
              0  redo size
            497  bytes sent via SQL*Net to client
            499  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
              1  rows processed
    Here is the TKPROF output:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      4.85     129.72      53863     244608          0           1
    total        4      4.85     129.72      53863     244608          0           1
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 63 
    Rows     Row Source Operation
          1  SORT GROUP BY
      12972   VIEW 
      12972    SORT UNIQUE
      12972     TABLE ACCESS BY INDEX ROWID IMS_ETS_CNTRL
      46236      NESTED LOOPS 
      19134       NESTED LOOPS 
      19744        TABLE ACCESS BY INDEX ROWID IMS_TRDE
    176922         INDEX RANGE SCAN IMS_TRDE_INDX4 (object id 34099)
      19134        TABLE ACCESS BY INDEX ROWID IMS_ALCTN
      19134         INDEX RANGE SCAN IMS_ALCTN_INDX1 (object id 34094)
      27101       INDEX RANGE SCAN IMS_ETS_CNTRL_INDX1 (object id 34101)
    ********************************************************************************
    Explain plan output:
    PLAN_TABLE_OUTPUT
    | Id  | Operation                         |  Name                | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT                  |                      |       |       |       |
    |   1 |  SORT GROUP BY                    |                      |       |       |       |
    |   2 |   VIEW                            |                      |       |       |       |
    |   3 |    SORT UNIQUE                    |                      |       |       |       |
    |*  4 |     TABLE ACCESS BY INDEX ROWID   | IMS_ETS_CNTRL        |       |       |       |
    |   5 |      NESTED LOOPS                 |                      |       |       |       |
    |   6 |       NESTED LOOPS                |                      |       |       |       |
    |*  7 |        TABLE ACCESS BY INDEX ROWID| IMS_TRDE             |       |       |       |
    |*  8 |         INDEX RANGE SCAN          | IMS_TRDE_INDX4       |       |       |       |
    |   9 |        TABLE ACCESS BY INDEX ROWID| IMS_ALCTN            |       |       |       |
    |* 10 |         INDEX RANGE SCAN          | IMS_ALCTN_INDX1      |       |       |       |
    |* 11 |       INDEX RANGE SCAN            | IMS_ETS_CNTRL_INDX1  |       |       |       |
    Predicate Information (identified by operation id):
       4 - filter("IMS_ETS_CNTRL"."OUTBND_DEST"='ETD')
       7 - filter("IMS_TRDE"."GRS_TRX_TYPE"='INJECTION' OR "IMS_TRDE"."GRS_TRX_TYPE"='WITHD
                  RAWAL' OR "IMS_TRDE"."GRS_TRX_TYPE"='PAYMENT' OR "IMS_TRDE"."SSC_INVST_TY
                  PE"='FC' AND "IMS_TRDE"."SERVICE_TYPE"='FS')
       8 - access("IMS_TRDE"."IMS_TRDE_RCPT_DTTM">=TO_DATE('2009-10-29 00:00:00', 'yyyy-mm-
                  dd hh24:mi:ss') AND "IMS_TRDE"."IMS_TRDE_RCPT_DTTM"<=TO_DATE('2009-11-05
                  23:59:00', 'yyyy-mm-dd hh24:mi:ss')
      10 - access("IMS_ALCTN"."IMS_TRDE_OID"="IMS_TRDE"."IMS_TRDE_OID")
      11 - access("IMS_ETS_CNTRL"."IMS_CNTRT_OID"="IMS_ALCTN"."IMS_ALCTN_OID")
    Note: rule based optimization
    Could you please help tune this sql?
    How can I reduce the elapsed time? How can I reduce the number of query reads?
    If there is any other info that you need, please let me know!
    thank you very much!

    What exactly is this logic meant to do?
    AND    (ims_trde.grs_trx_type IN ('INJECTION', 'WITHDRAWAL', 'PAYMENT')
            OR ims_trde.ssc_invst_type = 'FC'
            AND ims_trde.service_type = 'FS')
    Is that really:
    AND    (ims_trde.grs_trx_type IN ('INJECTION', 'WITHDRAWAL', 'PAYMENT')
            OR ims_trde.ssc_invst_type = 'FC')
    AND    ims_trde.service_type = 'FS'
    or is it maybe:
    AND   (ims_trde.grs_trx_type IN ('INJECTION', 'WITHDRAWAL', 'PAYMENT')
           OR (ims_trde.ssc_invst_type = 'FC'
               AND ims_trde.service_type = 'FS'))?

  • Execution time for an insert/update

    Hello!
    We are using EJB 3.0 entities and JPA, configured to run on WAS and DB2. We are also using Container Managed Persistence.
    We have a transactional method, let's name it addA(), which when executed ultimately inserts data into 11 DB2 tables.
    In some of the 11 tables there could be multiple rows inserted, on average about 2 inserts.
    We are using the EntityManager.persist method to handle each entity.
    The method completes in about 11 seconds when the resources on the server (CPU,memory) are in a good state (so not overloaded).
    Is this a reasonable/decent time for the operation we are trying to do?
    If not, what would be a reasonable running time for such an operation?
    What do we need to do in order to improve the performance and decrease the execution time, other than switching to BMP and coding manual SQL inserts?

    user2617486 wrote:
    "Do you have any idea how we can better localize/isolate the problem at the DB level? Can we programmatically insert log statements to see how long the processing takes on WAS and how long the actual SQL statements take to execute once they hit the DB2 database?"
    You need help from a DBA; you can't reason this problem away. You need cold hard facts from whatever tooling the database provides. Of course you could try adding log statements to see how long each database operation is taking on the Java side of things, but that only proves that it is slow, not WHY it is slow.
    "The network latency cannot be considered in this case since we run the test application on the same WAS where the application resides, so there is no networking involved."
    And the database runs on that machine as well? This is new information you are pulling out of your hat, by the way; now all of a sudden there are two applications? And with the limited information you give, I am to assume you are having performance problems with the test application and not with your "main application"? Otherwise I see no point in you making this argument.

  • Reducing query execution time

    How can we reduce query execution time? Which methods do we have to follow for optimization?

    "Which methods do we have to follow for optimization?"
    First, read this informative thread:
    HOW TO: Post a SQL statement tuning request - template posting
    and post the relevant details we need.
    Execution plans and/or TRACE/TKPROF output can help you identifying performance bottlenecks.
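    As a starting point, here is a quick sketch of how to get an execution plan and a trace file for TKPROF (the query is made up; DBMS_XPLAN.DISPLAY assumes Oracle 9iR2 or later):
    EXPLAIN PLAN FOR
    SELECT * FROM emp WHERE deptno = 10;      -- substitute your own statement
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- show the plan that was just explained
    ALTER SESSION SET sql_trace = TRUE;       -- trace the real execution ...
    -- run the statement here
    ALTER SESSION SET sql_trace = FALSE;      -- ... then run TKPROF on the resulting trace file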

  • How to get the execution time of a Discoverer Report from qpp_stats table

    Hello
    by reading some threads on this forum I became aware of the information stored in the eul5_qpp_stats table. I would like to know if I can use this table to determine the execution time of a worksheet. In particular it looks like the field qs_act_elap_time stores the actual elapsed time of each execution of a specific worksheet: am I correct? If so, how is this value computed? What's the unit of measure? I assume it's seconds, but then I've seen that sometimes I get numbers with decimals.
    For example I ran a worksheet and it took more than an hour to run, and the value I get in the qs_act_elap_time column is 2218.313.
    Assuming the unit of measure is seconds, that would mean approx 37 minutes. Is that the actual execution time of the query on the database? I guess the actual execution time on my Discoverer client was longer, since some calculations were performed at the client level and not on the database.
    I would really appreciate if you could shed some light on this topic.
    Thanks and regards
    Giovanni

    Thanks a lot Rod for your prompt reply.
    I agree with you about the accuracy of the data. Are you aware of any other way to track the execution times of Discoverer reports?
    Thanks
    Giovanni

  • HT5731 How do I change my iTunes settings to lower the resolution of movies I download to reduce the download time?

    How do I change my iTunes settings to lower the resolution of movies I download to reduce the download time?

    If you're buying HD movies, you can change the setting in the Store preferences in iTunes to prefer 720p. Or you can just buy SD versions.
    Regards.

  • How to find out the execution time of a sql inside a function

    Hi All,
    I am writing one function. There is only one IN parameter. In that parameter, I will pass one SQL SELECT statement. And I want the function to return the exact execution time of that SQL statement.
    CREATE OR REPLACE FUNCTION function_name (p_sql IN VARCHAR2)
    RETURN NUMBER
    IS
    exec_time NUMBER;
    BEGIN
    --Calculate the execution time for the incoming sql statement.
    RETURN exec_time;
    END function_name;
    /

    Please note that wrapping the query in a "SELECT COUNT(*) FROM (<query>)" doesn't necessarily reflect the execution time of the stand-alone query, because the optimizer is smart and might choose a completely different execution plan for that query.
    A simple test case shows the potential difference of work performed by the database:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Session altered.
    SQL>
    SQL> drop table count_test purge;
    Table dropped.
    Elapsed: 00:00:00.17
    SQL>
    SQL> create table count_test as select * from all_objects;
    Table created.
    Elapsed: 00:00:02.56
    SQL>
    SQL> alter table count_test add constraint pk_count_test primary key (object_id)
    Table altered.
    Elapsed: 00:00:00.04
    SQL>
    SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'COUNT_TEST')
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.29
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from count_test;
    5326 rows selected.
    Elapsed: 00:00:00.10
    Execution Plan
    Plan hash value: 3690877688
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            |  5326 |   431K|    23   (5)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| COUNT_TEST |  5326 |   431K|    23   (5)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
            419  consistent gets
              0  physical reads
              0  redo size
         242637  bytes sent via SQL*Net to client
           4285  bytes received via SQL*Net from client
            357  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           5326  rows processed
    SQL>
    SQL> select count(*) from (select * from count_test);
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 572193338
    | Id  | Operation             | Name          | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |               |     1 |     5   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE       |               |     1 |            |          |
    |   2 |   INDEX FAST FULL SCAN| PK_COUNT_TEST |  5326 |     5   (0)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
             16  consistent gets
              0  physical reads
              0  redo size
            412  bytes sent via SQL*Net to client
            380  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    As you can see, the number of blocks processed (consistent gets) is quite different. You need to actually fetch all records, e.g. using a PL/SQL block on the server, to find out how long it takes to process the query, but that's not that easy if you want to have an arbitrary query string as input.
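    One way to force the full fetch for an arbitrary query string is DBMS_SQL. The following is only a rough sketch (the function name is made up, it assumes the select list contains scalar columns that can be fetched as VARCHAR2, and it uses DBMS_UTILITY.GET_TIME, which has centisecond resolution):
    CREATE OR REPLACE FUNCTION get_exec_time (p_sql IN VARCHAR2)
    RETURN NUMBER
    IS
      l_cur    INTEGER := DBMS_SQL.OPEN_CURSOR;
      l_ncols  INTEGER;
      l_desc   DBMS_SQL.DESC_TAB;
      l_val    VARCHAR2(4000);
      l_dummy  INTEGER;
      l_start  NUMBER;
    BEGIN
      DBMS_SQL.PARSE(l_cur, p_sql, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS(l_cur, l_ncols, l_desc);
      FOR i IN 1 .. l_ncols LOOP
        DBMS_SQL.DEFINE_COLUMN(l_cur, i, l_val, 4000);  -- fetch every column as VARCHAR2
      END LOOP;
      l_start := DBMS_UTILITY.GET_TIME;
      l_dummy := DBMS_SQL.EXECUTE(l_cur);
      WHILE DBMS_SQL.FETCH_ROWS(l_cur) > 0 LOOP
        NULL;  -- discard the rows, we only want the elapsed time
      END LOOP;
      DBMS_SQL.CLOSE_CURSOR(l_cur);
      RETURN (DBMS_UTILITY.GET_TIME - l_start) / 100;  -- seconds
    EXCEPTION
      WHEN OTHERS THEN
        IF DBMS_SQL.IS_OPEN(l_cur) THEN
          DBMS_SQL.CLOSE_CURSOR(l_cur);
        END IF;
        RAISE;
    END get_exec_time;
    /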
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to extend the execution time of an ABAP Program using the Process chain

    Hello Sapians,
    Our environment has 600 seconds = 10 minutes as the maximum execution time.
    My ABAP program takes more than these 600 seconds to show the result. I found this out when I tried to execute it in debug mode; there it shows the result.
    If I execute it in the background it also shows the results successfully.
    The only issue is that when I execute this report in the foreground it takes ages and ends in a time-out error.
    It has been decided that we can extend the execution time only for this report, and that the time will be reset back to 10 minutes once the report has executed successfully or failed in between for any other reason.
    And we can achieve this by using process chains.
    Can anybody please help me in this regard?
    Thanks,

    Hi,
    Besides process chains, there is another way out for this:
    Reset the time counter of the dialog process so that the time-out does not happen. Use this function module within your program at appropriate locations to reset the time counter:
    "CALL FUNCTION 'TH_REDISPATCH'."
    Thanks
    Saurabh

  • Why does the execution time increase with a while loop, but not with "Run continuously"?

    Hi all,
    I have a serious timing problem that I don't know how to solve because I don't know exactly where it comes from.
    I command two RF switches via a DAQ card (NI USB-6008). Only one position can be selected on each switch at a time. Basically, the VI created for this functionality (by a co-worker) resets all the DAQ outputs and then activates the desired ones. It has three inputs: two simple string controls, and an array of clusters which contains the list of all the outputs and some information on what is connected (specific to my application).
    I use this VI in a complex application, and I get some problems with the execution time, which increased each time I called the VI, so I made a test VI (TimeTesting.vi) to figure out where the problem came from. In this special VI I record the execution time in a CSV file to analyse later with Excel.
    After several tests, I found that if I run this test VI with the while loop, the execution time increases at each cycle, but if I remove the while loop and use the "Run continuously" functionality, the execution time remains the same. In my top-level application I have while loops and events, so the execution time increases there too.
    Could someone explain to me why the execution time increases, and how I can avoid that? I attached my test VI and the necessary subVIs, as well as a picture of a graph which shows the execution time with a while loop and with "Run continuously".
    Thanks a lot for your help!
    Attachments:
    TimeTesting.zip (70 KB)
    Graph.PNG (20 KB)

    jul7290 wrote:
    Thank you very much for your help! I added the "Clear task" vi and now it works properly.
    If you are still using Run Continuously, you should stop. That is meant strictly for debugging. In fact, I can't even tell you the last time I ever used it. If you want your code to repeat, you should use loops and control the behavior of the code.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
