Execution time for an insert/update

Hello!
We are using EJB 3.0 entities and JPA, configured to run on WAS and DB2. We are also using container-managed persistence.
We have a transactional method, let's name it addA(), which, when executed, ultimately inserts data into 11 DB2 tables.
In some of the 11 tables there could be multiple rows inserted, on average about 2 inserts per table.
We are using the EntityManager.persist method to handle each entity.
The method completes in about 11 seconds when the resources on the server (CPU, memory) are in a good state (i.e. not overloaded).
Is this a reasonable/decent time for the operation we are trying to do?
If not, what would be a reasonable running time for such an operation?
What do we need to do in order to improve the performance and decrease the execution time, other than switching to BMP and coding manual SQL inserts?

user2617486 wrote:
Do you have any idea how we can better localize/isolate the problem at the DB level?
Can we programmatically insert log statements to see how long the processing takes on the WAS and how long the actual SQL statements take to execute once they hit the DB2 database?
You need help from a DBA; you can't reason this problem away. You need cold hard facts from whatever tooling the database provides. Of course you could try adding log statements to see how long each database operation is taking on the Java side of things, but that only proves that it is slow, not WHY it is slow.
user2617486 wrote:
The network latency can not be considered in this case since we run the test application on the same WAS where the application resides, so there is no networking involved.
And the database runs on that machine as well? This is new information you are pulling out of your hat, by the way; now all of a sudden there are two applications? And with the limited information you give, I am to assume you are having performance problems from the test application and not from your "main application"? Otherwise I see no point in you making this argument.
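
Building on the suggestion above, the Java-side logging can be as simple as timestamps around the persist loop and the flush. The following is only a minimal sketch, not the actual application code: it assumes a container-managed EntityManager injected as em, takes the already-built entities as an untyped list, and separates the time spent queueing the persists from the time spent pushing the SQL INSERTs to DB2.

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class AddATimingBean {

    @PersistenceContext
    private EntityManager em;  // container-managed persistence context

    // The entities are whatever types map to the 11 tables; the list is an assumption.
    public void addA(List<?> entities) {
        long persistStart = System.nanoTime();
        for (Object e : entities) {
            em.persist(e);     // usually just queues the insert in the persistence context
        }
        long persistEnd = System.nanoTime();

        em.flush();            // forces the SQL INSERTs to be sent to DB2 now
        long flushEnd = System.nanoTime();

        System.out.printf("persist() calls: %d ms, flush()/SQL: %d ms%n",
                (persistEnd - persistStart) / 1_000_000,
                (flushEnd - persistEnd) / 1_000_000);
    }
}

If most of the time shows up in the flush, the next places to look are JDBC statement batching in the persistence provider and the DB2 side itself (indexes, triggers, lock waits), which is exactly where the DBA tooling mentioned above comes in.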

Similar Messages

  • Does the execution time of the insert command depend upon the number of indexes

    hi,
    Does the execution time of the insert, update and delete commands depend upon the number of indexes created for a table?
    Edited by: [email protected] on Mar 4, 2009 3:02 AM

    Sure.
    An index is a structure which contains entries pointing to the actual data in the table.
    When you insert a record into a table, the data which should also be indexed is inserted in the index structure. This index data needs to be in a specific place, not just anywhere (as opposed to e.g. a heap table).
    So this might lead to an update and insert in the index structure.
    This is just to give you an idea. More on the subject in Tom Kyte's Expert Oracle Database Architecture and of course Oracle's documentation.
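
    If you want to see the effect rather than reason about it, you can time the same batch of inserts before and after adding an index. The following is a minimal JDBC sketch, not taken from this thread: the connection string, table T_DEMO and index IX_DEMO are all assumptions you would replace with your own.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class IndexInsertTiming {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details - adjust for your environment.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                con.setAutoCommit(false);
                long start = System.nanoTime();
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO t_demo (id, payload) VALUES (?, ?)")) {
                    for (int i = 0; i < 10_000; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();
                    }
                    ps.executeBatch();      // one round trip for the whole batch
                }
                con.commit();
                System.out.println("Elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
            }
        }
    }

    Run it once against the plain table and once after CREATE INDEX ix_demo ON t_demo(payload); the difference in elapsed time is the index maintenance overhead described above.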

  • Query Execution Time for a Query causing ORA-1555

    dear Gurus
    I have an ORA-01555 error. Earlier I used the query duration mentioned in the alert log and increased the undo retention, as I did not find the UNDOBLKS column of v$undostat high at the time of occurrence of the ORA-01555.
    But a new ORA-01555 is coming whose query duration exceeds the undo retention time.
    My question -
    1. Is it possible to accurately find the query duration other than from the alert log file?

    Abhishek, as you are using an undo tablespace and have already increased the time that undo data is retained via undo_retention, you might want to consider the following ideas, which were useful with the 1555 error under manual rbs segment management.
    1- Tune the query. The faster a query runs the less likely a 1555 will occur.
    2- Look at the processing. If a process was reading and updating the same table while committing frequently, then under manual rbs management the process would basically create its own 1555 error, rather than just being the victim of another process changing data and the rbs data being overlaid while the long running query was still running. With undo management the process could be generating more data than can be held for the undo_retention period, but because it is committed, Oracle has been told it doesn't really have to keep the data for use in rolling back a current transaction, so it gets discarded to make room for new changes.
    If you find item 2 is true then separating the select from the update will likely eliminate the 1555. You do this by building a driving table that has the keys of the rows to be updated or deleted. Then you use the driver to control accessing the target table (see the sketch after this reply).
    3- If the cause of the 1555 is or may be delayed block cleanout then select * from the target prior to running the long running query.
    Realistically you might need to increase the size of the undo tablespace to hold all the change data, and the value of the undo_retention parameter to be longer than the job run time. Which brings us back to option 1: tune every query in the process so that the job run time is reduced to optimal.
    HTH -- Mark D Powell --
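
    A minimal sketch of the driving-table idea in item 2, written here as JDBC with purely hypothetical names (TARGET_TAB, DRIVER_TAB, ID, STATUS) and connection details; the point is only that the keys are captured and committed first, and the updates then run against the target by key, so the updating session no longer reads the same table it is changing.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DrivingTableUpdate {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                con.setAutoCommit(false);

                // Step 1: capture the keys of the rows to change into a driving table.
                try (Statement st = con.createStatement()) {
                    st.execute("CREATE TABLE driver_tab AS "
                             + "SELECT id FROM target_tab WHERE status = 'OLD'");
                }
                con.commit();   // the long read is finished before any updates start

                // Step 2: update the target, using the driver to control access.
                try (PreparedStatement upd = con.prepareStatement(
                         "UPDATE target_tab SET status = 'NEW' WHERE id = ?");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT id FROM driver_tab")) {
                    while (rs.next()) {
                        upd.setLong(1, rs.getLong(1));
                        upd.executeUpdate();
                    }
                }
                con.commit();
            }
        }
    }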
    dear mark
    Thanks for the excellent advice. I found that the error is coming because of frequent commits, which is item 2 as you rightly mentioned.
    I think I need to keep a watch on the queries running. I was just trying to find the execution time for the queries, in case there is any way to find the query duration without running a trace.
    regards
    abhishek

  • New iPhone. Someone else put in their username and password. Time for me to update the iPhone. Downloaded iTunes onto the PC. Plugged in the iPhone and waited for the prompt to show the phone. The PC thinks it is the camera. How do I get iTunes/the PC to recognize the iPhone?

    New iPhone. (Someone else put in their username and password, so I had lots of apps that I had to eliminate, but I do NOT wish to compromise the contact list.) Time for me to update the iPhone. Downloaded iTunes onto the PC. Plugged in the iPhone and waited for the prompt to show the phone. The PC thinks it is the camera. How do I get iTunes/the PC to recognize the iPhone? Then, how do I back up contacts and proceed before updating the iPhone?

    This forum is for questions from those managing sites on iTunes U, Apple's service for colleges and universities to post educational material in the iTunes Store. You'll be most likely to get help with this issue if you ask in the general iTunes or iPhone forums.
    Regards.

  • Execution time for Call Library Function Node

    I am experimenting with the Call Library Function Node block in LabVIEW and am curious if it should be running faster than what I'm seeing. For testing purposes, I have compiled and transferred to my RT target the .out file from the KB article http://digital.ni.com/public.nsf/allkb/81D1172E3C28A5E4862575CC0076A230 (I'm using the VxWorks 6.1 version). The function in the .out file just multiplies two inputs together, adds a constant, and returns the result. I have put this inside a 1 kHz timed loop with a commanded period of 1 ms, and via the Ticks (ms) block and shift registers I calculate the amount of time per loop execution. This process is apparently taking 5 ms per cycle, and to me that seems slow. Is that roughly the correct execution time for this kind of setup? I will attach my test .vi file.
    What I'm using:
    Windows 7
    LabVIEW 2009 SP1
    NI-cRIO 9024 with NI-RIO 3.4.0
    Attachments:
    test DLL.vi (31 KB)

    First off, the way you are doing timing isn't necessarily accurate because you don't know when the tick count VI is being called. For example, if it gets called on one iteration after your call library node executes, and on the next iteration it gets called before the CLFN executes, the subtraction doesn't include the call of the CLFN, so you aren't seeing the true time it takes for the DLL to be called.
    Where it says "error" in the top left-hand corner of your loop, left-click and choose Previous Iteration Timing. Also, do you have the ability to choose a 1 MHz clock? Are you sure it's actually being run on the RT target and not on your PC? Running it on the PC would definitely make it difficult to execute at a 1 kHz rate.
    CLA, LabVIEW Versions 2010-2013
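
    The placement issue is not specific to LabVIEW. As a purely illustrative sketch in plain Java (none of this is from the thread), reading the clock at an arbitrary point in the loop measures the whole loop body, while bracketing the call of interest directly gives the per-call time:

    public class TimingPlacement {
        public static void main(String[] args) {
            for (int i = 0; i < 5; i++) {
                // Taking one clock reading per iteration and differencing consecutive
                // readings measures the entire loop body, including printing and any
                // other work. To time only the call of interest, bracket it directly:
                long before = System.nanoTime();
                doWork();                         // stands in for the CLFN / DLL call
                long after = System.nanoTime();
                System.out.println("call took " + (after - before) + " ns");
            }
        }

        private static void doWork() {
            double x = 1.0;
            for (int i = 0; i < 1_000; i++) { x = x * 1.000001 + 3.0; }
            if (x < 0) System.out.println(x);     // keeps the work from being optimized away
        }
    }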

  • Execution Time for T.Code

    Hi Experts,
    I want to know the exact execution time for a t.code. I check it in ST03 or ST03N but I can't get proper data. In ST03 I get the average and total response time, but I want the exact execution or response time.
    Waiting for your inputs.
    Regards,
    Nisit

    From SAP
    For old versions:
    In the ST03 transaction
    click on the "Performance Database" tab
    Then double-click on Total
    It will open the dialog box CHOOSE TIME PERIOD; select the time period and then click on the required date
    Then click on Transaction Profile; it will give the list of transactions executed and their execution times
    In ST03N, you will find "TRANSACTION PROFILE" under Analysis Views; double-clicking on it will give the transaction codes executed for the present day.
    If you want the month's data then go to Expert mode in ST03N; you will find "TRANSACTION PROFILE" under Analysis Views.
    Regards,
    Beena

  • Execution time for web reports

    Hello every one,
    How do we calculate the execution time for web reports? For query execution we go through RSRT: give the query name, press the Execute + Debug button, select the statistics data and do-not-use-cache options, and press Enter; after getting the output, press the Back button and we get the duration of the query.
    But my question is: can we calculate the execution time for a web report? If so, can you please guide me?
    And can you also tell me, if there is any RRI for one report, how to calculate the execution time for those queries?
    Ex: query ABC has XYZ as its drill-down report; I need to calculate the execution time for the XYZ report via the ABC report.
    Thanks in advance,
    Best Regards.
    NP.

    Hi,
    For reports executed in java web you can add the parameter &PROFILING=X
    to the URL in order to record the execution time. Please have a look at SAP note 1048691 for further information.
    Best regards,
    Janine

  • How to find the Execution Time for Java Code?

    * Hi everyone, I want to calculate the execution time for my process in Java.
    * The following was the output of my code:
    O/P:-
    This run took 0 Hours ;1.31 Minutes ;78.36 Seconds
    *** In the above output, the hours, minutes and seconds should be reported exactly for my process,
    but in my code the seconds still include the minutes (they should not)...
    * Here is my code:
        static long start_time;
        public static void startTime() {
            start_time = System.currentTimeMillis();
        }
        public static void endTime() {
            DecimalFormat df = new DecimalFormat("##.##");
            long end_time = System.currentTimeMillis();
            float t = end_time - start_time;
            float sec = t / 1000;
            float min = 0, hr = 0;
            if (sec > 60) {
                min = sec / 60;
            }
            if (min > 60) {
                hr = min / 60;
            }
            System.out.println("This run took " + df.format(hr) + " Hours ;" + df.format(min) + " Minutes ;" + df.format(sec) + " Seconds");
        }
    * How to calculate the exact timing for my process?
    * Thanks

    * Hi flounder, will the following code work perfectly?
        public static void endTime() {
            DecimalFormat df = new DecimalFormat("##.##");
            long end_time = System.currentTimeMillis();
            float t = end_time - start_time;
            float sec = t / 1000;
            float min = 0, hr = 0;
            while (sec >= 60) {
                min++;
                sec = sec - 60;
                if (min >= 60) {
                    min = 0; // or min = min - 60;
                    hr++;
                }
            }
            System.out.println("This run took " + df.format(hr) + " Hours ;" + df.format(min) + " Minutes ;" + df.format(sec) + " Seconds");
        }
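
    For what it's worth, a simpler way to get whole hours, minutes and seconds is integer arithmetic on the elapsed milliseconds. The sketch below is just an illustration of that idea, not code from the thread:

    import java.util.concurrent.TimeUnit;

    public class ElapsedTime {
        public static void main(String[] args) throws InterruptedException {
            long start = System.currentTimeMillis();
            Thread.sleep(1500);                       // stand-in for the real work
            long elapsedMs = System.currentTimeMillis() - start;

            long hours   = TimeUnit.MILLISECONDS.toHours(elapsedMs);
            long minutes = TimeUnit.MILLISECONDS.toMinutes(elapsedMs) % 60;
            long seconds = TimeUnit.MILLISECONDS.toSeconds(elapsedMs) % 60;
            long millis  = elapsedMs % 1000;

            System.out.printf("This run took %d Hours ; %d Minutes ; %d.%03d Seconds%n",
                    hours, minutes, seconds, millis);
        }
    }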

  • Can I reduce the execution time for a step in a TestStand ?

    Hi,
    I calculated a single-step execution time for TestStand Ver 2.0. It comes to around 20 milliseconds per step. Can I reduce this execution time?
    Are there any settings available for configuring execution time parameters, other than result logging and exception handling, to reduce the execution time?

    It's difficult to tell what time you are reporting for your step. Clearly we don't have control of the time it takes your code to execute. However, we are constantly working on reducing the overhead of calling the code. In addition, you don't mention the type of step you are calling. One way to have a common reference is to use the example \Examples\Benchmarks\Benchmarks.seq. Below I have posted the results of running this sequence with both tracing and result collection enabled and then disabled. I have a 700 MHz, 128 MB RAM, Dell PIII laptop. In this example there is no code within the code modules. You notice that calling a DLL has the least overhead, with a minimum of 7.459 ms with tracing and results enabled and 0.092 ms with tracing and results disabled. Although not included below, if I enable results but disable tracing I get a minimum time of 0.201 ms, a 100x improvement on your time.
    With Results and Tracing enabled.
    7.578 milliseconds per step for CVI Standard Prototype - Object File
    7.579 milliseconds per step for CVI Standard Prototype - DLL
    7.459 milliseconds per step for DLL Flexible Prototype
    8.589 milliseconds per step for DLL Flexible Prototype Numeric Limit
    9.563 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
    10.015 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
    7.868 milliseconds per step for ActiveX Automation
    8.892 milliseconds per step for LabVIEW Standard Prototype
    With tracing and results disabled.
    0.180 milliseconds per step for CVI Standard Prototype - Object File
    0.182 milliseconds per step for CVI Standard Prototype - DLL
    0.092 milliseconds per step for DLL Flexible Prototype
    0.178 milliseconds per step for DLL Flexible Prototype Numeric Limit
    0.277 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition
    0.400 milliseconds per step for DLL Flexible Prototype Numeric Limit with Precondition and 4 Parameters
    0.270 milliseconds per step for ActiveX Automation
    1.235 milliseconds per step for LabVIEW Standard Prototype

  • How can I calculate execution time for methods?

    I'm making a project in which I want to calculate the execution time for a
    method in "milliseconds" or "microseconds". You see, I have a sort algorithm and I want to calculate the execution time of this algorithm. How can I do that?
    Thanks...
    Thanks...

    Just remembered.
    The answer you get isn't trustworthy below a hundred millis, so you may need to sort a hundred or a thousand times to get a reasonable elapsed time. You also need to run the test five or ten times and take an average. In Windows you should fire up the Task Manager and be sure that your other CPU usage is as near to zero as you can get.
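
    A minimal sketch of that approach, assuming a hypothetical mySort method standing in for your algorithm: warm up first so the JIT has compiled the code, then repeat the sort enough times to get a measurable elapsed time and report the average.

    import java.util.Arrays;
    import java.util.Random;

    public class SortBenchmark {
        public static void main(String[] args) {
            int[] data = new Random(42).ints(50_000).toArray();
            int repetitions = 1_000;

            // Warm-up so the JIT compiles the code before we measure.
            for (int i = 0; i < 100; i++) {
                mySort(Arrays.copyOf(data, data.length));
            }

            long start = System.nanoTime();
            for (int i = 0; i < repetitions; i++) {
                mySort(Arrays.copyOf(data, data.length));   // sort a fresh copy each time
            }
            long elapsedNs = System.nanoTime() - start;

            System.out.printf("average: %.3f ms per sort%n",
                    elapsedNs / 1_000_000.0 / repetitions);
        }

        // Hypothetical stand-in for the algorithm being measured.
        private static void mySort(int[] a) {
            Arrays.sort(a);
        }
    }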

  • Estimate execution time for CTAS

    Hi,
    I have been searching for a long time for a way to estimate the execution time for CTAS commands. I am a DBA. Our users run CTAS commands to load millions of rows. The commands fetch data from 4-5 very big tables, each with millions of records, process them using a where clause and group by clause, and finally create the table. All these things are coded in the CTAS command. These CTAS commands sometimes take a long time, like 5 or 8 hours. Users frequently ask me how long it's going to take. I use both OEM and TOAD, but I couldn't find an estimated time in these tools. I feel that there must be some way, but I don't know the method.
    Can any body please help me in this regard?
    Thanks & Regards
    Ananda Basak

    It depends on a number of factors, chief among them how accurate your estimate needs to be, but also including things like what version of Oracle you're using, how accurate your database statistics are, etc.
    One option is to look at the TIME column in the plan. For example, if I wanted to do a CTAS to create a copy of the EMP table, the optimizer expects that to take on the order of a second. Of course, the optimizer's estimates are only estimates and are only as accurate as the database statistics that are in place. If the optimizer generates a bad plan, it's likely because the optimizer expects some operation to take much more or much less time than it does in reality, in which case the optimizer's runtime estimate is likely to be way off.
    SQL> explain plan for create table emp_copy as select * from emp;
    Explained.
    SQL> ed
    Wrote file afiedt.buf
      1  select *
      2*   from table( dbms_xplan.display() )
    SQL> /
    PLAN_TABLE_OUTPUT
    Plan hash value: 2748781111
    -----------------------------------------------------------------------------------
    | Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
    -----------------------------------------------------------------------------------
    |   0 | CREATE TABLE STATEMENT |          |    14 |   546 |     4   (0)| 00:00:01 |
    |   1 |  LOAD AS SELECT        | EMP_COPY |       |       |            |          |
    |   2 |   TABLE ACCESS FULL    | EMP      |    14 |   546 |     3   (0)| 00:00:01 |
    -----------------------------------------------------------------------------------
    Depending on the query plan, you may be able to query the GV$SESSION_LONGOPS view to track the progress of any long-running operations in your session. If your query plan involves a lot of full table scans, sorts that take more than a few seconds, hash joins, etc. then it is likely that you'll be able to chart the progress of the query over time by watching GV$SESSION_LONGOPS change. Of course, if your query is going to need to do many long-running operations, you'll need a human to interpret the data a bit in order to figure out where in the plan Oracle currently is and how far along that means the entire query is.
    SELECT *
      FROM gv$session_longops
     WHERE time_remaining > 0
    If you're using 11g and you have the performance and tuning pack licensed, you could also potentially use the V$SQL_PLAN_MONITOR view.
    Justin
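
    If you want to watch that progress from an application rather than from SQL*Plus, a small JDBC poller over GV$SESSION_LONGOPS is enough. The sketch below is only an illustration; the connection details are assumptions, and the poll interval and count are arbitrary.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class LongOpsMonitor {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "monitoring_user", "secret")) {
                String sql = "SELECT sid, opname, sofar, totalwork, time_remaining "
                           + "FROM gv$session_longops WHERE time_remaining > 0";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < 10; i++) {           // poll ten times, once per minute
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                System.out.printf("sid=%d %s: %d/%d units, ~%d s remaining%n",
                                        rs.getInt("sid"), rs.getString("opname"),
                                        rs.getLong("sofar"), rs.getLong("totalwork"),
                                        rs.getLong("time_remaining"));
                            }
                        }
                        Thread.sleep(60_000);
                    }
                }
            }
        }
    }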

  • Ideal execution time for any program

    Hi,
    Is there any method to determine the ideal execution time for a program?
    Or else, how do I determine that?
    I just want the maximum time that a program can take so that performance is not hampered.
    Thanks,
    Binay.

    Did you ask for the 'ideal execution time' or 'how to measure execution times'?
    The second question was answered in one of your other questions.
    Optimization:
    Do SQL Trace, go to Summary by SQL statement, check 10 Top contributions (time = duration).
    Try to optimize them; note the minimal time per record, and if it is larger than 10,000 microseconds you should check index usage.
    Do SE30, go to hit list, sort by net time, again address 10 Top contributions, try to optimize, check the coding.
    Do optimization and trace again, check again 10 Top contributions ....
    Siegfried

  • How to get execution time for a view inside procedure ?

    Hi,
    I want the execution time for all the views in my database. I tried "execute immediate" but it does not seem to work.
    It is not waiting for the execution of the view to complete before going to the next step.
    If I execute the same statement in SQL*Plus, it displays the correct time.
    Here is my code:
    Begin
    output_file := UTL_FILE.FOpen ('RECORDING',v_FileName, 'W', 32767);
    Open viewcur;
    Loop
    Fetch viewcur into v_view_name;
         Exit when viewcur%notfound;
         SELECT to_char(systimestamp,'DD-MON-YYYY HH24:MI:SS.FF') into v_start_time from dual;
         v_stmt := 'Select * from ' ||v_view_name ;
         Execute Immediate v_stmt;
    SELECT to_char(systimestamp,'DD-MON-YYYY HH24:MI:SS.FF') into v_end_time from dual;
    v_record_str := v_start_time||','||v_view_name||','||v_end_time;
         UTL_FILE.PUT_LINE(output_file, v_record_str);
    End Loop;
    Close viewcur;
    utl_file.fClose(output_file);
    End ;
    Oracle version: 11.1.0.6.0

    Hi,
    Running with a user with dba privileges:
    DECLARE
        CURSOR viewcur IS
            SELECT table_name
            FROM   dictionary d
            WHERE  d.table_name LIKE 'ALL_A%';
        output_file UTL_FILE.file_type;
        v_FileName  VARCHAR2(30) := 'TEST_VIEW_TIME.TXT';
        v_view_name dictionary.table_name%TYPE;
        v_start_time varchar2(30);
        v_end_time varchar2(30);
        v_record_str varchar2(200);
        v_stmt varchar2(200);
    BEGIN
        output_file := UTL_FILE.FOpen('EXT_FILES', v_FileName, 'W', 32767);
        OPEN viewcur;
        LOOP
            FETCH viewcur
                INTO v_view_name;
            EXIT WHEN viewcur%NOTFOUND;
            SELECT TO_CHAR(systimestamp, 'DD-MON-YYYY HH24:MI:SS.FF')
            INTO   v_start_time
            FROM   dual;
            v_stmt := 'Select * from ' || v_view_name;
            EXECUTE IMMEDIATE v_stmt;
            SELECT TO_CHAR(systimestamp, 'DD-MON-YYYY HH24:MI:SS.FF')
            INTO   v_end_time
            FROM   dual;
            v_record_str := v_start_time || ',' || v_view_name || ',' || v_end_time;
            UTL_FILE.PUT_LINE(output_file, v_record_str);
        END LOOP;
        CLOSE viewcur;
        utl_file.fClose(output_file);
    END;
    /
    TEST_VIEW_TIME.TXT:
    02-JUL-2009 11:48:47.953000,ALL_ARGUMENTS,02-JUL-2009 11:48:47.953000
    02-JUL-2009 11:48:47.953000,ALL_ALL_TABLES,02-JUL-2009 11:48:47.953000
    02-JUL-2009 11:48:47.953000,ALL_ASSOCIATIONS,02-JUL-2009 11:48:47.953000
    02-JUL-2009 11:48:47.953000,ALL_AUDIT_POLICIES,02-JUL-2009 11:48:47.999000
    02-JUL-2009 11:48:47.999000,ALL_AUDIT_POLICY_COLUMNS,02-JUL-2009 11:48:48.093000
    02-JUL-2009 11:48:48.093000,ALL_AWS,02-JUL-2009 11:48:48.187000
    02-JUL-2009 11:48:48.187000,ALL_AW_PS,02-JUL-2009 11:48:48.187000
    02-JUL-2009 11:48:48.187000,ALL_APPLY,02-JUL-2009 11:48:48.343000
    02-JUL-2009 11:48:48.343000,ALL_APPLY_PARAMETERS,02-JUL-2009 11:48:48.421000
    02-JUL-2009 11:48:48.421000,ALL_APPLY_KEY_COLUMNS,02-JUL-2009 11:48:48.437000
    02-JUL-2009 11:48:48.437000,ALL_APPLY_CONFLICT_COLUMNS,02-JUL-2009 11:48:48.781000
    02-JUL-2009 11:48:48.781000,ALL_APPLY_TABLE_COLUMNS,02-JUL-2009 11:48:48.828000
    02-JUL-2009 11:48:48.828000,ALL_APPLY_DML_HANDLERS,02-JUL-2009 11:48:48.890000
    02-JUL-2009 11:48:48.890000,ALL_APPLY_PROGRESS,02-JUL-2009 11:48:48.968000
    02-JUL-2009 11:48:48.968000,ALL_APPLY_ERROR,02-JUL-2009 11:48:49.015000
    02-JUL-2009 11:48:49.015000,ALL_APPLY_ENQUEUE,02-JUL-2009 11:48:49.234000
    02-JUL-2009 11:48:49.234000,ALL_APPLY_EXECUTE,02-JUL-2009 11:48:49.281000
    02-JUL-2009 11:48:49.281000,ALL_AW_PROP,02-JUL-2009 11:48:49.531000
    02-JUL-2009 11:48:49.546000,ALL_AW_OBJ,02-JUL-2009 11:48:49.578000
    02-JUL-2009 11:48:49.578000,ALL_AW_PROP_NAME,02-JUL-2009 11:48:49.609000
    02-JUL-2009 11:48:49.609000,ALL_AW_AC,02-JUL-2009 11:48:49.624000
    02-JUL-2009 11:48:49.624000,ALL_AW_AC_10G,02-JUL-2009 11:48:49.640000
    Regards,
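
    One thing to keep in mind: in PL/SQL, EXECUTE IMMEDIATE of a multi-row SELECT without an INTO or BULK COLLECT clause does not fetch the result set, so timings recorded this way mostly reflect parse/execute overhead rather than the cost of materializing the view. If the goal is the time to return all of a view's rows, the rows have to be fetched. Below is a minimal JDBC sketch of that idea, not from the thread; the connection details are assumptions, and it simply counts rows to force the full fetch.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ViewTimer {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details - replace with your own.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement meta = con.createStatement();
                 ResultSet views = meta.executeQuery(
                     "SELECT view_name FROM user_views ORDER BY view_name")) {
                while (views.next()) {
                    String view = views.getString(1);
                    long start = System.nanoTime();
                    long rows = 0;
                    try (Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery("SELECT * FROM " + view)) {
                        while (rs.next()) {
                            rows++;            // fetching every row forces the view to run fully
                        }
                    }
                    System.out.printf("%s: %d rows in %d ms%n",
                            view, rows, (System.nanoTime() - start) / 1_000_000);
                }
            }
        }
    }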

  • What mechanism Oracle 10g use for write (Insert/ Update) and Read (Select)?

    Hi
    What mechanism Oracle 10g use for write (Insert/ Update) and Read (Select)?
    Thank you

    Aren't the answers given in the PL/SQL forum sufficient?
    Well, as the first answer in that forum directed the OP to this forum, you can hardly blame them for the repost.
    There is some high-level stuff in the Concepts Guide. If that is insufficient the OP will need to tell us what more details they need to know (and perhaps why).
    Cheers, APC

  • Queries taking high execution time for zero count

    Hi,
    I have procedures executing as jobs.
    The procedures take a lot of time to execute when the cursor count is zero.
    What might be the reason for this?

    GreenHorn wrote:
    cursor 1 - select a.col1, b.col1, decode(c.col1,1,c.col1,2,c.col2,null) col3 from a,b,c
    where join conditions
    and nvl(c.col3,c.col4) = b.col3
    and c.col5 is null
    cursor 2 - select a.col1, b.col1, decode(c.col1,1,c.col1,2,c.col2,null) col3 from a,b,c
    where join conditions
    and a.timestamp > sysdate-1
    and nvl(c.col3,c.col4) = b.col3
    and c.col5 is not null
    cursor 2 first updates the values of col5 to null
    cursor 1 recalculates the value of col5 and updates it.
    c is a partitioned table and the partition code is also present in the where condition.
    One question: Since you say that the cursor is "updating", but the cursor is a query, does this mean that you're performing row-by-row processing in a loop?
    If yes, you might be better off with doing this in one or two plain SQL statements, which is probably much faster.
    Another question: You say that after taking the described measures the performance was significantly better but became worse again after a couple of days, is this right?
    Can you provide more details, what "good" and "bad" performance means, e.g. in terms of execution time?
    You might want to check if the execution plans change between the "good" performance and the "bad" performance.
    If your table continuously gets data deleted and for some reason the deleted rows are not re-used, e.g. by using direct-path inserts to add new data, then your segment might become larger and larger and you would need to re-organize the table if you regularly run full table scans against it.
    The execution plan posted is not really helpful. Try to use DBMS_XPLAN.DISPLAY to get a proper output including the "Predicate Information" section below the plan and specify to which of the two statements the plan corresponds.
    Use the {noformat}{noformat} tags to format the plan output properly here in mono-space fonts.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
    Edited by: Randolf Geist on Dec 12, 2008 9:57 AM
    Note regarding execution plan added
