Different execution time

Hello, when I run the code below more than once I get a different execution time ("Total time taken") each run.
For example, the first run prints Total time taken 47,
the second run prints Total time taken 16,
and the third run prints Total time taken 78. Why does it vary?
Please help me understand this.
long l = System.currentTimeMillis();
System.out.println(l);
for (int i = 0; i < 88; i++) {
    System.out.print("Hello ");
}
System.out.println("\n" + " Total time taken " + (System.currentTimeMillis() - l));

It's worth noting that the application is I/O bound (due to printing to a device/console),
so you are really testing the performance of your IDE/console rather than the code itself.
public static void main(String... args) throws IOException {
    int count = 100 * 1000;
    long start = 0;
    // the first count/10 iterations are a warm-up; timing starts at i == 0
    for (int i = -count / 10; i < count; i++) {
        if (i == 0)
            start = System.nanoTime();
        System.out.print("Hello ");
    }
    long time = System.nanoTime() - start;
    System.out.printf("%nTotal time taken %,5d micro-seconds per call%n", time / count / 1000);
}

Similar Messages

  • Different execution times for background jobs - why?

    We have a few jobs scheduled, and each time they run we see different execution times. Sometimes they increase and sometimes they decrease steeply. What could be the reasons?
    Note:
    1. We have the same load of jobs on the system in all the cases.
    2. We haven't changed any settings at system level.
    3. Data is more or less at the same range in all the executions.
    4. We mainly run these jobs
    Thanks,
    Kiran
    Edited by: kiran dasari on Mar 29, 2010 7:12 PM

    Thank you Sandra.
    We have no RFC calls or other instances. Ours is a very simple system. We have two monster jobs: the first one for HR data and the second an extract program to update a Z table for BW loads.
    Our Basis and admin teams confirmed that there are no network issues.
    Note: We are executing these jobs over the weekend nights.
    Thanks,
    Kiran

  • Why does the same query take different execution times in SQL Server 2008?

    Hi!
    With the query below in SQL Server 2008 R2, when I change Book_ID to another value like '99000349' it takes a very long time to execute, even though both result sets have the same number of records!?
    select Card_Serial,Asset_ID, Field_Name,Field_Value,Asset_Number,Field_ID,Book_ID from dbo.vw_InspectionReport where Book_ID='99000347'
    I have tested it again and again - sometimes running the quickest one first, sometimes the longest one first, even restarting Windows - but for some specific Book_ID values (although with the same number of result-set rows) it runs many times slower than for the rest of the Book_IDs.
    Also, the state shown for the result set is different for these different Book_IDs:
    for the fast ones it looks like the picture below;
    for the slow ones it looks like the picture below;
    if you look closely, the order of the returned records is different!?
    I'm looking forward to your kind reply!...

    Do you see any changes if you add a hint to the query?
    select Card_Serial,Asset_ID, Field_Name,Field_Value,Asset_Number,Field_ID,Book_ID from dbo.vw_InspectionReport where Book_ID='99000347' OPTION (RECOMPILE)
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Performance issue, same range scan different execution times

    Oracle 11gR1, queries run within seconds of each other.
    I have 2 queries that are logically the same. Even the explain plans are very similar, except the second one reports an index range scan doing much more work than the first. The table is an IOT with deal_bucket_id and datetime as PK (in that order).
    TKPROF output below:
    select count(*) from deal_bucket_detail where deal_bucket_id
    in
    (815
    ,     816
    ,     817
    ,     818
    ,     997)
    and datetime between to_date('01-JUL-08','dd-MON-rr') and to_date('01-JAN-09','dd-MON-rr')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      0.79       2.24       2936       3551          0           1
    total        4      0.79       2.24       2936       3551          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 43 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=3551 pr=2936 pw=2936 time=0 us)
    1430928   FILTER  (cr=3551 pr=2936 pw=2936 time=380920 us)
    1430928    INLIST ITERATOR  (cr=3551 pr=2936 pw=2936 time=372057 us)
    1430928     INDEX RANGE SCAN PK_DEAL_BUCKET_DETAIL (cr=3551 pr=2936 pw=2936 time=8782 us cost=1203 size=4069596 card=339133)(object id 14199)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                      2936        0.02          1.49
      SQL*Net message from client                     2        0.00          0.00
    select count(*) from deal_bucket_detail where deal_bucket_id
    between 815 and 997
    and datetime between to_date('01-JUL-08','dd-MON-rr') and to_date('01-JAN-09','dd-MON-rr')
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        2      3.70       8.86      29199      26986          0           1
    total        4      3.70       8.86      29199      26986          0           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 43 
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=26986 pr=29199 pw=29199 time=0 us)
    1430928   FILTER  (cr=26986 pr=29199 pw=29199 time=6986078 us)
    1430928    INDEX RANGE SCAN PK_DEAL_BUCKET_DETAIL (cr=26986 pr=29199 pw=29199 time=6977063 us cost=45208 size=5195748 card=432979)(object id 14199)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                       219        0.04          0.08
      db file parallel read                          35        0.04          0.32
      db file scattered read                        211        0.10          5.02
      SQL*Net message from client                     2        0.00          0.00
    ********************************************************************************
    How can I work out why the second query is doing much more work than the first?
    Edited by: SamB on Aug 5, 2009 6:09 PM

    Both are doing an index range scan, but not the same kind of index range scan.
    Query 1: an INLIST ITERATOR with one index range scan per value, due to the hard-coded values.
    Query 2: a single index range scan across all values, starting at the lowest, due to the BETWEEN.
    Sybrand Bakker
    Senior Oracle DBA

  • Execution of TestStand-Sequence in LabVIEW via TS-API: Different Execution times for same sequence

    Hello Forum-Members,
    I have a problem concerning the execution of a TestStand sequence in LabVIEW. I have created a VI that lets you choose a TestStand sequence file and then executes the sequence using the TestStand API. The implementation is based on a C++ example found via the following link:
    http://forums.ni.com/t5/NI-TestStand/Unreleased-references-using-engine-API-in-C/m-p/2927314#M46034
    The implementation works quite reliably the first time the VI is executed; the VI processes the chosen sequence in an acceptable amount of time.
    But when the execution is started a second time, the execution of the sequence takes about 30 seconds longer than the first time.
    So far I have not found a solution and hope someone has a hint concerning this problem...
    I am using LabVIEW 2013 and TestStand 2013.
    I have attached my own VI, a sample sequence and a small sample VI, so you can reproduce the problem.
    Kind regards,
    TobiKi
    Attachments:
    Exe-TestStand-Sequence.vi (25 KB)
    Sequenz.vi (8 KB)
    Test-Sequenz.seq (5 KB)

    Hi Norbert,
    First of all, thanks for your answer.
    What would be a reasonable way to replace "Execution.WaitForEndEx"? My first idea is to get the respective thread of the execution and use "Thread.WaitForEnd".
    To clarify my problem:
    The execution of the sequence itself takes longer, and so does the execution of the calling VI. I have attached pictures of the log file from the first and second execution.
    Furthermore, I don't get any dialog popups during the shutdown of TestStand. (I have activated "ReportObjectLeaks" via the "DebugOptions".) While developing the attached VI I got several popups, but these popups disappeared after closing all references.
    Maybe you have another hint on how to locate the problem.
    TobiKi
    Attachments:
    FirstExecution_20-08-2014.png (16 KB)
    SecondExecution_20-08-2014.png (16 KB)

  • Execution time / elapsed time of an SQL query

    Can you please tell me how to get the execution time / elapsed time of an SQL query?

    user8680248 wrote:
    I am running a query in the database and I would like to know how long the query takes to complete.
    Why? That answer can be totally meaningless, as the VERY SAME query on the VERY SAME data on the VERY SAME database in the VERY SAME Oracle session can and will show DIFFERENT execution times.
    So why do you want to know a specific query's execution time? What do you expect that to tell you?
    If you mean that you want to know how long an existing query that is being executed is still going to take - that is usually quite difficult to determine. Oracle does provide a view on so-called long operations. However, only certain factors of a query's execution will trigger that the query is treated as a long operation - and only for those specific queries will there be long operation stats that provide an estimated completion time.
    If your slow and long-running query does not show up as a long operation, then Oracle does not consider it a long operation - it fails to meet the specific criteria and factors required. This is not a bug or an error; your query simply does not meet the basic requirements to be viewed as a long operation.
    Oracle however provides the developer with the means to create long operations (using PL/SQL). You need to know and do the following:
    a) need to know how many units of work to do (e.g. how many fetches/loop iterations/rows your code will process)
    b) need to know how many units of work thus far done
    c) use the DBMS_APPLICATION_INFO package to create a long operation and continually update the operation with the number of work units thus far done
    It is pretty easy to implement this in PL/SQL processing code (assuming requirements a and b can be met) - and provide long operation stats and estimated completion time for the DBA/operators/users of the database, waiting on your process to complete.
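    For example, a minimal PL/SQL sketch of (a)-(c) above - the loop body, the operation name and the work-unit count are placeholders to adapt to the real processing code:
    DECLARE
      l_rindex    BINARY_INTEGER := DBMS_APPLICATION_INFO.set_session_longops_nohint;
      l_slno      BINARY_INTEGER;
      l_totalwork NUMBER := 1000;  -- (a) total units of work, known up front
    BEGIN
      FOR i IN 1 .. l_totalwork LOOP
        -- ... process one unit of work here (one fetch / one row / one loop iteration) ...
        -- (b) + (c): report how many units have been done so far
        DBMS_APPLICATION_INFO.set_session_longops(
          rindex    => l_rindex,
          slno      => l_slno,
          op_name   => 'My batch process',
          sofar     => i,
          totalwork => l_totalwork,
          units     => 'rows');
      END LOOP;
    END;
    /
    -- progress and the estimated completion time then appear in V$SESSION_LONGOPS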

  • Execution time of a query

    Hi,
    I am trying to find the execution time of a SQL query.
    How to do it?
    Regards.
    Ashish

    >I am trying to find the execution time of a SQL query.
    How to do it?
    Pseudo code:
    time = SystemTime() -- get the current system time
    RunSQL() -- run the SQL statement
    print( SystemTime()-time ) -- displays the difference in time
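    In SQL*Plus the same idea is built in; a minimal sketch (ALL_OBJECTS is just a placeholder query):
    SET TIMING ON
    SELECT COUNT(*) FROM all_objects;
    -- SQL*Plus prints an "Elapsed: ..." line after the statement completes
    SET TIMING OFF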
    Needless to say, this is utterly useless most of the time. The first time a query is run it may be subjected to a hard parse; the second time not. So the exact same SQL will show different execution times. This execution time measurement is useless, as it does not tell you anything - except that there was a difference in time.
    The first time a query runs it may do a lot of physical I/O to read the data from disk. The second time around, with the data in the db buffer cache, it makes use of logical I/O. There is a significant difference in the execution of the exact same SQL. Again, the measurement of execution time is meaningless. It does not tell you anything. One number versus another number. Nothing meaningful to compare.
    Fact: the very same SQL will have different execution times.
    So what do you hope to gain from measuring it?

  • Execution Time & Explain Plan Variations

    I have a scenario here. I have 2 schemas, schema1 and schema2. I executed a lengthy SELECT statement with a 5-table join in these 2 schemas. I am getting totally different execution times (one runs in 0.3 seconds, the other in 4 seconds) and a different explain plan. I assumed that, since it is the same SELECT statement in both schemas, I should get the same explain plan. What could be the reason for these dissimilarities? Oracle version: 9.2.0.8.0. I am ready to share the explain plans of these 2 schemas, but they are around 300 lines long.
    Thank you.

    There are many factors that come into play here.
    1.) The sizes of all tables involved are the same
    2.) The structures are also the same
    3.) The indexes are also the same
    4.) The stats are up to date on both
    5.) Constraints and other factors are also the same.
    regards
    Pravin
    And a few more:
    6) session environments are the same
    7) bind variable values are the same - or were at first execution
    I'd change 4 to read "Optimizer statistics are the same" (not "up to date", which is a bit vague).
    In short, if you are using the CBO and feed in the exact same inputs you will get the exact same plan; however, typically you won't get the exact same inputs and so may get a different plan. If your query has a time element and the two queries are hard parsed at different times, it may actually be impossible to get the same inputs - for example, the percentage of a table that is returned by a predicate like
    timestamp_col > sysdate - 1 will be estimated differently depending on the time of parsing, for the same data.
    That all said, looking at the plans might reveal some obvious differences, though perhaps it would be better to point at a URL that holds the plans, given the length you say they are.
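    If posting the full plans is awkward, a minimal sketch for capturing the plan in each schema so the two can be compared side by side (the SELECT is only a stand-in for the actual 5-table join; DBMS_XPLAN is available in 9.2):
    -- run once in each schema, then diff the two outputs
    EXPLAIN PLAN FOR
    SELECT COUNT(*) FROM all_objects;  -- stand-in for the real query
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);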
    Niall Litchfield
    http://www.orawin.info/

  • Variable execution time

    Hi,
    I am executing a query with a parallel hint, and its execution time varies when I run it repeatedly.
    I created all the required indexes and the query uses all of those indexes.
    For example ::
    1st time ---->3.20 sec
    2nd time ---->1.34 sec
    3rd time ----->5.41 sec
    4th time ----->2.5 sec
    5th time ----->5.8 sec
    Can anyone explain why it takes a different execution time each run?
    Thanks
    srikar

    TRACE/TKPROF will give you exact timings.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#PFGRF01010
    http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
    http://www.oracle-base.com/articles/8i/TKPROFAndOracleTrace.php
    Don't expect the execution time of a query to be always 100% the same if you run that query multiple times.
    Workload changes, data changes, what's in LRU/MRU changes, etc.
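    A minimal sketch of the trace workflow described in those links (the trace file identifier is arbitrary; run the parallel-hinted query where indicated):
    -- tag the trace file so it is easy to find in user_dump_dest
    ALTER SESSION SET tracefile_identifier = 'var_exec_time';
    -- level 12 = SQL trace plus wait events and bind values
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    -- run the query with the parallel hint here (several times, if you like)
    ALTER SESSION SET events '10046 trace name context off';
    -- then, on the database server:
    --   tkprof <tracefile>.trc report.txt sys=no sort=prsela,exeela,fchela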

  • Changing execution times from recompile in 8.20

    Well, we are starting to have some serious problems with LV 8.20 and its compile times. I have read that there is now compression when saving VIs, which reduces the size of saved files on disk. The problem is that LV 8.20 takes MUCH LONGER to recompile a VI after any change than it ever did. We have some seriously large applications and VIs in the neighborhood of 5 MB code size that always worked fine pre-LabVIEW 8. You could never notice the compile time in LV 7.1. Now you move a wire or add a primitive and the compile on a 3 GHz machine takes 2-3 seconds. It is impossible to code with this delay; is there any way to turn that "feature" off?
    What's worse is that the execution times of VIs now change after a recompile. We see this in many of our applications when performance testing and targeting more deterministic code in RT. You recompile the VI, and a component that you DID NOT EVEN CHANGE shows a very drastic change in performance. I suspect it is due to the VI compression/decompression, but that is a guess, since it is new and this is the first time we have seen this happening to this extent in LV. Most of our developers have noticed this since switching to LV 8.20, and it is causing us a lot of headaches.
    Any workarounds or suggestions?

    Hi Mike,
    It looks like you are working on this issue with Avinash via phone/email, but for others who may be interested, there are two bugs filed that are somewhat related. One is regarding different execution times depending on the order of unbundling; the other is in regards to slowdowns in the development environment with large projects, where a project with a huge number of VIs takes longer to do things like open the VI Hierarchy, but it does not specifically mention things like moving wires taking a significantly longer period of time. It may be helpful to send a project that can reproduce this also, which you may want to do via email/FTP site.
    Doug M
    Applications Engineer
    National Instruments
    For those unfamiliar with NBC's The Office, my icon is NOT a picture of me

  • Same query at same time, but different execution plans from two schemas

    Hi!
    We just had some performance problems in our production system and I would like to ask for some advice from you.
    We had a select-query that was run from USER1 on SCHEMA1, and it ran a table scan on a huge table.
    Using session browser in TOAD I copied the Sql-statement, logged on SCHEMA1 and ran the same query. I got a different execution plan where I avoided the table scan.
    So my question is:
    How is it possible that the same query gets different execution plans when run in two different schemas at the same time?
    Some more information:
    The user USER1 runs "alter session set current_schema=SCHEMA1;" when it logs on. Besides that it does nothing else, so the session parameter values are the same for USER1 and SCHEMA1.
    SCHEMA1 is the schema owning the tables.
    ALL_ROWS is used for both USER1 and SCHEMA1
    Our database:
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    PL/SQL Release 9.2.0.8.0 - Production
    CORE     9.2.0.8.0     Production
    TNS for Linux: Version 9.2.0.8.0 - Production
    NLSRTL Version 9.2.0.8.0 - Production
    Does anybody have suggestions as to why I see different execution plans for the same query, run at the same time, but from different users?

    Thanks for clarification of the schema structure.
    What happens if, instead of setting the current session schema to SCHEMA1, you simply add the schema name to all tables, views and other objects inside your select statement?
    As in: select * from schema1.dual;
    I know that this is not what you want eventually, but it might help to find any misleading objects.
    Furthermore, it is not clear what you meant by "avoided a table scan".
    Did you avoid a full table scan (FTS), or was the table completely removed from the execution plan?
    Can you post both plans?
    Edited by: Sven W. on Mar 30, 2010 5:27 PM

  • Same SQL ID with different execution plan, Elapsed Time (s) and Executions

    Hello All,
    The AWR reports for two days show the same SQL ID with different execution plans and different Elapsed Time (s) and Executions. Please help me find out what the reason for this change is.
    Please find the details below; on the 17th my processes are very slow compared to the 18th.
    17th Oct / 18th Oct
    221,808,602 | 21 | 2tc2d3u52rppt
    213,170,100 | 72,495,618 | 9c8wqzz7kyf37
    209,239,059 | 71,477,888 | 9c8wqzz7kyf37
    139,331,777 | 1 | 7b0kzmf0pfpzn
    144,813,295 | 1 | 0cqc3bxxd1yqy
    102,045,818 | 1 | 8vp1ap3af0ma5
    128,892,787 | 16,673,829 | 84cqfur5na6fg
    89,485,065 | 1 | 5kk8nd3uzkw13
    127,467,250 | 16,642,939 | 1uz87xssm312g
    67,520,695 | 8,058,820 | a9n705a9gfb71
    104,490,582 | 12,443,376 | a9n705a9gfb71
    62,627,205 | 1 | ctwjy8cs6vng2
    101,677,382 | 15,147,771 | 3p8q3q0scmr2k
    57,965,892 | 268,353 | akp7vwtyfmuas
    98,000,414 | 1 | 0ybdwg85v9v6m
    57,519,802 | 53 | 1kn9bv63xvjtc
    87,293,909 | 1 | 5kk8nd3uzkw13
    52,690,398 | 0 | 9btkg0axsk114
    77,786,274 | 74 | 1kn9bv63xvjtc
    34,767,882 | 1,003 | bdgma0tn8ajz9
    Not only are the queries different, but the number of blocks read by the top 10 queries is also much higher on the 17th than on the 18th.
    The other big difference is the average read time on the two days.
    Tablespace IO Stats
    17th Oct
    Tablespace        | Reads   | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes  | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
    INDUS_TRN_DATA01  | 947,766 | 59 | 4.24  | 4.86  | 185,084 | 11 | 2,887  | 6.42
    UNDOTBS2          | 517,609 | 32 | 4.27  | 1.00  | 112,070 | 7  | 108    | 11.85
    INDUS_MST_DATA01  | 288,994 | 18 | 8.63  | 8.38  | 52,541  | 3  | 23,490 | 7.45
    INDUS_TRN_INDX01  | 223,581 | 14 | 11.50 | 2.03  | 59,882  | 4  | 533    | 4.26
    TEMP              | 198,936 | 12 | 2.77  | 17.88 | 11,179  | 1  | 732    | 2.13
    INDUS_LOG_DATA01  | 45,838  | 3  | 4.81  | 14.36 | 348     | 0  | 1      | 0.00
    INDUS_TMP_DATA01  | 44,020  | 3  | 4.41  | 16.55 | 244     | 0  | 1,587  | 4.79
    SYSAUX            | 19,373  | 1  | 19.81 | 1.05  | 14,489  | 1  | 0      | 0.00
    INDUS_LOG_INDX01  | 17,559  | 1  | 4.75  | 1.96  | 2,837   | 0  | 2      | 0.00
    SYSTEM            | 7,881   | 0  | 12.15 | 1.04  | 1,361   | 0  | 109    | 7.71
    INDUS_TMP_INDX01  | 1,873   | 0  | 11.48 | 13.62 | 231     | 0  | 0      | 0.00
    INDUS_MST_INDX01  | 256     | 0  | 13.09 | 1.04  | 194     | 0  | 2      | 10.00
    UNDOTBS1          | 70      | 0  | 1.86  | 1.00  | 60      | 0  | 0      | 0.00
    STG_DATA01        | 63      | 0  | 1.27  | 1.00  | 60      | 0  | 0      | 0.00
    USERS             | 63      | 0  | 0.32  | 1.00  | 60      | 0  | 0      | 0.00
    INDUS_LOB_DATA01  | 62      | 0  | 0.32  | 1.00  | 60      | 0  | 0      | 0.00
    TS_AUDIT          | 62      | 0  | 0.48  | 1.00  | 60      | 0  | 0      | 0.00
    18th Oct
    Tablespace        | Reads   | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
    INDUS_TRN_DATA01  | 980,283 | 91 | 1.40 | 4.74

    The AWR reports for two days show the same SQL ID with different execution plans and different Elapsed Time (s) and Executions. Please help me find out what the reason for this change is.
    Please find the details below; on the 17th my processes are very slow compared to the 18th.
    You wrote that the execution plan is different, so I assume you have already looked at the plans; it is very difficult to get the old plan back.
    I don't think execution plans change between days unless you have added an index or made similar changes.
    What does the ADDM report say about this statement?
    As you know, it is normal to see different elapsed times for the same statement on different days;
    it depends on your database workload.
    I think you should use the SQL Access Advisor and SQL Tuning Advisor for this statement, for example along the lines of the sketch below.
    They can give you a solution for the slow-running problem.
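    A minimal SQL*Plus sketch of running the SQL Tuning Advisor (it assumes a Tuning Pack licence; the SQL ID is one of the heavy statements from the AWR list above):
    VARIABLE tname VARCHAR2(64)
    -- create and run a tuning task for one of the expensive statements
    EXEC :tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => '9c8wqzz7kyf37');
    EXEC DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => :tname);
    -- print the advisor's findings and recommendations
    SET LONG 1000000 LINESIZE 200
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(:tname) FROM dual;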
    Regards
    Mahir M. Quluzade

  • Execution time of query on different indexes

    Hello,
    I have a query on a table, and its execution time differs hugely depending on which index exists on the table. The table has about 200,000 rows. Any explanation for this?
    Thanks,
    create table TB_test
    ( A1 number(9),
      A2 number(9)
    );
    select count(*) from TB_test
    where A1=123 and A2=456;
    A. With index IDX_test on column A1:
    Create index IDX_test on TB_test(A1);
    Explain plan:
    SELECT STATEMENT
    Cost: 3,100
    SORT AGGREGATE
    Bytes: 38 Cardinality: 1
    TABLE ACCESS BY INDEX ROWID TABLE TB_test
    Cost: 3,100 Bytes: 36 Cardinality: 1
    INDEX RANGE SCAN INDEX IDX_test
    Cost: 40 Cardinality: 21,271
    Execution time is : 5 Minutes
    B. With index IDX_test on column A1 and A2:
    Create index IDX_test on TB_test(A1, A2);
    Explain plan:
    SELECT STATEMENT
    Cost: 3 Bytes: 37 Cardinality: 1
    SORT AGGREGATE
    Bytes: 37 Cardinality: 1
    INDEX RANGE SCAN INDEX IDX_test
    Cost: 3 Bytes: 37 Cardinality: 1
    Execution time is: 1.5 Seconds

    Additionally, you should check how many rows you have in your table for the specific column values.
    The following select might be helpful for that.
    select count(*)  "total_count"
           ,count(case when A1=123 then 1 end) "A1_count"
           ,count(case when A1=123 and A2=456 then 1 end) "A1andA2_count"
    from TB_test;
    Share your output of this.
    I expect the value for A1_count to still be high, but the value for A1andA2_count to be relatively low.
    However, 5 minutes is far too long for such a small table, even if you run it on a laptop.
    There must be a reason why it is that slow.
    The first thing to consider would be to update your statistics for the table and the index (a minimal sketch follows below).
    The second thing could be that the table is very sparsely filled. Meaning, if you frequently delete records from this table and load new data using the APPEND hint, the table will keep growing, because the free space from the deletes is never reused. Any table access in the execution plan will then be slower than needed.
    A similar thing can happen if many updates are made to previously empty columns of a table (row chaining).
    So if you explain a little how this table is filled and used, we may recognize a typical pattern that leads to performance issues.
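    As a first step, a minimal sketch for refreshing the optimizer statistics on the table and its indexes (it assumes TB_TEST lives in the current schema):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,       -- assumes TB_TEST is in the current schema
        tabname => 'TB_TEST',
        cascade => TRUE);      -- also gathers statistics on the table's indexes
    END;
    /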
    Edited by: Sven W. on Nov 28, 2012 5:54 PM

  • Execution time of an SQL query differing a lot between two computers

    Hi,
    the execution time of a query on my computer and on more than 30 other computers is less than one second, but on one of our customers' computers the execution time is more than ten minutes. The databases, data and queries are the same. I reinstalled SQL Server but the problem remains. My SQL Server is MS SQL 2008 R2.
    Does anyone have an idea about this problem?

    Hi mahdi,
    Obviously, we can't get enough information to help you troubleshoot this issue, so please describe your issue in more detail so that the community members can help you more efficiently.
    In addition, here is a good article with a checklist for analyzing slow-running queries. Please see:
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    And SQL Server Profiler and Performance Monitor are good tools to troubleshoot performance issue, please see:
    Correlating SQL Server Profiler with Performance Monitor:
    https://www.simple-talk.com/sql/database-administration/correlating-sql-server-profiler-with-performance-monitor/
    Regards,
    Elvis Long
    TechNet Community Support

  • DTP Execution Time is different when run by PC & Manually

    Dear Experts,
    Could you please help me in finding the answers for the below questions.
    1. The execution time for a DTP differs when it is run manually and when it is run via a process chain. What could be the possible reasons, and which one is correct?
    2. The total execution time shown in the DTP details tab is less than the sum of the times of the individual data packages. What is the reason for that difference? Which is correct?
    Thanks & regards,
    Ganesh Thota

    Hi,
    1) Execution time depends on various factors, such as the number of processes running on the system at that particular time (you can check this in SM37).
    In a process chain an instance of the particular process type is created (in your case the process type is DTP); even though you execute the DTP in the process chain, it actually executes the DTP in RSA1, i.e. an instance is created.
    2) When you execute a DTP, data is transferred in the form of packages. The mismatch in the execution time arises because you added up the timings of each data package,
    but if you look closely, 2-3 data packages in a DTP start at the same time, so you should not add up the times of the individual data packages to get the total time of the DTP.
    Hope this answers your questions.
    Regards,
    Madhu
