MAXL execution time

Hi,
I need some help. We have a MaxL script running on a daily basis, and we would like to get its total execution time.
Is there a MaxL command or another way to find it?
Kindly advise.
Thanks
Andy

Or (and/or), if you're executing from a batch file or similar, what about just echoing %date% %time% before and after?
I'd still turn timestamp on either way (even though the output looks pretty ugly, in my opinion).
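A rough sketch of both suggestions (script name, paths and credentials below are placeholders, not from the thread). In the MaxL script itself, set timestamp on makes the shell prefix every output line with the date and time, and spool captures that output to a file:
/* daily_script.msh -- illustrative only */
set timestamp on;
spool on to 'daily_maxl.log';
login admin identified by 'password' on 'localhost';
/* ... the existing daily statements go here ... */
logout;
spool off;
exit;
Wrapping the call in a batch file then brackets the whole run with start and end times:
rem run_daily.bat -- illustrative wrapper
echo Start: %date% %time%
essmsh daily_script.msh
echo End:   %date% %time%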

Similar Messages

  • Execution time of Business Rule

    Hi,
    My rule is taking more than a day to run. How can I watch its exact execution time? I can't see it in the job console, as it shows that the business rule is still processing, and I have also checked the HSP_JOB_STATUS table; it also shows that it is still processing.
    Is there another way to see the exact execution time of a business rule?
    Thanks and Regards.

    To see the exact execution time, you can check the HBRLaunch.log file in Planning.
    If you want to track the execution time of different parts of your calculation script/business rule, look at the Essbase application log in detail in EAS.
    To see the log specific to your business rule, use MaxL and spool the log to a text file; this will give you only the log entries for your business rule, along with the execution time.
    Thanks
    YSP

  • Loading jar files at execution time via URLClassLoader

    Hello All,
    I'm making a Java SQL client. I have practically all the basic work done; now I'm trying to improve it.
    One thing I want it to do is to allow the user to specify new drivers and to use them to make new connections. To do this I have this class:
    public class DriverFinder extends URLClassLoader{
        private JarFile jarFile = null;

        private Vector drivers = new Vector();

        public DriverFinder(String jarName) throws Exception{
            super(new URL[]{ new URL("jar", "", "file:" + new File(jarName).getAbsolutePath() +"!/") }, ClassLoader.getSystemClassLoader());
            jarFile = new JarFile(new File(jarName));

            /*
            System.out.println("-->" + System.getProperty("java.class.path"));
            System.setProperty("java.class.path", System.getProperty("java.class.path")+File.pathSeparator+jarName);
            System.out.println("-->" + System.getProperty("java.class.path"));
            */

            Enumeration enumeration = jarFile.entries();
            while(enumeration.hasMoreElements()){
                String className = ((ZipEntry)enumeration.nextElement()).getName();
                if(className.endsWith(".class")){
                    className = className.substring(0, className.length()-6);
                    if(className.indexOf("Driver")!=-1) System.out.println(className);

                    try{
                        Class classe = loadClass(className, true);
                        Class[] interfaces = classe.getInterfaces();
                        for(int i=0; i<interfaces.length; i++){
                            if(interfaces[i].getName().equals("java.sql.Driver")){
                                drivers.add(classe);
                            }
                        }
                        Class superclasse = classe.getSuperclass();
                        interfaces = superclasse.getInterfaces();
                        for(int i=0; i<interfaces.length; i++){
                            if(interfaces[i].getName().equals("java.sql.Driver")){
                                drivers.add(classe);
                            }
                        }
                    }catch(NoClassDefFoundError e){
                    }catch(Exception e){}
                }
            }
        }

        public Enumeration getDrivers(){
            return drivers.elements();
        }

        public String getJarFileName(){
            return jarFile.getName();
        }

        public static void main(String[] args) throws Exception{
            DriverFinder df = new DriverFinder("D:/Classes/db2java.zip");
            System.out.println("jar: " + df.getJarFileName());
            Enumeration enumeration = df.getDrivers();
            while(enumeration.hasMoreElements()){
                Class classe = (Class)enumeration.nextElement();
                System.out.println(classe.getName());
            }
        }
    }
    It loads a jar and searches it looking for drivers (classes implementing, directly or indirectly, the interface java.sql.Driver). At the end of the execution I have found all the drivers in the jar file.
    The main application loads jar files from an XML file and instantiates one DriverFinder for each jar file. The problem is at execution time: it finds the drivers and, I think, loads them by issuing this statement (Class classe = loadClass(className, true);), but what I think is not what is happening... the execution of my code throws this exception:
    java.lang.ClassNotFoundException: com.ibm.as400.access.AS400JDBCDriver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:198)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:186)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:299)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:265)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:255)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:315)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:140)
            at com.marmots.database.DB.<init>(DB.java:44)
            at com.marmots.dbreplicator.DBReplicatorConfigHelper.carregaConfiguracio(DBReplicatorConfigHelper.java:296)
            at com.marmots.dbreplicator.DBReplicatorConfigHelper.<init>(DBReplicatorConfigHelper.java:74)
            at com.marmots.dbreplicator.DBReplicatorAdmin.<init>(DBReplicatorAdmin.java:115)
            at com.marmots.dbreplicator.DBReplicatorAdmin.main(DBReplicatorAdmin.java:93)
    The driver file is not in the classpath!!!
    I have also tried (as you can see in the commented lines) to update the system property java.class.path by adding the path to the jar, but that didn't work either...
    I'm sure I'm making some mistake(s)... can you help me?
    Thanks in advance.
    (If there is some incorrect word or expression, excuse me.)
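    Not part of the original thread, but one common workaround for this kind of ClassNotFoundException (class and variable names below are illustrative): Class.forName() inside DB.java resolves against the application class loader, which knows nothing about the jars the DriverFinder opened, and DriverManager additionally ignores Driver instances loaded by a foreign class loader. A hedged sketch of the usual delegating-driver approach:
    import java.sql.*;
    import java.util.Properties;

    // Wraps a Driver created through another class loader so DriverManager will accept it.
    class DriverShim implements Driver {
        private final Driver target;
        DriverShim(Driver target) { this.target = target; }
        public Connection connect(String url, Properties info) throws SQLException { return target.connect(url, info); }
        public boolean acceptsURL(String url) throws SQLException { return target.acceptsURL(url); }
        public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) throws SQLException { return target.getPropertyInfo(url, info); }
        public int getMajorVersion() { return target.getMajorVersion(); }
        public int getMinorVersion() { return target.getMinorVersion(); }
        public boolean jdbcCompliant() { return target.jdbcCompliant(); }
        public java.util.logging.Logger getParentLogger() throws SQLFeatureNotSupportedException { throw new SQLFeatureNotSupportedException(); }
    }
    // Usage sketch (driverFinder, className, url, user and password are placeholders):
    //   Driver real = (Driver) Class.forName(className, true, driverFinder).newInstance();
    //   DriverManager.registerDriver(new DriverShim(real));
    //   Connection con = DriverManager.getConnection(url, user, password);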


  • How to get the execution time of a Discoverer Report from qpp_stats table

    Hello
    By reading some threads on this forum I became aware of the information stored in the eul5_qpp_stats table. I would like to know if I can use this table to determine the execution time of a worksheet. In particular, it looks like the field qs_act_elap_time stores the actual elapsed time of each execution of a specific worksheet: am I correct? If so, how is this value computed? What is the unit of measure? I assume it is seconds, but then I've seen that sometimes I get numbers with decimals.
    For example, I ran a worksheet and it took more than an hour to run, and the value I get in the qs_act_elap_time column is 2218.313.
    Assuming the unit of measure is seconds, that would mean approx 37 mins. Is that the actual execution time of the query on the database? I guess the actual execution time on my Discoverer client was longer, since some calculations were performed at the client level and not on the database.
    I would really appreciate if you could shed some light on this topic.
    Thanks and regards
    Giovanni

    Thanks a lot Rod for your prompt reply.
    I agree with you about the accuracy of the data. Are you aware of any other way to track the execution times of Discoverer reports?
    Thanks
    Giovanni
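    For reference, a hedged query sketch against that table: qs_act_elap_time is the column discussed above (treating it as seconds is an assumption), and qs_doc_name / qs_created_date are assumed column names for the workbook name and run date.
    SELECT qs_doc_name,
           qs_created_date,
           qs_act_elap_time                AS elapsed_seconds,   -- assumed unit: seconds
           ROUND(qs_act_elap_time / 60, 1) AS elapsed_minutes
      FROM eul5_qpp_stats
     ORDER BY qs_created_date DESC;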

  • How to get the total execution time from a tkprof file

    Hi,
    I have a tkprof file. How can I get the total execution time? Going through the file, I guess the sum of "Total Waited" in the section "Elapsed times include waiting on following events:" would give the total time. A sample of the tkprof output is given below.
    SQL ID: gg52tq1ajzy7t Plan Hash: 3406052038
    SELECT POSTED_FLAG
    FROM
    AP_INVOICE_PAYMENTS WHERE CHECK_ID = :B1 UNION ALL SELECT POSTED_FLAG FROM
      AP_PAYMENT_HISTORY APH, AP_SYSTEM_PARAMETERS ASP WHERE CHECK_ID = :B1 AND
      NVL(APH.ORG_ID, -99) = NVL(ASP.ORG_ID, -99) AND
      (NVL(ASP.WHEN_TO_ACCOUNT_PMT, 'ALWAYS') = 'ALWAYS' OR
      (NVL(ASP.WHEN_TO_ACCOUNT_PMT, 'ALWAYS') = 'CLEARING ONLY' AND
      APH.TRANSACTION_TYPE IN ('PAYMENT CLEARING', 'PAYMENT UNCLEARING')))
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute    442      0.08       0.13          0          0          0           0
    Fetch      963      0.22       4.72        350      16955          0         521
    total     1406      0.31       4.85        350      16955          0         521
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 173     (recursive depth: 1)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  UNION-ALL  (cr=38 pr=3 pw=0 time=139 us)
             1          1          1   TABLE ACCESS BY INDEX ROWID AP_INVOICE_PAYMENTS_ALL (cr=5 pr=0 pw=0 time=124 us cost=6 size=12 card=1)
             1          1          1    INDEX RANGE SCAN AP_INVOICE_PAYMENTS_N2 (cr=4 pr=0 pw=0 time=92 us cost=3 size=0 card=70)(object id 27741)
             0          0          0   NESTED LOOPS  (cr=33 pr=3 pw=0 time=20897 us)
             0          0          0    NESTED LOOPS  (cr=33 pr=3 pw=0 time=20891 us cost=12 size=41 card=1)
             1          1          1     TABLE ACCESS FULL AP_SYSTEM_PARAMETERS_ALL (cr=30 pr=0 pw=0 time=313 us cost=9 size=11 card=1)
             0          0          0     INDEX RANGE SCAN AP_PAYMENT_HISTORY_N1 (cr=3 pr=3 pw=0 time=20568 us cost=2 size=0 card=1)(object id 27834)
             0          0          0    TABLE ACCESS BY INDEX ROWID AP_PAYMENT_HISTORY_ALL (cr=0 pr=0 pw=0 time=0 us cost=3 size=30 card=1)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                       350        0.15          4.33
      Disk file operations I/O                        3        0.00          0.00
      latch: shared pool                              1        0.17          0.17
    ********************************************************************************

    user13019948 wrote:
    Hi,
    I have a tkprof file. How can I get the total execution time.
    call     count       cpu    elapsed       disk      query    current        rows
    total     1406      0.31       4.85        350      16955          0         521
    The TOTAL ELAPSED TIME is 4.85 seconds, from the "total" line above.

  • How to improve the execution time of my VI?

    My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a directory into a string array. Then I index this string array into a for loop, in which each file is opened one at a time inside the loop, and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would also be nice to be able to know which section of my VI takes the longest time. Thanks for any help.

    Bryan,
    If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
    If the files come from a just executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed and you never e.g. want it to popup a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
    I would do a streamlined low-level "open->read->close" for each and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
    Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
    LabVIEW Champion. Do more with less code and in less time.

  • How to find out the execution time of a sql inside a function

    Hi All,
    I am writing one function. There is only one IN parameter; in that parameter, I will pass one SQL SELECT statement. I want the function to return the exact execution time of that SQL statement.
    CREATE OR REPLACE FUNCTION function_name (p_sql IN VARCHAR2)
    RETURN NUMBER
    IS
    exec_time NUMBER;
    BEGIN
    --Calculate the execution time for the incoming sql statement.
    RETURN exec_time;
    END function_name;
    /

    Please note that wrapping the query in a "SELECT COUNT(*) FROM (<query>)" doesn't necessarily reflect the execution time of the stand-alone query, because the optimizer is smart and might choose a completely different execution plan for that query.
    A simple test case shows the potential difference of work performed by the database:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Session altered.
    SQL>
    SQL> drop table count_test purge;
    Table dropped.
    Elapsed: 00:00:00.17
    SQL>
    SQL> create table count_test as select * from all_objects;
    Table created.
    Elapsed: 00:00:02.56
    SQL>
    SQL> alter table count_test add constraint pk_count_test primary key (object_id)
    Table altered.
    Elapsed: 00:00:00.04
    SQL>
    SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'COUNT_TEST')
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.29
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from count_test;
    5326 rows selected.
    Elapsed: 00:00:00.10
    Execution Plan
    Plan hash value: 3690877688
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            |  5326 |   431K|    23   (5)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| COUNT_TEST |  5326 |   431K|    23   (5)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
            419  consistent gets
              0  physical reads
              0  redo size
         242637  bytes sent via SQL*Net to client
           4285  bytes received via SQL*Net from client
            357  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           5326  rows processed
    SQL>
    SQL> select count(*) from (select * from count_test);
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 572193338
    | Id  | Operation             | Name          | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |               |     1 |     5   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE       |               |     1 |            |          |
    |   2 |   INDEX FAST FULL SCAN| PK_COUNT_TEST |  5326 |     5   (0)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
             16  consistent gets
              0  physical reads
              0  redo size
            412  bytes sent via SQL*Net to client
            380  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    As you can see, the number of blocks processed (consistent gets) is quite different. You need to actually fetch all records, e.g. using a PL/SQL block on the server, to find out how long it takes to process the query, but that's not that easy if you want to have an arbitrary query string as input.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
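    A minimal sketch of the server-side timing described above, fetching every row and measuring with DBMS_UTILITY.GET_TIME (the query is just the count_test example from this thread):
    SET SERVEROUTPUT ON
    DECLARE
      l_start NUMBER;
      l_rows  NUMBER := 0;
    BEGIN
      l_start := DBMS_UTILITY.GET_TIME;            -- hundredths of a second
      FOR r IN (SELECT * FROM count_test) LOOP     -- the query under test
        l_rows := l_rows + 1;                      -- forces every row to be fetched
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('Rows fetched : ' || l_rows);
      DBMS_OUTPUT.PUT_LINE('Elapsed (sec): ' || (DBMS_UTILITY.GET_TIME - l_start) / 100);
    END;
    /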

  • Need to know how to find the last execution time for a function module

    Hi all,
    I need to know:
    1) How to find out the last execution time of a function module?
      Say, for example, I executed a function module at 1:39 pm. How do I retrieve this time (1:39 pm)?
    2) I have created 3 billing documents in tcode VF01, i.e. 3 billing document numbers would be created in SAP table VBRP between 12 am and 12:30 am.
    How do I capture the latest SAP database updates between time intervals?
    3) Suppose I am downloading a TXT file using GUI_DOWNLOAD and, say, on the 20th record some error happens. I can capture the error using the exception.
    Is it possible to run the program once again from the 21st record? All this will be running in the background...
    Kindly clarify...
    Points will be rewarded.
    Thanks in advance

    1. Use tcode STAT, enter the tcode of the FM as input, and execute.
    2. The billing documents are created in table VBRK (header), and there will always be a creation date and time:
    VBRK-ERDAT is the date; you can check the time field as well.
    So if you take the date and time, we can filter and then display the records in intervals.
    3. With an error exception, how would your TXT download finish?
    Once the exception is raised there will not be a download.
    regards,
    vijay
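    On point 2, a small sketch of reading VBRK by creation date and time (ERDAT = created on, ERZET = entry time); the 00:00-00:30 window is just the example from the question:
    types: begin of ty_vbrk,
             vbeln type vbeln_vf,
             erdat type erdat,
             erzet type erzet,
           end of ty_vbrk.
    data: lt_vbrk type standard table of ty_vbrk.
    select vbeln erdat erzet
      from vbrk
      into table lt_vbrk
      where erdat eq sy-datum
        and erzet between '000000' and '003000'.   "00:00:00 to 00:30:00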

  • How to reduce execution time ?

    Hi friends...
    I have created a report to display vendor opening balances,
    total debit, total credit, total balance & closing balance for the given date range. It is working fine, but it takes a long time to execute. How can I reduce the execution time?
    Please help me. It's a very urgent report...
    The coding is as below.....
    report  yfiin_rep_vendordetail no standard page heading.
    tables : bsik,bsak,lfb1,lfa1.
    type-pools : slis .
    *--TABLE STRUCTURE--
    types : begin of tt_bsik,
            bukrs type bukrs,
            lifnr type lifnr,
            budat type budat,
            augdt type augdt,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
         end of tt_bsik,
         begin of tt_lfb1,
             lifnr type lifnr,
             mindk type mindk,
         end of tt_lfb1,
        begin of tt_lfa1,
            lifnr type lifnr,
            name1 type name1,
            ktokk type ktokk,
        end of tt_lfa1,
        begin of tt_opbal,
            bukrs type bukrs,
            lifnr type lifnr,
            gjahr type gjahr,
            belnr type belnr_d,
            budat type budat,
            bldat type bldat,
            waers type waers,
            dmbtr type dmbtr,
            wrbtr type wrbtr,
            shkzg type shkzg,
            blart type blart,
            monat type monat,
            hkont type hkont,
            bstat type bstat_d ,
            prctr type prctr,
            name1 type name1,
            tdr type  dmbtr,
            tcr type  dmbtr,
            tbal type  dmbtr,
          end of tt_opbal,
         begin of tt_bs ,
            bukrs type bukrs,
            lifnr type lifnr,
            name1 type name1,
            prctr type prctr,
            tbal type dmbtr,
            bala type dmbtr,
            balb type dmbtr,
            balc type dmbtr,
            bald type dmbtr,
            bale type dmbtr,
            gbal type dmbtr,
        end of tt_bs.
    ************WORK AREA DECLARATION *********************
    data :  gs_bsik type tt_bsik,
            gs_bsak type tt_bsik,
            gs_lfb1 type tt_lfb1,
            gs_lfa1 type tt_lfa1,
            gs_ageing  type tt_ageing,
            gs_bs type tt_bs,
            gs_opdisp type tt_bs,
            gs_final type tt_bsik,
            gs_opbal type tt_opbal,
            gs_opfinal type tt_opbal.
    ************INTERNAL TABLE DECLARATION*************
    data :  gt_bsik type standard table of tt_bsik,
            gt_bsak type standard table of tt_bsik,
            gt_lfb1 type standard table of tt_lfb1,
            gt_lfa1 type standard table of tt_lfa1,
            gt_ageing type standard table of tt_ageing,
            gt_bs type standard table of tt_bs,
            gt_opdisp type standard table of tt_bs,
            gt_final type standard table of tt_bsik,
            gt_opbal type standard table of tt_opbal,
            gt_opfinal type standard table of tt_opbal.
    *ALV DECLARATIONS *******************
    data : gs_fcat type slis_fieldcat_alv ,
           gt_fcat type slis_t_fieldcat_alv ,
           gs_sort type slis_sortinfo_alv,
           gs_fcats type slis_fieldcat_alv ,
           gt_fcats type slis_t_fieldcat_alv.
    **********global data declaration***************
    data :   kb type dmbtr ,
              return like  bapireturn ,
              balancespgli like  bapi3008-bal_sglind,
              noteditems like  bapi3008-ntditms_rq,
              keybalance type table of  bapi3008_3 with header line,
             opbalance type p.
    *SELECTION SCREEN DECLARATIONS *********************
    selection-screen begin of block b1 with frame .
    select-options : so_bukrs for bsik-bukrs obligatory,
                     so_lifnr for bsik-lifnr,
                     so_hkont for bsik-hkont,
                     so_prctr for bsik-prctr ,
                     so_mindk for lfb1-mindk,
                     so_ktokk for lfa1-ktokk.
    selection-screen end of block b1.
    selection-screen : begin of block b1 with frame.
    parameters       : p_rb1 radiobutton group rad1 .
    select-options   : so_date for sy-datum .
    selection-screen : end of block b1.
    ********************************ASSIGNING ALV GRID
    ****field catalog for balance report
    gs_fcats-col_pos = 1.
    gs_fcats-fieldname = 'BUKRS'.
    gs_fcats-seltext_m =  text-001.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 2 .
    gs_fcats-fieldname = 'LIFNR'.
    gs_fcats-seltext_m = text-002.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 3.
    gs_fcats-fieldname = 'NAME1'.
    gs_fcats-seltext_m =  text-003.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 4.
    gs_fcats-fieldname = 'BALC'.
    gs_fcats-seltext_m =  text-016.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 5.
    gs_fcats-fieldname = 'BALA'.
    gs_fcats-seltext_m =  text-012.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 6.
    gs_fcats-fieldname = 'BALB'.
    gs_fcats-seltext_m =  text-013.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 7.
    gs_fcats-fieldname = 'TBAL'.
    gs_fcats-seltext_m =  text-014.
    append gs_fcats to gt_fcats .
    gs_fcats-col_pos = 8.
    gs_fcats-fieldname = 'GBAL'.
    gs_fcats-seltext_m =  text-015.
    append gs_fcats to gt_fcats .
    data : repid1 type sy-repid.
    repid1 = sy-repid.
    *INITIALIZATION EVENTS ******************************
    initialization.
    *Clearing the work area.
    clear gs_bsik.
    *Refreshing the internal tables.
    refresh gt_bsik.
    ******************START OF  SELECTION EVENTS **************************
    start-of-selection.
    *get data for balance report.
      perform sub_openbal.
      perform sub_openbal_display.
    *&      Form  sub_openbal
    *       text
    *  -->  p1        text
    *  <--  p2        text
    form sub_openbal .
      if   so_date-low > sy-datum or so_date-high > sy-datum .
          message i005(yfi02).
         leave screen.
    endif.
         select bukrs lifnr gjahr belnr budat bldat
           waers dmbtr wrbtr shkzg blart monat hkont prctr
           from bsik into table gt_opbal
           where bukrs in so_bukrs and lifnr in so_lifnr
           and hkont in so_hkont and prctr in so_prctr
           and budat in so_date .
        select bukrs lifnr gjahr belnr budat bldat
           waers dmbtr wrbtr shkzg blart monat hkont prctr
           from bsak appending table gt_opbal
           for all entries in gt_opbal
           where lifnr = gt_opbal-lifnr
           and budat in so_date .
    if sy-subrc <> 0.
      message i007(yfi02).
      leave screen.
      endif.
    select lifnr mindk from lfb1 into table gt_lfb1
      for all entries in gt_opbal
        where lifnr = gt_opbal-lifnr and mindk in so_mindk.
    select lifnr name1 ktokk from lfa1 into table gt_lfa1
      for all entries in gt_opbal
       where lifnr = gt_opbal-lifnr and ktokk in so_ktokk.
       loop at gt_opbal into gs_opbal .
         loop at gt_lfb1 into gs_lfb1 where lifnr = gs_opbal-lifnr.
           loop at gt_lfa1 into gs_lfa1 where lifnr = gs_opbal-lifnr.
            gs_opfinal-bukrs = gs_opbal-bukrs.
            gs_opfinal-lifnr = gs_opbal-lifnr.
            gs_opfinal-gjahr = gs_opbal-gjahr.
            gs_opfinal-belnr = gs_opbal-belnr.
            gs_opfinal-budat = gs_opbal-budat.
            gs_opfinal-bldat = gs_opbal-bldat.
            gs_opfinal-waers = gs_opbal-waers.
            gs_opfinal-dmbtr = gs_opbal-dmbtr.
            gs_opfinal-wrbtr = gs_opbal-wrbtr.
            gs_opfinal-shkzg = gs_opbal-shkzg.
            gs_opfinal-blart = gs_opbal-blart.
            gs_opfinal-monat = gs_opbal-monat.
            gs_opfinal-hkont = gs_opbal-hkont.
            gs_opfinal-prctr = gs_opbal-prctr.
            gs_opfinal-name1 = gs_lfa1-name1.
        if gs_opbal-shkzg    = 'H'.
            gs_opfinal-tcr   =  gs_opbal-dmbtr * -1.
            gs_opfinal-tdr   =  '000000'.
        else.
            gs_opfinal-tdr   =  gs_opbal-dmbtr.
            gs_opfinal-tcr   =  '000000'.
        endif.
           append gs_opfinal to gt_opfinal.
           endloop.
           endloop.
           endloop.
    sort gt_opfinal by bukrs lifnr prctr .
    so_date-low = so_date-low - 1 .
    loop at gt_opfinal into gs_opfinal.
    call function 'BAPI_AP_ACC_GETKEYDATEBALANCE'
      exporting
        companycode        = gs_opfinal-bukrs
        vendor             =  gs_opfinal-lifnr
        keydate            = so_date-low
       balancespgli        = ' '
       noteditems          = ' '
      importing
        return             = return
      tables
        keybalance         = keybalance.
    clear kb .
    loop at keybalance .
       kb = keybalance-lc_bal + kb .
    endloop.
          gs_opdisp-balc = kb.
          gs_opdisp-bukrs =  gs_opfinal-bukrs.
          gs_opdisp-lifnr =  gs_opfinal-lifnr.
          gs_opdisp-name1 =  gs_opfinal-name1.
        at new lifnr .
          sum .
          gs_opfinal-tbal =  gs_opfinal-tdr + gs_opfinal-tcr  .
          gs_opdisp-tbal = gs_opfinal-tbal.
          gs_opdisp-bala = gs_opfinal-tdr .
          gs_opdisp-balb = gs_opfinal-tcr .
          gs_opdisp-gbal = keybalance-lc_bal + gs_opfinal-tbal .
          append gs_opdisp to gt_opdisp.
        endat.
        clear gs_opdisp.
        clear keybalance .
      endloop.
      delete adjacent duplicates from gt_opdisp.
    endform.                    " sub_openbal
    *&      Form  sub_openbal_display
    *       text
    *  -->  p1        text
    *  <--  p2        text
    form sub_openbal_display .
    call function 'REUSE_ALV_GRID_DISPLAY'
        exporting
    *     I_INTERFACE_CHECK                 = ' '
    *     I_BYPASSING_BUFFER                = ' '
    *     I_BUFFER_ACTIVE                   = ' '
          i_callback_program                = repid1
    *     I_CALLBACK_PF_STATUS_SET          = ' '
    *     I_CALLBACK_USER_COMMAND           = ' '
    *     I_CALLBACK_TOP_OF_PAGE            = ' '
    *     I_CALLBACK_HTML_TOP_OF_PAGE       = ' '
    *     I_CALLBACK_HTML_END_OF_LIST       = ' '
    *     I_STRUCTURE_NAME                  =
    *     I_BACKGROUND_ID                   = ' '
    *     I_GRID_TITLE                      =
    *     I_GRID_SETTINGS                   =
    *     IS_LAYOUT                         =
          it_fieldcat                       = gt_fcats
    *     IT_EXCLUDING                      =
    *     IT_SPECIAL_GROUPS                 =
    *     IT_SORT                           =
    *     IT_FILTER                         =
    *     IS_SEL_HIDE                       =
    *     I_DEFAULT                         = 'X'
    *     I_SAVE                            = 'X'
    *     IS_VARIANT                        =
    *     it_events                         =
    *     IT_EVENT_EXIT                     =
    *     IS_PRINT                          =
    *     IS_REPREP_ID                      =
    *     I_SCREEN_START_COLUMN             = 0
    *     I_SCREEN_START_LINE               = 0
    *     I_SCREEN_END_COLUMN               = 0
    *     I_SCREEN_END_LINE                 = 0
    *     IT_ALV_GRAPHICS                   =
    *     IT_HYPERLINK                      =
    *     IT_ADD_FIELDCAT                   =
    *     IT_EXCEPT_QINFO                   =
    *     I_HTML_HEIGHT_TOP                 =
    *     I_HTML_HEIGHT_END                 =
    *   importing
    *     E_EXIT_CAUSED_BY_CALLER           =
    *     ES_EXIT_CAUSED_BY_USER            =
        tables
          t_outtab                          = gt_opdisp
        exceptions
          program_error                     = 1
          others                            = 2.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endform.                    " sub_openbal_display

    I think you are using the FOR ALL ENTRIES addition in almost all of your SELECT statements, but I didn't see any check before you use it.
    If you are using FOR ALL ENTRIES IN gt_opbal, make sure that gt_opbal has some records; otherwise the SELECT will try to read all records from the database table.
    Check before using FOR ALL ENTRIES in the SELECT statement, for example (field and table names here are only placeholders):
    if gt_opbal is not initial.
      select f1 f2 f3 from dbtab into table itab
        for all entries in gt_opbal
        where key = gt_opbal-key.
    else.
      select f1 f2 from dbtab into table itab
        where a = 1
          and b = 2.
    endif.
    I didn't see anything else wrong in your report, but this is a major time consumer when the table you use with FOR ALL ENTRIES has no records.

  • Reduce execution time with selects

    Hi,
    I have to reduce the execution time of a report; most of the time is consumed in the SELECT query.
    I have a table, gt_result:
    DATA: BEGIN OF gwa_result,
          tknum            LIKE vttk-tknum,
          stabf            LIKE vttk-stabf,
          shtyp            LIKE vttk-shtyp,
          route            LIKE vttk-route,
          vsart            LIKE vttk-vsart,
          signi            LIKE vttk-signi,
          dtabf            LIKE vttk-dtabf,
          vbeln            LIKE likp-vbeln,
          /bshm/le_nr_cust LIKE likp-/bshm/le_nr_cust,
          vkorg            LIKE likp-vkorg,
          werks            LIKE likp-werks,
          regio            LIKE kna1-regio,
          land1            LIKE kna1-land1,
          xegld            LIKE t005-xegld,
          intca            LIKE t005-intca,
          bezei            LIKE tvrot-bezei,
          bezei1           LIKE t173t-bezei,
          fecha(10) type c.
    DATA: END OF gwa_result.
    DATA: gt_result LIKE STANDARD TABLE OF gwa_result.
    And the select query is this:
      SELECT k~tknum k~stabf k~shtyp k~route k~vsart k~signi
             k~dtabf
             l~vbeln l~/bshm/le_nr_cust l~vkorg l~werks n~regio n~land1 o~xegld o~intca
             t~bezei tt~bezei
      FROM vttk AS k
      INNER JOIN vttp  AS p ON k~tknum = p~tknum
      INNER JOIN likp  AS l ON p~vbeln = l~vbeln
      INNER JOIN kna1  AS n ON l~kunnr = n~kunnr
      INNER JOIN t005  AS o ON n~land1 = o~land1
      INNER JOIN tvrot AS t ON t~route = k~route AND t~spras = sy-langu
      INNER JOIN t173t AS tt ON tt~vsart = k~vsart AND tt~spras = sy-langu
      INTO TABLE gt_result
      WHERE ktknum IN s_tknum AND ktplst IN s_tplst AND k~route IN s_route AND
         k~erdat BETWEEN s_erdat-low AND s_erdat-high AND
         l~/bshm/le_nr_cust <> ' '    "IS NOT NULL
         AND k~stabf = 'X'
       AND k~tknum NOT IN ( SELECT tk~tknum FROM vttk AS tk
                            INNER JOIN vttp AS tp ON tk~tknum = tp~tknum
                            INNER JOIN likp AS tl ON tp~vbeln = tl~vbeln
                             WHERE l~/bshm/le_nr_cust IS NULL )
         AND k~tknum NOT IN ( SELECT tknum FROM /bshs/ssm_eship )
         AND ( o~xegld = ' '
               OR ( o~xegld = 'X' AND
                    ( ( n~land1 = 'ES'
                        AND ( nregio = '51' OR nregio = '52'
                              OR nregio =  '35' OR nregio =  '38' ) )
                               OR n~land1 = 'ESC' ) )
                      OR ointca = 'AD' OR ointca = 'GI' ).
    Does somebody know how to reduce the execution time ?.
    Thanks.

    Hi,
    Try to remove the joins. Use separate SELECTs as shown in the example below and, for the sake of the later selection, keep some key fields in your internal table.
    Then once your final table is created, you can copy the table into GT_FINAL, which will contain only the fields you need.
    EX
    data : begin of it_likp occurs 0,
             vbeln like likp-vbeln,
             /bshm/le_nr_cust like likp-/bshm/le_nr_cust,
             vkorg like likp-vkorg,
             werks like likp-werks,
           kunnr like likp-kunnr,
           end of it_likp.
    data : begin of it_kna1 occurs 0,
           kunnr like...
           regio....
           land1...
          end of it_kna1.
    Select tknum stabf shtyp route vsart signi dtabf
    from VTTK
    into table gt_result
    WHERE tknum IN s_tknum AND
          tplst IN s_tplst AND
          route IN s_route AND
          erdat BETWEEN s_erdat-low AND s_erdat-high.
    select vbeln /bshm/le_nr_cust
           vkorg werks kunnr
           from likp
           into table it_likp
           for all entries in gt_result
           where vbeln = gt_result-vbeln.
    select kunnr
           regio
           land1
           from kna1
           into it_kna1
           for all entries in it_likp
           where kunnr = it_likp-kunnr.
    similarly for other tables.
    Then loop at gt_result, read the corresponding tables, and populate the entire record:
    loop at gt_result.
      read table it_likp with key vbeln = gt_result-vbeln.
      if sy-subrc eq 0.
        move-corresponding it_likp to gt_result.
        gt_result-kunnr = it_likp-kunnr.
        modify gt_result.
      endif.
      read table it_kna1 with key kunnr = gt_result-kunnr.
      if sy-subrc eq 0.
        gt_result-regio = it_kna1-regio.
        gt_result-land1 = it_kna1-land1.
        modify gt_result.
      endif.
    endloop.

  • Oracle - select execution time

    hi all,
    when I executed a SQL SELECT (with joins and so on), I observed the following behaviour:
    Query execution times are like:
    for 1000 records - 4 sec
    5000 records - 10 sec
    10000 records - 7 sec
    25000 records - 16 sec
    50000 records - 33 sec
    I tested this behaviour with different sets of SQL on different sets of data, but in each case the behaviour was more or less the same.
    Can anyone explain why Oracle takes more time to return 5000 records than it takes to return 10000?
    Please note that this has nothing to do with the SQL itself, as I tested this with different sets of SQL on different sets of data.
    Can there be some internal Oracle reason which can explain this behaviour?
    regards
    at

    That is not normal behaviour. I've never come across anything like that that wasn't explainable by some environment factor (e.g. someone else doing a big sort).
    I ran a couple of tests:
    (1) to insert 5000 rows 0.1 seconds
    to insert 10000 rows 0.18 seconds
    and
    (2) to select 5000 rows joined with 200K row table 0.19 seconds
    to select 10000 rows joined with 200K row table 0.2 seconds
    Although the second is close, I grant you!
    Cheers, APC

  • Same sqlID with different  execution plan  and  Elapsed Time (s), Executions time

    Hello All,
    The AWR reports for two days show the same SQL ID with a different execution plan and different Elapsed Time (s) and Executions; please help me find out the reason for this change.
    Please find the details below; on the 17th my processes are very slow compared to the 18th.
    17th Oct                                          18th Oct
    221,808,602            21   2tc2d3u52rppt        213,170,100    72,495,618   9c8wqzz7kyf37
    209,239,059    71,477,888   9c8wqzz7kyf37        139,331,777             1   7b0kzmf0pfpzn
    144,813,295             1   0cqc3bxxd1yqy        102,045,818             1   8vp1ap3af0ma5
    128,892,787    16,673,829   84cqfur5na6fg         89,485,065             1   5kk8nd3uzkw13
    127,467,250    16,642,939   1uz87xssm312g         67,520,695     8,058,820   a9n705a9gfb71
    104,490,582    12,443,376   a9n705a9gfb71         62,627,205             1   ctwjy8cs6vng2
    101,677,382    15,147,771   3p8q3q0scmr2k         57,965,892       268,353   akp7vwtyfmuas
     98,000,414             1   0ybdwg85v9v6m         57,519,802            53   1kn9bv63xvjtc
     87,293,909             1   5kk8nd3uzkw13         52,690,398             0   9btkg0axsk114
     77,786,274            74   1kn9bv63xvjtc         34,767,882         1,003   bdgma0tn8ajz9
    Not only are the queries different, but the number of blocks read by the top 10 queries is also much higher on the 17th than on the 18th.
    The other big difference is the average read time on the two days:
    Tablespace IO Stats
    17th Oct
    Tablespace            Reads  Av Reads/s  Av Rd(ms)  Av Blks/Rd   Writes  Av Writes/s  Buffer Waits  Av Buf Wt(ms)
    INDUS_TRN_DATA01    947,766          59       4.24        4.86  185,084           11         2,887           6.42
    UNDOTBS2            517,609          32       4.27        1.00  112,070            7           108          11.85
    INDUS_MST_DATA01    288,994          18       8.63        8.38   52,541            3        23,490           7.45
    INDUS_TRN_INDX01    223,581          14      11.50        2.03   59,882            4           533           4.26
    TEMP                198,936          12       2.77       17.88   11,179            1           732           2.13
    INDUS_LOG_DATA01     45,838           3       4.81       14.36      348            0             1           0.00
    INDUS_TMP_DATA01     44,020           3       4.41       16.55      244            0         1,587           4.79
    SYSAUX               19,373           1      19.81        1.05   14,489            1             0           0.00
    INDUS_LOG_INDX01     17,559           1       4.75        1.96    2,837            0             2           0.00
    SYSTEM                7,881           0      12.15        1.04    1,361            0           109           7.71
    INDUS_TMP_INDX01      1,873           0      11.48       13.62      231            0             0           0.00
    INDUS_MST_INDX01        256           0      13.09        1.04      194            0             2          10.00
    UNDOTBS1                 70           0       1.86        1.00       60            0             0           0.00
    STG_DATA01               63           0       1.27        1.00       60            0             0           0.00
    USERS                    63           0       0.32        1.00       60            0             0           0.00
    INDUS_LOB_DATA01         62           0       0.32        1.00       60            0             0           0.00
    TS_AUDIT                 62           0       0.48        1.00       60            0             0           0.00
    18th Oct
    Tablespace            Reads  Av Reads/s  Av Rd(ms)  Av Blks/Rd   Writes  Av Writes/s  Buffer Waits  Av Buf Wt(ms)
    INDUS_TRN_DATA01    980,283          91       1.40        4.74

    The AWR reports for two days show the same SQL ID with a different execution plan and different Elapsed Time (s) and Executions; please help me find out the reason for this change.
    Please find the details below; on the 17th my processes are very slow compared to the 18th.
    You wrote "with different execution plan", so I think you saw the plans; it is very difficult to get the old plan back.
    I think the execution plan does not change between days if you have not added an index or similar.
    What does the ADDM report say about this script?
    As you know, it is normal to get a different elapsed time for the same statement on different days.
    It depends on your database workload.
    I think you should use the SQL Access Advisor and SQL Tuning Advisor for this script.
    You can get a solution for the slow-running problem.
    Regards
    Mahir M. Quluzade
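    A hedged sketch of running the SQL Tuning Advisor for one of the SQL IDs listed above (the task name is arbitrary, and running DBMS_SQLTUNE tasks requires the ADVISOR privilege):
    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                  sql_id    => '9c8wqzz7kyf37',
                  task_name => 'tune_9c8wqzz7kyf37');
      DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
    END;
    /
    SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_9c8wqzz7kyf37') FROM dual;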

  • Query execution time

    Dear SCN,
    I am new to the BOBJ environment. I have created a WebI report on top of a BEx query using a BICS connection. The BEx query is built for vendor ageing analysis. The BEx query itself takes very little time to execute (max 1 min), but the WebI report takes around 5 minutes when I click refresh. I have not used any conditions, filters, or restrictions at the WebI level; all are done at the BEx level only.
    Please let me know techniques to optimize the query execution time in WebI. Currently we are on BO 4.0.
    Regards,
    PRK

    Hi Praveen
    Go through this document for performance optimization using a BICS connection:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/d0e3c552-e419-3010-1298-b32e6210b58d?QuickLink=index&…

  • How to extract execution time of a step

    How do I extract the execution time of a step?
    This step calls another sequence, and I want to know how long it takes to execute that sequence.
    I need this information at run time, not in the report.
    Thanks.

    Hi,
    You could try this:
    Enable Record Result for the step in question.
    This will allow you to extract the TotalTime.
    Then you could use RemoveElements(Locals.ResultList, "[0]", 1) - this assumes you have only logged one result. This is the same as not recording results for that step.
    Otherwise you will have to work out the time yourself by calling the API: Execution.SecondsExecuting() - Execution.SecondsAtStart().
    Regards
    Ray Farmer

  • Execution time difference between SELECT & UPDATE statement in JDBC Sender.

    Hi Experts,
    In my scenario, I have used the JDBC sender adapter with SELECT and UPDATE statements.
    The problem is that between the execution of the SELECT and the UPDATE statements, a few more entries arrive in the same DB table.
    As a result, the UPDATE is applied to entries which were not even picked up by the SELECT statement.
    Can we avoid this execution time gap between the SELECT and UPDATE statements on the JDBC sender side?
    Thanks & Regards
    Jagesh

    Hi
    Use the serializable option in the additional parameters; then the newly arrived entries would also be updated.
