Execution time calculation issue

Hi,
I have a procedure which updates table data and captures the execution time for each table.
For that I am using the procedure below, and it gives incorrect values, i.e. end_time < start_time.
Please find the code below (lines 25, 26, 34, 73, 76, 79, 80 contain the execution-time calculation) and the output.
1 CREATE OR REPLACE PROCEDURE my_proc
2 IS
3 CURSOR c
4 IS
5 SELECT tablename, TYPE
6 FROM table_list;
7
8 TYPE emp_record IS RECORD (
9 empkey tab1.pkey%TYPE,
10 rid ROWID,
11 ID tab_join.ID%TYPE,
12 dt tab_join.dt%TYPE
13 );
14
15 TYPE emp_type IS TABLE OF emp_record
16 INDEX BY BINARY_INTEGER;
17
18 v_emp emp_type;
19
20 TYPE emp_type_rowid IS TABLE OF ROWID
21 INDEX BY BINARY_INTEGER;
22 tab_no Number:=0;
23 emp_rowid emp_type_rowid;
24 r_cur sys_refcursor;
25 v_start_time TIMESTAMP; /** Added for time calculation*/
26 v_end_time TIMESTAMP; /** Added for time calculation*/
27 string1 VARCHAR2 (1000) := 'SELECT b.empkey, b.ROWID rid, a.id id, a.dt dt FROM emp_base a,';
28 string2 VARCHAR2 (1000) := ' b WHERE a.empkey = b.empkey';
29 rowcnt Number := 0; string_d VARCHAR2 (2000); upd_string VARCHAR2 (2000);
30 BEGIN
31 FOR c1 IN c
32 LOOP
33 tab_no:=tab_no+1;
34 v_start_time := SYSTIMESTAMP; /** Added for time calculation*/
35 BEGIN
36 string_d := string1 || c1.tablename || string2;
37
38 OPEN r_cur FOR string_d;
39
40 LOOP
41 FETCH r_cur
42 BULK COLLECT INTO v_emp LIMIT 50000;
43
44 IF v_emp.COUNT > 0
45 THEN
46 FOR j IN v_emp.FIRST .. v_emp.LAST
47 LOOP
48 emp_rowid (j) := v_emp (j).rid;
49 END LOOP;
50
51 upd_string := ' UPDATE ' || c1.tablename || ' SET id = ' || v_emp (1).ID || ' WHERE ROWID = :emp_rowid';
52 FORALL i IN emp_rowid.FIRST .. emp_rowid.LAST
53 EXECUTE IMMEDIATE upd_string
54 USING emp_rowid (i);
55 rowcnt := rowcnt + emp_rowid.COUNT;
56 END IF;
57
58 EXIT WHEN v_emp.COUNT < 50000;
59 v_emp.DELETE;
60 emp_rowid.DELETE;
61 END LOOP;
62
63 v_emp.DELETE;
64 emp_rowid.DELETE;
65
66 CLOSE r_cur;
67 EXCEPTION
68 WHEN OTHERS
69 THEN
70 DBMS_OUTPUT.put_line (SQLERRM);
71 END;
72
73 v_end_time := SYSTIMESTAMP; /** Added for time calculation*/
74
75 INSERT INTO exec_time
76 VALUES (tab_no||' '||c1.tablename, v_start_time, v_end_time, v_end_time - v_start_time, rowcnt); /** Added for time calculation*/
77
78 COMMIT;
79 v_start_time := NULL; /** Added for time calculation*/
80 v_end_time := NULL; /** Added for time calculation*/
81 rowcnt := 0;
82 END LOOP;
83 END;
Output:
Table: EXEC_TIME
"TABLE_NAME"      "START_TIME"     "END_TIME"      "EXCUTION_TIME"      "NO_OF_RECORDS_PROCESSED"
TAB7      5/29/2013 10:52:23.000000 AM      5/29/2013 10:52:24.000000 AM      +00 00:00:00.521707      773
TAB5      5/29/2013 10:52:18.000000 AM      5/29/2013 10:52:15.000000 AM      -00 00:00:03.381468      56525
TAB6      5/29/2013 10:52:15.000000 AM      5/29/2013 10:52:23.000000 AM      +00 00:00:08.624420      49078
TAB2      5/29/2013 10:51:54.000000 AM      5/29/2013 10:51:42.000000 AM      -00 00:00:11.932558      529
TAB4      5/29/2013 10:51:47.000000 AM      5/29/2013 10:52:18.000000 AM      +00 00:00:31.208966      308670
TAB1      5/29/2013 10:51:45.000000 AM      5/29/2013 10:51:54.000000 AM      +00 00:00:09.124238      65921
TAB3      5/29/2013 10:51:42.000000 AM      5/29/2013 10:51:47.000000 AM      +00 00:00:04.502432      12118
Issue: I am getting negative execution times because end_time < start_time.
Please suggest how to resolve this.
Thanks.

Welcome to the forum!!
Please read {message:id=9360002} from the FAQ for the list of details (WHAT and HOW) you need to provide when asking a question.
There are three things in your code that I really dislike:
1. The BULK COLLECT/FORALL update, which is totally unnecessary here
2. The COMMIT inside the loop
3. The WHEN OTHERS exception handler, which swallows every error
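On point 3 in particular: a handler that only prints SQLERRM lets the loop carry on as if the statement had succeeded, so you record a "timing" for work that never happened. If you must keep a handler there, at least re-raise after logging, along these lines:
EXCEPTION
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.put_line (SQLERRM);
      RAISE;   -- re-raise so the failure is not silently swallowed
END;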
Your code can be simplified like this:
create or replace procedure my_proc
is
   v_start_time timestamp;
   v_end_time   timestamp;
   v_rowcnt     integer;
   sql_string   varchar2(4000);
begin
   for c1 in (
                 select rownum tab_no
                      , tablename
                      , type
                   from table_list
             )
   loop
      sql_string := q'[
                              merge into #tablename# a
                              using (
                                      select empkey, id, dt
                                        from emp_base
                                    ) b
                                 on (
                                      a.empkey = b.empkey
                                    )
                               when matched then
                                   update set a.id = b.id
                    ]';
      sql_string := replace(sql_string, '#tablename#', c1.tablename);
      v_start_time := systimestamp;
      execute immediate sql_string;
      v_end_time   := systimestamp;
      v_rowcnt     := sql%rowcount;
      insert into exec_time
           values (
                  c1.tab_no || ' ' || c1.tablename
                , v_start_time
                , v_end_time
                , v_end_time - v_start_time
                , v_rowcnt
                );
   end loop;
   commit;
end;
In the INSERT statement on the table EXEC_TIME, try to include the column list.
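For example, assuming the columns match the headers in your output above (a sketch only; EXCUTION_TIME is spelled as it appears there):
insert into exec_time (table_name, start_time, end_time, excution_time, no_of_records_processed)
values (c1.tab_no || ' ' || c1.tablename, v_start_time, v_end_time, v_end_time - v_start_time, v_rowcnt);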
NOTE: The above code is untested.

Similar Messages

  • PHP. How is execution time calculated?

    As, unfortunately, I am rather good at programming infinite loops, I have set the maximum execution time on the test version of my program to one second. However it usually seems to take anything up to about a minute to time out, after which it announces that the maximum execution time of one second has been exceeded.
    Why is there such a large discrepancy?
    Clancy

    .oO(Clancy)
    >Why is there such a large discrepancy?
    The set_time_limit directive doesn't set the "real" clock time after which the script will be killed. It's a bit more complicated, and there are some more things to consider.
    It depends on what the script is actually doing, because many kinds of operations like IO stuff, database operations, system calls etc. do not count towards the execution time. If the script is just sleeping and idling around, it won't eat up any CPU time, so it might run a lot longer than just the time defined with set_time_limit.
    In other cases the web server can be an issue as well, especially when you use a script to send large files to the browser. The download might take much more time than the PHP script is allowed to run, because the script delivers its content to the web server, which will cache it on its own before it's delivered to the browser. And then the script can already be finished while the download is still running.
    So there are many things that have to be taken into account when it comes to determining the real script execution time.
    Micha

  • Execution Time Issue

    Help Please!!!
    I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
    Collects analog data from a cDAQ chassis with a 9205 at 5kHz
    Data is collected in 100ms chunks
    Some of the data is saved directly to a TDMS file while the rest is averaged for a single data point. That single data point is saved to disk in a text file every 200ms.
    Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal operation execution times are on the order of 1ms or less. This issue will happen randomly during operation. It's usually many seconds between times that this occurs and it doesn't seem to have any pattern to when the event happens.
    Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that will show the troubling execution time. All the other timing checks show 0ms every time this issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
    Where else can I look for where the time went? It doesn't seem to make sense.
    Thanks for reading!
    Tim
    Attachments:
    Screen Shot 2013-09-06 at 9.32.46 AM.png (87 KB)

    You should probably increase how much data you write with a single Write to Text File.  Move the Write to Text File out of the FOR loop.  Just have the data to be written autoindex to create an array of strings.  The Write to Text File will accept the array of strings directly, writing a single line for each element in the array.
    Another idea I am having is to use another loop (yes another queue as well) for the writing of the file.  But you put the Dequeue Element inside of another WHILE loop.  On the first iteration of this inside loop, set the timeout to something normal or -1 for wait forever.  Any further iteration should have a timeout of 0.  You do this with a shift register.  Autoindex the read strings out of the loop.  This array goes straight into the Write to Text File.  This way you can quickly catch up when your file write takes a long time.
    NOTE:  This is just a very quick example I put together. It is far from a complete idea, but it shows the general idea I was having with reading the queue.
    Attachments:
    Write all data on queue.png (16 KB)

  • Execution time issues with SU01 demo script

    Having worked with Scripting in a Box for a while now I wanted to try out the examples there. I read FM: SO_USER_LIST_READ or another one explaining why my attempt to narrow the returned users failed (Craig, did you find out why the functionality was removed?) and Re: Issue with "Scripting in a Box" seeing that  Harry had the same problems with only ~200 users in his system. However, Craig's original post states he successfully managed with over 400 users. I'm a bit confused...
    I included some simple timing stuff and found out that processing of one user in the loop takes about 1.7 seconds - little surprise then that every script times out. This seems to be due to the additional calls to GetStatus() and GetValid() - by commenting them out I get the whole list rather quickly.
    Unfortunately commenting them out also means no nice icons for 'Status' and 'Valid', which is not desired. I probably could create a Z FM to deliver me the userlist with these two fields already added (which would save on rfc-calls, assuming the operation is much quicker on the server directly), but I hoped to get a solution based purely on PHP, not own ABAP coding (being aware that Craig also used a Z FM anyway, but still...)
    I'm a bit unsure now how easy it is to actually create useful frontends in PHP, with such long execution times. I assume this will happen in other occasions as well, not only for user lists. Is there an alternative? Or a general way to do those things quicker?
    :Frederic:

    Craig: you say it's easy to go from 1.7 seconds per user lookup down to a small fraction of it? Then apparently I'm lacking these skills. Could you please give me a hint what should be done there?
    I thought about creating a Z function, but having to write custom wrappers - possibly for about any transaction in this style - is something I wanted to avoid.
    Bala: the two functions only take one user as input, not a list. So w/o modifying the ABAP side I can't feed the whole list in there. I wonder how much it would improve the result time anyway, so perhaps I'm trying it. It's just not a solution I'd prefer.
    Paging is a good idea, the actual call to get the whole userlist is quite quick. Having like 20 users displayed at a time is manageable - still slow, but the script won't timeout anymore. I think I'll implement this today.
    About AJAX: yes, I want to play around a bit with AJAX, however having the two columns valid and status removed and the information only displayed on mouseover etc. is a bit like cheating. And 1.7+ seconds waiting for a hover info is too long. So I'd like to optimize on the rfc-calling side primarily.
    Craig: surely it was just a demo, and I'm just trying to get it to work for understanding it
    :Frederic:

  • Execution times and other issues in changing TextEdit documents

    (HNY -- back to terrorize everyone with off-the-wall questions)
    --This relates to other questions I've asked recently, but that may be neither here nor there.
    --Basically, I want to change a specific character in an open TextEdit document, in text that can be pretty lengthy. But I don't want pre-existing formatting of the text to change.
    --For test purposes the front TextEdit document is simply an unbroken string of letters (say, all "q's") ranging in number from 5 and upwards. Following are some of the results I've gotten:
    --1) Using a do shell script routine (below), the execution is very fast, well under 0.1 second for changing 250 q's to e's and changing the front document. The problem is that the formatting of the first character becomes the formatting for the entire string (in fact, for the entire document if there is subsequent text). So that doesn't meet my needs, although I certainly like the speed.
    --SCRIPT 1
    tell application "TextEdit"
    set T to text of front document
    set Tnew to do shell script "echo " & quoted form of T & " | sed 's/q/e/g'"
    set text of front document to Tnew
    end tell
    --END SCRIPT 1
    --The only practical way I've found to change a character AND maintain formatting is the "set every character where it is "q" to "e"" routine (below). But, for long text, I've run into a serious execution speed problem. For example, if the string consists of 10 q's, the script executes in about 0.03 second. If the string is 40 characters, the execution is 0.14 second, a roughly linear increase. If the string is 100 characters, the execution is 2.00 seconds, which doesn't correlate to a linear increase at all. And if the string is 250 characters, I'm looking at 70 seconds. At some point, increasing the number of string characters leads to a timeout or stall. One interesting aspect of this is that, if only the last 4 characters (example) of the 250-character string are "q", then the execution time is again very quick.
    --SCRIPT 2
    tell application "TextEdit"
    set T to text of front document
    tell text of front document
    set every character where it is "q" to "e"
    end tell
    end tell
    --END SCRIPT 2
    --Any insight into this issue (or workaround) will be appreciated.
    --In the real world, I most often encounter the issue when trying to deal with spaces in long text, which can be numerous.

    OK, Camelot, helpful but maddening. Based on your response, I elected to look at this some more, even though I'm stuck with TextEdit on this project. Here's what I found, not necessarily in the order I did things:
    1) I ran your "repeat" script on my usual machine (2.7 GHz PPC with 10.4.6) and was surprised to consistently get about 4.25 seconds -- I didn't think it should matter, but I happened to run it with Script Debugger.
    2) Then, curious as to what a slower processor speed would do, I ran it at ancient history speed -- a 7500 souped up to 700 MHz. On a 10.4.6 partition, the execution time was about 17 seconds, but on a 10.3.6 partition it was only about 9.5 seconds. (The other complication with this older machine is that it uses XPostFacto to accommodate OS X.) And I don't have Script Debugger for 10.3.x, so I ran the script in Script Editor on that partition.
    3) That got me wondering about Script Editor vs. Script Debugger, so (using 10.4.6) I ran the script on both the old machine and my (fast) usual machine using Script Editor. On the old machine, it was somewhat faster at about 14 seconds. But, surprise!, on the current machine it took twice as long at 8.6 seconds. The story doesn't end here.
    (BTW, I added a "ticks" routine to the script, so the method of measuring time should be consistent. And I've been copying and pasting the script to the various editors, so there shouldn't be any inconsistencies with it. I've consistently used a 250-character unbroken string of the target in a TextEdit document.)
    4) Mixed in with all these trials, I wrote a script to get a list of offsets of all the target characters; it can be configured to change the characters or not. But I found some intriguing SE vs. SD differences there also. In tests on the fast machine, running the script simply to get the offset list (without making any changes), the list is generated in under a second -- but sometimes barely so. The surprise was that SE ran it in about half the time as SD. although SD was about twice as fast with a script that called for changes. Go figure.
    5) Since getting the offset list is pretty fast in either case, I was hoping to think up some innovative way of using the offset list to make changes in the document more quickly. But running a repeat routine with the list simply isn't innovative, and the result is roughly what I get with your repeat script coupled with an added fraction of a second for generating the list. Changing each character as each offset is generated also yields about the same result.
    My conclusion from all this is that the very fast approaches (which lose formatting) are changing the characters globally, not one at a time as occurs visibly with techniques where the formatting isn't lost. I don't know what to make of SE vs. SD, but I repeated the runs several times in each editor, with consistent results.
    Finally, while writing the offset list script, I encountered a couple AS issues that I've seen several times in the past (having nothing specifically to do with this topic), but I'll present that as a new post.
    Thanks for your comments and any others will be welcome.

  • How to know the execution time of rule in Calculation Manager

    Hi All,
    How do we find out the execution time of a rule in Calculation Manager?
    Regards
    Vikram

    At this point there is no way to know the execution time of a rule in Calculation Manager. If you are working on Planning rules, I believe Planning displays the execution time in its job console.
    -SR

  • Calculating execution time in the case of refcursor

    I have a package like the one given below.
    create or replace package scott_test is
    type ref_cursor is ref cursor;
    procedure scott_test(p_dno in number, p_cursor out ref_cursor );
    end scott_test;
    CREATE OR REPLACE package body scott_test is
    procedure scott_test(p_dno in number, p_cursor out ref_cursor )is
    begin
         OPEN p_cursor FOR
         select empno,ename,dname from emp natural join dept where deptno=p_dno;
    end;
    end scott_test ;
    If I execute the SP with
    var x refcursor
    execute scott_test.scott_test(10,:x)
    it will return all rows selected by the query.
    I want to know the exact time it takes when I execute the SP in this package. Since this returns a ref cursor, the fetch happens only when we execute the SP.
    Which is the best way of capturing execution time when a ref cursor is used?

    I hope you are looking for something like this:
    SQL> VAR V REFCURSOR
    SQL> create or replace procedure timetest(r out sys_refcursor) as
      2  st  PLS_INTEGER;
      3  nt  PLS_INTEGER;
      4  begin
      5  st:= dbms_utility.get_time;
      6  open r for select * from all_objects;
      7  nt:= dbms_utility.get_time;
      8  dbms_output.put_line( nt-st ||' elapsed');
      9  end timetest;
    10  /
    Procedure created.
    SQL> EXECUTE TIMETEST(:V);
    3 elapsed
    PL/SQL procedure successfully completed.
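    Note that dbms_utility.get_time returns elapsed time in hundredths of a second, and the example above only times the OPEN, which merely parses the query and binds the cursor. Since, as you noted, the real work happens at fetch time, a sketch along these lines (still using all_objects as a stand-in for your query) would capture the full fetch as well:
    declare
       r      sys_refcursor;
       st     pls_integer;
       type t_objs is table of all_objects%rowtype;
       v_objs t_objs;
    begin
       st := dbms_utility.get_time;
       open r for select * from all_objects;
       fetch r bulk collect into v_objs;  -- the fetch is where the time is actually spent
       close r;
       dbms_output.put_line((dbms_utility.get_time - st) || ' cs elapsed (open + full fetch)');
    end;
    /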

  • Loading jar files at execution time via URLClassLoader

    Hello All,
    I'm making a Java SQL Client. I have practically all the basic work done; now I'm trying to improve it.
    One thing I want it to do is to allow the user to specify new drivers and to use them to make new connections. To do this I have this class:
    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.Enumeration;
    import java.util.Vector;
    import java.util.jar.JarFile;
    import java.util.zip.ZipEntry;

    public class DriverFinder extends URLClassLoader{
        private JarFile jarFile = null;

        private Vector drivers = new Vector();

        public DriverFinder(String jarName) throws Exception{
            super(new URL[]{ new URL("jar", "", "file:" + new File(jarName).getAbsolutePath() + "!/") }, ClassLoader.getSystemClassLoader());
            jarFile = new JarFile(new File(jarName));

            /*
            System.out.println("-->" + System.getProperty("java.class.path"));
            System.setProperty("java.class.path", System.getProperty("java.class.path") + File.pathSeparator + jarName);
            System.out.println("-->" + System.getProperty("java.class.path"));
            */

            Enumeration enumeration = jarFile.entries();
            while(enumeration.hasMoreElements()){
                String className = ((ZipEntry)enumeration.nextElement()).getName();
                if(className.endsWith(".class")){
                    className = className.substring(0, className.length()-6);
                    if(className.indexOf("Driver")!=-1) System.out.println(className);

                    try{
                        Class classe = loadClass(className, true);
                        Class[] interfaces = classe.getInterfaces();
                        for(int i=0; i<interfaces.length; i++){
                            if(interfaces[i].getName().equals("java.sql.Driver")){
                                drivers.add(classe);
                            }
                        }
                        Class superclasse = classe.getSuperclass();
                        interfaces = superclasse.getInterfaces();
                        for(int i=0; i<interfaces.length; i++){
                            if(interfaces[i].getName().equals("java.sql.Driver")){
                                drivers.add(classe);
                            }
                        }
                    }catch(NoClassDefFoundError e){
                    }catch(Exception e){}
                }
            }
        }

        public Enumeration getDrivers(){
            return drivers.elements();
        }

        public String getJarFileName(){
            return jarFile.getName();
        }

        public static void main(String[] args) throws Exception{
            DriverFinder df = new DriverFinder("D:/Classes/db2java.zip");
            System.out.println("jar: " + df.getJarFileName());
            Enumeration enumeration = df.getDrivers();
            while(enumeration.hasMoreElements()){
                Class classe = (Class)enumeration.nextElement();
                System.out.println(classe.getName());
            }
        }
    }
    It loads a jar and searches it looking for drivers (classes implementing, directly or indirectly, the interface java.sql.Driver). At the end of the execution I have found all drivers in the jar file.
    The main application loads jar files from an XML file and instantiates one DriverFinder for each jar file. The problem is at execution time: it finds the drivers, and I think it loads them by issuing this statement (Class classe = loadClass(className, true);), but what I think is not what is happening... the execution of my code throws this exception:
    java.lang.ClassNotFoundException: com.ibm.as400.access.AS400JDBCDriver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:198)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:186)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:299)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:265)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:255)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:315)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:140)
            at com.marmots.database.DB.<init>(DB.java:44)
            at com.marmots.dbreplicator.DBReplicatorConfigHelper.carregaConfiguracio(DBReplicatorConfigHelper.java:296)
            at com.marmots.dbreplicator.DBReplicatorConfigHelper.<init>(DBReplicatorConfigHelper.java:74)
            at com.marmots.dbreplicator.DBReplicatorAdmin.<init>(DBReplicatorAdmin.java:115)
            at com.marmots.dbreplicator.DBReplicatorAdmin.main(DBReplicatorAdmin.java:93)
    The driver file is not in the classpath!!!
    I have also tried (as you can see in the commented lines) to update the System property java.class.path by adding the path to the jar, but that didn't work either...
    I'm sure I'm making a/some mistake/s... can you help me?
    Thanks in advance,
    (if there is some incorrect word or expression, excuse me)


  • How to get the execution time of a Discoverer Report from qpp_stats table

    Hello
    by reading some threads on this forum I became aware of the information stored in the eul5_qpp_stats table. I would like to know if I can use this table to determine the execution time of a worksheet. In particular it looks like the field qs_act_elap_time stores the actual elapsed time of each execution of a specific worksheet: am I correct? If so, how is this value computed? What's the unit of measure? I assume it's seconds, but I've seen that sometimes I get numbers with decimals.
    For example I ran a worksheet and it took more than an hour to run, and the value I get in the qs_act_elap_time column is 2218.313.
    Assuming the unit of measure is seconds, that would mean approx 37 mins. Is that the actual execution time of the query on the database? I guess the actual execution time on my Discoverer client was longer, since some calculations were performed at the client level and not on the database.
    I would really appreciate if you could shed some light on this topic.
    Thanks and regards
    Giovanni

    Thanks a lot Rod for your prompt reply.
    I agree with you about the accuracy of the data. Are you aware of any other way to track the execution times of Discoverer reports?
    Thanks
    Giovanni
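    In the meantime, a quick way to eyeball what was recorded is a query like the one below. This is a sketch only: qs_act_elap_time is the column you mentioned, but qs_doc_name and the seconds interpretation are assumptions on my part, so verify them against your EUL.
    select qs_doc_name                      as worksheet       -- assumed column name
         , qs_act_elap_time                 as elapsed_seconds -- assuming seconds
         , round(qs_act_elap_time / 60, 1)  as elapsed_minutes
      from eul5_qpp_stats
     order by qs_act_elap_time desc;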

  • How to extend the execution time of an ABAP Program using the Process chain

    Hello Sapians,
    Our environment has 600 seconds = 10 minutes as the maximum execution time.
    My ABAP program takes more than these 600 seconds to show the result; I found this out when I executed it in debug mode, where it does show the result.
    If I execute it in background, it also shows the results successfully.
    The only issue is that when I execute this report in foreground, it runs for ages and ends in a time-out error.
    It has been decided that we can extend the execution time only for this report, and that the time will be reset back to 10 minutes once the report has executed successfully or failed in between for any other reason.
    And we can achieve this by using process chains.
    Can anybody please help me in this regard?
    Thanks,

    Hi,
    Besides a process chain, there is another way out for this: resetting the time counter of the dialog process so that the time-out does not happen. Use this FM within your program at appropriate locations to reset the time counter:
    CALL FUNCTION 'TH_REDISPATCH'.
    Thanks
    Saurabh

  • SOAP ADAPTER TIME OUT ISSUE

    We are getting the data from SQL Server into PI through the SOAP adapter.
    Until the day before yesterday everything was running fine, but now it has started giving us the error below.
    There were no changes made to the interface.
    I am not able to see any error in the Runtime Workbench, communication channel monitoring, or message monitoring.
    The error below is raised in SQL Server.
    Can any PI expert explain this or provide a solution to the issue?
    Msg 6522, Level 16, State 1, Procedure sp_PI_WS_Backflush_Production_V2, Line 0
    A .NET Framework error occurred during execution of user-defined routine or aggregate "sp_PI_WS_Backflush_Production_V2":
    System.Net.WebException: The operation has timed out
    System.Net.WebException:
       at System.Net.HttpWebRequest.GetRequestStream()
       at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
       at PI_WS_Backflush_Production_V2.Backflush_OB_Sync_SIService.Backflush_OB_Sync_SI(BackflushRequest_DT BackflushRequest_MT)
       at PI_WS_Backflush_Production_V2.StoredProcedures.Backflush_Production_V2(String PalletNumber, String StockKeepingUnit, String ProductionPlant, String BatchNumber, DateTime ProductionDate, String Quantity, String UnitOfMeasure, String Destination, String& uState, String SAPUser, String SAPPassword)

    The time-out issue could be due to a long-running process in your SQL Server. Please check the SQL Server's performance, and also check the number of HTTP worker threads on the SQL Server side where the SOAP adapter makes the call and gets the response back.

  • Different execution times for back ground jobs - why?

    We have a few jobs scheduled, and each time they run we see different execution times; sometimes they increase and sometimes they decrease steeply. What could be the reasons?
    Note:
    1. We have the same load of jobs on the system in all the cases.
    2. We haven't changed any settings at system level.
    3. Data is more or less at the same range in all the executions.
    4. We mainly run these jobs
    Thanks,
    Kiran
    Edited by: kiran dasari on Mar 29, 2010 7:12 PM

    Thank you Sandra.
    We have no RFC calls or other instances. Ours is a very simple system. We have two monster jobs, the first one for HR data and the second an extract program to update a Z table for BW loads.
    Our Basis and admin teams confirmed that there are no network issues.
    Note: We are executing these jobs over the weekend nights.
    Thanks,
    Kiran

  • Max execution time per pixel causing rendering differences between GPUs

    Is there a maximum execution time for which different graphics cards will process each pixel as part of the shader? When running the Raytracing script ( http://www.adobe.com/cfusion/exchange/index.cfm?event=extensionDetail&loc=en_us&extid=1634 018 ) on my Macbook Pro (256MB ATI Radeon X1600), many pixels come out grey or black, as the loop per pixel that is tracing the ray takes longer than some built-in execution limit.
    I first noticed this with a filter I've been working on, which looks fine on my alu iMac (512MB Nvidia GeForce 8800 GS) but rubbish on the Macbook Pro or older iMacs.
    Are there ways around this limit, like splitting long for or while loops into smaller chunks, or is it just a hardwired max execution time per pixel?

    I don't think you can time out on processing an individual pixel, but I could be wrong. You could try reducing the number of reflections in the filter and seeing if that fixes the problem. It could be a math precision difference between the cards.
    Shaders can (and will) time out, but individual pixels shouldn't. It could also be a driver issue with the structure of the filter. I have an x1600 Macbook Pro here and I'll try it out if I get a chance.

  • Execution time of query on different indexes

    Hello,
    I have a query on the table whose execution time differs hugely depending on which index is used. The table has about 200,000 rows. Any explanation for this?
    Thanks,
    create table TB_test
    ( A1 number(9),
      A2 number(9)
    );
    select count(*) from TB_test
    where A1=123 and A2=456;
    A. With index IDX_test on column A1:
    Create index IDX_test on TB_test(A1);
    Explain plan:
    SELECT STATEMENT
    Cost: 3,100
    SORT AGGREGATE
    Bytes: 38 Cardinality: 1
    TABLE ACCESS BY INDEX ROWID TABLE TB_test
    Cost: 3,100 Bytes: 36 Cardinality: 1
    INDEX RANGE SCAN INDEX IDX_test
    Cost: 40 Cardinality: 21,271
    Execution time is : 5 Minutes
    B. With index IDX_test on column A1 and A2:
    Create index IDX_test on TB_test(A1, A2);
    Explain plan:
    SELECT STATEMENT
    Cost: 3 Bytes: 37 Cardinality: 1
    SORT AGGREGATE
    Bytes: 37 Cardinality: 1
    INDEX RANGE SCAN INDEX IDX_test
    Cost: 3 Bytes 37 Cardinality:1
    Execution time is: 1.5 Seconds

    Additionally, you should check how many rows in your table hold the specific column values.
    The following select might be helpful for that:
    select count(*)  "total_count"
           ,count(case when A1=123 then 1 end) "A1_count"
           ,count(case when A1=123 and A2=456 then 1 end) "A1andA2_count"
    from TB_test;
    Share your output of this.
    I expect the value for A1_count to still be high, but the value for A1andA2_count to be relatively low.
    However, 5 minutes is far too long for such a small table, even if you run it on a laptop.
    There must be a reason why it is that slow.
    First thing to consider would be to update your statistics for the table and the index.
    Second thing could be that the table is very sparsely filled. Meaning, if you frequently delete records from this table and load new data using the APPEND hint, then the table will grow, because the free space from the deletes is never reused. Any table access in the execution plan will then be slower than needed.
    A similar thing can happen, if many updates on previously empty columns are made on a table (row chaining problem).
    So if you explain a little, how this table is filled and used, we could recognize a typical pattern that leads to performance issues.
    Edited by: Sven W. on Nov 28, 2012 5:54 PM
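    For the statistics refresh suggested above, something along these lines would do (a sketch; adjust the owner if TB_test is not in your own schema):
    begin
       dbms_stats.gather_table_stats(
          ownname => user,       -- owning schema
          tabname => 'TB_TEST',
          cascade => true        -- gather index statistics as well
       );
    end;
    /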

  • To reduce execution time of a Business Objects Dataservices job.

    The issue that we are facing -
    Our goal: to compare a record from a file with 422928 records from another table on the basis of name & country; if a match is found, we take some specific columns of that matched record (from the table) as our output.
    What we are doing: We are first removing duplicates by matching on the address components (i.e. addr_line1, city, state, postal code & country), where the break key for the match transform is country & postal_code; this takes 1823.98 secs. The record count is then 193317.
    Then we merge the file record with the 193317 records to put them in the same path and send them for matching.
    The match criteria are the firm name & iso_country_cd;
    the break key for the match transform is the iso_country_cd & the first letter of the name.
    It took 1155.156 secs.
    We have used the "Run match as separate process" option for the match to reduce the time.
    The whole job took 3038.805 secs.
    Please suggest how to reduce the execution time.
    Edited by: Susmit Das on Mar 29, 2010 7:41 AM

    This really is impossible to help with without seeing your code.
    Replacing while loops with Timed Loops will not help. Timed Loops are used for slowing while loops down in a controlled manner. You would use them to synchronise with a clock rate, and can set other parameters for priority and CPU allocation etc. These will not likely help to reduce your execution time.
    If you are seeing code execution of 1 second then you will need to optimise your code to reduce the execution time. There are LabVIEW guides on how to improve your code execution time, just search around for them.
    Also try using the Profiling tools to learn which VIs (I presume your code is componentised and each while loop contains subVIs?) are hogging the most CPU time.
    If you cannot share your VI then it is very hard to know where your code execution bottlenecks are.
    Thoric (CLA, CLED, CTD and LabVIEW Champion)
