Execution Time with Forge
Are there any performance improvements that can be applied to improve the run time with Forge? It takes us 90 minutes to complete.
Hi Harley,
The short answer is "it depends". There are a few quick-hit things you can watch out for:
unnecessary properties,
work done in Forge that could be pushed to a database.
Then there are more involved efforts that may or may not apply to your case, such as moving to parallel Forge or removing Java manipulators.
Dgidx has a few other things you can tune, but it's mostly dependent on the character of your index and whether or not you are enabling unnecessary features on your properties and dimensions.
Hope that helps, let us know if there's anything else you need.
Patrick Rafferty
http://branchbird.com
Similar Messages
-
SQL Tuning and OPTIMIZER - Execution Time with " AND col .."
Hi all,
I get a question about SQL Tuning and OPTIMIZER.
There are three samples with EXPLAIN PLAN and execution time.
This "tw_pkg.getMaxAktion" is a PLSQL Package.
1.) Execution Time : 0.25 Second
2.) Execution Time : 0.59 Second
3.) Execution Time : 1.11 Second
The only difference is some additional "AND col <> .."
Why does the execution time grow so sharply?
Many Thanks,
Thomas
----[First example]---
Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
Connected as dbadmin2
SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
3 FROM studie
4 ) max_aktion
5 WHERE max_aktion.max_aktion_id < 900 ;
Explained
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3201460684
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
| 0 | SELECT STATEMENT | | 220 | 880 | 5 (40)| 00:00:
|* 1 | INDEX FAST FULL SCAN| SYS_C005393 | 220 | 880 | 5 (40)| 00:00:
Predicate Information (identified by operation id):
1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900)
13 rows selected
SQL>
Execution time (PL/SQL Developer says): 0.25 seconds
----[/First]---
----[Second example]---
Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
Connected as dbadmin2
SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
3 FROM studie
4 ) max_aktion
5 WHERE max_aktion.max_aktion_id < 900
6 AND max_aktion.max_aktion_id <> 692;
Explained
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3201460684
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
| 0 | SELECT STATEMENT | | 11 | 44 | 6 (50)| 00:00:
|* 1 | INDEX FAST FULL SCAN| SYS_C005393 | 11 | 44 | 6 (50)| 00:00:
Predicate Information (identified by operation id):
1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
"TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692)
14 rows selected
SQL>
Execution time (PL/SQL Developer says): 0.59 seconds
----[/Second]---
----[Third example]---
SQL> EXPLAIN PLAN FOR
2 SELECT * FROM ( SELECT studie_id, tw_pkg.getMaxAktion(studie_id) AS max_aktion_id
3 FROM studie
4 ) max_aktion
5 WHERE max_aktion.max_aktion_id < 900
6 AND max_aktion.max_aktion_id <> 692
7 AND max_aktion.max_aktion_id <> 392;
Explained
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3201460684
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
| 0 | SELECT STATEMENT | | 1 | 4 | 6 (50)| 00:00:
|* 1 | INDEX FAST FULL SCAN| SYS_C005393 | 1 | 4 | 6 (50)| 00:00:
Predicate Information (identified by operation id):
1 - filter("TW_PKG"."GETMAXAKTION"("STUDIE_ID")<900 AND
"TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>692 AND
"TW_PKG"."GETMAXAKTION"("STUDIE_ID")<>392)
15 rows selected
SQL>
Execution time (PL/SQL Developer says): 1.11 seconds
----[/Third]---
Edited by: thomas_w on Jul 9, 2010 11:35 AM
Edited by: thomas_w on Jul 12, 2010 8:29 AM
Hi,
this is likely because SQL Developer fetches and displays only a limited number of rows from query results.
That number is a parameter called 'sql array fetch size'; you can find it in the SQL Developer preferences under Tools/Preferences/Database/Advanced, and its default value is 50 rows.
The query scans the table from the beginning and continues scanning until the first 50 rows are selected.
If the query conditions are more selective, then more table rows (or index entries) must be scanned to fetch the first 50 results, and execution time grows.
This effect is usually unnoticeable when a query uses simple and fast built-in comparison operators (like = and <>) or Oracle built-in functions, but your query uses a PL/SQL function that is much slower than built-in functions/operators.
Try changing this parameter to 1000 and most likely you will see that the execution times of all 3 queries become similar.
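The mechanism just described can be sketched outside the database; the following Python simulation (table size and predicates invented for illustration, not taken from the thread) shows why a more selective predicate forces more rows through the slow function before the first fetch batch fills:

```python
# Sketch of a client that stops scanning as soon as `fetch_size` matching
# rows are found, mimicking the 'sql array fetch size' behavior described
# above. Table contents and predicates are invented for illustration.

def rows_scanned_for_first_batch(table, predicate, fetch_size):
    """Return how many rows are scanned (i.e. how many times the slow
    per-row function would run) before fetch_size matches are found."""
    matched = 0
    scanned = 0
    for row in table:
        scanned += 1              # each scanned row costs one slow call
        if predicate(row):
            matched += 1
            if matched == fetch_size:
                break             # client stops: the first batch is full
    return scanned

table = list(range(1, 100_001))   # stands in for studie_id 1..100000

# Loose predicate: the first 50 matches appear immediately (50 rows scanned).
loose = rows_scanned_for_first_batch(table, lambda x: x < 900, 50)

# Selective predicate: 850 rows must be scanned to collect 50 matches.
tight = rows_scanned_for_first_batch(table, lambda x: 800 < x < 900, 50)

# Point predicate: the whole table is scanned and only 1 match is found.
point = rows_scanned_for_first_batch(table, lambda x: x == 600, 50)

print(loose, tight, point)        # 50 850 100000
```

The wall-clock time grows in proportion to the scanned count, not the returned count, which matches the traces below.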
Look at this simple test to figure out how it works:
CREATE TABLE studie AS
SELECT row_number() OVER (ORDER BY object_id) studie_id, o.*
FROM (
SELECT * FROM all_objects
CROSS JOIN
(SELECT 1 FROM dual CONNECT BY LEVEL <= 100)
) o;
CREATE INDEX studie_ix ON studie(object_name, studie_id);
ANALYZE TABLE studie COMPUTE STATISTICS;
CREATE OR REPLACE FUNCTION very_slow_function(action IN NUMBER)
RETURN NUMBER
IS
BEGIN
RETURN action;
END;
/
The 'SQL array fetch size' parameter in SQL Developer has been set to 50 (default). We will run 3 different queries on the test table.
Query 1:
SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
FROM studie
) max_aktion
WHERE max_aktion.max_aktion_id < 900
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 1.22 1.29 0 1310 0 50
total 3 1.22 1.29 0 1310 0 50
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 93 (TEST)
Rows Row Source Operation
50 INDEX FAST FULL SCAN STUDIE_IX (cr=1310 pr=0 pw=0 time=355838 us cost=5536 size=827075 card=165415)(object id 79865)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
50 INDEX MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
Query 2:
SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
FROM studie
) max_aktion
WHERE max_aktion.max_aktion_id < 900
AND max_aktion.max_aktion_id > 800
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 8.40 8.62 0 9351 0 50
total 3 8.40 8.64 0 9351 0 50
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 93 (TEST)
Rows Row Source Operation
50 INDEX FAST FULL SCAN STUDIE_IX (cr=9351 pr=0 pw=0 time=16988202 us cost=5552 size=41355 card=8271)(object id 79865)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
50 INDEX MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
Query 3:
SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
FROM studie
) max_aktion
WHERE max_aktion.max_aktion_id = 600
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 18.72 19.16 0 19315 0 1
total 3 18.73 19.16 0 19315 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 93 (TEST)
Rows Row Source Operation
1 INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 INDEX MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
Query 1 - 1.29 sec, 50 rows fetched, 1310 index entries scanned to find these 50 rows.
Query 2 - 8.64 sec, 50 rows fetched, 9351 index entries scanned to find these 50 rows.
Query 3 - 19.16 sec, only 1 row fetched, 19315 index entries scanned (full index).
Now 'SQL array fetch size' parameter in SQLDeveloper has been set to 1000.
Query 1:
SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
FROM studie
) max_aktion
WHERE max_aktion.max_aktion_id < 900
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 18.35 18.46 0 19315 0 899
total 3 18.35 18.46 0 19315 0 899
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 93 (TEST)
Rows Row Source Operation
899 INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=20571272 us cost=5536 size=827075 card=165415)(object id 79865)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
899 INDEX MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
Query 2:
SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
FROM studie
) max_aktion
WHERE max_aktion.max_aktion_id < 900
AND max_aktion.max_aktion_id > 800
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 18.79 18.86 0 19315 0 99
total 3 18.79 18.86 0 19315 0 99
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 93 (TEST)
Rows Row Source Operation
99 INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=32805696 us cost=5552 size=41355 card=8271)(object id 79865)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
99 INDEX MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
Query 3:
SELECT * FROM ( SELECT studie_id, very_slow_function(studie_id) AS max_aktion_id
FROM studie
) max_aktion
WHERE max_aktion.max_aktion_id = 600
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 18.69 18.84 0 19315 0 1
total 3 18.69 18.84 0 19315 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 93 (TEST)
Rows Row Source Operation
1 INDEX FAST FULL SCAN STUDIE_IX (cr=19315 pr=0 pw=0 time=0 us cost=5536 size=165415 card=33083)(object id 79865)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 INDEX MODE: ANALYZED (FAST FULL SCAN) OF 'STUDIE_IX' (INDEX)
And now:
Query 1 - 18.46 sec, 899 rows fetched, 19315 index entries scanned.
Query 2 - 18.86 sec, 99 rows fetched, 19315 index entries scanned.
Query 3 - 18.84 sec, 1 row fetched, 19315 index entries scanned.
-
Reduce execution time with selects
Hi,
I have to reduce the execution time in a report, most of the consumed time is in the select query.
I have a table, gt_result:
DATA: BEGIN OF gwa_result,
tknum LIKE vttk-tknum,
stabf LIKE vttk-stabf,
shtyp LIKE vttk-shtyp,
route LIKE vttk-route,
vsart LIKE vttk-vsart,
signi LIKE vttk-signi,
dtabf LIKE vttk-dtabf,
vbeln LIKE likp-vbeln,
/bshm/le_nr_cust LIKE likp-/bshm/le_nr_cust,
vkorg LIKE likp-vkorg,
werks LIKE likp-werks,
regio LIKE kna1-regio,
land1 LIKE kna1-land1,
xegld LIKE t005-xegld,
intca LIKE t005-intca,
bezei LIKE tvrot-bezei,
bezei1 LIKE t173t-bezei,
fecha(10) type c.
DATA: END OF gwa_result.
DATA: gt_result LIKE STANDARD TABLE OF gwa_result.
And the select query is this:
SELECT k~tknum k~stabf k~shtyp k~route k~vsart k~signi
k~dtabf
l~vbeln l~/bshm/le_nr_cust l~vkorg l~werks n~regio n~land1 o~xegld o~intca
t~bezei tt~bezei
FROM vttk AS k
INNER JOIN vttp AS p ON k~tknum = p~tknum
INNER JOIN likp AS l ON p~vbeln = l~vbeln
INNER JOIN kna1 AS n ON l~kunnr = n~kunnr
INNER JOIN t005 AS o ON n~land1 = o~land1
INNER JOIN tvrot AS t ON t~route = k~route AND t~spras = sy-langu
INNER JOIN t173t AS tt ON tt~vsart = k~vsart AND tt~spras = sy-langu
INTO TABLE gt_result
WHERE k~tknum IN s_tknum AND k~tplst IN s_tplst AND k~route IN s_route AND
k~erdat BETWEEN s_erdat-low AND s_erdat-high AND
l~/bshm/le_nr_cust <> ' ' "IS NOT NULL
AND k~stabf = 'X'
AND k~tknum NOT IN ( SELECT tk~tknum FROM vttk AS tk
INNER JOIN vttp AS tp ON tk~tknum = tp~tknum
INNER JOIN likp AS tl ON tp~vbeln = tl~vbeln
WHERE l~/bshm/le_nr_cust IS NULL )
AND k~tknum NOT IN ( SELECT tknum FROM /bshs/ssm_eship )
AND ( o~xegld = ' '
OR ( o~xegld = 'X' AND
( ( n~land1 = 'ES'
AND ( n~regio = '51' OR n~regio = '52'
OR n~regio = '35' OR n~regio = '38' ) )
OR n~land1 = 'ESC' ) )
OR o~intca = 'AD' OR o~intca = 'GI' ).
Does somebody know how to reduce the execution time?
Thanks.
Hi,
Try to remove the join. Use separate selects as shown in the example below and, for the sake of selection, keep some key fields in your internal table.
Then once your final table is created, you can copy the table into GT_FINAL which will contain only fields you need.
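The strategy described above is essentially a client-side fetch-then-merge; as a language-neutral sketch of the same idea (all table contents and values invented for illustration), it amounts to pulling each table separately, indexing the lookup tables by key, and enriching the driving rows in a loop:

```python
# Sketch of the fetch-then-merge pattern: pull each table separately,
# index the lookup tables by key, then enrich the driving rows in a loop.
# All data here is made up for illustration.

transports = [                       # stands in for gt_result (VTTK rows)
    {"tknum": "T1", "vbeln": "D1"},
    {"tknum": "T2", "vbeln": "D2"},
]
deliveries = [                       # stands in for it_likp (LIKP rows)
    {"vbeln": "D1", "kunnr": "C1"},
    {"vbeln": "D2", "kunnr": "C2"},
]
customers = [                        # stands in for it_kna1 (KNA1 rows)
    {"kunnr": "C1", "regio": "35", "land1": "ES"},
    {"kunnr": "C2", "regio": "51", "land1": "ES"},
]

# Build hash indexes once, like READ TABLE ... WITH KEY on a keyed table.
by_vbeln = {d["vbeln"]: d for d in deliveries}
by_kunnr = {c["kunnr"]: c for c in customers}

for row in transports:               # LOOP AT gt_result
    likp = by_vbeln.get(row["vbeln"])
    if likp:                         # sy-subrc eq 0
        row["kunnr"] = likp["kunnr"]
    kna1 = by_kunnr.get(row.get("kunnr"))
    if kna1:
        row["regio"] = kna1["regio"]
        row["land1"] = kna1["land1"]

print(transports[0]["regio"])        # each transport now carries customer fields
```

Each lookup is O(1), so the merge loop stays linear in the number of driving rows.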
EX
data : begin of it_likp occurs 0,
vbeln like likp-vbeln,
/bshm/le_nr_cust like likp-/bshm/le_nr_cust,
vkorg like likp-vkorg,
werks like likp-werks,
kunnr like likp-kunnr,
end of it_likp.
data : begin of it_kna1 occurs 0,
kunnr like...
regio....
land1...
end of it_kna1.
Select tknum stabf shtyp route vsart signi dtabf
from VTTK
into table gt_result
WHERE tknum IN s_tknum AND
tplst IN s_tplst AND
route IN s_route AND
erdat BETWEEN s_erdat-low AND s_erdat-high.
select vbeln /bshm/le_nr_cust
vkorg werks kunnr
from likp
into table it_likp
for all entries in gt_result
where vbeln = gt_result-vbeln.
select kunnr
regio
land1
from kna1
into table it_kna1
for all entries in it_likp
where kunnr = it_likp-kunnr.
similarly for other tables.
Then loop at gt result and read corresponding table and populate entire record :
loop at gt_result.
read table it_likp with key vbeln = gt_result-vbeln.
if sy-subrc eq 0.
move-corresponding it_likp to gt_result.
gt_result-kunnr = it_likp-kunnr.
modify gt_result.
endif.
read table it_kna1 with key kunnr = gt_result-kunnr.
if sy-subrc eq 0.
gt_result-regio = it_kna1-regio.
gt_result-land1 = it_kna1-land1.
modify gt_result.
endif.
endloop.
-
How to know child procedure Execution time with in parent procedure
Hi Team,
I have a requirement in which I need to get the execution time of a child procedure while it is running in a parent procedure in PL/SQL. While the child process is running, I want to know its execution time so that if the execution time exceeds 5 seconds I can throw an error. Please let me know by what means this can be achieved in PL/SQL.
Regards,
Tech D.
TechD wrote:
Hi Team,
I have a requirement in which I need to get the execution time of a child procedure while it is running in a parent procedure in PL/SQL. While the child process is running, I want to know its execution time so that if the execution time exceeds 5 seconds I can throw an error. Please let me know by what means this can be achieved in PL/SQL.
Regards,
Tech D.
PL/SQL is NOT a real-time programming language.
The procedure that invokes the child procedure is effectively dormant while the child runs.
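The dormant caller is the crux; in a general-purpose language the usual workaround is to run the child in a worker and enforce a deadline from outside, which PL/SQL does not offer. A Python sketch of that watchdog idea (the 5-second limit and the dummy child tasks are stand-ins, not anything from the thread):

```python
import threading
import time

class ChildTimeout(Exception):
    """Raised when the child task exceeds its deadline."""

def run_with_deadline(child, deadline_s):
    """Run `child` in a worker thread and raise ChildTimeout if it is
    still running after `deadline_s` seconds. The child itself keeps
    running in the background, mirroring the limitation discussed
    above: the watcher cannot forcibly stop it."""
    worker = threading.Thread(target=child, daemon=True)
    worker.start()
    worker.join(timeout=deadline_s)   # parent waits, but only this long
    if worker.is_alive():
        raise ChildTimeout(f"child still running after {deadline_s}s")

# A fast child finishes within the deadline...
run_with_deadline(lambda: time.sleep(0.01), deadline_s=5)

# ...while a slow child trips the watchdog.
try:
    run_with_deadline(lambda: time.sleep(10), deadline_s=0.1)
except ChildTimeout as exc:
    print("timed out:", exc)
```

Note the caveat in the docstring: the timeout only lets the parent react; it does not kill the child, which is the same wall PL/SQL runs into.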
Plus there is no easy way to know when 5 seconds have elapsed.
-
Measure execution time with PowerShell
What would be the best way to accomplish the following:
1. Get start time and write it to variable
2. Do something
3. Get the end time, measure how long it took from start to end, and store the result in a variable
I would need the result in a format like hours:minutes:seconds, with 24-hour clock support, so that if the execution starts at e.g. 11:30 and ends at 14:30 it shows the correct duration.
PowerShell already comes with a cmdlet that lets you measure the time a script block takes to execute; that sounds like what you are after.
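Measure-Command returns a TimeSpan you can format; the same start/stop-and-format idea, sketched here in Python purely for illustration, produces hours:minutes:seconds output and handles an 11:30-to-14:30 run correctly:

```python
from datetime import datetime

def format_elapsed(start: datetime, end: datetime) -> str:
    """Format the time between two timestamps as H:MM:SS."""
    total = int((end - start).total_seconds())   # a timedelta, like a TimeSpan
    hours, rest = divmod(total, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours}:{minutes:02d}:{seconds:02d}"

# 1. get start time -> 2. do something -> 3. get end time and format.
start = datetime(2024, 1, 1, 11, 30, 0)
end = datetime(2024, 1, 1, 14, 30, 0)
print(format_elapsed(start, end))   # 3:00:00
```

Subtracting full timestamps (not just clock times) is what makes the 24-hour case, including runs past midnight, come out right.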
See these for more details on Measure-Command:
https://technet.microsoft.com/en-us/library/hh849910.aspx?f=255&MSPPError=-2147217396
https://technet.microsoft.com/en-us/library/ee176899.aspx
-
How Can I reduce Query Execution time with use of filter
Dear Experts,
A query executes faster when there is no product filter; the filter can be a list containing more than 300 items.
I am using the IN operator for that filter.
Maybe if you posted the quer[y][ies] we could get a better idea.
-
Why the execution time increases with a while loop, but not with "Run continuously" ?
Hi all,
I have a serious time problem that I don't know how to solve because I don't know exactly where it comes from.
I command two RF switches via a DAQ card (NI USB-6008). Only one position at a time can be selected on each switch. Basically, the VI created for this functionality (by a co-worker) resets all the DAQ outputs, and then activates the desired ones. It has three inputs: two simple string controls, and an array of clusters which contains the list of all the outputs and some information about what is connected (specific to my application).
I use this VI in a complex application, and I get some problems with the execution time, which increased each time I called the VI, so I made a test VI (TimeTesting.vi) to figure out where the problem came from. In this special VI I record the execution time in a csv file to analyze later in Excel.
After several tests, I found that if I run this test VI with the while loop, the execution time increases at each cycle, but if I remove the while loop and use the "Run continuously" functionality, the execution time remains the same. In my top-level application I have while loops and events, and so the execution time increases too.
Could someone explain why the execution time increases, and how I can avoid that? I attached my test VI and the necessary subVIs, as well as a picture of a graph which shows the execution time with a while loop and with "run continuously".
Thanks a lot for your help!
Solved!
Go to Solution.
Attachments:
TimeTesting.zip 70 KB
Graph.PNG 20 KB
jul7290 wrote:
Thank you very much for your help! I added the "Clear task" vi and now it works properly.
If you are still using Run Continuously you should stop. It is meant strictly for debugging. In fact, I can't even tell you the last time I ever used it. If you want your code to repeat you should use loops and control the behavior of the code.
Mark Yedinak
"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
-
Long execution times for TestStand conditional statements
I have two test stations – one here, one at the factory in China that has been operating for about a year. The test program uses TestStand 3.1 and calls primarily DLLs developed using CVI. Up until a couple of months ago, both test stations performed in a similar manner; then the computer at the factory died and was replaced with a new, faster computer. Now the same test sequence at the factory takes three times as long to execute (30 min at the factory, 10 min here).
I have recorded the execution time at various points during the execution, and have found that the extra time seems to be occurring during the evaluation of conditional statements in TestStand (i.e. for loops, if statements, case statements). For example, one particular 'for' evaluation takes 30 ms on the test station here, but takes 400 ms at the test station at the factory (note: this is just the evaluation of the for condition, not the execution of the steps contained within the for loop).
The actual dll calls seem to be slightly faster with the new computer.
Also the ‘Module Times’ reported don’t seem to match the actual time for the module for the computer at the factory. For example, for the following piece of TestStand code:
Label1
Subsequence Call
Label2
I record the execution time to the report text in both Label1 and Label2. Subtracting one from the other gives me about 18 seconds. However the ‘Module Time’ recorded for ‘Subsequence Call’ is only 3.43 seconds.
Does anybody have any ideas why the execution time is so long with the new computer? I always set up the computers in exactly the same way, but maybe there is a TestStand setting somewhere that I have missed? Keep in mind, both test stations are running exactly the same revision of code.
Got some more results from the factory this morning:
1) Task Manager shows that TestExec.exe is the only thing using CPU to any significant degree. Also, the CPU Usage History shows that CPU usage never reaches 100%.
2) I sent a new test program that logs test execution time in more places. Longer execution times are seen in nearly every area of the program, but one area where this is very dramatic is the time taken to return from one particular subsequence call. In this subsequence I log the time just before the <End Group> at the end of Main. There is nothing in Cleanup. I then log the time immediately after returning from this sequence. On the test system I have here this takes approximately 160 ms. On the test system at the factory it takes approximately 14.5 seconds! The program seems to be hanging here for some reason. Every time this function is called the same thing happens, for the same amount of time (and this function is called about 40 times in the test program, so this is killing me).
-
Same sqlID with different execution plan and Elapsed Time (s), Executions time
Hello All,
The AWR reports for two days show the same SQL ID with different execution plans and different Elapsed Time (s) / Executions figures; please help me find out the reason for this change.
Please find the details below; on the 17th my processes are very slow compared to the 18th.
17th Oct 18th Oct
221,808,602
21
2tc2d3u52rppt
213,170,100
72,495,618
9c8wqzz7kyf37
209,239,059
71,477,888
9c8wqzz7kyf37
139,331,777
1
7b0kzmf0pfpzn
144,813,295
1
0cqc3bxxd1yqy
102,045,818
1
8vp1ap3af0ma5
128,892,787
16,673,829
84cqfur5na6fg
89,485,065
1
5kk8nd3uzkw13
127,467,250
16,642,939
1uz87xssm312g
67,520,695
8,058,820
a9n705a9gfb71
104,490,582
12,443,376
a9n705a9gfb71
62,627,205
1
ctwjy8cs6vng2
101,677,382
15,147,771
3p8q3q0scmr2k
57,965,892
268,353
akp7vwtyfmuas
98,000,414
1
0ybdwg85v9v6m
57,519,802
53
1kn9bv63xvjtc
87,293,909
1
5kk8nd3uzkw13
52,690,398
0
9btkg0axsk114
77,786,274
74
1kn9bv63xvjtc
34,767,882
1,003
bdgma0tn8ajz9
Not only are the queries different, but the number of blocks read by the top 10 queries is also much higher on the 17th than on the 18th.
The other big difference is the average read time on the two days.
Tablespace IO Stats
17th Oct
Tablespace | Reads | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
INDUS_TRN_DATA01 | 947,766 | 59 | 4.24 | 4.86 | 185,084 | 11 | 2,887 | 6.42
UNDOTBS2 | 517,609 | 32 | 4.27 | 1.00 | 112,070 | 7 | 108 | 11.85
INDUS_MST_DATA01 | 288,994 | 18 | 8.63 | 8.38 | 52,541 | 3 | 23,490 | 7.45
INDUS_TRN_INDX01 | 223,581 | 14 | 11.50 | 2.03 | 59,882 | 4 | 533 | 4.26
TEMP | 198,936 | 12 | 2.77 | 17.88 | 11,179 | 1 | 732 | 2.13
INDUS_LOG_DATA01 | 45,838 | 3 | 4.81 | 14.36 | 348 | 0 | 1 | 0.00
INDUS_TMP_DATA01 | 44,020 | 3 | 4.41 | 16.55 | 244 | 0 | 1,587 | 4.79
SYSAUX | 19,373 | 1 | 19.81 | 1.05 | 14,489 | 1 | 0 | 0.00
INDUS_LOG_INDX01 | 17,559 | 1 | 4.75 | 1.96 | 2,837 | 0 | 2 | 0.00
SYSTEM | 7,881 | 0 | 12.15 | 1.04 | 1,361 | 0 | 109 | 7.71
INDUS_TMP_INDX01 | 1,873 | 0 | 11.48 | 13.62 | 231 | 0 | 0 | 0.00
INDUS_MST_INDX01 | 256 | 0 | 13.09 | 1.04 | 194 | 0 | 2 | 10.00
UNDOTBS1 | 70 | 0 | 1.86 | 1.00 | 60 | 0 | 0 | 0.00
STG_DATA01 | 63 | 0 | 1.27 | 1.00 | 60 | 0 | 0 | 0.00
USERS | 63 | 0 | 0.32 | 1.00 | 60 | 0 | 0 | 0.00
INDUS_LOB_DATA01 | 62 | 0 | 0.32 | 1.00 | 60 | 0 | 0 | 0.00
TS_AUDIT | 62 | 0 | 0.48 | 1.00 | 60 | 0 | 0 | 0.00
18th Oct
Tablespace | Reads | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
INDUS_TRN_DATA01 | 980,283 | 91 | 1.40 | 4.74 (remaining columns truncated in the original post)
The AWR reports for two days show the same SQL ID with different execution plans and different Elapsed Time (s) / Executions figures; please help me find out the reason for this change.
Please find the details below; on the 17th my processes are very slow compared to the 18th.
You wrote that the execution plan differs, so I assume you have seen the plans. It is very difficult to retrieve the old plan.
I think the execution plan does not change between days unless you added an index or made a similar change.
What does the ADDM report say about this statement?
As you know, it is normal to see different elapsed times for the same statement on different days.
It depends on your database workload.
I think you should use the SQL Access Advisor and SQL Tuning Advisor for this statement.
You may get a solution for the slow-running problem.
Regards
Mahir M. Quluzade
-
Execution time of query with high variance
I have an Oracle Database 11.2 R2 which is set up just for testing purposes, so there is no activity other than mine. Now I have a query which I ran 10 times in a row. Between executions I always flushed the BUFFER_CACHE and SHARED_POOL. The strange thing is that the execution time of the query varies strongly, from 13 seconds up to 207 seconds. Of the 10 executions, 4 took less than 25 seconds and 4 took more than 120 seconds.
What could be the reason for this? As I've said, there is no other activity on the database and it is always the same query with the same parameters running on the same set of data.
The background to this is that I would like to compare the execution time of exactly the same query with different database settings. So I thought I could just run the query ten times and use the average but I didn't expect such a high variance.
Kind regards
Peter
Hi,
for each execution, look at:
1) plan hash value
2) total logical I/O
3) physical I/O
That should give you some clues as to what's going on.
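Before comparing settings by the average of ten runs, it is also worth summarizing the sample robustly; a quick Python sketch with invented timings matching the described 13s to 207s spread shows why the median plus the spread is more informative than the mean alone:

```python
import statistics

# Hypothetical run times (seconds) with the kind of spread described:
# several runs under 25s, several over 120s. Not measured data.
runs = [13, 18, 21, 24, 60, 95, 130, 150, 180, 207]

mean = statistics.mean(runs)       # 89.8: pulled up by the slow runs
median = statistics.median(runs)   # 77.5: less sensitive to outliers
spread = statistics.stdev(runs)

# With a standard deviation on the order of the mean itself, the mean
# alone is a poor "typical" run time; always report the spread too.
print(f"mean={mean:.1f}s median={median:.1f}s stdev={spread:.1f}s")
```

If the plan hash value or physical I/O differs between runs, the runs are not really samples of the same experiment, and no summary statistic will fix that.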
Best regards,
Nikolay
-
Execution time issues with SU01 demo script
Having worked with Scripting in a Box for a while now I wanted to try out the examples there. I read FM: SO_USER_LIST_READ or another one explaining why my attempt to narrow the returned users failed (Craig, did you find out why the functionality was removed?) and Re: Issue with "Scripting in a Box" seeing that Harry had the same problems with only ~200 users in his system. However, Craig's original post states he successfully managed with over 400 users. I'm a bit confused...
I included some simple timing stuff and found out that processing of one user in the loop takes about 1.7 seconds - little surprise then that every script times out. This seems to be due to the additional calls to GetStatus() and GetValid() - by commenting them out I get the whole list rather quickly.
Unfortunately commenting them out also means no nice icons for 'Status' and 'Valid', which is not desired. I probably could create a Z FM to deliver me the userlist with these two fields already added (which would save on rfc-calls, assuming the operation is much quicker on the server directly), but I hoped to get a solution based purely on PHP, not own ABAP coding (being aware that Craig also used a Z FM anyway, but still...)
I'm a bit unsure now how easy it is to actually create useful frontends in PHP, with such long execution times. I assume this will happen in other occasions as well, not only for user lists. Is there an alternative? Or a general way to do those things quicker?
:Frederic:
Craig: you say it's easy to go from 1.7 seconds per user lookup down to a small fraction of that? Then apparently I'm lacking those skills. Could you please give me a hint about what should be done there?
I thought about creating a Z function, but I wanted to avoid having to write custom wrappers, possibly for about any transaction in this style.
Bala: the two functions only take one user as input, not a list. So w/o modifying the ABAP side I can't feed the whole list in there. I wonder how much it would improve the result time anyway, so perhaps I'm trying it. It's just not a solution I'd prefer.
Paging is a good idea, the actual call to get the whole userlist is quite quick. Having like 20 users displayed at a time is manageable - still slow, but the script won't timeout anymore. I think I'll implement this today.
About AJAX: yes, I want to play around a bit with AJAX; however, having the two columns valid and status removed and the information only displayed on mouseover etc. is a bit like cheating. And 1.7+ seconds of waiting for hover info is too long. So I'd like to optimize on the RFC-calling side primarily.
Craig: surely it was just a demo, and I'm just trying to get it to work for understanding it
:Frederic:
-
Hi All,
We are trying to execute coded UI scripts without Visual Studio installed. We are using "Visual Studio Test Agent 2010" to execute coded UI scripts without VS2010 on Windows 7, and it is working fine. We also verified executing the same script with VS2010 Premium, and it works fine as well.
Here the challenge we are facing is with the Test execution time.
When we run the coded UI script with IE11-Windows 7 OS-Visual Studio 2010 Premium it takes
3min 36sec to complete the execution whereas with IE11-Windows 7 OS-Visual Studio Test Agent 2010 it takes
6min 40sec for the same script to execute (which is almost double the time it takes while executing using VS2010).
My question is: what may be the reason for this difference, and how can we reduce the test execution time when running from Test Agent 2010?
Kindly let us know what is the best practice to execute the coded UI scripts with least test execution time in Windows 7 Operating system using Visual Studio Test Agent 2010 without VS2010
installed.
Looking forward for your positive response.
Thanks in advance..!!
Tina-Shi, thanks for the information.
As you mentioned, we tried to execute the coded UI test using Mstest.exe from the command line on VS2010 Premium and checked the execution time; there was a slight difference.
Please find below execution time.
Using Mstest.exe in command on VS2010 Premium/Win7 – 3.47 minutes
Using VS2010 Premium/Win7 – 3.53 minutes
Using Test Agent/Win7 – 7.3 minutes
Also, I closed all the other processes in Task manager before starting up the execution.
Still, I am facing the same issue. Could you please suggest any other way to reduce the execution time of coded UI scripts run through Test Agent 2010?
Looking forward to your earliest response.
-
Alert with event for delayed job and long execution time
Dear All,
We are planning to send an alert via email in case a job is delayed or has a long execution time.
I have followed below steps:
1) Create event Raise Event when job is delayed.
2) create job chain with STEP1, Job 1 and assign event in raise event parameter.
3) Once job chain delayed it should raise events.
4) The above event should trigger a custom email, but I cannot set the Mail_To parameter as an IN parameter, and it is not recognized during execution.
It ends with the below error.
Details:
JCS-122035: Unable to persist: JCS-102075: Mandatory parameter not set: Parameter Mail_To for job 20413
at com.redwood.scheduler.model.SchedulerSessionImpl.writeDirtyListLocal(SchedulerSessionImpl.java:805)
at com.redwood.scheduler.model.SchedulerSessionImpl.persist(SchedulerSessionImpl.java:757)
Please let us know if anybody knows how to add Mail_To parameter to script.
Any help is appreciated.
Thanks in advance.
Regards,
Jiggi
Dear Jiggi,
where will you define the execution time of a particular job? Some jobs take only 1 or 2 minutes, but others normally take more than an hour, so how will you decide the execution time for individual jobs?
I think you can use the P_TO parameter for sending mail; if you want to add some output log, activate the spool output script.
Thanks and regards
Muhammad Asif
-
what is the difference in execution time between a program written in C language and the same program made with LabView?
Hi Pepe
You cannot say in general whether the LV or the C program is faster. The only way to be sure is to program it in both environments and compare. Check this for some benchmark examples:
http://zone.ni.com/devzone/conceptd.nsf/webmain/DC9B6DD177D91D6286256C9400733D7F?OpenDocument&node=200059
Luca
Regards,
Luca -
SQL to find queries with execution time, total execution time so far,
Hello Sir,
We are looking for a query to find queries taking more than 6 seconds to execute, along with the number of executions so far, the average execution time, and the total execution time so far. Thanks in advance.
-Mal
Something like this.
SELECT s.SID, s.serial#, t.sql_fulltext,t.sql_id,s.action FROM v$session s, v$sql t
WHERE s.status = 'ACTIVE'
AND s.sql_address = t.address
AND s.sql_hash_value = t.hash_value
AND s.last_call_et > 6
HTH
-Anantha