Set container execution time
Hi,
I created a CDB with 252 pluggable databases, set threaded_execution=true, and allocated 8 GB of memory to the SGA.
When I switch between containers in SQL*Plus, the ALTER SESSION takes minutes to execute.
Is this expected? Or can something be tuned to make it faster?
cdb> alter session set container=pdb252;
Session altered.
Elapsed: 00:03:16.71
cdb> alter session set container=pdb1;
Session altered.
Elapsed: 00:12:57.39
cdb>
cdb> alter session set container=pdb100;
Session altered.
Elapsed: 00:07:31.42
cdb> alter session set container=pdb252;
Session altered.
Elapsed: 00:11:20.17
Thanks,
Andrey
Hi Andrey,
I created the same number of PDBs, but for me switching between containers is very fast. I am using Windows; I think the issue is somewhere else.
SQL> show parameter threaded_execution
NAME TYPE VALUE
threaded_execution boolean TRUE
SQL> show parameter sga
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 8000M
sga_target big integer 8000M
SQL> set time on
22:06:37 SQL> set timi on
22:06:41 SQL> select count(*) from v$pdbs;
COUNT(*)
253
Elapsed: 00:00:00.00
22:06:43 SQL> alter session set container=Anuj_cl248;
Session altered.
Elapsed: 00:00:00.00
22:06:58 SQL> alter session set container=Anuj_cl241
22:07:03 2 ;
Session altered.
Elapsed: 00:00:00.00
Thanks
Anuj Mohan
Oracle12cSIG(IOUG)
Similar Messages
-
SEM_MATCH query execution time when a predicate is set as a variable
Hello there,
Our ontology contains 70 million triples (12.5 million asserted triples and 58 million inferred triples). We created a virtual model which includes both asserted and inferred triples.
Our semantic network index configuration is 'PSCM' (index status is valid).
The problem we encounter: when we run a SEM_MATCH query in which the predicate is a variable, the query runs for 22 seconds. When a similar query is run with the subject or the object (or both) as the variable, the SEM_MATCH execution time is 0.2 seconds. Is this reasonable? What are we doing wrong?
For example:
1. Predicate as a variable (execution time is 22 seconds):
SELECT a FROM TABLE ( SEM_MATCH
( ' { <http://www.tomproj.com/rep#entity_4> ?a 4 }',
SEM_MODELS( 'TOM_VIRTUAL_MODEL') ,
NULL,
NULL,
NULL,
'ALLOW_DUP=T'));
2. Subject as a variable (execution time is 0.2 seconds):
SELECT a FROM TABLE ( SEM_MATCH
( ' { ?a <http://www.tomproj.com/rep#id> 4 }',
SEM_MODELS( 'TOM_VIRTUAL_MODEL') ,
NULL,
NULL,
NULL,
'ALLOW_DUP=T'));
3. Object as a variable (execution time is 0.2 seconds):
SELECT a FROM TABLE ( SEM_MATCH
( ' { <http://www.tomproj.com/rep/#entity_4> <http://www.tomproj.com/rep#id> a }',
SEM_MODELS( 'TOM_VIRTUAL_MODEL') ,
NULL,
NULL,
NULL,
'ALLOW_DUP=T'));
Cheers,
Doron
Hi,
So far no luck; none of the solutions are working.
The query execution time didn't change after executing the following:
1. SEM_APIS.DROP_SEM_INDEX('SCP');
2. SEM_APIS.ADD_SEM_INDEX('SCPM');
3. SEM_APIS.ALTER_SEM_INDEX_ON_MODEL('TOM','SCPM','REBUILD');
4. SEM_APIS.ALTER_SEM_INDEX_ON_MODEL('TOM_TF','SCPM','REBUILD');
5. SEM_APIS.ALTER_SEM_INDEX_ON_RULES_INDEX('TOM_RI','SCPM','REBUILD');
My original query (see below) execution time remains 22 secs.
SELECT a FROM TABLE ( SEM_MATCH
( ' { <http://www.tomproj.com/rep#entity_4> ?a 4 }',
SEM_MODELS( 'TOM_VIRTUAL_MODEL') ,
NULL,
NULL,
NULL,
'ALLOW_DUP=T'));
When running the SEM_MATCH on each model ('TOM,'TOM_TF') separately execution time was very fast (~0.007 secs):
1. SELECT a FROM TABLE ( SEM_MATCH
( ' { <http://www.tomproj.com/rep#entity_4> ?a 4 }',
SEM_MODELS( 'TOM') ,
NULL,
NULL,
NULL,
NULL));
2. SELECT a FROM TABLE ( SEM_MATCH
( ' { <http://www.tomproj.com/rep#entity_4> ?a 4 }',
SEM_MODELS( 'TOM_TF') ,
NULL,
NULL,
NULL,
NULL));
When running the query on both models, execution time was 54 secs:
SELECT a FROM TABLE ( SEM_MATCH
( ' { <http://www.tomproj.com/rep#entity_4> ?a 4 }',
SEM_MODELS( 'TOM','TOM_TF') ,
NULL,
NULL,
NULL,
NULL));
We also ran the original query via the Jena Adaptor using Java and SPARQL, but the results were similar.
As for using the parallel parameter in SEM_APIS.ALTER_SEM_INDEX…: since we are using Oracle 11g Release 1, SEM_APIS.ALTER_SEM_INDEX doesn't support the parallel parameter, as far as we know.
In Oracle 11g Release 2, SEM_APIS.ALTER_SEM_INDEX was extended to support the 'parallel' parameter.
Any ideas on what we are doing wrong?
Cheers,
Doron
-
Long execution times for TestStand conditional statements
I have two test stations – one here, one at the factory in China that has been operating for about a year. The test program uses TestStand 3.1 and calls primarily dll's developed using CVI. Up until a couple of months ago, both test stations performed in a similar manner; then the computer at the factory died and was replaced with a new, faster computer. Now the same test sequence at the factory takes three times as long to execute (30 min at the factory, 10 min here).
I have recorded the execution time at various points during the execution, and have found that the extra time seems to occur during the evaluation of conditional statements in TestStand (i.e. for loops, if statements, case statements). For example, one particular ‘for’ evaluation takes 30 ms on the test station here, but 400 ms at the test station at the factory (note: this is just the evaluation of the for condition, not the execution of the steps contained within the for loop).
The actual dll calls seem to be slightly faster with the new computer.
Also, the ‘Module Times’ reported don’t seem to match the actual module times on the factory computer. For example, for the following piece of TestStand code:
Label1
Subsequence Call
Label2
I record the execution time to the report text in both Label1 and Label2. Subtracting one from the other gives me about 18 seconds. However the ‘Module Time’ recorded for ‘Subsequence Call’ is only 3.43 seconds.
Anybody have any ideas why the execution time is so long with the new computer? I always set up the computers in exactly the same way, but maybe there is a TestStand setting somewhere that I have missed? Keep in mind, both test stations are running exactly the same revision of code.
Got some more results from the factory this morning:
1) Task Manager shows that TestExec.exe is the only thing using CPU to any significant degree. Also, the CPU Usage History shows that CPU usage never reaches 100%.
2) I sent a new test program that will log test execution time in more places. Longer execution times are seen in nearly every area of the program, but one area where this is very dramatic is the time taken to return from one particular subsequence call. In this subsequence I log the time just before the <End Group> at the end of Main. There is nothing in Cleanup. I then log the time immediately after returning from this sequence. On the test system I have here this takes approximately 160 ms. On the test system at the factory it takes approximately 14.5 seconds! The program seems to hang here for some reason. Every time this function is called, the same thing happens, for the same amount of time (and this function is called about 40 times in the test program, so this is killing me). -
Hi All,
We have created a mapping. It contains only one expression, in which we do a TO_CHAR conversion and trim the data.
The target table structure contains 99 columns. One of those columns is unique.
The record count at source level is 3 lakh+ (300,000+) records.
For that reason we created two mappings: one for the initial load and another to load the last 30 days of data.
We created a process flow to load the last 30 days of data, based on the filter (Updatedate >= sysdate-30); it takes 3+ hours to load 12,000+ records.
We tried reducing the load to the last 7 days of data; it still takes 3+ hours to load 3,000+ records.
How can we reduce the execution time?
Regards,
Try indexing the Updatedate column if the percentage of data you're retrieving is small compared to the total in the table.
Configure the mapping to run SET BASED.
Configure the mapping so that the DEFAULT AUDIT LEVEL is ERROR DETAILS or NONE.
Drop/disable indexes on target and rebuild afterwards.
Drop/disable/novalidate FKs on target and reapply afterwards.
Cheers
Si -
To reduce execution time of a Business Objects Dataservices job.
The issue that we are facing-
Our goal: to compare a record from a file with 422,928 records from another table on the basis of name & country; if a match is found, take some specific columns of that matched record (from the table) as our output.
What we are doing: we first remove duplicates by matching on the address components (i.e. addr_line1, city, state, postal code & country), where the break key for the match transform is country & postal_code; this takes 1823.98 secs. The record count is now 193,317.
Then we merge the file record with the 193,317 records to put them on the same path and send them for matching.
The match criteria is the firm name & iso_country_cd,
the break key for match transform is the iso_country_cd & the 1st letter of the name.
It took 1155.156 secs.
We have used the "Run match as separate process" option for the match to reduce the time.
The whole job took 3038.805 secs.
Please suggest how to reduce the execution time.
Edited by: Susmit Das on Mar 29, 2010 7:41 AM
This really is impossible to help with without seeing your code.
Replacing while loops with Timed Loops will not help. Timed Loops are used to slow while loops down in a controlled manner: you would use them to synchronise with a clock rate, and you can set other parameters for priority, CPU allocation, etc. They are unlikely to reduce your execution time.
If you are seeing code execution of 1 second then you will need to optimise your code to reduce the execution time. There are LabVIEW guides on how to improve your code execution time, just search around for them.
Also try using the Profiling tools to learn which VIs (I presume your code is componentised and each while loop contains subVIs?) are hogging the most CPU time.
If you cannot share your VI then it is very hard to know where your code execution bottlenecks are.
Thoric (CLA, CLED, CTD and LabVIEW Champion) -
To reduce the execution time of a report
Hi,
Can anyone tell me how I can reduce the execution time of a report? Is there any way to improve the report's performance?
Hi Santosh,
Good check out the following documentation
<b>Performance tuning</b>
For all entries
Nested selects
Select using JOINS
Use the selection criteria
Use the aggregated functions
Select with view
Select with index support
Select Into table
Select with selection list
Key access to multiple lines
Copying internal tables
Modifying a set of lines
Deleting a sequence of lines
Linear search vs. binary
Comparison of internal tables
Modify selected components
Appending two internal tables
Deleting a set of lines
Tools available in SAP to pin-point a performance problem
<b>Optimizing the load of the database</b>
For all entries
FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Some steps that might make FOR ALL ENTRIES more efficient:
Removing duplicates from the driver table
Sorting the driver table
If possible, convert the data in the driver table to ranges so a BETWEEN statement is used instead of an OR statement:
FOR ALL ENTRIES IN i_tab
WHERE mykey >= i_tab-low and
mykey <= i_tab-high.
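The "convert to ranges" tip can be sketched in plain Python (used here purely as illustration; the real work would be done in ABAP before filling the driver table): collapsing a sorted set of integer keys into contiguous low/high pairs lets a single BETWEEN replace a long chain of OR comparisons.

```python
def to_ranges(keys):
    """Collapse integer keys into contiguous (low, high) ranges,
    so a BETWEEN test can replace a long chain of OR comparisons."""
    ranges = []
    for k in sorted(set(keys)):          # de-duplicate and sort first
        if ranges and k == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], k)  # extend the current range
        else:
            ranges.append((k, k))            # start a new range
    return ranges

# 7 keys collapse into 3 ranges
print(to_ranges([1, 2, 3, 5, 10, 11, 12]))  # → [(1, 3), (5, 5), (10, 12)]
```

Each resulting pair maps directly onto one `mykey >= low AND mykey <= high` clause in the FOR ALL ENTRIES example above.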
Nested selects
The plus:
Small amount of data
Mixing processing and reading of data
Easy to code - and understand
The minus:
Large amount of data
When mixed processing isn't needed
Performance killer no. 1
Select using JOINS
The plus
Very large amount of data
Similar to Nested selects - when the accesses are planned by the programmer
In some cases the fastest
Not so memory critical
The minus
Very difficult to program/understand
Mixing processing and reading of data not possible
Use the selection criteria
SELECT * FROM SBOOK.
CHECK: SBOOK-CARRID = 'LH' AND
SBOOK-CONNID = '0400'.
ENDSELECT.
SELECT * FROM SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
ENDSELECT.
Use the aggregated functions
C4A = '000'.
SELECT * FROM T100
WHERE SPRSL = 'D' AND
ARBGB = '00'.
CHECK: T100-MSGNR > C4A.
C4A = T100-MSGNR.
ENDSELECT.
SELECT MAX( MSGNR ) FROM T100 INTO C4A
WHERE SPRSL = 'D' AND
ARBGB = '00'.
Select with view
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T
WHERE DOMNAME = DD01L-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
SELECT * FROM DD01V
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
Select with index support
SELECT * FROM T100
WHERE ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
SELECT * FROM T002.
SELECT * FROM T100
WHERE SPRSL = T002-SPRAS
AND ARBGB = '00'
AND MSGNR = '999'.
ENDSELECT.
ENDSELECT.
Select Into table
REFRESH X006.
SELECT * FROM T006 INTO X006.
APPEND X006.
ENDSELECT
SELECT * FROM T006 INTO TABLE X006.
Select with selection list
SELECT * FROM DD01L
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT
SELECT DOMNAME FROM DD01L
INTO DD01L-DOMNAME
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
ENDSELECT
Key access to multiple lines
LOOP AT TAB.
CHECK TAB-K = KVAL.
ENDLOOP.
LOOP AT TAB WHERE K = KVAL.
ENDLOOP.
Copying internal tables
REFRESH TAB_DEST.
LOOP AT TAB_SRC INTO TAB_DEST.
APPEND TAB_DEST.
ENDLOOP.
TAB_DEST[] = TAB_SRC[].
Modifying a set of lines
LOOP AT TAB.
IF TAB-FLAG IS INITIAL.
TAB-FLAG = 'X'.
ENDIF.
MODIFY TAB.
ENDLOOP.
TAB-FLAG = 'X'.
MODIFY TAB TRANSPORTING FLAG
WHERE FLAG IS INITIAL.
Deleting a sequence of lines
DO 101 TIMES.
DELETE TAB_DEST INDEX 450.
ENDDO.
DELETE TAB_DEST FROM 450 TO 550.
Linear search vs. binary
READ TABLE TAB WITH KEY K = 'X'.
READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.
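As an illustration of why the BINARY SEARCH variant wins, here is a minimal Python analogue (the helper name is ours): binary search needs only O(log n) probes, but it is valid only if the table is already sorted by the search key — the same precondition as in ABAP.

```python
import bisect

def binary_read(sorted_keys, key):
    """Analogue of READ TABLE ... BINARY SEARCH: O(log n) lookup,
    valid only if the table is sorted by the search key."""
    i = bisect.bisect_left(sorted_keys, key)
    return i if i < len(sorted_keys) and sorted_keys[i] == key else -1

tab = sorted(['A', 'C', 'K', 'X', 'Z'])
print(binary_read(tab, 'X'))  # → 3 (index of the hit)
print(binary_read(tab, 'B'))  # → -1 (not found)
```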
Comparison of internal tables
DESCRIBE TABLE: TAB1 LINES L1,
TAB2 LINES L2.
IF L1 <> L2.
TAB_DIFFERENT = 'X'.
ELSE.
TAB_DIFFERENT = SPACE.
LOOP AT TAB1.
READ TABLE TAB2 INDEX SY-TABIX.
IF TAB1 <> TAB2.
TAB_DIFFERENT = 'X'. EXIT.
ENDIF.
ENDLOOP.
ENDIF.
IF TAB_DIFFERENT = SPACE.
ENDIF.
IF TAB1[] = TAB2[].
ENDIF.
Modify selected components
LOOP AT TAB.
TAB-DATE = SY-DATUM.
MODIFY TAB.
ENDLOOP.
WA-DATE = SY-DATUM.
LOOP AT TAB.
MODIFY TAB FROM WA TRANSPORTING DATE.
ENDLOOP.
Appending two internal tables
LOOP AT TAB_SRC.
APPEND TAB_SRC TO TAB_DEST.
ENDLOOP
APPEND LINES OF TAB_SRC TO TAB_DEST.
Deleting a set of lines
LOOP AT TAB_DEST WHERE K = KVAL.
DELETE TAB_DEST.
ENDLOOP
DELETE TAB_DEST WHERE K = KVAL.
Tools available in SAP to pin-point a performance problem
The runtime analysis (SE30)
SQL Trace (ST05)
Tips and Tricks tool
The performance database
Optimizing the load of the database
Using table buffering
Using buffered tables improves performance considerably. Note that in some cases a statement cannot be used with a buffered table; when such statements are used, the buffer is bypassed. These statements are:
SELECT DISTINCT
ORDER BY / GROUP BY / HAVING clause
Any WHERE clause that contains a subquery or an IS NULL expression
JOINs
A SELECT... FOR UPDATE
If you want to explicitly bypass the buffer, use the BYPASS BUFFER addition to the SELECT clause.
Use the ABAP SORT Clause Instead of ORDER BY
The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so sometimes it is better to move the sort from the database server to the application server.
If you are not sorting by the primary key (e.g. using the ORDER BY PRIMARY KEY statement) but by another key, it could be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this might not be feasible and you would want to let the database server do the sort.
Avoid the SELECT DISTINCT Statement
As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT + DELETE ADJACENT DUPLICATES on an internal table to remove duplicate rows.
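The SORT + DELETE ADJACENT DUPLICATES idiom can be sketched in Python (the helper is illustrative, not ABAP): sort on the key, then keep only the first row of each run of equal keys, instead of pushing SELECT DISTINCT to the database.

```python
def sort_and_dedupe(rows, key):
    """Equivalent of ABAP's SORT + DELETE ADJACENT DUPLICATES:
    sort on the key, then keep only the first row of each run."""
    rows = sorted(rows, key=key)
    out = []
    for r in rows:
        if not out or key(r) != key(out[-1]):  # adjacent duplicate check
            out.append(r)
    return out

data = [('LH', 400), ('AA', 17), ('LH', 400), ('AA', 17), ('BA', 9)]
print(sort_and_dedupe(data, key=lambda r: r))
# → [('AA', 17), ('BA', 9), ('LH', 400)]
```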
Good Luck and thanks
AK -
CVI XML Functions Execution Times Increase When Looped
I have written multiple functions using CVI that read XML files. I have confirmed in the Resource Tracking utility that I have cleaned up all of my lists, elements, documents, etc. I have found that when I loop any of the functions I have created, the execution times increase. The increase is small, but it is noticeable and affects my execution.
Are there any other sources of memory that I need to deallocate? It seems that there is a memory leak somewhere but I am unable to see where this increase is located.
I am currently running LabWIndows/CVI 2009 on Windows 2008 Server. I have looped my functions using TestStand 4.2.1. Any help would be appreciated!
Thanks in advance,
Kyle
Solved!
Go to Solution.
Hi Daniel,
Thanks for the quick response.
It is indeed a slowdown in execution speed when we loop. When looped, the XML reader overwrites variables; it does not append to an array. Our application is structured differently from my test case. We run a CVI function from TestStand that contains a series of commands, which includes the XML reading. The XML looping is really done in CVI; I used TestStand in my test case just to get execution times. The pseudocode for the CVI function is as follows:
For loop (looping over values, like amplitude or frequency)
Reading the XML
Applying the data from the XML to set up some instrument(s)
Do something...
End loop
I can confirm that the instrument setup is not the cause of the slowdown. We have written the same XML reading in C# and applied the values to the instrument setup, and we do not experience the slowdown there.
I tested with On-The-Fly Reporting enabled and the execution time continued to slow down.
I hope that answers all of your questions!
Thanks,
Kyle -
Continuing a loop for a set amount of time
Hi,
I am trying to alter a previously written program and have run into a problem.
The program is set up so that when the input exceeds a set level an output voltage to a buzzer is created; the buzzer continues as long as there is an input. What I need it to do is to continue this output for a set amount of time.
I've been searching the forum and found this straight forward solution from adamsjr.
"The easiest way to do this is to use a two Tick Count functions. The first
one is outside of a While loop, the second one is inside of the While loop.
Wire the first one through a tunnel into the While loop, add on the appropriate
ms (5*1000), wire the sum to one input of a compare function. Wire the
other input of the compare function to the second Tick Count function. Exit
the loop when the compare shows the second Tick count value is greater than
the summed value."
I've placed the output subVI into a while loop and followed the above directions, but this doesn't seem to work. I'm wondering if I would also have to include the input subVI in the loop, or is there another possible solution?
Thanks ahead of time,
Kristen
Hi Kristen,
You have the compare function implemented incorrectly. You need to make sure that the first terminal of the "Greater than or equal" function is connected to the output of the sum, and the second terminal to the tick count in the while loop. The first terminal corresponds to the x parameter and the second terminal to the y parameter. So if x is greater than or equal to y, the loop will exit.
You can see how long a subVI takes if you put a flat sequence structure within your subVI around the entire subVI code. The first frame would have only a tick count going into an indicator, the middle frame your normal code, and the final frame another tick count wired into another indicator. Then you can subtract the first indicator value from the second to get how long the middle (core) of your code took to execute for that subVI.
I highly suggest using the highlight execution and probes and other debugging techniques such as breakpoints to diagnose where the data is being held up. -
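The tick-count pattern quoted earlier in this thread boils down to "save a deadline, loop until the clock passes it". A minimal Python sketch of the same idea, using the monotonic clock in place of LabVIEW's Tick Count (the function name and timings are ours):

```python
import time

def buzz_for(duration_s, action=lambda: None):
    """Run `action` repeatedly until `duration_s` seconds elapse --
    the same pattern as comparing (first Tick Count + offset)
    against the current Tick Count inside a while loop."""
    deadline = time.monotonic() + duration_s  # "first Tick Count + 5*1000"
    cycles = 0
    while time.monotonic() < deadline:        # "second Tick Count" compare
        action()
        cycles += 1
        time.sleep(0.01)                      # don't spin the CPU flat out
    return cycles

print(buzz_for(0.1))  # runs for ~0.1 s; cycle count varies with timer resolution
```

Note the comparison direction: exit when the *current* clock value meets or exceeds the saved deadline, which is exactly the x/y ordering described in the reply above.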
How to set a specific time for measurement file?
Hello,
I am trying to get samples from a milling machine via RS-232, as
shown in the VI (milling machine). The communication is set at
4800 baud; however, I would like to save this data to file at a
specific interval (let's say every second).
Can anyone please tell me how I can deal with this?
kind regards,
tao-ncl
Attachments:
Milling Machine.llb 401 KB
Hi JamesC,
This is a file that I got from that code. As you can see, in this file
the number of samples is related to the execution time of the machine.
However, I need to sample at a specific interval, like every second.
Regards,
taoncl
Attachments:
first test.xls 110 KB -
How to find out the execution time of a sql inside a function
Hi All,
I am writing one function with a single IN parameter. In that parameter I will pass one SQL SELECT statement, and I want the function to return the exact execution time of that SQL statement.
CREATE OR REPLACE FUNCTION function_name (p_sql IN VARCHAR2)
RETURN NUMBER
IS
exec_time NUMBER;
BEGIN
--Calculate the execution time for the incoming sql statement.
RETURN exec_time;
END function_name;
/
Please note that wrapping a query in "SELECT COUNT(*) FROM (<query>)" doesn't necessarily reflect the execution time of the stand-alone query, because the optimizer is smart and might choose a completely different execution plan for that query.
A simple test case shows the potential difference of work performed by the database:
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Session altered.
SQL>
SQL> drop table count_test purge;
Table dropped.
Elapsed: 00:00:00.17
SQL>
SQL> create table count_test as select * from all_objects;
Table created.
Elapsed: 00:00:02.56
SQL>
SQL> alter table count_test add constraint pk_count_test primary key (object_id)
Table altered.
Elapsed: 00:00:00.04
SQL>
SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'COUNT_TEST')
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.29
SQL>
SQL> set autotrace traceonly
SQL>
SQL> select * from count_test;
5326 rows selected.
Elapsed: 00:00:00.10
Execution Plan
Plan hash value: 3690877688
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5326 | 431K| 23 (5)| 00:00:01 |
| 1 | TABLE ACCESS FULL| COUNT_TEST | 5326 | 431K| 23 (5)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
419 consistent gets
0 physical reads
0 redo size
242637 bytes sent via SQL*Net to client
4285 bytes received via SQL*Net from client
357 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5326 rows processed
SQL>
SQL> select count(*) from (select * from count_test);
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 572193338
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 5 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | INDEX FAST FULL SCAN| PK_COUNT_TEST | 5326 | 5 (0)| 00:00:01 |
Statistics
1 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
412 bytes sent via SQL*Net to client
380 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
As you can see, the number of blocks processed (consistent gets) is quite different. You need to actually fetch all records, e.g. using a PL/SQL block on the server, to find out how long it takes to process the query, but that's not easy if you want an arbitrary query string as input.
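The point above — that you must actually fetch every row to time a query, rather than wrap it in COUNT(*) — can be sketched with Python's DB-API. sqlite3 stands in for an Oracle connection here (purely illustrative), and the batched fetch mimics client round-trips:

```python
import sqlite3
import time

def time_query(conn, sql):
    """Time a query by actually fetching every row, rather than
    wrapping it in SELECT COUNT(*) (which lets the optimizer pick
    a much cheaper plan, as the AUTOTRACE output above shows)."""
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(sql)
    n = 0
    while True:                       # fetch in batches, like SQL*Net round-trips
        batch = cur.fetchmany(500)
        if not batch:
            break
        n += len(batch)
    elapsed = time.perf_counter() - start
    return n, elapsed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE count_test (object_id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO count_test VALUES (?)",
                 [(i,) for i in range(5326)])
rows, secs = time_query(conn, "SELECT * FROM count_test")
print(rows)  # → 5326: every row was actually fetched, so `secs` reflects real work
```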
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle:
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Reduce execution time with selects
Hi,
I have to reduce the execution time in a report, most of the consumed time is in the select query.
I have a table, gt_result:
DATA: BEGIN OF gwa_result,
tknum LIKE vttk-tknum,
stabf LIKE vttk-stabf,
shtyp LIKE vttk-shtyp,
route LIKE vttk-route,
vsart LIKE vttk-vsart,
signi LIKE vttk-signi,
dtabf LIKE vttk-dtabf,
vbeln LIKE likp-vbeln,
/bshm/le_nr_cust LIKE likp-/bshm/le_nr_cust,
vkorg LIKE likp-vkorg,
werks LIKE likp-werks,
regio LIKE kna1-regio,
land1 LIKE kna1-land1,
xegld LIKE t005-xegld,
intca LIKE t005-intca,
bezei LIKE tvrot-bezei,
bezei1 LIKE t173t-bezei,
fecha(10) type c.
DATA: END OF gwa_result.
DATA: gt_result LIKE STANDARD TABLE OF gwa_result.
And the select query is this:
SELECT k~tknum k~stabf k~shtyp k~route k~vsart k~signi
k~dtabf
l~vbeln l~/bshm/le_nr_cust l~vkorg l~werks n~regio n~land1 o~xegld o~intca
t~bezei tt~bezei
FROM vttk AS k
INNER JOIN vttp AS p ON k~tknum = p~tknum
INNER JOIN likp AS l ON p~vbeln = l~vbeln
INNER JOIN kna1 AS n ON l~kunnr = n~kunnr
INNER JOIN t005 AS o ON n~land1 = o~land1
INNER JOIN tvrot AS t ON t~route = k~route AND t~spras = sy-langu
INNER JOIN t173t AS tt ON tt~vsart = k~vsart AND tt~spras = sy-langu
INTO TABLE gt_result
WHERE k~tknum IN s_tknum AND k~tplst IN s_tplst AND k~route IN s_route AND
k~erdat BETWEEN s_erdat-low AND s_erdat-high AND
l~/bshm/le_nr_cust <> ' ' "IS NOT NULL
AND k~stabf = 'X'
AND k~tknum NOT IN ( SELECT tk~tknum FROM vttk AS tk
INNER JOIN vttp AS tp ON tk~tknum = tp~tknum
INNER JOIN likp AS tl ON tp~vbeln = tl~vbeln
WHERE tl~/bshm/le_nr_cust IS NULL )
AND k~tknum NOT IN ( SELECT tknum FROM /bshs/ssm_eship )
AND ( o~xegld = ' '
OR ( o~xegld = 'X' AND
( ( n~land1 = 'ES'
AND ( n~regio = '51' OR n~regio = '52'
OR n~regio = '35' OR n~regio = '38' ) )
OR n~land1 = 'ESC' ) )
OR o~intca = 'AD' OR o~intca = 'GI' ).
Does somebody know how to reduce the execution time ?.
Thanks.
Hi,
Try to remove the join. Use separate selects as shown in the example below, and for the sake of selection keep some key fields in your internal table.
Then once your final table is created, you can copy the table into GT_FINAL which will contain only fields you need.
EX
data : begin of it_likp occurs 0,
vbeln like likp-vbeln,
/bshm/le_nr_cust like likp-/bshm/le_nr_cust,
vkorg like likp-vkorg,
werks like likp-werks,
kunnr like likp-kunnr,
end of it_likp.
data : begin of it_kna1 occurs 0,
kunnr like kna1-kunnr,
regio like kna1-regio,
land1 like kna1-land1,
end of it_kna1.
Select tknum stabf shtyp route vsart signi dtabf
from vttk
into table gt_result
WHERE tknum IN s_tknum AND
tplst IN s_tplst AND
route IN s_route AND
erdat BETWEEN s_erdat-low AND s_erdat-high.
select vbeln /bshm/le_nr_cust
vkorg werks kunnr
from likp
into table it_likp
for all entries in gt_result
where vbeln = gt_result-vbeln.
select kunnr
regio
land1
from kna1
into table it_kna1
for all entries in it_likp
where kunnr = it_likp-kunnr.
similarly for other tables.
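The idea — separate selects indexed by key, then a merge loop like the one that follows — can be sketched in Python, with dicts playing the role of hashed internal tables so each READ TABLE becomes an O(1) lookup (table and field names are trimmed to a few illustrative ones):

```python
# Stand-ins for the separate SELECTs: header rows and detail rows
gt_result = [{"tknum": "T1", "vbeln": "D1"},
             {"tknum": "T2", "vbeln": "D2"}]
it_likp   = [{"vbeln": "D1", "kunnr": "K9", "werks": "W1"},
             {"vbeln": "D2", "kunnr": "K5", "werks": "W2"}]
it_kna1   = [{"kunnr": "K9", "regio": "35", "land1": "ES"},
             {"kunnr": "K5", "regio": "51", "land1": "ES"}]

# Index once by key -- the in-memory analogue of READ TABLE ... WITH KEY
likp_by_vbeln = {r["vbeln"]: r for r in it_likp}
kna1_by_kunnr = {r["kunnr"]: r for r in it_kna1}

# Merge: one O(1) lookup per row instead of a database join
for row in gt_result:
    likp = likp_by_vbeln.get(row["vbeln"], {})
    row.update(kunnr=likp.get("kunnr"), werks=likp.get("werks"))
    kna1 = kna1_by_kunnr.get(row.get("kunnr"), {})
    row.update(regio=kna1.get("regio"), land1=kna1.get("land1"))

print(gt_result[0]["regio"])  # → 35
```

In ABAP the same effect comes from declaring the lookup tables as HASHED (or SORTED with BINARY SEARCH) so the per-row reads stay cheap.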
Then loop at gt result and read corresponding table and populate entire record :
loop at gt_result.
read table it_likp where vbeln = gt_result-vbeln.
if sy-subrc eq 0.
move corresponding fields of it_likp into gt_result.
gt_result-kunnr = it_likp-kunnr.
modify gt_result.
endif.
read table it_kna1 where kunnr = gt_result-vbeln.
if sy-subrc eq 0.
gt_result-regio = it-kna1-regio.
gt_result-land1 = it-kna1-land1.
modify gt_result.
endif.
endloop. -
Oracle - select execution time
hi all,
When I executed a SQL SELECT (with joins and so on), I observed the following behaviour.
The query execution times are:
for 1000 records - 4 sec
5000 records - 10 sec
10000 records - 7 sec
25000 records - 16 sec
50000 records - 33 sec
I tested this behaviour with different sets of SQL on different sets of data, but in each case the behaviour is more or less the same.
Can anyone explain why Oracle takes more time to return 5000 records than it takes to return 10000?
Please note that this has nothing to do with the SQL itself, as I tested this with different sets of SQL on different sets of data.
Can there be any internal Oracle reason which explains this behaviour?
regards
at
That is not normal behaviour. I've never come across anything like that that wasn't explainable by some environmental factor (e.g. someone else doing a big sort).
I ran a couple of tests:
(1) to insert 5000 rows 0.1 seconds
to insert 10000 rows 0.18 seconds
and
(2) to select 5000 rows joined with 200K row table 0.19 seconds
to select 10000 rows joined with 200K row table 0.2 seconds
Although the second is close, I grant you!
Cheers, APC -
Why the execution time increases with a while loop, but not with "Run continuously" ?
Hi all,
I have a serious time problem that I don't know how to solve because I don't know exactly where it comes from.
I command two RF switches via a DAQ card (NI USB-6008). Only one position at a time can be selected on each switch. Basically, the VI created for this functionality (by a co-worker) resets all the DAQ outputs, and then activates the desired ones. It has three inputs: two simple string controls, and an array of clusters which contains the list of all the outputs and some information about what is connected (specific to my application).
I use this VI in a complex application, and I get some problems with the execution time, which increased each time I called the VI, so I made a test VI (TimeTesting.vi) to figure out where the problem came from. In this test VI I record the execution time in a CSV file to analyse later in Excel.
After several tests, I found that if I run this test VI with the while loop, the execution time increases with each cycle, but if I remove the while loop and use the "Run Continuously" functionality, the execution time remains the same. In my top-level application I have while loops and events, and so the execution time increases too.
Could someone explain why the execution time increases, and how I can avoid that? I attached my test VI and the necessary subVIs, as well as a picture of a graph which shows the execution time with a while loop and with "Run Continuously".
Thanks a lot for your help!
Solved!
Go to Solution.
Attachments:
TimeTesting.zip 70 KB
Graph.PNG 20 KB
jul7290 wrote:
Thank you very much for your help! I added the "Clear task" VI and now it works properly.
If you are still using Run Continuously, you should stop. That is meant strictly for debugging; in fact, I can't even tell you the last time I ever used it. If you want your code to repeat, you should use loops and control the behavior of the code.
Mark Yedinak
"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot -
Help Please!!!
I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
Collects analog data from a cDAQ chassis with a 9205 at 5kHz
Data is collected in 100ms chunks
Some of the data is saved directly to a TDMS file while the rest is averaged for a single data point. That single data point is saved to disk in a text file every 200ms.
Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal operation execution times are on the order of 1ms or less. This issue will happen randomly during operation. It's usually many seconds between times that this occurs and it doesn't seem to have any pattern to when the event happens.
Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that shows the troubling execution time. All the other timing checks show 0 ms every time this issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
Where else can I look for where the time went? It doesn't seem to make sense.
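The timing-check approach described above can be sketched in text form. This is a Python stand-in for the LabVIEW timing checks (the label, threshold, and logger here are illustrative, not part of the original VI):

```python
import time

def timed(label, func, threshold_ms=10.0, log=print):
    """Run func() and report it only when it exceeds the threshold,
    like the timing checks labeled "A", "B", ... in the screenshot."""
    start = time.perf_counter()
    result = func()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    if elapsed_ms > threshold_ms:
        log(f"{label}: {elapsed_ms:.1f} ms")
    return result
```

Wrapping each stage this way (the file write, the string formatting, the queue read) narrows down which stage owns the missing hundreds of milliseconds.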
Thanks for reading!
Tim
Attachments:
Screen Shot 2013-09-06 at 9.32.46 AM.png 87 KB
You should probably increase how much data you write with a single Write to Text File. Move the Write to Text File out of the FOR loop and have the data autoindex to create an array of strings. Write to Text File will accept the array of strings directly, writing one line for each element in the array.
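The batching suggestion above can be sketched in text form. This is a Python stand-in for the LabVIEW diagram (LabVIEW itself is graphical), and the file name and number format are illustrative:

```python
def write_batched(path, values):
    """Write one line per value using a single write call.

    Mirrors the advice above: build the whole array of lines first
    (LabVIEW's autoindexing), then hand it to one write operation
    instead of calling the write inside the FOR loop.
    """
    lines = [f"{v:.3f}\n" for v in values]   # one string per data point
    with open(path, "a") as f:
        f.write("".join(lines))              # single write for the batch

write_batched("data.txt", [1.0, 2.5, 3.25])
```

One OS-level write per batch is typically far cheaper than hundreds of small writes, which is the whole point of moving the write out of the loop.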
Another idea is to use a separate loop (yes, another queue as well) for writing the file, with the Dequeue Element inside an inner WHILE loop. On the first iteration of this inner loop, set the timeout to something normal, or -1 to wait forever; every later iteration should use a timeout of 0. You can switch the timeout with a shift register. Autoindex the dequeued strings out of the loop; that array goes straight into Write to Text File. This way you can quickly catch up when a file write takes a long time.
NOTE: This is just a very quick example I put together. It is far from a complete idea, but it shows the general idea I was having for reading the queue.
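The inner-loop dequeue pattern above can be written out in text form (a Python sketch of the pattern only, not LabVIEW code; the queue contents are illustrative): block on the first read, then keep reading with a timeout of 0 until the queue is empty, so everything that piled up during a slow write is drained in one pass.

```python
import queue

def drain(q):
    """Block for the first element, then grab whatever else is already
    queued (zero timeout), mirroring the shift-register timeout switch
    described above."""
    items = [q.get()]                     # first read: wait as long as needed
    while True:
        try:
            items.append(q.get_nowait())  # later reads: timeout of 0
        except queue.Empty:
            break
    return items

q = queue.Queue()
for s in ("a", "b", "c"):
    q.put(s)
print(drain(q))  # -> ['a', 'b', 'c']
```

The returned array can then be written with a single file-write call, which is how the consumer catches up after one slow write.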
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
Attachments:
Write all data on queue.png 16 KB -
Procedure execution time difference in Oracle 9i and Oracle 10g
Hi,
My procedure takes 14 min on Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, while the same procedure takes 1 min on Oracle Release 9.2.0.1.0.
1) The data is the same in both environments.
2) The number of records is the same: 485 rows for the cursor SELECT statement.
3) Please guide me on how to reduce the time of the procedure in Oracle 10g.
I have checked the explain plan for that cursor query and it is different in the two environments; my analysis shows the procedure spends its time on the cursor FETCH INTO statement in Oracle 10g.
example:-
create or replace procedure myproc is
   CURSOR cur_list IS
      SELECT num
        FROM tbl
       WHERE EXISTS (SELECT .......);  -- rest of the subquery omitted in the original post
   l_num tbl.num%TYPE;
BEGIN
   EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE = TRUE';
   EXECUTE IMMEDIATE 'ALTER SESSION SET TIMED_STATISTICS = TRUE';
   OPEN cur_list;
   LOOP
      FETCH cur_list INTO l_num;  -- the procedure spends its time in this statement,
                                  -- but only for some of the 485 list numbers
      EXIT WHEN cur_list%NOTFOUND;
   END LOOP;
   CLOSE cur_list;
END;
TRACE file for Oracle 10g looks like this:-
call       count      cpu    elapsed    disk      query   current    rows
Parse          1     0.00       0.00       0          0         0       0
Execute        1     0.37       0.46       0          2         0       0
Fetch        486   747.07     730.14    1340   56500700         0     485
total        488   747.45     730.60    1340   56500702         0     485
ORACLE 9i EXPLAIN PLAN FOR cursor query:-
Plan
SELECT STATEMENT CHOOSE Cost: 2 Bytes: 144 Cardinality: 12
18 INDEX RANGE SCAN UNIQUE LISL.LISL_LIST_PK Cost: 2 Bytes: 144 Cardinality: 12
17 UNION-ALL
2 FILTER
1 TABLE ACCESS FULL SLD.P Cost: 12 Bytes: 36 Cardinality: 1
16 NESTED LOOPS Cost: 171 Bytes: 141 Cardinality: 1
11 NESTED LOOPS Cost: 169 Bytes: 94 Cardinality: 1
8 NESTED LOOPS Cost: 168 Bytes: 78 Cardinality: 1
6 NESTED LOOPS Cost: 168 Bytes: 62 Cardinality: 1
4 TABLE ACCESS BY INDEX ROWID SLD.L Cost: 168 Bytes: 49 Cardinality: 1
3 INDEX RANGE SCAN UNIQUE SLD.PK_L Cost: 162 Cardinality: 9
5 INDEX UNIQUE SCAN UNIQUE SLD.SYS_C0025717 Bytes: 45,760 Cardinality: 3,520
7 INDEX UNIQUE SCAN UNIQUE SLD.PRP Bytes: 63,904 Cardinality: 3,994
10 TABLE ACCESS BY INDEX ROWID SLD.P Cost: 1 Bytes: 10,480 Cardinality: 655
9 INDEX UNIQUE SCAN UNIQUE SLD.PK_P Cardinality: 9
15 TABLE ACCESS BY INDEX ROWID SLD.GRP_E Cost: 2 Bytes: 9,447 Cardinality: 201
14 INDEX UNIQUE SCAN UNIQUE SLD.PRP_E Cost: 1 Cardinality: 29
13 TABLE ACCESS BY INDEX ROWID SLD.E Cost: 2 Bytes: 16 Cardinality: 1
12 INDEX UNIQUE SCAN UNIQUE SLD.SYS_C0025717 Cost: 1 Cardinality: 14,078
ORACLE 10G EXPLAIN PLAN FOR cursor query:-
SELECT STATEMENT ALL_ROWS Cost: 206,103 Bytes: 12 Cardinality: 1
18 FILTER
1 INDEX FAST FULL SCAN INDEX (UNIQUE) LISL.LISL_LIST_PK Cost: 2 Bytes: 8,232 Cardinality: 686
17 UNION-ALL
3 FILTER
2 TABLE ACCESS FULL TABLE SLD.P Cost: 26 Bytes: 72 Cardinality: 2
16 NESTED LOOPS Cost: 574 Bytes: 157 Cardinality: 1
14 NESTED LOOPS Cost: 574 Bytes: 141 Cardinality: 1
12 NESTED LOOPS Cost: 574 Bytes: 128 Cardinality: 1
9 NESTED LOOPS Cost: 573 Bytes: 112 Cardinality: 1
6 HASH JOIN RIGHT SEMI Cost: 563 Bytes: 315 Cardinality: 5
4 TABLE ACCESS FULL TABLE SLD.E Cost: 80 Bytes: 223,120 Cardinality: 13,945
5 TABLE ACCESS FULL TABLE SLD.GRP_E Cost: 481 Bytes: 3,238,582 Cardinality: 68,906
8 TABLE ACCESS BY INDEX ROWID TABLE SLD.L Cost: 2 Bytes: 49 Cardinality: 1
7 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.PK_L Cost: 1 Cardinality: 1
11 TABLE ACCESS BY INDEX ROWID TABLE SLD.P Cost: 1 Bytes: 16 Cardinality: 1
10 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.PK_P Cost: 0 Cardinality: 1
13 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.SYS_C0011870 Cost: 0 Bytes: 13 Cardinality: 1
15 INDEX UNIQUE SCAN INDEX (UNIQUE) SLD.PRP Cost: 0 Bytes: 16 Cardinality: 1
So please guide me on how to reduce the time of this procedure in Oracle 10g:
1) Is this an environment parameter setting?
2) Do I have to tune the query, even though it executes fine on Oracle 9i?
How can I decrease the execution time?
Thanks in advance.
SELECT l_nr
  FROM x.ls b
 WHERE b.cd = '01'
   AND b.co_code = '001'
   AND EXISTS (
        SELECT T_L
          FROM g.C
         WHERE C_cd = '01'
           AND C_co_code = '001'
           AND C_flg = 'A'
           AND C_eff_dt <= sysdate
           AND C_end_dt >= sysdate
           AND C_type_code <> 1
           AND targt_ls_type = 'C'
           AND T_L <> 9999
           AND T_L = b.l_nr
        UNION ALL
        SELECT l.T_L
          FROM g.C C,
               g.ep_e B,
               g.ep ep,
               g.e A,
               g.lk_in l
         WHERE l.cd = '01'
           AND l.co_code = '001'
           AND l.cd = C.C_cd
           AND l.co_code = C.C_co_code
           AND l.C_nbr = C.C_nbr
           AND l.targt_ls_type = 'C'
           AND lk_in_eff_dt <= sysdate
           AND lk_in_end_dt >= (sysdate + 1)
           AND ( (logic_delte_flg = '0')
                 OR (logic_delte_flg IN ('1', '3')
                     AND lk_in_eff_dt <> lk_in_end_dt) )
           AND l.cd = ep.C_cd
           AND l.co_code = ep.C_co_code
           AND l.C_nbr = ep.C_nbr
           AND l.ep_nbr = ep.ep_nbr
           AND l.cd = A.e_cd
           AND l.co_code = A.e_co_code
           AND l.e_nbr = A.e_nbr
           AND l.cd = B.cd
           AND l.co_code = B.co_code
           AND l.C_nbr = B.C_nbr
           AND l.ep_nbr = B.ep_nbr
           AND l.e_nbr = B.e_nbr
           AND l.ep_e_rev_nbr = B.ep_e_rev_nbr
           AND B.flg = 'A'
           AND EXISTS (
                SELECT A.e_nbr
                  FROM g.e A
                 WHERE A.e_cd = B.cd
                   AND A.e_co_code = B.co_code
                   AND A.e_nbr = B.e_nbr
                   AND A.e_type_code ^= 8)
           AND C_type_code <> 10
           AND C.C_type_code <> 13
           AND l.T_L = b.l_nr)
-- yes, the index is the same