Performance Issue - High Fetch Count
Hi,
The database version is 10.2.0.4.
Below is the tkprof report of an application session that has a performance issue.
We shared screens with the application team and could see the lag in report generation.
The report shows an elapsed time of 157 seconds; however, the same query executed directly in the database takes a fraction of a second.
Kindly help, and let me know if more detail is needed.
call count cpu elapsed disk query current rows
Parse 149 0.00 0.00 0 0 0 0
Execute 298 0.02 0.02 0 0 0 0
Fetch 298 157.22 156.39 0 38336806 0 298
total 745 157.25 156.42 0 38336806 0 298
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 80
Rows Row Source Operation
2 SORT AGGREGATE (cr=257294 pr=0 pw=0 time=1023217 us)
32 FILTER (cr=257294 pr=0 pw=0 time=6944757 us)
22770 NESTED LOOPS (cr=166134 pr=0 pw=0 time=4691233 us)
22770 NESTED LOOPS (cr=166130 pr=0 pw=0 time=4600141 us)
82910 INDEX FULL SCAN S_LIT_BU_U1 (cr=326 pr=0 pw=0 time=248782 us)(object id 69340)
22770 TABLE ACCESS BY INDEX ROWID S_LIT (cr=165804 pr=0 pw=0 time=559291 us)
82890 INDEX UNIQUE SCAN S_LIT_P1 (cr=82914 pr=0 pw=0 time=247901 us)(object id 69332)
22770 INDEX UNIQUE SCAN S_BU_U2 (cr=4 pr=0 pw=0 time=48958 us)(object id 63064)
20 NESTED LOOPS (cr=91032 pr=0 pw=0 time=268508 us)
22758 INDEX UNIQUE SCAN S_ORDER_P1 (cr=45516 pr=0 pw=0 time=104182 us)(object id 70915)
20 INDEX RANGE SCAN CX_ORDER_LIT_U1 (cr=45516 pr=0 pw=0 time=114669 us)(object id 158158)
20 NESTED LOOPS (cr=128 pr=0 pw=0 time=364 us)
32 INDEX UNIQUE SCAN S_ORDER_P1 (cr=64 pr=0 pw=0 time=144 us)(object id 70915)
20 INDEX RANGE SCAN CX_ORDER_LIT_U1 (cr=64 pr=0 pw=0 time=158 us)(object id 158158)
Rgds,
Sanjay
Edited by: 911847 on Feb 2, 2012 5:53 AM
Edited by: 911847 on Feb 5, 2012 11:50 PM
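A quick arithmetic sketch of the tkprof figures (plain Python; the numbers are copied from the report above) shows where the 157 seconds go: each execution returns a single row but performs roughly 129,000 logical reads, entirely on CPU. That pattern usually means the application session runs a different execution plan than the manual test (for example because of bind variables or the FIRST_ROWS optimizer mode), rather than fetching too much data:

```python
# Figures copied from the tkprof report above.
executes = 298
buffer_gets = 38_336_806
cpu_seconds = 157.22
rows = 298

# Each execution returns exactly one row (it is a COUNT(*)),
# so the fetch count itself is not the problem.
rows_per_execute = rows / executes

# The work per execution is the real issue: ~129k logical reads
# to produce a single row, all on CPU (disk reads are 0).
gets_per_execute = buffer_gets // executes
seconds_per_execute = cpu_seconds / executes

print(rows_per_execute)               # 1.0
print(gets_per_execute)               # 128647
print(round(seconds_per_execute, 2))  # 0.53
```

Comparing the plan of the application's cursor (from V$SQL) with the plan of the manual run is the natural next step.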
Hi,
I changed the optimizer mode to first_rows and captured the details below.
09:21:31 SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_mode string FIRST_ROWS_100
09:21:51 SQL> ALTER SESSION SET STATISTICS_LEVEL=ALL;
Session altered.
PLAN_TABLE_OUTPUT
SQL_ID fkcs93gkrt2zz, child number 0
SELECT COUNT (*) FROM SIEBEL.S_LIT_BU T1, SIEBEL.S_BU T2, SIEBEL.S_LIT T3
WHERE T3.BU_ID = T2.PAR_ROW_ID AND T1.BU_ID = '0-R9NH' AND T3.ROW_ID = T1.LIT_ID
AND (T3.X_VISIBILITY_BUSCOMP_ORDER = 'Y') AND (T3.ROW_ID = '1-28B0AH' OR T3.ROW_ID =
'1-28B0AF' OR T3.ROW_ID = '1-2V4GCV' OR T3.ROW_ID = '1-2F5USL' OR T3.ROW_ID =
'1-27PFED' OR T3.ROW_ID = '1-1KO7WJ' OR T3.ROW_ID IN ( SELECT SQ1_T1.LIT_ID FROM
SIEBEL.CX_ORDER_LIT SQ1_T1, SIEBEL.S_ORDER SQ1_T2 WHERE ( SQ1_T1.ORDER_ID =
SQ1_T2.ROW_ID) AND (SQ1_T2.ROW_ID = '1-2VVI61')) AND (T3.ROW_ID = '1-28B0AH' OR
T3.ROW_ID = '1-28B0AF' OR T3.ROW_ID = '1-2V4GCV' OR T3.ROW_ID = '1-2F5USL' OR
T3.ROW_ID = '1-27PFED' OR T3.ROW_ID = '1-1KO7WJ' OR T3.ROW_ID IN ( SELECT
SQ1_T1.LIT_ID FROM SIEBEL.CX_ORDER_LIT SQ1_T1, SIEBEL.S_ORDER SQ1_T2 WHERE (
SQ1_T1.ORDER_ID = SQ1_T2.ROW_ID) AND (SQ1_T2.ROW_ID = '1-2VVI61'))))
Plan hash value: 307628812
| Id | Operation | Name | E-Rows | OMem | 1Mem | Used-Mem |
| 1 | SORT AGGREGATE | | 1 | | | |
|* 2 | FILTER | | | | | |
| 3 | NESTED LOOPS | | 7102 | | | |
| 4 | MERGE JOIN | | 7102 | | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| S_LIT | 7102 | | | |
| 6 | INDEX FULL SCAN | S_LIT_P1 | 41408 | | | |
|* 7 | SORT JOIN | | 41360 | 1186K| 567K| 1054K (0)|
|* 8 | INDEX FULL SCAN | S_LIT_BU_U1 | 41360 | | | |
|* 9 | INDEX UNIQUE SCAN | S_BU_U2 | 1 | | | |
| 10 | NESTED LOOPS | | 1 | | | |
|* 11 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | |
|* 12 | INDEX RANGE SCAN | CX_ORDER_LIT_U1 | 1 | | | |
| 13 | NESTED LOOPS | | 1 | | | |
|* 14 | INDEX UNIQUE SCAN | S_ORDER_P1 | 1 | | | |
|* 15 | INDEX RANGE SCAN | CX_ORDER_LIT_U1 | 1 | | | |
Predicate Information (identified by operation id):
2 - filter((((INTERNAL_FUNCTION("T3"."ROW_ID") OR IS NOT NULL) AND IS NOT NULL)
OR INTERNAL_FUNCTION("T3"."ROW_ID")))
5 - filter("T3"."X_VISIBILITY_BUSCOMP_ORDER"='Y')
7 - access("T3"."ROW_ID"="T1"."LIT_ID")
filter("T3"."ROW_ID"="T1"."LIT_ID")
8 - access("T1"."BU_ID"='0-R9NH')
filter("T1"."BU_ID"='0-R9NH')
9 - access("T3"."BU_ID"="T2"."PAR_ROW_ID")
11 - access("SQ1_T2"."ROW_ID"='1-2VVI61')
12 - access("SQ1_T1"."ORDER_ID"='1-2VVI61' AND "SQ1_T1"."LIT_ID"=:B1)
14 - access("SQ1_T2"."ROW_ID"='1-2VVI61')
15 - access("SQ1_T1"."ORDER_ID"='1-2VVI61' AND "SQ1_T1"."LIT_ID"=:B1)
Note
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
Similar Messages
-
Performance Degradation - High Fetches and Parses
Hello,
My analysis on a particular job trace file drew my attention towards:
1) High rate of Parses instead of Bind variables usage.
2) High fetches and poor number/ low number of rows being processed
Please let me know how the performance degradation can be minimised. Perhaps the high number of SQL*Net client wait events is due to multiple fetches and round trips with the client.
EXPLAIN PLAN FOR SELECT /*+ FIRST_ROWS (1) */ * FROM SAPNXP.INOB
WHERE MANDT = :A0
AND KLART = :A1
AND OBTAB = :A2
AND OBJEK LIKE :A3 AND ROWNUM <= :A4;
call count cpu elapsed disk query current rows
Parse 119 0.00 0.00 0 0 0 0
Execute 239 0.16 0.13 0 0 0 0
Fetch 239 2069.31 2127.88 0 13738804 0 0
total 597 2069.47 2128.01 0 13738804 0 0
PLAN_TABLE_OUTPUT
Plan hash value: 1235313998
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 268 | 1 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID| INOB | 2 | 268 | 1 (0)| 00:00:01 |
|* 3 | INDEX SKIP SCAN | INOB~2 | 7514 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(ROWNUM<=TO_NUMBER(:A4))
2 - filter("OBJEK" LIKE :A3 AND "KLART"=:A1)
3 - access("MANDT"=:A0 AND "OBTAB"=:A2)
filter("OBTAB"=:A2)
18 rows selected.
SQL> SELECT INDEX_NAME,TABLE_NAME,COLUMN_NAME FROM DBA_IND_COLUMNS WHERE INDEX_OWNER='SAPNXP' AND INDEX_NAME='INOB~2';
INDEX_NAME TABLE_NAME COLUMN_NAME
INOB~2 INOB MANDT
INOB~2 INOB CLINT
INOB~2 INOB OBTAB
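One way to see why the optimizer falls back to an INDEX SKIP SCAN on INOB~2: the index is on (MANDT, CLINT, OBTAB), but the query supplies only MANDT and OBTAB, so only a one-column prefix is directly usable and Oracle must probe each distinct CLINT value. A hypothetical sketch of that prefix rule (column names taken from the listing above; the function is illustrative, not an Oracle API):

```python
def usable_prefix(index_columns, predicate_columns):
    """Return the leading index columns that equality predicates can
    drive directly; the first gap forces a skip scan (or a filter)."""
    prefix = []
    for col in index_columns:
        if col not in predicate_columns:
            break  # gap in the leading columns: stop here
        prefix.append(col)
    return prefix

index_cols = ["MANDT", "CLINT", "OBTAB"]  # from DBA_IND_COLUMNS above
preds = {"MANDT", "OBTAB"}                # bound predicates in the query

print(usable_prefix(index_cols, preds))   # ['MANDT'] - CLINT is the gap
```

An index whose leading columns match the actual predicates (e.g. MANDT, OBTAB, OBJEK) would avoid the skip scan, assuming the workload justifies it.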
Is it possible to Maximise the rows/fetch
call count cpu elapsed disk query current rows
Parse 163 0.03 0.00 0 0 0 0
Execute 163 0.01 0.03 0 0 0 0
Fetch 174899 55.26 59.14 0 1387649 0 4718932
total 175225 55.30 59.19 0 1387649 0 4718932
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 27
Rows Row Source Operation
28952 TABLE ACCESS BY INDEX ROWID EDIDC (cr=8505 pr=0 pw=0 time=202797 us)
28952 INDEX RANGE SCAN EDIDC~1 (cr=1457 pr=0 pw=0 time=29112 us)(object id 202995)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 174899 0.00 0.16
SQL*Net more data to client 155767 0.01 5.69
SQL*Net message from client 174899 0.11 208.21
latch: cache buffers chains 2 0.00 0.00
latch free 4 0.00 0.00
********************************************************************************
user4566776 wrote:
My analysis on a particular job trace file drew my attention towards:
1) High rate of Parses instead of Bind variables usage.
But if you look at the text you are using bind variables.
The first query is executed 239 times - which matches the 239 fetches. You cut off some of the useful information from the tkprof output, but the figures show that you're executing more than once per parse call. The time is CPU time spent using a bad execution plan to find no data -- this looks like a bad choice of index, possibly a side effect of the first_rows(1) hint.
2) High fetches and poor number/ low number of rows being processed
The second query is doing a lot of fetches because in 163 executions it is fetching 4.7 million rows at roughly 25 rows per fetch. You might improve performance a little by increasing the array fetch size - but probably not by more than a factor of 2.
You'll notice that even though you record 163 parse calls for the second statement the number of " Misses in library cache during parse" is zero - so the parse calls are pretty irrelevant, the cursor is being re-used.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
-
Oracle 9i Performance Issue - High Physical Reads
Dear All,
I have an Oracle 9i Release 9.2.0.5.0 database on HP-UX. I ran a query and got the following output. Can anybody have a look and advise what to do in this situation? We have performance issues.
Many thanks in advance
Buffer Pool Advisory for DB: DBPR Instance: DBPR End Snap: 902
-> Only rows with estimated physical reads >0 are displayed
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
D 416 .1 51,610 4.27 1,185,670,652
D 832 .2 103,220 2.97 825,437,374
D 1,248 .3 154,830 2.03 563,139,985
D 1,664 .4 206,440 1.49 412,550,232
D 2,080 .5 258,050 1.32 366,745,510
D 2,496 .6 309,660 1.23 340,820,773
D 2,912 .7 361,270 1.14 317,544,771
D 3,328 .8 412,880 1.09 301,680,173
D 3,744 .9 464,490 1.04 288,191,418
D 4,096 1.0 508,160 1.00 276,929,627
Hi,
Actually, you didn't give an exact problem statement.
Your database seems to be I/O bound. OK, try the following one by one:
1. Identify the full-table-scan (FTS) queries and try to create optimal indexes (guided by the disk-read figures) for the problem queries.
2. To reduce the 276M physical reads, allocate more memory to db_cache_size. Try 8 GB initially; depending on the buffer pool advisory, you can increase it further if the box has more memory.
3. As a next step, configure the KEEP and RECYCLE pools to get the benefit of reduced I/O from multiple buffer pools, and assign objects to the KEEP/RECYCLE pools.
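A way to read the advisory quantitatively is to look at the physical reads saved per extra megabyte at each step (figures copied from the advisory output above). The steep drop-off shows diminishing returns, and since the advisory only extends to the current 4,096 MB, any growth beyond that is extrapolation that should be re-validated with a fresh advisory after each increase:

```python
# (size_mb, estimated_physical_reads) pairs from the advisory output.
advisory = [
    (416, 1_185_670_652),
    (832, 825_437_374),
    (1248, 563_139_985),
    (1664, 412_550_232),
    (2080, 366_745_510),
    (2496, 340_820_773),
    (2912, 317_544_771),
    (3328, 301_680_173),
    (3744, 288_191_418),
    (4096, 276_929_627),  # current size (read factor 1.0)
]

# Reads saved per extra MB for each step: the first step saves
# ~866k reads/MB, the last step only ~32k reads/MB.
marginal = [
    (b_size, (a_reads - b_reads) / (b_size - a_size))
    for (a_size, a_reads), (b_size, b_reads) in zip(advisory, advisory[1:])
]
for size_mb, saved_per_mb in marginal:
    print(size_mb, round(saved_per_mb))
```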
Thanks, -
Performance issue while fetching metadata from Informatica Repository
I'm working with Informatica 8.6 (an ETL tool), which stores its metadata in its own repository; using its Mapping SDK APIs I'm developing a Java application that fetches objects from the repository.
For this purpose, using "mapfwk.jar", I first connect to the repository with the RepositoryConnectionManager class, and then fetch the metadata with the getFolder, getSource and getTarget functions.
Issue: the program takes too much time to fetch the folders. The time taken depends on the number of metadata objects present, i.e. as the object count increases, so does the time.
Please advise how to reduce the time taken to fetch metadata from the repository.
Source Code:
#1 - Code for connecting to repository
protected static String PC_CLIENT_INSTALL_PATH = "E:\\Informatica\\PowerCenter8.6.0\\client\\bin";
protected static String TARGET_REPO_NAME = "test_rep";
protected static String REPO_SERVER_HOST = "blrdxp-nfadate";
protected static String REPO_SERVER_PORT = "6001";
protected static String ADMIN_USERNAME = "Administrator";
protected static String ADMIN_PASSWORD = "Administrator";
protected static String REPO_SERVER_DOMAIN_NAME = "Domain_blrdxp-nfadate";
protected void initializeRepositoryProps(){
CachedRepositoryConnectionManager rpMgr = new CachedRepositoryConnectionManager(new PmrepRepositoryConnectionManager());
RepoProperties repoProp = new RepoProperties();
repoProp.setProperty(RepoPropsConstant.PC_CLIENT_INSTALL_PATH, PC_CLIENT_INSTALL_PATH);
repoProp.setProperty(RepoPropsConstant.TARGET_REPO_NAME, TARGET_REPO_NAME);
repoProp.setProperty(RepoPropsConstant.REPO_SERVER_DOMAIN_NAME, REPO_SERVER_DOMAIN_NAME);
repoProp.setProperty(RepoPropsConstant.REPO_SERVER_HOST, REPO_SERVER_HOST);
repoProp.setProperty(RepoPropsConstant.REPO_SERVER_PORT, REPO_SERVER_PORT);
repoProp.setProperty(RepoPropsConstant.ADMIN_USERNAME, ADMIN_USERNAME);
repoProp.setProperty(RepoPropsConstant.ADMIN_PASSWORD, ADMIN_PASSWORD);
rep.setProperties(repoProp);
rep.setRepositoryConnectionManager(rpMgr);
}
#2 - Code for fetching metadata
Vector<Folder> rep_FetchedFolders = new Vector<Folder>();
public void fetchRepositoryFolders(){
initializeRepositoryProps();
System.out.println("Repository Properties set");
//To fetch Folder
Vector<Folder> folders = new Vector<Folder>();
folders = (Vector<Folder>)rep.getFolder();
for(int i=1 ; i < folders.size(); i++){
Folder t_folder = new Folder();
t_folder.setName(((Folder)folders.get(i)).getName());
Vector listOfSources = ((Folder)folders.get((i))).getSource();
//To fetch Sources from folder
for(int b=0; b<listOfSources.size();b++){
Source src = ((Source)listOfSources.get(b));
t_folder.addSource(src);
Vector listOfTargets = ((Folder)folders.get((i))).getTarget();
//To fetch Sources from folder
for(int b=0; b<listOfTargets.size();b++){
Target trg = ((Target)listOfTargets.get(b));
t_folder.addTarget(trg);
rep_FetchedFolders.addElement(t_folder);
}
Hi neel007,
Just use a List instead of a Vector, it's more performant :
List<Folder> rep_FetchedFolders = new ArrayList<Folder>();
If you need to synchronize your list, then
List<Folder> rep_FetchedFolders = Collections.synchronizedList(new ArrayList<Folder>());
Also, if you're using Java 5 or higher and if you're sure listOfTargets contains only Target objects, instead of this
for(int b=0; b<listOfTargets.size();b++){
Target trg = ((Target)listOfTargets.get(b));
t_folder.addTarget(trg);
}
you may do this :
for (Target trg : listOfTargets) {
t_folder.addTarget(trg);
}
Edited by: Chicon on May 18, 2009 7:29 AM
Edited by: Chicon on May 18, 2009 7:35 AM
-
Performance issue - High disk I/O
Hi All,
we have Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production running on REd Hat EE 4
We are having some high disk I/O issues after doing some schema changes last week.
I checked some wait events and found pmon timer and smon timer at the top all the time, but I am not getting any clue as to the reason behind it.
If anyone has faced such an issue, please advise what the reason could be.
SELECT a.SID, a.event, a.wait_time, a.seconds_in_wait, a.state
FROM v$session_wait a, v$session b
WHERE a.SID = b.SID AND event <> 'SQL*Net message from client'
ORDER BY seconds_in_wait DESC
SID EVENT WAIT_TIME SECONDS_IN_WAIT STATE
1 pmon timer 0 4090 WAITING
8 smon timer 0 1209 WAITING
9 rdbms ipc message 0 799 WAITING
regards
Vinay
To add some more information to this:
The disk activity before the schema changes used to be somewhere around 30-45%, but now it has suddenly gone to 70-80%.
The schema changes include the addition of some complex views, tables and indexes,
but I haven't seen any considerable change in the size of the newly added objects; altogether they are less than 1.5 GB, and our total database size is 700 GB.
I took snapshots over a period of time and found the top event to be direct path write.
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
direct path write 1,444,566 495,020 38.06
direct path read 677,912 139,936 10.76
PX Deq: Execution Msg 1,252 129,432 9.95
db file scattered read 1,235,215 117,431 9.03
db file sequential read 912,288 98,714 7.59
Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
direct path write 3,923,608 1,218,268 33.98
db file scattered read 1,770,687 533,507 14.88
direct path read 3,676,295 468,525 13.07
db file sequential read 1,911,892 331,635 9.25
SQL*Net more data to client 7,126,163 188,606 5.26
Considerably slower response has been reported on the application, but it is still usable.
Regards
Vinay -
Performance issue with high CPU and IO
Hi guys,
I am encountering huge user response time on a production system and I don’t know how to solve it.
Doing some extra tests and using the instrumentation that we have in the code we concluded that the DB is the bottleneck.
We generated some AWR reports and noticed CPU was among the top wait events. We also noticed that, in a seemingly random manner, some simple SQL statements take a long time to execute. We activated SQL trace on the system and saw that very simple SQLs (a unique-index access on a single table) have huge execution times, up to 9 s.
In the trace file the time was concentrated in the fetch phase: 9.1 s CPU and 9.2 s elapsed,
with no waits, or only very small ones, for this specific SQL.
It seems like the bottleneck is the CPU, but at that point there were very few processes running on the DB. Why would we have such a big CPU cost on a simple select? This is a machine with 128 cores. We get quicker responses on machines smaller and busier than this one.
We noticed that we had a huge db_cache_size (12G), and after scaling it down we saw some improvement, but not enough. How can I prove that there is a link between high CPU and a big cache size? (There were no waits involved in the SQL execution.) What can we do if we need a big DB cache size?
The second issue: I tried to execute a SQL statement on a big table (a full table scan, no join). Again, on the smaller machine it runs in 30 seconds, while on this machine it runs in 1038 seconds.
Also generated a trace for this SQL on the problematic machine:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 402.08 1038.31 1842916 6174343 0 1
total 3 402.08 1038.32 1842916 6174343 0 1
db file sequential read 12419 0.21 40.02
i/o slave wait 135475 0.51 613.03
db file scattered read 135475 0.52 675.15
log file switch completion 5 0.06 0.18
latch: In memory undo latch 6 0.00 0.00
latch: object queue header operation 1 0.00 0.00
********************************************************************************
The high CPU is present here also, but here I have a huge wait on db file scattered read.
Looking at the session running the select, the average wait for db file scattered read was 0.5 s; on the other machine it is around 0.07 s.
I thought this was an I/O issue. I did some I/O tests at OS level, and the read and write operations seem very fast, much faster than on the machine with the smaller average wait. Why the difference in waits?
One difference between these two DBs is that the problematic one has a db block size of 16K while the other has 8K.
I received some reports done at OS level on CPU and I/O usage on the problematic machine (in normal operation). It seems the CPU is heavily used while I/O stays very low.
On the other machine, the smaller and faster one, it is the other way around.
What is the problem here? How can I test further? Can I link the high CPU to low/slow I/O?
We have 10g on Sun OS with ASM.
Thanks in advance.
Yes, there are many things you can and should do to isolate this. But first check that MOS note Poor Performance With Oracle9i and 10g Releases When Using Dynamic Intimate Shared Memory (DISM) [ID 1018855.1] isn't messing you up to start with.
Also, be sure and post exact patch levels for both Oracle and OS.
Be sure and check all your I/O settings and see what MOS has to say about those.
Are you using ASSM? See Long running update
Since it got a little better when you shrank the SGA, that might indicate (wild speculation here) something like: one of the problems is simply too much thrashing within the SGA, as Oracle decides that full-scanning "small" objects in memory is faster than range scans (or whatever) from disk, overloading the CPU and not allowing it to ask for other full scans from I/O. Possibly made worse by row-level locking, or some other app issue that just uses too much CPU.
You probably have more than one thing wrong. High fetch count might mean you need to adjust the array size on the clients.
Now that that is all out of the way, if you still haven't found the problem, go through http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Edit: Oh, see Solaris 10 memory management conflicts with Automatic PGA Memory Management [ID 460424.1] too.
Edited by: jgarry on Nov 15, 2011 1:45 PM -
Performance Issue in ABAP coding
Hi Experts,
I am facing a performance issue when fetching data from the view IAOM_CRM_AUFK. My approach is to fetch the internal order from that view by passing OBJECT_ID, which is not part of the primary key. Also, the field IHREZ is not part of the primary key of table VBKD.
My code is under....
select EXT_OBJECT_ID
AUFNR
OBJECT_ID
into table IT_EXAT
from IAOM_CRM_AUFK
client specified
for all entries in IT_VBKD
where OBJECT_ID = IT_VBKD-IHREZ.
Please suggest a way by which I can fetch a lot of data without a performance issue.
Hi,
I have declared IHREZ of table VBKD this way:
IHREZ type IAOM_CRM_AUFK-OBJECT_ID,
I know that OBJECT_ID gives poor access to the view IAOM_CRM_AUFK, but I have no alternative: the IHREZ value I get from the VBKD table only matches the field OBJECT_ID of view IAOM_CRM_AUFK.
I have to fetch the internal order number on the basis of the sales order number. For this reason I first query the VBKD table with the sales order number to get IHREZ, then pass IHREZ to the field OBJECT_ID of view IAOM_CRM_AUFK; from that I get AUFNR, which is the internal order.
Please suggest how I can improve the execution speed of my code. -
Performance issue when using select count on large tables
Hello Experts,
I have a requirement where I need to get a count of rows from a database table. Later on I need to display the count in ALV format.
As per my requirement, I have to use this select count inside nested loops.
Below is the count snippet:
LOOP at systems assigning <fs_sc_systems>.
LOOP at date assigning <fs_sc_date>.
SELECT COUNT( DISTINCT crmd_orderadm_i~header )
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client "MANDT is referred to as client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
INTO w_sc_count
WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
AND <fs_sc_date>-end_timestamp
AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
endloop.
endloop.
In the above code snippet,
<fs_sc_systems>-sys_name is having the system name,
<fs_sc_date>-start_timestamp is having the start date of month
and <fs_sc_date>-end_timestamp is the end date of month.
Also the data in tables crmd_orderadm_i and bbp_pdigp is very large and it increases every day.
Now,the above select query is taking a lot of time to give the count due to which i am facing performance issues.
Can any one pls help me out to optimize this code.
Thanks,
Suman
Hi Choudhary Suman,
Try this:
SELECT crmd_orderadm_i~header
INTO TABLE it_header " internal table
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
FOR ALL ENTRIES IN date
WHERE crmd_orderadm_i~created_at BETWEEN date-start_timestamp
AND date-end_timestamp
AND bbp_pdigp~zz_scsys EQ date-sys_name.
SORT it_header BY header.
DELETE ADJACENT DUPLICATES FROM it_header
COMPARING header.
describe table it_header lines v_lines.
Hope this information is help to you.
Regards,
José -
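Independent of the FOR ALL ENTRIES rewrite, the structural fix is the same in any language: fetch the joined rows once, then bucket and count distinct headers in memory instead of issuing one aggregate query per loop iteration. A language-neutral sketch in Python (the sample rows and field roles are hypothetical, mirroring the ABAP above):

```python
from collections import defaultdict

# Rows as they might come back from ONE join of crmd_orderadm_i and
# bbp_pdigp: (header_guid, created_at, zz_scsys). Hypothetical data.
rows = [
    ("H1", "20240105", "SYS_A"),
    ("H1", "20240106", "SYS_A"),  # same header, counted once
    ("H2", "20240110", "SYS_A"),
    ("H3", "20240203", "SYS_B"),
]

def month_of(timestamp):
    return timestamp[:6]  # e.g. "202401"

# One pass over the data instead of a SELECT COUNT per loop iteration.
buckets = defaultdict(set)
for header, created_at, system in rows:
    buckets[(system, month_of(created_at))].add(header)

counts = {key: len(headers) for key, headers in buckets.items()}
print(counts)
# {('SYS_A', '202401'): 2, ('SYS_B', '202402'): 1}
```

The trade-off is memory: this works when the joined result set for the full date range fits in an internal table, which is the same assumption the FOR ALL ENTRIES suggestion makes.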
Fetching from LDB PNP - Performance Issue
Hi,
My apologies if my question sounds too basic, but I have not worked in HR before. I researched it well on SDN and created a solution, but just wanted to check with you once whether this is the correct way in HR.
Requirement: get data from a few infotypes and display a report. The report uses infotypes 0000, 0001, 0002, 0008 and 0102. On the selection screen I have Grievances fields from infotype 0102, namely SUBTY (subtype) and GRSTY (reason), plus PERNR and employment status.
Issue: performance suffers when no PERNR is entered on the selection screen but GRSTY is entered as 04 and SUBTY as 1. Basically all records (PERNRs) are fetched and only then filtered on the given selection criteria of the two infotype 0102 fields (subtype and reason), creating the performance issue (seen at runtime: all PERNRs are fetched).
Solution I proposed: select the PERNRs from table PA0102 that satisfy the selection criteria before the GET PERNR event, and store those PERNRs in the PNPPERNR[] range. The performance changes are visible and better than before.
Question: have I done it the right way, or is there another way to achieve this? This is what we would do in general ABAP, so I just wanted to check with the experts whether I've done it correctly.
Thanks,
Santosh
Hi Ramesh,
Thanks for your time. Actually, that's the problem: the functional team wants to see all data at once for a particular reason code and, for now, is not agreeing to other options. I tried many options but in the end came up with this approach. Authorizations are required, which is why I had to include GET PERNR as well.
How is this handled in general scenarios where we do not have filtering criteria matching the PERNR structure?
Thanks,
Santosh -
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select which I'm struggling to explain: why does it take 6 seconds, and why does it need to get > 24,000 blocks?
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using bind values from exe #2, I get small number of gets (< 10), and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
Hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The tablespaces used for all subpartitions are LOCAL extent management with UNIFORM extent size and AUTO segment space management.
regards
Ivan -
BEA Weblogic performance issue
Hi All,
We are using BEA WebLogic 10.2.
We have some performance issues in our production environment.
We face problems like:
1. Our BEA query takes maximum CPU utilization in the Oracle database.
The processes are content upload, update, expiry and report fetching.
2. When we run the BEA query for a report it takes around 10 minutes to run and the content count is around 70,000 -
is this OK?
3. Sometimes we get a socket connection error.
4. Sometimes we get a portal datasource connection pool error; currently the pool size is set to 30.
5. Almost every process is slow.
Can anyone help me for optimization?
From BBA,
we identified the query which is taking high CPU utilization.
SELECT DISTINCT B.NODE_ID, B.NODE_VERSION_ID, B.CM_MODIFIED_DATE, B.MODIFIED_BY, B.VERSION_COMMENT,
       B.LIFECYCLE_STATUS, A.OBJECT_CLASS_ID, A.REPOSITORY_NAME
FROM CMV_NODE A,
     CMV_NODE_VERSION B,
     (SELECT NODE_ID, MAX(CAST(NODE_VERSION_ID AS INTEGER)) NVI
      FROM CMV_NODE_VERSION
      GROUP BY NODE_ID) A1,
     (SELECT B.NODE_ID, B.NODE_VERSION_ID, B.CM_MODIFIED_DATE, B.MODIFIED_BY, B.VERSION_COMMENT,
             B.LIFECYCLE_STATUS, A.OBJECT_CLASS_ID, A.REPOSITORY_NAME
      FROM CMV_NODE A, CMV_NODE_VERSION B, CMV_NODE_VERSION_PROPERTY N1, CMV_PROPERTY P1, CMV_VALUE_V V1
      WHERE A.NODE_ID = B.NODE_ID
        AND B.NODE_ID = N1.NODE_ID
        AND B.NODE_VERSION_ID = N1.NODE_VERSION_ID
        AND N1.PROPERTY_ID = P1.PROPERTY_ID
        AND P1.PROPERTY_ID = V1.PROPERTY_ID
        AND P1.PROPERTY_NAME = :1
        AND (UPPER(V1.TEXT_VALUE) LIKE :2 ESCAPE :"SYS_B_0")
      UNION
      SELECT B.NODE_ID, B.NODE_VERSION_ID, B.CM_MODIFIED_DATE, B.MODIFIED_BY, B.VERSION_COMMENT,
             B.LIFECYCLE_STATUS, A.OBJECT_CLASS_ID, A.REPOSITORY_NAME
      FROM CMV_NODE A, CMV_NODE_VERSION B, CMV_NODE_
Edited by: Arvind Rai on Apr 13, 2010 12:49 PM
If your DB CPU is pegged, then anything that does DB operations will take time (including some portal operations). If you take a thread dump you should be able to see them waiting on the DB.
"When we run the BEA query for a report it takes around 10 min to run and the content count is around 70,000" - No. If it causes your CPU to max out, it isn't. However, your content items aren't that many, so what query are you running? You could always export and import just the content tables and run your queries on some other machine (assuming this is the cause).
"3. Sometimes we get a socket connection error." - More information needed.
"4. Sometimes we get a portal datasource connection pool error; currently it is set to 30." - What is the error?
"5. Almost every process is slow." - If the DB is maxed, this is what you should expect.
When is the query fired? Are your indexes created? I assume caching is not much good to you since these are reporting queries? -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the <font face="courier">base_query</font> function. Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something? The <font face="courier">processor</font> function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions .
Edit: The underlying query returns 500,000 rows in about 3 minutes, so there are no performance issues with the query itself.
Many thanks in advance.
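The comparison can be driven by an anonymous block along these lines (a sketch; the literal arguments 'A' and 'B' are placeholders for real colA/colB values, and timings are wall-clock hundredths of a second from DBMS_UTILITY.GET_TIME):

```sql
-- Fetch every row from each procedure's ref cursor and print elapsed time.
SET SERVEROUTPUT ON
DECLARE
  l_rs      pipeline_example.resultset_typ;
  l_colC    VARCHAR2(200);
  l_de      NUMBER;
  l_ed      NUMBER;
  l_de_one  NUMBER;
  l_de_zero NUMBER;
  l_start   PLS_INTEGER;
BEGIN
  l_start := DBMS_UTILITY.GET_TIME;
  pipeline_example.with_pipeline('A', 'B', l_rs);
  LOOP
    FETCH l_rs INTO l_colC, l_de, l_ed, l_de_one, l_de_zero;
    EXIT WHEN l_rs%NOTFOUND;
  END LOOP;
  CLOSE l_rs;
  DBMS_OUTPUT.PUT_LINE('with_pipeline: ' || (DBMS_UTILITY.GET_TIME - l_start) || ' cs');

  l_start := DBMS_UTILITY.GET_TIME;
  pipeline_example.no_pipeline('A', 'B', l_rs);
  LOOP
    FETCH l_rs INTO l_colC, l_de, l_ed, l_de_one, l_de_zero;
    EXIT WHEN l_rs%NOTFOUND;
  END LOOP;
  CLOSE l_rs;
  DBMS_OUTPUT.PUT_LINE('no_pipeline: ' || (DBMS_UTILITY.GET_TIME - l_start) || ' cs');
END;
/
```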
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
Edited by: Earthlink on Nov 14, 2010 11:31 AM
Edited by: Earthlink on Nov 14, 2010 11:32 AM
Edited by: Earthlink on Nov 20, 2010 12:04 PM
Edited by: Earthlink on Nov 20, 2010 12:54 PM
Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
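As a concrete sketch, an extended SQL trace with wait events can be switched on for the test session via DBMS_MONITOR (requires the appropriate privilege; the resulting trace file is then formatted with tkprof):

```sql
-- Tag the trace file, enable trace with waits and binds, run the test,
-- then disable the trace and format the .trc file with tkprof.
ALTER SESSION SET tracefile_identifier = 'pipeline_test';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE)
-- ... run the statement or procedure under test here ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE
-- On the server: tkprof <tracefile>.trc out.txt sys=no sort=exeela
```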
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance issue with pl/sql code
Hi Oracle Gurus,
I am in need of your recommendations for a performance issue that I am facing in the production environment. There is a PL/SQL procedure which executes with different elapsed times on different executions. Elapsed times are 30 minutes, 40 minutes, 65 minutes, 3 minutes, 3 seconds.
The expected elapsed time is a maximum of 3 minutes. (But sometimes it took only 3 seconds too...!)
The output of all the different executions is the same, that is, deletion and insertion of 12K records into a table.
Here is the auto trace details of two different scenarios.
Slow execution - 33.65 minutes
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 1,712,343 1,712,342.6 41.4
CPU Time (ms) 1,679,689 1,679,688.6 44.7
Executions 1 N/A N/A
Buffer Gets ########## 167,257,973.0 86.9
Disk Reads 1,284 1,284.0 0.4
Parse Calls 1 1.0 0.0
User I/O Wait Time (ms) 4,264 N/A N/A
Cluster Wait Time (ms) 3,468 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 6 N/A N/A
Invalidations 0 N/A N/A
Version Count 4 N/A N/A
Sharable Mem(KB) 85 N/A N/A
-------------------------------------------------------------
Fast Execution : 5 seconds
Stat Name Statement Per Execution % Snap
Elapsed Time (ms) 41,550 41,550.3 0.7
CPU Time (ms) 40,776 40,776.3 1.0
Executions 1 N/A N/A
Buffer Gets 2,995,677 2,995,677.0 4.2
Disk Reads 22 22.0 0.0
Parse Calls 1 1.0 0.0
User I/O Wait Time (ms) 162 N/A N/A
Cluster Wait Time (ms) 621 N/A N/A
Application Wait Time (ms) 0 N/A N/A
Concurrency Wait Time (ms) 55 N/A N/A
Invalidations 0 N/A N/A
Version Count 4 N/A N/A
Sharable Mem(KB) 85 N/A N/A
-------------------------------------------------------------
For security reasons, I cannot share the actual code. It's report-generating code that deletes and loads the data into a table using an INSERT INTO ... SELECT statement.
Delete from table;
cursor X to get the master data (98 records)
For each X loop
  insert into tableA select * from tables where a = X.a and b = X.b and c = X.c ... ;
  -- 12K records inserted on average
  insert into tableB select * from tables where a = X.a and b = X.b and c = X.c ... ;
  -- 12K records inserted on average
end loop;
1. The select query is complex with bind variables (the explain plan varies for each set of values).
2. I have checked the tablespace of the tables involved, it is 82% used. DBA confirmed that it is not the reason.
3. Disk reads are high during long execution.
4. At long running times, I can see a db sequential read wait event on a index object. This index is on the table where data is inserted.
All I need to find out is why this code takes 3 seconds on one run and 60 minutes on another, on the same day and on consecutive executions.
Is there any other approach to find the root cause of this behaviour and to fix it? Kindly advise.
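As a side note on the pseudocode above: a loop of per-master-row INSERT ... SELECT statements can often be collapsed into a single set-based statement by joining to the master query, which gives the optimizer one plan to get right instead of 98 executions with varying bind values (a sketch with hypothetical table names, since the real code was not shared):

```sql
-- Hypothetical set-based rewrite: one INSERT ... SELECT joined to the
-- 98-row master query instead of one INSERT per master row.
INSERT INTO tableA
SELECT t.*
FROM   tables t
       JOIN master_query x
         ON t.a = x.a
        AND t.b = x.b
        AND t.c = x.c;
```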
Thanks in advance for your help.
Regards,
Hari
Edited by: BluShadow on 26-Sep-2012 08:24
edited to add {noformat}{noformat} tags. You've been a member long enough to know to do this yourself... so please do so in future. ({message:id=9360002})
Hariharan ST wrote:
Hi Oracle Gurus,
I am in need of your recommendations for a performance issue that I am facing in the production environment. There is a PL/SQL procedure which executes with different elapsed times on different executions.
Please re-edit your post and add some code tags around the trace information. This would improve readability greatly and will help us to help you.
example
{code}
select * from dual;
{code}
Based upon your description I can imagine two things.
a) The execution plan for the select query changes frequently.
A typical reason can be out-of-date statistics.
b) Some locking / wait conflict, for example on a UK (unique key) index.
Are there any other operations going on while it is slow? If anybody inserts a value, then your session will wait if the same (PK/UK) value is also to be inserted.
Those wait events can be recognized using standard tools like Oracle SQL Developer or Enterprise Manager while the query is slow.
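While a slow run is in progress, the same wait information can also be pulled straight from the dictionary (a sketch; requires SELECT privilege on the V$ views):

```sql
-- Show non-idle waits and any blocking session during the slow run.
SELECT sid, serial#, username, event, wait_class,
       seconds_in_wait, blocking_session
FROM   v$session
WHERE  username IS NOT NULL
AND    wait_class <> 'Idle';
```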
Also go through the links that are in the FAQ. They tell you how to get better information for making a tuning request.
SQL and PL/SQL FAQ
Edited by: Sven W. on Sep 25, 2012 6:41 PM -
Performance issue while generating Query
Hi BI Gurus.
I am facing a performance issue while generating a query on 0IC_C03.
It has a variable (from & to) for generating the report for a particular time duration.
If the variable (from & to) fields are filled, then after taking a long time it shows a runtime error.
If the query is executed without filling the variable (which is optional), then the data is extracted from the beginning to the current date; this takes less time to execute.
After that the period has to be selected manually via the option "keep filter value". Please suggest how I can solve the error.
Regards
Ritika

Hi Ritika,
Welcome to SDN.
You have to check the following runtime segments using the ST03N tcode:
High Database Runtime
High OLAP Runtime
High Frontend Runtime
If it's high Database Runtime:
- check the aggregates or create aggregates on the cube; this will help you.
If it's high OLAP Runtime:
- check the user exits, if any.
- check whether hierarchies are used and fetched at a deep level.
If it's high Frontend Runtime:
- check if a very high number of cells and formattings are transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
For From and to date variables, create one more set and use it and try.
Regs,
VACHAN -
Performance issue on the sys.dba_audit_session
I have the following query, which is taking a long time and has a performance issue.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS
failed_count
FROM sys.dba_audit_session
WHERE returncode != 0
AND timestamp >= current_timestamp - TO_DSINTERVAL('0 0:30:00')
call count cpu elapsed disk query current rows
Parse 1 0.01 0.04 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 68.42 216.08 3943789 3960058 0 1
total 4 68.43 216.13 3943789 3960058 0 1
The view dba_audit_session is a select from the view dba_audit_trail. If you
look at the definition of dba_audit_trail, it does a CAST on the ntimestamp#
column, thereby disabling index access, because there is no function-based
index on ntimestamp#. I am not even sure a function-based index would
work to match what the view does.
cast ( /* TIMESTAMP */
(from_tz(ntimestamp#,'00:00') at local) as date),
To get index access the metric would have to avoid the use of the view. I have changed the query like this.
SELECT /*+ INDEX(a I_AUD3) */ TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD
HH24:MI:SS TZD') AS curr_timestamp, COUNT(userid) AS failed_count
FROM sys.aud$ a
WHERE returncode != 0
and action# between 100 and 102
AND ntimestamp# >= systimestamp at time zone 'GMT' - 30/1440
Is this the correct way to do it?
Could you comment on this?
The query is run by Grid Control (or DBConsole) to count the metric related to audited sessions, which is ON by default in 11g. To decrease the impact of this query you should purge the aud$ table regularly.
The best way is to use DBMS_AUDIT_MGMT to periodically purge the data older than "whatever date". If you don't need the audit information, you can simply truncate aud$.
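For example, a periodic purge of the standard audit trail with DBMS_AUDIT_MGMT could look like this (a sketch; the 30-day retention is an example value, and INIT_CLEANUP only needs to run once):

```sql
-- Initialize cleanup once, set the archive timestamp, then purge AUD$
-- rows older than that timestamp.
BEGIN
  IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(
           DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD)
  THEN
    DBMS_AUDIT_MGMT.INIT_CLEANUP(
      audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
      default_cleanup_interval => 24);  -- hours
  END IF;

  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '30' DAY);

  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/
```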