[u][b]Performance Tuning Help[/b][/u] : Repeating HUGE Select Statement...
I have a select statement that I am repeating 12 times, with the only difference between them being the date range that is grabbed. Essentially, I am grabbing the last 12 months of information from the same table. Here is a copy of one of the sections:
(select
a.account_id as account_id,
a.company_id as company_id,
a.account_number as acct_number,
to_char(a.trx_date, 'YYYY/MM') as month,
sum(a.amount_due_original) as amount
from
crystal.financial_vw a
where
a.cust_since_dt is not null
and a.cust_end_dt is null
and a.trx_date > add_months(sysdate, -13)
and a.trx_date <= add_months(sysdate, -12)
group by
a.account_id,
a.company_id,
a.account_number,
to_char(a.trx_date, 'YYYY/MM')) b
I am now looking to do some tuning on this and was wondering if anyone has any suggestions. My initial thought was to use cursors or some sort of in-memory storage to temporarily process the information into a pipe-delimited flat file.
"Don't need:
to_char(a.trx_date, 'YYYY/MM')"
Are you sure?
"Change to (just to make it easier to read):
a.trx_date between add_months(sysdate, -13)
and a.trx_date <= add_months(sysdate, -12)"
What? That's not even valid syntax, is it? Besides, the BETWEEN operator is inclusive (i.e. > add_months(sysdate, -13) is not the same as between add_months(sysdate, -13) ...).
"And be sure you have an index on:
cust_since_dt, cust_end_dt, trx_date in financial_vw."
What information did you base this conclusion on? Just because something is in the WHERE clause doesn't mean you should automatically throw an index on it. What if 90% of the rows satisfy those null/not-null criteria? What if there's only one year of data in this table? Are you certain an index would help pick out all the data for one month more efficiently than a full table scan?
My immediate question is: why are you breaking the data for each month up into separate subqueries like this at all? What is it that you're doing with these subqueries that you don't believe can be accomplished with a single grouped query?
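On that last point, a single grouped query over the whole range does all twelve months in one pass, because the TO_CHAR(..., 'YYYY/MM') grouping already splits the rows by month. A sketch (assuming the twelve windows together cover months -13 through -1; adjust the boundaries to match the windows you actually use):

```sql
-- One scan of the view instead of twelve: the month label produced by
-- TO_CHAR does the splitting that the twelve date-range predicates did.
select
    a.account_id,
    a.company_id,
    a.account_number               as acct_number,
    to_char(a.trx_date, 'YYYY/MM') as month,
    sum(a.amount_due_original)     as amount
from
    crystal.financial_vw a
where
    a.cust_since_dt is not null
    and a.cust_end_dt is null
    and a.trx_date >  add_months(sysdate, -13)
    and a.trx_date <= add_months(sysdate, -1)
group by
    a.account_id,
    a.company_id,
    a.account_number,
    to_char(a.trx_date, 'YYYY/MM');
```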
Similar Messages
-
hello friends,
I am supposed to use ST05 and SE30 to do performance tuning, and also perform cost estimation. Can someone please help me understand? I am not that good at reading large paragraphs, so please do not give me links to help.sap.
Thank you.
Hi,
SE30 is the runtime analysis tool. It gives you a detailed graph about your program: ABAP time, database time, application time, and so on.
ST05 is the SQL trace; it shows you the individual SELECT statements in your program and whether a proper index is being used or not.
Other performance tips:
check for SELECT statements inside loops and replace them with READ statements on a pre-fetched internal table;
use BINARY SEARCH in READ statements;
use a proper index in the WHERE condition of SELECT statements.
To run ST05: go to ST05, switch the trace on, execute your program, come back to ST05, switch the trace off, and list the trace. You get summary details of the SELECT queries, highlighted in pink or another color.
Thanks -
Query Performance Tuning - Help
Hello Experts,
Good Day to all...
TEST@ora10g>select * from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
"CORE 10.2.0.4.0 Production"
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SELECT fa.user_id,
fa.notation_type,
MAX(fa.created_date) maxDate,
COUNT(*) bk_count
FROM book_notations fa
WHERE fa.user_id IN
( SELECT user_id
FROM
( SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */ f2.user_id,
MAX(f2.notatn_id) f2_annotation_id
FROM book_notations f2,
title_relation tdpr
WHERE f2.user_id IN ('100002616221644',
'100002616221645',
'100002616221646',
'100002616221647',
'100002616221648')
AND f2.pack_id=tdpr.pack_id
AND tdpr.title_id =93402
GROUP BY f2.user_id
ORDER BY 2 DESC)
WHERE ROWNUM <= 10)
GROUP BY fa.user_id,
fa.notation_type
ORDER BY 3 DESC;
The cost of the query is too high...
Below is the explain plan of the query
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 29 | 1305 | 52 (10)| 00:00:01 |
| 1 | SORT ORDER BY | | 29 | 1305 | 52 (10)| 00:00:01 |
| 2 | HASH GROUP BY | | 29 | 1305 | 52 (10)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID | book_notations | 11 | 319 | 4 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 53 | 2385 | 50 (6)| 00:00:01 |
| 5 | VIEW | VW_NSO_1 | 5 | 80 | 29 (7)| 00:00:01 |
| 6 | HASH UNIQUE | | 5 | 80 | | |
|* 7 | COUNT STOPKEY | | | | | |
| 8 | VIEW | | 5 | 80 | 29 (7)| 00:00:01 |
|* 9 | SORT ORDER BY STOPKEY | | 5 | 180 | 29 (7)| 00:00:01 |
| 10 | HASH GROUP BY | | 5 | 180 | 29 (7)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID | book_notations | 5356 | 135K| 26 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 6917 | 243K| 27 (0)| 00:00:01 |
| 13 | MAT_VIEW ACCESS BY INDEX ROWID| title_relation | 1 | 10 | 1 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | IDX_TITLE_ID | 1 | | 1 (0)| 00:00:01 |
| 15 | INLIST ITERATOR | | | | | |
|* 16 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 5356 | | 4 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 746 | | 1 (0)| 00:00:01 |
Table Details
SELECT COUNT(*) FROM book_notations; --111367
Columns
user_id -- nullable field - VARCHAR2(50 BYTE)
pack_id -- NOT NULL --NUMBER
notation_type-- VARCHAR2(50 BYTE) -- nullable field
CREATED_DATE - DATE -- nullable field
notatn_id - VARCHAR2(50 BYTE) -- nullable field
Index
FBK_AN_ID_IDX - Non unique - Composite columns --> (user_id and pack_id)
SELECT COUNT(*) FROM title_relation; --12678
Columns
pack_id - not null - number(38) - PK
title_id - not null - number(38)
Index
IDX_TITLE_ID - Non Unique - TITLE_ID
Please help...
Thanks...
Linus wrote:
Thanks Bravid for your reply; highly appreciate that.
So as you say, index creation on the NULL column doesn't have any impact. OK, fine.
What happens to the execution plan, performance and the stats when you remove the index hint?
Find below the Execution Plan and Predicate information
"PLAN_TABLE_OUTPUT"
"Plan hash value: 126058086"
"| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |"
"| 0 | SELECT STATEMENT | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 1 | SORT ORDER BY | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 2 | HASH GROUP BY | | 25 | 1125 | 55 (11)| 00:00:01 |"
"| 3 | TABLE ACCESS BY INDEX ROWID | book_notations | 10 | 290 | 4 (0)| 00:00:01 |"
"| 4 | NESTED LOOPS | | 50 | 2250 | 53 (8)| 00:00:01 |"
"| 5 | VIEW | VW_NSO_1 | 5 | 80 | 32 (10)| 00:00:01 |"
"| 6 | HASH UNIQUE | | 5 | 80 | | |"
"|* 7 | COUNT STOPKEY | | | | | |"
"| 8 | VIEW | | 5 | 80 | 32 (10)| 00:00:01 |"
"|* 9 | SORT ORDER BY STOPKEY | | 5 | 180 | 32 (10)| 00:00:01 |"
"| 10 | HASH GROUP BY | | 5 | 180 | 32 (10)| 00:00:01 |"
"| 11 | TABLE ACCESS BY INDEX ROWID | book_notations | 5875 | 149K| 28 (0)| 00:00:01 |"
"| 12 | NESTED LOOPS | | 7587 | 266K| 29 (0)| 00:00:01 |"
"| 13 | MAT_VIEW ACCESS BY INDEX ROWID| title_relation | 1 | 10 | 1 (0)| 00:00:01 |"
"|* 14 | INDEX RANGE SCAN | IDX_TITLE_ID | 1 | | 1 (0)| 00:00:01 |"
"| 15 | INLIST ITERATOR | | | | | |"
"|* 16 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 5875 | | 4 (0)| 00:00:01 |"
"|* 17 | INDEX RANGE SCAN | FBK_AN_ID_IDX | 775 | | 1 (0)| 00:00:01 |"
"Predicate Information (identified by operation id):"
" 7 - filter(ROWNUM<=10)"
" 9 - filter(ROWNUM<=10)"
" 14 - access(""TDPR"".""TITLE_ID""=93402)"
" 16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
" ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
" ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
" 17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
Statistics
BEGIN
DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
END;
"COLUMN_NAME" "NUM_DISTINCT" "NUM_BUCKETS" "HISTOGRAM"
"NOTATION_ID" 110269 1 "NONE"
"USER_ID" 213 212 "FREQUENCY"
"PACK_ID" 20 20 "FREQUENCY"
"NOTATION_TYPE" 8 8 "FREQUENCY"
"CREATED_DATE" 87 87 "FREQUENCY"
"CREATED_BY" 1 1 "NONE"
"UPDATED_DATE" 2 1 "NONE"
"UPDATED_BY" 2 1 "NONE"
After removing the hint, the query still shows the same "COST".
Autotrace
recursive calls 1
db block gets 0
consistent gets 34706
physical reads 0
redo size 0
bytes sent via SQL*Net to client 964
bytes received via SQL*Net from client 1638
SQL*Net roundtrips to/from client 2
sorts (memory) 3
sorts (disk) 0
Output of query
"USER_ID" "NOTATION_TYPE" "MAXDATE" "COUNT"
"100002616221647" "WTF" 08-SEP-11 20000
"100002616221645" "LOL" 08-SEP-11 20000
"100002616221644" "OMG" 08-SEP-11 20000
"100002616221648" "ABC" 08-SEP-11 20000
"100002616221646" "MEH" 08-SEP-11 20000
Thanks...
I still don't know what we're working towards at the moment. What is the current run time? What is the expected run time?
I can't tell you if there's a better way to write this query or if indeed there is another way to write this query because I don't know what it is attempting to achieve.
I can see that you're accessing 100k rows from a 110k row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
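One way to test that hypothesis is to hint the outer access to a full scan and compare the autotrace figures (the hint here is a suggestion to try, not a guaranteed win; whether it helps depends on your data distribution):

```sql
-- Same query as posted, but asking for a full scan of the outer
-- book_notations access instead of the FBK_AN_ID_IDX lookups.
SELECT /*+ FULL(fa) */
       fa.user_id,
       fa.notation_type,
       MAX(fa.created_date) maxDate,
       COUNT(*) bk_count
FROM   book_notations fa
WHERE  fa.user_id IN
       (SELECT user_id
        FROM   (SELECT f2.user_id,
                       MAX(f2.notatn_id) f2_annotation_id
                FROM   book_notations f2,
                       title_relation tdpr
                WHERE  f2.user_id IN ('100002616221644', '100002616221645',
                                      '100002616221646', '100002616221647',
                                      '100002616221648')
                AND    f2.pack_id    = tdpr.pack_id
                AND    tdpr.title_id = 93402
                GROUP BY f2.user_id
                ORDER BY 2 DESC)
        WHERE  ROWNUM <= 10)
GROUP BY fa.user_id, fa.notation_type
ORDER BY 3 DESC;
```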
David -
I moved our internal application from a P3 500 MHz CPU 1U server to a dual-core Xeon 1.6 GHz server.
The newer box has a larger HD, more RAM, and a beefier CPU, yet my application is running at the same speed or only a smidgen faster than on the old server. Oh, and I went from a Win2k, IIS 5.0 box to Windows Server 2003 running IIS 6 and MySQL 5.0.
I did a Windows Performance Monitor look-see. I added all the CF services and watched the graphs do their thing. I noticed that after I ran a query in my application, the JRun service Avg. Req Time is pinned at 100%. Even several minutes after inactivity on the box, this process is still at 100%. Anyone know if this is causing my performance woes? Anyone know what else I can look at to increase the speed of the app (barring a serious code rebuild)?
Thanks!
chris
"Anyone know what else I can look at to increase the speed of the app (barring a serious code rebuild)?"
There are some tweaks and tips, but I will let more knowledgeable folks speak to these. I am afraid your problem may be a code issue. From the brief description, I would be concerned there is some kind of memory leak happening in your code, and until you track it down and plug it up, no amount of memory or performance tuning is going to do you much good. -
Please help - NE where condition in SELECT statement
Dear experts,
I am posting a section of my codes here for your review on performance tuning.
In my second select statement, I used an "NE" where condition. I read somewhere in this forum that using an "NE" where condition is not a good decision for performance. What alternatives do I have to achieve the same purpose? May I use "NOT IN" here instead? Or do I use a LOOP for this (a rather manual way)?
Just to let you all know that I still consider myself quite inexperienced in ABAP - please also let me know how I can improve the programming techniques in the posted code here too.
I will be most glad to provide you with further information if needed - just let me know.
Many THANKS in advance!
IF p_noncis = 'X'. " Non CIS category of spend selected
" zfi_cis_mat_grp is a bespoke table that stores all CIS MATKL
" and it has two fields only - MANDT and MATKL
SELECT * FROM zfi_cis_mat_grp
INTO TABLE gt_cis_mat_grp.
IF gt_cis_mat_grp IS NOT INITIAL.
SELECT ebeln
ebelp
matkl
FROM ekpo
INTO TABLE gt_ekpo
FOR ALL ENTRIES IN gt_cis_mat_grp
WHERE matkl NE gt_cis_mat_grp-matkl. " NE where condition - is this OK?
ENDIF.
IF gt_ekpo IS NOT INITIAL.
IF s_sakto IS NOT INITIAL.
SELECT ebeln
ebelp
sakto
FROM ekkn
INTO TABLE gt_ekkn
FOR ALL ENTRIES IN gt_ekpo
WHERE ebeln = gt_ekpo-ebeln AND
ebelp = gt_ekpo-ebelp AND
sakto IN s_sakto.
IF gt_ekkn IS NOT INITIAL.
SELECT bukrs
lifnr
belnr
budat
cpudt
xblnr
ebeln
ebelp
zfbdt
zterm
zlspr
FROM bsik
INTO TABLE gt_bsik
FOR ALL ENTRIES IN gt_ekkn
WHERE bukrs IN s_bukrs AND
lifnr IN s_lifnr AND
budat IN s_budat AND
cpudt IN s_cpudt AND
xblnr IN s_xblnr AND
ebeln = gt_ekkn-ebeln AND
ebelp = gt_ekkn-ebelp AND
qsskz NE ''.
ENDIF.
ELSE.
SELECT bukrs
lifnr
belnr
budat
cpudt
xblnr
ebeln
ebelp
zfbdt
zterm
zlspr
FROM bsik
INTO TABLE gt_bsik
FOR ALL ENTRIES IN gt_ekpo
WHERE bukrs IN s_bukrs AND
lifnr IN s_lifnr AND
budat IN s_budat AND
cpudt IN s_cpudt AND
xblnr IN s_xblnr AND
ebeln = gt_ekpo-ebeln AND
ebelp = gt_ekpo-ebelp AND
qsskz NE ''.
ENDIF.
ENDIF.
ELSE. " Complete list of category of spend selected
SELECT bukrs
lifnr
belnr
budat
cpudt
xblnr
ebeln
ebelp
zfbdt
zterm
zlspr
FROM bsik
INTO TABLE gt_bsik
WHERE bukrs IN s_bukrs AND
lifnr IN s_lifnr AND
budat IN s_budat AND
cpudt IN s_cpudt AND
xblnr IN s_xblnr AND
qsskz NE ''.
ENDIF.
Hi,
If you want to remove the NE option then try this way:
SELECT bukrs
lifnr
belnr
budat
cpudt
xblnr
ebeln
ebelp
zfbdt
zterm
zlspr
FROM bsik
INTO TABLE gt_bsik
WHERE bukrs IN s_bukrs AND
lifnr IN s_lifnr AND
budat IN s_budat AND
cpudt IN s_cpudt AND
xblnr IN s_xblnr .
IF SY-SUBRC EQ 0.
Delete gt_bsik where qsskz EQ ' '.
ENDIF. -
Performance Tuning in Oracle 10g during Migration - Insert Statements
Some of the tables contain a huge data e.g. 7 million records to be migrated.
The table structure of Source Database is different from the Target Database (Where we are doing the migration).
So we have made a use of temporary tables to collect the data from different tables using joins and populate the main table using INSERT INTO tableA SELECT * FROM tableZ
Now the problem statement.
Some of the queries take an extremely long time, and the data doesn't get inserted into the main tables. I have seen the explain plan and didn't find anything unusual. The same query works fine when we use a date-time stamp to restrict the data.
What parameters do we have to take care of? Is there any configuration needed on the Oracle server side? If yes, which parameters?
Let's start with the basics. First, what is 10g? Is that 10.1.0.3 or 10.2.0.4 or something else?
Second, 7 million records is a relatively small dataset.
No one can possibly help you with no DDL, no DML, no Explain Plan output generated with DBMS_XPLAN.
We can make wild guesses but that is like closing your eyes in a dark room and trying to find the light switch.
Please repost all relevant information so someone can help you. -
Help me to solve this select statement!
hello all,
Is there any function module which gives me the BELNR (of BSEG) by passing the VBELN?
Yes, we can do that by querying the BSEG table, but it takes loads of time to produce the output, as VBELN is not a primary or secondary key.
When I use this condition in the program I cannot use an inner join, as BSEG is a cluster table, and writing the condition directly hurts my program's performance; this statement really takes a long time to process.
Since VBELN is not a primary or secondary key in BSEG, I guess that's the reason it takes so much time.
Can anyone help me sort this out? Is there any function module to get data from the BSEG cluster table by giving VBELN, or any other conditions to reduce the processing time?
clear t_vbrkvbrp.
sort t_vbrkvbrp by vbeln.
loop at t_vbrkvbrp.
at new vbeln.
select bukrs belnr vbeln
from bseg
into corresponding fields of table t_temp
where bukrs = t_vbrkvbrp-bukrs
and vbeln = t_vbrkvbrp-vbeln.
endat.
endloop.
SELECT bukrs belnr buzid koart shkzg dmbtr vbeln hkont kunnr werks
FROM bseg
INTO TABLE t_bseg
for all entries in t_temp
WHERE hkont IN s_hkont
AND bukrs = t_temp-bukrs
AND belnr = t_temp-belnr
AND buzid = ' '
AND koart = 'S'
AND shkzg = 'H'.
I need to get the G/L account number and BELNR from BSEG for the above condition type, so I have to use the BSEG table.
As there is no appropriate index, it scans the full table, so can anyone tell me how to create an index, or something similar, to get the data faster?
Don't use BSEG; use BKPF and fields AWTYP, AWREF, which link a financial document to the application and document that generated it. (The "original document" under a transaction like FB03.)
<i>Example: "RMRP" + invoice number for purchase invoice</i>
In some case you need an intermediate table
<i>Example: from EKPO, EKBE, you get the MKPF records and from them the BKPF/BSEG</i>
For a list of "referenced procedures" Look at TTYP (text table is TTYPT), you will get the code, structure used to build the key (if more than one field) and a function module used to display the origin
<i>Example : MKPF "Material document" structure MKPF_AWKEY function module MB_DOCUMENT_SENDER_MKPF</i>
You can also look at your accounting documents found with the "slow" BSEG version of your program and establish a list of the AWTYP used in your company/customer.
Another way, used in some companies, is to append a structure with an EBELN field to the BSIS and BSAS tables. These tables are filled via a MOVE-CORRESPONDING statement, so the new records will be filled (you may write a program to update past records); then create an index on EBELN on these fields.
Regards
PS: BSEG is a cluster table, so the only real criteria are the primary keys. If you select via EBELN, the program reads the whole table; this may be acceptable in Development, but not in Production, where it will consume more and more resources. So NEVER select from BSEG without the primary keys; use the secondary tables: BSIS/BSAS, BSIM, etc. -
Need help in how to combine select statement
Hi,
Table1 (VIEWREST2005)
- ip
- msg
- fulldata
- ID
Table2 (L200505)
- ip
- msg
- fulldata
I want to create one SQL statement that can select from table 2 into table 1. This is my statement, but it does not run. The only difference between these 2 tables is the column ID.
insert into VIEWREST2005(ip,msg,fulldata) values(select ip, msg,fulldata from L200505)
Can anyone modify my SQL statement?
Why do you not refer to the Oracle® Database SQL Reference manual? Or do you think your SQL kung fu is good enough that no RTFM'ing is needed?
On-line copies of all Oracle documentation is available via http://tahiti.oracle.com/
BTW, the correct syntax for this flavoured INSERT is:
INSERT INTO table SELECT .. FROM table
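Applied to the two tables in the question, that becomes:

```sql
INSERT INTO VIEWREST2005 (ip, msg, fulldata)
SELECT ip, msg, fulldata
FROM   L200505;
```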
Drop the VALUES clause as no literal values are supplied. -
Performance wise, a select statement is faster on a view or on a table??
Performance wise, a complex (with multi join) select statement is faster on a view or on a table??
Hi,
the purpose of a view is not to provide performance benefits; it's basically a way to better structure database code and data access. A view is nothing but a stored query. When the optimizer sees references to a view in a query, it tries to merge it (i.e. replace the view with its definition), but in some cases it may be unable to do so (in the presence of analytic functions, the rownum pseudocolumn, etc.) -- in such cases views can lead to performance degradation.
If you are interested in performance, what you need is a materialized view, which is basically a table built from a query, but then you need to decide how you would refresh it. Please refer to the documentation for details.
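A minimal sketch of the materialized-view approach (table, column, and view names here are hypothetical; the refresh policy is the part you must decide):

```sql
-- Precompute an expensive multi-join once, then query the stored result.
CREATE MATERIALIZED VIEW order_summary_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT o.customer_id,
       c.region,
       SUM(o.amount) AS total_amount
FROM   orders o,
       customers c
WHERE  o.customer_id = c.customer_id
GROUP BY o.customer_id, c.region;

-- Refresh manually whenever the base tables have changed enough:
-- EXEC DBMS_MVIEW.REFRESH('ORDER_SUMMARY_MV', 'C');
```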
Best regards,
Nikolay -
Need help in Performance tuning for function...
Hi all,
I am using the below algorithm to calculate the Luhn check digit, i.e. the 15th digit of an IMEI (phone SIM card).
The below function takes about 6 minutes for 5 million records. I have 170 million records in a table and want to calculate the Luhn digit for all of them, which might take up to 4-5 hours. Please help me with performance tuning (a better way, or better logic, for the Luhn calculation) of the below function.
A wikipedia link is provided for the luhn algorithm below
Create or Replace FUNCTION AddLuhnToIMEI (LuhnPrimitive VARCHAR2)
RETURN VARCHAR2
AS
Index_no NUMBER (2) := LENGTH (LuhnPrimitive);
Multiplier NUMBER (1) := 2;
Total_Sum NUMBER (4) := 0;
Plus NUMBER (2);
ReturnLuhn VARCHAR2 (25);
BEGIN
WHILE Index_no >= 1
LOOP
Plus := Multiplier * (TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1)));
Multiplier := (3 - Multiplier);
Total_Sum := Total_Sum + TO_NUMBER (TRUNC ( (Plus / 10))) + MOD (Plus, 10);
Index_no := Index_no - 1;
END LOOP;
ReturnLuhn := LuhnPrimitive || CASE
WHEN MOD (Total_Sum, 10) = 0 THEN '0'
ELSE TO_CHAR (10 - MOD (Total_Sum, 10))
END;
RETURN ReturnLuhn;
EXCEPTION
WHEN OTHERS
THEN
RETURN (LuhnPrimitive);
END AddLuhnToIMEI;
http://en.wikipedia.org/wiki/Luhn_algorithm
Any sort of help is much appreciated....
Thanks
Rede
There is an unneeded TO_NUMBER call in it: TRUNC already returns a number. The MOD function can also be avoided at some steps; since a digit multiplied by 2 will never be higher than 18, you can speed up the calculation with this.
create or replace
FUNCTION AddLuhnToIMEI_fast (LuhnPrimitive VARCHAR2)
RETURN VARCHAR2
AS
Index_no pls_Integer;
Multiplier pls_Integer := 2;
Total_Sum pls_Integer := 0;
Plus pls_Integer;
rest pls_integer;
ReturnLuhn VARCHAR2 (25);
BEGIN
for Index_no in reverse 1..LENGTH (LuhnPrimitive) LOOP
Plus := Multiplier * TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1));
Multiplier := 3 - Multiplier;
if Plus < 10 then
Total_Sum := Total_Sum + Plus ;
else
Total_Sum := Total_Sum + Plus - 9;
end if;
END LOOP;
rest := MOD (Total_Sum, 10);
ReturnLuhn := LuhnPrimitive || CASE WHEN rest = 0 THEN '0' ELSE TO_CHAR (10 - rest) END;
RETURN ReturnLuhn;
END AddLuhnToIMEI_fast;
/
My tests gave an improvement of about 40%.
The next step to try could be native compilation of this function. This can give an additional big boost.
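For a quick sanity check of the rewrite, here is the function applied to the sample IMEI from the Wikipedia article linked above (the table and column names in the batch comment are made up):

```sql
-- The published example 49015420323751 has Luhn check digit 8.
SELECT AddLuhnToIMEI_fast('49015420323751') AS imei15 FROM dual;
-- returns '490154203237518'

-- For the 170 million rows, prefer one set-based UPDATE over a
-- row-by-row loop (hypothetical table/column names):
-- UPDATE imei_staging SET imei15 = AddLuhnToIMEI_fast(imei14);
```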
Edited by: Sven W. on Mar 9, 2011 8:11 PM -
Help request for Oracle DB Performance tuning
I need some help with Oracle performance tuning.
My environment is a VB6 frontend and Oracle 8 backend.
The problem I am facing occurs particularly in a multi-user environment. A query which takes 20 seconds when there is only one user working on the network takes much longer (3 to even 5 minutes) when there are 4-5 users working on the network.
What may be wrong? Are there any parameters that I can fine-tune?
We checked the resource utilization at the server level: CPU utilization is at most 50%, and memory utilization is 50% (250 MB out of the available 512 MB).
Hi Vinay,
There can be many reasons for time delay.
Here are some--
1) You may not be releasing locks on objects promptly after use. You can check this by querying v$locked_object.
2) You may be holding many sessions open concurrently and not closing the older ones. You can check this by querying v$session.
These are some common problems on a multi-user platform.
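A minimal sketch of that first check (pre-ANSI join syntax, since this is Oracle 8; columns as in the V$ dictionary views):

```sql
-- Which sessions are currently holding object locks, and on what.
SELECT s.sid,
       s.serial#,
       s.username,
       lo.object_id,
       lo.locked_mode
FROM   v$session       s,
       v$locked_object lo
WHERE  s.sid = lo.session_id;
```

If the same sessions keep showing up here long after their work is done, the application is not releasing its locks promptly.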
Are you using MTS?
Yogesh. -
Performance tuning issues -- plz help
Hi tuning gurus,
This query works fine for a small number of rows, e.g.:
where ROWNUM <= 10 )
where rnum >= 1;
but takes a lot of time as we increase the row number, e.g.:
where ROWNUM <= 10000 )
where rnum >= 9990;
The results are posted below; please advise.
Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
OS version: Red Hat Enterprise Linux ES release 4
Also, the statistics differ when we use the table directly versus its view.
Results for the view v$mail:
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from v$mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.84
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
294 recursive calls
0 db block gets
8715 consistent gets
8669 physical reads
0 redo size
7060 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from v$mail;
Elapsed: 00:00:00.17
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
8 recursive calls
0 db block gets
2171 consistent gets
2057 physical reads
260 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Results for the original table mail:
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.21
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
1 recursive calls
119544 db block gets
8686 consistent gets
8648 physical reads
0 redo size
13510 bytes sent via SQL*Net to client
4084 bytes received via SQL*Net from client
41 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from mail;
Elapsed: 00:00:00.34
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
1 recursive calls
0 db block gets
2183 consistent gets
2062 physical reads
72 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Thanks n regards
PS: sorry, I could not preserve the format.
Message was edited by:
Cool_Jr.DBA
Just to answer the OP's fundamental question:
The query starts off quick (rows between 1 and 10) but gets increasingly slower as the start of the window increases (e.g. to row 1,000, 10,000, etc.).
The original (unsorted) query would get the first rows very quickly, but each time you move the window, it has to fetch and discard an increasing number of rows before it finds the first one you want. So the time taken is proportional to the row number you have reached.
With Charles's correction (which is unavoidable), the entire query has to be retrieved and sorted before the rows you want can be returned. That's horribly inefficient. This technique works for small sets (e.g. 10 - 1,000 rows), but I can't tell you how wrong it is to process data in this way, especially if you are expecting lacs (that's 100,000s, isn't it?) of rows returned. You are pounding your database simply to give yourself the option of being able to go back as well as forwards in your query results. The time taken is proportional to the total number of rows (so the time to get to the end of the entire set is proportional to the square of the total number of rows).
If you really need to page back and forth through large sets, consider one of the following options:
1) Save the set (e.g. as a materialised view or in a temp table), and include "row number" as an indexed column.
2) Retrieve ALL the rowids into an array/collection in a single pass, then fetch 10 rows by rowid for each page.
3) Assuming you can sort by a unique identifier, use that (instead of row number) to remember the first row of each page; use a range scan on the index on that UID to get back the rows you want quickly (doing this with a non-unique sort key is quite a bit harder).
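Option 3 can be sketched in SQL as follows (a sketch only: it assumes the unique sort key is mail_id, whereas the thread's query actually sorts by mail_date, and the bind variable name is made up):

```sql
-- Keyset ("seek") pagination: remember the last key of the previous
-- page and range-scan from there, instead of counting past all the
-- earlier rows on every request.
SELECT *
FROM  (SELECT m.*
       FROM   mail m
       WHERE  m.user_id   = 6
       AND    m.folder_id = 1
       AND    m.mail_id   < :last_mail_id_of_previous_page
       ORDER BY m.mail_id DESC)
WHERE ROWNUM <= 10;
```

With an index on (user_id, folder_id, mail_id), each page is then a short range scan regardless of how deep into the result set you are.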
Remember also that if someone else inserts into your table while you are paging around, some of these methods can give confusing results, because every time you start a new query, you get a new read-consistent point.
Anyway, try to redesign so you don't need to page through lacs of rows....
HTH
Regards, Nigel
You are correct regarding the OP's original SQL statement that:
"the entire query has to be retrieved and sorted before the rows you want can be returned"
However, that is not the case with the SQL statement that I posted. The problem with the SQL statement I posted is that Oracle insists on performing full tablescans on the table. The following is a full test run with 2,000,000 rows in a table, including an analysis of the problem, and a method of working around the problem:
CREATE TABLE T1 (
MAIL_ID NUMBER(10),
USER_ID NUMBER(10),
FOLDER_ID NUMBER(10),
MAIL_DATE DATE,
PRIMARY KEY(MAIL_ID));
CREATE INDEX T1_USER_FOLDER ON T1(USER_ID,FOLDER_ID);
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID);
INSERT INTO
T1
SELECT
ROWNUM MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
INSERT INTO
T1
SELECT
ROWNUM+1000000 MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | HASH JOIN | | 1 | 8801 | 10 |00:00:15.62 | 13610 | 1010K| 1010K| 930K (0)|
|* 2 | VIEW | | 1 | 8801 | 10 |00:00:00.34 | 6805 | | | |
|* 3 | WINDOW SORT PUSHED RANK| | 1 | 8801 | 910 |00:00:00.34 | 6805 | 74752 | 74752 |65536 (0)|
|* 4 | TABLE ACCESS FULL | T1 | 1 | 8801 | 8630 |00:00:00.29 | 6805 | | | |
| 5 | TABLE ACCESS FULL | T1 | 1 | 2000K| 2000K|00:00:04.00 | 6805 | | | |
Predicate Information (identified by operation id):
1 - access("MAIL_ID"="M"."MAIL_ID")
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("MAIL_DATE") DESC )<=909)
4 - filter(("USER_ID"=6 AND "FOLDER_ID"=1))
The above performed two tablescans of the T1 table and required 15.6 seconds to complete, which was not the desired result. Now, to create an index that will be helpful for the query, and to provide Oracle an additional hint:
(http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html "Pagination in Getting Rows N Through M" shows a similar approach)
DROP INDEX T1_USER_FOLDER_MAIL;
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID,MAIL_DATE DESC,MAIL_ID);
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.01 | 47 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.01 | 7 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 909 |00:00:00.01 | 7 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 910 |00:00:00.01 | 7 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
Predicate Information (identified by operation id):
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=909)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")

The above made use of both indexes and completed in 0.01 seconds.
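For readers without an Oracle instance handy, the same ROW_NUMBER pagination pattern can be sketched in any database with window functions. A minimal Python/sqlite3 illustration follows; the table and column names are invented stand-ins for the T1 demo table, not Oracle's, and SQLite will not reproduce Oracle's plan shapes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # window functions need SQLite >= 3.25
conn.executescript("""
CREATE TABLE mail (mail_id INTEGER PRIMARY KEY, user_id INT,
                   folder_id INT, mail_date REAL);
CREATE INDEX mail_user_folder_date
    ON mail(user_id, folder_id, mail_date DESC, mail_id);
""")
# 1,000 demo rows for one user/folder, with strictly increasing dates
conn.executemany(
    "INSERT INTO mail(user_id, folder_id, mail_date) VALUES (6, 1, ?)",
    [(i / 10.0,) for i in range(1000)],
)

def page(conn, user_id, folder_id, first, last):
    """Return mail_ids for rows first..last ordered by mail_date DESC."""
    return [r[0] for r in conn.execute("""
        SELECT mail_id FROM (
            SELECT mail_id,
                   ROW_NUMBER() OVER (ORDER BY mail_date DESC) AS rn
            FROM mail
            WHERE user_id = ? AND folder_id = ?)
        WHERE rn BETWEEN ? AND ?""", (user_id, folder_id, first, last))]

rows = page(conn, 6, 1, 900, 909)
print(len(rows))  # 10 rows, mirroring RN BETWEEN 900 AND 909 above
```

As in the Oracle example, a composite index on (user_id, folder_id, mail_date DESC, mail_id) lets the inner query be answered from the index alone.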
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 8600 AND 8609) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.11 | 81 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.11 | 41 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 8609 |00:00:00.09 | 41 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 8610 |00:00:00.05 | 41 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
Predicate Information (identified by operation id):
2 - filter(("RN">=8600 AND "RN"<=8609))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=8609)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")

The above made use of both indexes and completed in 0.11 seconds.
As the above shows, it is possible to retrieve the desired records very rapidly without having to leave the cursor open.
If this SQL statement will be used in a web browser, it probably does not make sense to leave the cursor open. If the SQL statement will be used in an application that maintains state, and the user is expected to always page from the first row toward the last, then leaving the cursor open and reading rows as needed makes sense.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Very urgent - help in performance tuning
Hi all,
I used an inner join to get data from 5 tables, and the performance is very poor.
I now want to improve the performance. How can I do this?
The code for the select statement is:
SELECT a~gpnr b~name1 b~name2 b~pstlz b~ort01 b~stras b~street2
  b~name1_maco b~name2_maco b~aliasname b~ispadr b~snd
  INTO CORRESPONDING FIELDS OF TABLE it_tab
  FROM jgtgpnr AS a
  INNER JOIN jgtsadr AS b ON
    a~adrnr = b~adrnr
  INNER JOIN jjtvm AS c ON
    a~gpnr = c~kunnr
  INNER JOIN kna1 AS k ON
    a~gpnr = k~kunnr
  INNER JOIN knvv AS v ON
    a~gpnr = v~kunnr
  WHERE ( a~jktokd = 'MADV' OR a~jktokd = 'MADO' )
    AND c~vkorg = pr_vkorg
    AND v~vkorg = pr_vkorg
    AND a~gpnr IN so_gpnr
    AND k~aufsd IN so_block
    AND v~aufsd IN so_blck
Please give me a good suggestion for this.
Thanks in advance.
Hi,
Try using FOR ALL ENTRIES. For example, if you have itab1 and itab2:
SELECT vbeln netwr FROM vbak INTO TABLE itab1.
IF NOT itab1[] IS INITIAL.
  SELECT vbeln matnr matkl FROM vbap INTO TABLE itab2
    FOR ALL ENTRIES IN itab1 WHERE vbeln = itab1-vbeln.
ENDIF.
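The FOR ALL ENTRIES pattern essentially splits one join into a driving select plus a keyed lookup, guarded so the second select never runs against an empty key list. Outside ABAP, the same idea can be sketched with two queries; this is a hypothetical Python/sqlite3 illustration in which the vbak/vbap names mirror the example and all data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vbak (vbeln TEXT PRIMARY KEY, netwr REAL);
CREATE TABLE vbap (vbeln TEXT, matnr TEXT, matkl TEXT);
INSERT INTO vbak VALUES ('0001', 100.0), ('0002', 250.0);
INSERT INTO vbap VALUES ('0001', 'MAT-A', 'KL1'), ('0001', 'MAT-B', 'KL2'),
                        ('0002', 'MAT-C', 'KL1');
""")

# Step 1: the driving select (the ABAP "SELECT ... INTO TABLE itab1")
itab1 = conn.execute("SELECT vbeln, netwr FROM vbak").fetchall()

# Step 2: only query the detail table if the driver returned rows,
# mirroring the "IF NOT itab1[] IS INITIAL" guard above
itab2 = []
if itab1:
    keys = [row[0] for row in itab1]
    placeholders = ",".join("?" * len(keys))
    itab2 = conn.execute(
        f"SELECT vbeln, matnr, matkl FROM vbap WHERE vbeln IN ({placeholders})",
        keys).fetchall()

print(len(itab2))  # 3 detail rows for the 2 driving orders
```

The empty-list guard matters: FOR ALL ENTRIES with an empty driver table selects every row, which is the classic trap the `IF NOT itab1[] IS INITIAL` check avoids.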
Hope this helps.
regards
P.Thangaraj -
Hi All,
I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
I have the following questions:
- How should I start? Should I use tkprof or AWR?
- How do I enable these tools?
- How do I view their reports?
- What should I check in these reports?
- Will just increasing RAM improve performance, or should we also add disk?
- What are CPU cost and I/O?
Please help.
Thanks & Regards.
Here is something you might try as a starting point:
Capture the output of the following (to a table, send to Excel, or spool to a file):
SELECT
STAT_NAME,
VALUE
FROM
V$OSSTAT
ORDER BY
STAT_NAME;
SELECT
STAT_NAME,
VALUE
FROM
V$SYS_TIME_MODEL
ORDER BY
STAT_NAME;
SELECT
EVENT,
TOTAL_WAITS,
TOTAL_TIMEOUTS,
TIME_WAITED
FROM
V$SYSTEM_EVENT
WHERE
WAIT_CLASS != 'Idle'
ORDER BY
EVENT;

Wait a known amount of time (5 minutes or 10 minutes).
Execute the above SQL statements again.
Subtract the starting values from the ending values, and post the results for any items where the difference is greater than 0. The Performance Tuning Guide (especially the 11g version) will help you understand what each item means.
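The snapshot-and-subtract step can be sketched generically. A minimal Python helper follows; the statistic names and values here are invented stand-ins for the V$SYS_TIME_MODEL / V$SYSTEM_EVENT rows, not real captures:

```python
def stat_delta(before, after):
    """Subtract starting values from ending values and keep only
    the items whose difference is greater than 0."""
    return {name: after[name] - before.get(name, 0)
            for name in after
            if after[name] - before.get(name, 0) > 0}

# Two snapshots taken a known interval apart (values are made up)
snap1 = {"DB time": 1500, "DB CPU": 900, "db file sequential read": 42}
snap2 = {"DB time": 2100, "DB CPU": 1150, "db file sequential read": 42}

print(stat_delta(snap1, snap2))  # only counters that moved are reported
```

Counters that did not change over the interval drop out, which keeps the posted results down to the items actually worth interpreting.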
To repeat what Ed stated, do not randomly change parameters (even if someone claims that they have successfully made the parameter change 100s of times).
You could also try a Statspack report, but it might be better to start with something which produces less than 70 pages of output.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
VAL_FIELD selection to determine RSDRI or MDX query: performance tuning
According to one of the HTGs, I am working on performance tuning. One of the tips is to try to query base members by using BAS(xxx) in the expansion pane of the BPC report.
I did so and found an interesting issue in one of the COPA reports.
With the income statement, when I choose one node, GROSS_PROFIT, using BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
I checked that DIRECT_INCOME has three members, GROSS_PROFIT, SGA, and REV_OTHER; none of them has any formulas.
Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT), BAS(SGA), BAS(REV_OTHER), and I got RSDRI queries again.
So, in summary:
BAS(PARENT) =>MDX query.
BAS(CHILD1)=>RSDRI query.
BAS(CHILD2)=>RSDRI query.
BAS(CHILD3)=>RSDRI query.
BAS(CHILD1),BAS(CHILD2),BAS(CHILD3)=>RSDRI query
I know VAL_FIELD is a SAP-reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
Interestingly, I can repeat this behavior in my system. My intention is to always get RSDRI queries.
George
Ok - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that a "Variable located in Default Values will be ignored in the MDX Access".
After moving the variables to the Characteristic Restrictions, my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting fewer than 2k.
Hope this helps someone else.
Maybe you are looking for
-
How do I access my iTunes Library from old HDD?
I have iTunes on the HDD from my now dead laptop. That version of iTunes is 10.1.2.17. The HDD now lives in a SATA enclosure, and I've looked at the music library, it appears to be intact. However, how do I import that music library into iTunes on my
-
Transfer material type DIEN in ECC system to CRM product type Service
Hi Experts, I have a requirement to transfer material type DIEN in ECC system to CRM product type Service. How can we do this?
-
I'm importing videos shot with my digital camera which takes great HD movies. I notice when I import them into iMovie 11, some of them will come out more grainy than the original. I'm selecting the Optimize video checkbox when importing. Is this what
-
Iphone 3.0 Search Oddities
Why is the iphone 3.0 search option finding emails I deleted months ago. When I do a search for any emails from my contacts it shows me old emails that I got from them at least 2 or 3 months ago that I deleted. It's even finding spam emails that I de
-
ORA-01002: fetch out of sequence using multiple XA datasources
Hi, I have a problem accessing multiple XA datasources : - launch an sql request on datasource 1 - rs.next() on the resultset - use the data to launch another sql request on datasource 2 After 10 iterations, the next() method throws an SQLException (