SQL performance tuning with wildcard LIKE condition
Hi,
I have a performance issue with a SQL query.
When I use "where emp_name like '%im%'" the query takes much longer than when I use "where emp_name like 'im%'".
With the former condition the query takes 40 sec; with the latter it takes around 1.5 sec.
Both return almost the same number of rows. We have a function-based index created on the emp_name column.
With a wildcard at both ends the query goes for a full table scan.
Can anyone please suggest a way to reduce the query response time?
I even tried using hints, but it still goes for a full table scan instead of using the index.
Hi Mandark,
<I've rearranged your post>
When I use "where emp_name like '%im%'" the query takes much longer than when I use "where emp_name like 'im%'".
With a wildcard at both ends the query goes for a full table scan.
I even tried using hints, but it still goes for a full table scan instead of using the index.
With the former condition the query takes 40 sec; with the latter around 1.5 sec.
Both return almost the same number of rows. We have a function-based index created on the emp_name column.
You are never going to be able to speed things up with a double wildcard - or even one wildcard at the beginning
(unless you have some weird index reversing the string).
With the double wildcard, the system has to search through the string character by character to see
if there is any letter "i" and then see whether that letter is followed by an "m".
That's using your standard B-tree index (see below).
Can anyone please suggest a way to reduce the query response time?
Yes, I think so - there is full-text indexing - see here:
http://www.dba-oracle.com/oracle_tips_like_sql_index.htm and
http://www.oracle-base.com/articles/9i/full-text-indexing-using-oracle-text-9i.php
AFAIK, it's an extra-cost option - but you can have fun finding out all that for yourself ;)
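A minimal Oracle Text sketch of what those articles describe (hedged: the table name is an assumption, and a SUBSTRING_INDEX wordlist preference may be needed before double-wildcard matches are efficient):

```sql
-- Assumed table: employees(emp_name). CTXSYS.CONTEXT is the Oracle Text
-- index type; unlike a B-tree, it must be synchronized as data changes.
CREATE INDEX emp_name_txt ON employees (emp_name)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- CONTAINS() drives the search through the text index
-- instead of a full table scan.
SELECT emp_name
FROM   employees
WHERE  CONTAINS(emp_name, '%im%') > 0;
```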
HTH,
Paul...
Similar Messages
-
New performance tuning features you like in 11.1, 11.2
I was going through the various performance tuning features in 11g. I came across the features mentioned below:
SQL Management Base (SMB)
SQL Tuning Advisor
SQL Plan Baselines
SQL Performance Analyzer
Query Result Cache
I am wondering if you guys find any of the new 11g performance tuning features really USEFUL, so that I can give it a try.
Forgive me if any of the above-mentioned features were available in 10.2 or older versions.
All useful; it depends on your requirement.
Start with SQL Plan Baselines and the Query Result Cache, as they are simple to configure.
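As a quick first try, the result cache can be tested per-statement with a hint before configuring anything globally (a hedged sketch; the table name is hypothetical):

```sql
-- The RESULT_CACHE hint asks 11g to cache this query's result set;
-- repeated executions can then be served from the cache until the
-- underlying data changes.
SELECT /*+ RESULT_CACHE */ dept_id, COUNT(*)
FROM   employees
GROUP  BY dept_id;
```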
Forgive me if any of the above-mentioned features were available in 10.2 or older versions.
:-) The SQL Tuning Advisor is available in 10g. -
Performance issue with NOT LIKE 'VA%' in SQL query
I'm fetching vendor details from po_vendor_sites_all; in the where clause I use vendor_site_code NOT LIKE 'VA%'. Database: 10g.
NOT LIKE is reducing the performance. Is there any other option to increase the performance?
Any suggestions?
Thanks
dow005
Assuming a fairly even distribution of vendor_site_codes, and assuming a LIKE 'VA%'
would pick up 1% of the rows, and assuming an index on vendor_site_code
and reasonable clustering, we might expect that query to use an index.
Your query is NOT LIKE 'VA%', which implies picking up 99% of the rows = full table scan.
The only option I can think of is to use parallelism in your query (if the system has the
power to handle it), or perhaps use more where-clause restrictions on indexed column(s)
to reduce the number of rows picked up.
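A hedged sketch of the parallelism suggestion (the hint syntax is standard Oracle; the degree of 4 is an arbitrary assumption and should be sized to the system):

```sql
-- PARALLEL hint: ask the optimizer to scan the table with
-- 4 parallel query slaves instead of a single serial scan.
SELECT /*+ PARALLEL(v 4) */ v.*
FROM   po_vendor_sites_all v
WHERE  v.vendor_site_code NOT LIKE 'VA%';
```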
I've had to assume a lot of things as you don't give any info about the table and data.
If you could provide more information we might be able to help more.
See SQL and PL/SQL FAQ
Edited by: Paul Horth on 02-May-2012 07:12 -
SQL query tuning with IN, NOT IN, LIKE
Is there any alternative I can use for IN, NOT IN, and LIKE to optimize this SQL statement?
SELECT TKTNUM||'~'||
CUSNAME||'~'||
DECODE(PRTY,0,'PRIORITY 00',1,'PRIORITY 01',2,'PRIORITY 02',03,'PRIORITY 03',04,'PRIORITY04','OTHERS')||'~'||
TO_CHAR(NEW_TIME(CREATEDTTM,'GMT','&2'),'MM-DD-YYYY HH24:MI:SS')||'~'||
CURSTA||'~'||
CURSTS||'~'||
OTGTM||'~'||
TOPGRPNAME||'~'||
TO_CHAR(NEW_TIME(LASTUPDDTTM,'GMT','&2'),'MM-DD-YYYY HH24:MI:SS')||'~'||
RTRIM(REPLACE(REPLACE(PROBSUMMARY,'',''),CHR(10),''))||'~'
FROM T3TKTHEADER
WHERE
CURSTA NOT IN ('RESOLVED','CLOSED')
AND TOPGRPNAME IN ('CASJC.OPS','EMEA.WTO','INCHN.GSD-DBA')
AND PRIT IN ('0','1','2')
AND CUSNAME NOT LIKE '% VANGENT%'
ORDER BY PRTY, CREATEDTTM DESC
Hi,
Welcome to the forum!
998537 wrote:
is there any alternative can i use for in, not in, like to optimize this sql statement.
That depends on what you mean by "optimize".
There are lots of other ways to get the same results. For example,
AND prit IN ('0', '1', '2')
is equivalent to
AND ( prit = '0'
OR prit = '1'
OR prit = '2'
)
and
AND cusname NOT LIKE '% VANGENT%'
is equivalent to
AND INSTR (cusname, ' VANGENT') = 0
Some people might have a personal preference for these (or other) alternate forms. I don't; I find what you posted easier to understand and maintain. I would suggest you format your code, however.
Do you want it to run faster? See {message:id=9360003}
If you have some very particular requirements and data, then function-based indexes might help.
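A hedged sketch of such a function-based index, using the table and column names from the post (whether it helps depends entirely on the data distribution):

```sql
-- Rows where PRIT is not '0', '1' or '2' evaluate to NULL and are
-- excluded from a single-column index, so the index stays small.
-- The query must then use this same CASE expression in its WHERE
-- clause for the optimizer to pick the index up.
CREATE INDEX t3tkt_prit_idx ON T3TKTHEADER
  (CASE WHEN PRIT IN ('0','1','2') THEN PRIT END);
```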
For example, if you often look for those same 3 values of prit, and, taken together, they account for less than 5% of the rows in the table, then a function-based index might make things faster. There's nothing in what you posted that looks especially likely, however. -
Help ! SQL Performance Tuning
Hi,
I have the following three SQL statements. I am using Oracle 8i.
====================================================================================================================
Statement1 : Insert
Insert Into DBSchema.DstTableName( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
( SELECT DbSchema.Seq.nextval, srColP, srColKey, srCol1, srCol2, nvl(srCol3,0), nvl(srCol4,0), SYSDATE
From
SrcTableName SRC
Where
srcColP IS NOT NULL AND
NOT EXISTS
(SELECT 1
From
DBSchema.DstTableName Dst
Where
SRC.srcColP = DST.dstColP AND SRC.srcColKey = DST.dstColKey ) );
====================================================================================================================
Statement2 : Update
Update DBSchema.DstTableName dst
SET ( dstCol1,dstCol2,dstCol3,dstCol4, dstCol5)
=
( SELECT srCol1, srCol2, nvl(srCol3,0), nvl(srCol4,0), SYSDATE
From
SrcTableName src
Where
src.srcColP = dst.dstColP AND SRC.srcColKey = DST.dstColKey )
WHERE EXISTS (
SELECT
1
From
SrcTableName SRC
Where
SRC.srcColP = DST.dstColP AND SRC.srcColKey = DST.dstColKey ) ;
====================================================================================================================
Statement3 : Delete
Delete
FROM DBSchema.DstTableName DST
Where Exists (
SELECT
1
From
SrcTableName SRC
Where
src.srcColP = dst.dstColP )
AND NOT EXISTS (
SELECT
1
From
SrcTableName SRC
Where
src.srcColP = dst.dstColP AND SRC.srcColKey = DST.dstColKey ) ;
====================================================================================================================
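As an aside: MERGE arrived in 9i, so it would not help the original poster on 8i, but for reference, Statement1 and Statement2 collapse naturally into one statement. A hedged sketch reusing the names above:

```sql
-- MERGE: one pass over the source does both the insert of new
-- (dstColP, dstColKey) pairs and the update of existing ones.
MERGE INTO DBSchema.DstTableName dst
USING (SELECT srcColP, srcColKey, srcCol1, srcCol2,
              NVL(srcCol3,0) srcCol3, NVL(srcCol4,0) srcCol4
       FROM   SrcTableName
       WHERE  srcColP IS NOT NULL) src
ON (src.srcColP = dst.dstColP AND src.srcColKey = dst.dstColKey)
WHEN MATCHED THEN UPDATE
  SET dst.dstCol1 = src.srcCol1, dst.dstCol2 = src.srcCol2,
      dst.dstCol3 = src.srcCol3, dst.dstCol4 = src.srcCol4,
      dst.dstCol5 = SYSDATE
WHEN NOT MATCHED THEN INSERT
  (dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6)
  VALUES (DBSchema.Seq.nextval, src.srcColP, src.srcColKey,
          src.srcCol1, src.srcCol2, src.srcCol3, src.srcCol4, SYSDATE);
```

The delete (Statement3) still has to run separately; MERGE's DELETE clause only touches rows matched by the ON condition.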
For the above three statements I have written the following procedure with a cursor.
Equivalent Cursor:
PROCEDURE DEMOPROC
is
loop_Count integer := 0;
insert_Count integer := 0;
CURSOR c1
IS
SELECT src.srcCol1,
src.srcCol2,
src.srcCol3,
src.srcCol4,
src.srcCol5,
src.srcCol6,
src.srcCol7,
src.srcCol8,
src.srcCol9,
src.srcColKey,
src.srcColP
FROM
SrcTableName SRC
Where src.srcColP IS NOT NULL
AND NOT EXISTS
(SELECT 1
From
DBSchema.DstTableName Dst
Where
src.srcColP = DST.dstColP AND src.srcColKey = DST.dstColKey );
BEGIN
FOR r1 in c1 LOOP
Insert Into DBSchema.DstTableName( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
values(DBSchema.Seq.nextval, r1.srcColP, r1.srcColKey, r1.srcCol1, r1.srcCol2, nvl(r1.srcCol3,0), nvl(r1.srcCol4,0), SYSDATE);
Update DBSchema.DstTableName dst
SET dst.dstCol1=r1.srcCol1 , dst.dstCol2=r1.srcCol2,
dst.dstCol3=nvl(r1.srcCol3,0),
dst.dstCol4=nvl(r1.srcCol4,0),
dst.dstCol5=SYSDATE
Where
r1.srcColP = dst.dstColP
AND
r1.srcColKey = DST.dstColKey ;
Delete
FROM DBSchema.DstTableName DST
Where
r1.srcColP = dst.dstColP ;
insert_Count := insert_Count + 1 ;
/* commit on a pre-defined interval */
if loop_Count > 999
then begin
commit;
loop_Count := 0;
end;
else loop_Count := loop_Count + 1;
end if;
end loop;
/* once the loop ends, commit and display the total number of records inserted */
commit;
dbms_output.put_line('total rows processed: '||TO_CHAR(insert_Count)); /*display insert count*/
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM);
END;
====================================================================================================================
I am not sure whether this cursor is right or not; I have to verify it.
In the delete and insert statements there is the same where-not-exists clause as in the original statements, so I included that in my cursor declaration, but I am not sure whether the update will work with that or not.
I have to use the three statements mentioned above for a few source and destination tables, each having many rows. How do I tune this?
What else can be done to improve the performance of the three statements mentioned above?
Any help will be highly appreciated.
Thanks !
Regards,
Hi Tom,
Thanks for replying.
I tried the three statements separately.
As seen in my problem statement, I am moving data from one table to another. There are 50 tables like this. One of my procedures reads the source and destination tables, creates these three statements dynamically (creates a PL/SQL block), and runs it.
As you have seen from the three statements above, I am not able to write the cursor properly for it. I was advised by someone to write a cursor, so I started. But as you can see, my cursor won't satisfy all the "where not exists" and "where exists" conditions satisfactorily.
I only tried the insert in the cursor and compared it with the plain insert. My procedure with the cursor was slower than the insert. But since I didn't try the three things together (update, delete and insert), I guess that theoretically the right cursor would select data from the source table only once, which would improve performance. But I can't do that. The other way to solve this is to write two procedures, one having the insert and delete in it (as the conditions are the same in both) and the other having the update statement. But I guess this won't improve the performance much either?
Do you have any other solutions for writing the 3 DML statements above?
Any help would be highly appreciated.
Thanks! -
Please go through the checklist/guidelines below to identify the issue in any performance case and resolve it quickly.
Checklist for quick performance problem resolution
· Get the trace, code and other information for the given PE case
- latest code from the production env
- trace (SQL queries, statistics, row source operations with row count, explain plan, all wait events)
- program parameters and their frequently used values
- run frequency of the program
- existing run-time/response time in production
- business purpose
· Identify the most time-consuming SQL taking more than 60% of program time using trace & code analysis
· Check that all mandatory parameters/bind variables are directly mapped to index columns of large transaction tables without any functions
· Identify the most time-consuming operation(s) using the Row Source Operation section
· Study the program parameter input directly mapped to the SQL
· Identify all input bind parameters being used by the SQL
· Is the SQL query returning a large number of records for the given inputs?
· What are the large tables, and which of their columns are mapped to input parameters?
· Which operation is scanning the highest number of records in the Row Source Operation / Explain Plan?
· Is the Oracle cost-based optimizer using the right driving table for the given SQL?
· Check the time-consuming index on the large table and measure index selectivity
· Study the WHERE clause for input parameters mapped to tables and their columns to find the correct/optimal usage of the index
· Is the correct index being used for all large tables?
· Is there any full table scan on large tables?
· Is there any unwanted table being used in the SQL?
· Evaluate the join conditions on large tables and their columns
· Is the FTS on a large table because of usage of non-index columns?
· Is there any implicit or explicit conversion causing an index not to be used?
· Are the statistics of all large tables up to date?
Quick resolution tips
1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
2) Use data caching techniques/options to cache static data
3) Use pipelined table functions whenever possible
4) Use global temporary tables and materialized views to process complex records
5) Try to avoid multiple network trips for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
6) Use EXTERNAL tables to build interfaces rather than creating a custom table and program to load and validate the data
7) Understand Oracle's cost-based optimizer and tune the most expensive SQL queries with the help of the explain plan
8) Follow Oracle PL/SQL best practices
9) Review the tables and indexes being used in the SQL queries and avoid unnecessary table scanning
10) Avoid costly full table scans on big transaction tables with huge data volumes
11) Use appropriate filter conditions on index columns of seeded Oracle tables directly mapped to program parameters
12) Review the join conditions in the existing query's explain plan
13) Use Oracle hints to guide the cost-based optimizer to choose the best plan for your custom queries
14) Avoid applying SQL functions to index columns
15) Use appropriate hints to guide the Oracle CBO to choose the best plan to reduce response time
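A hedged PL/SQL sketch of item 1 (the table and column names are hypothetical): fetch in batches with BULK COLLECT ... LIMIT, then apply the DML in bulk with FORALL:

```sql
DECLARE
  -- Hypothetical tables: src_t(id, val) and dst_t(id, val).
  TYPE t_ids  IS TABLE OF dst_t.id%TYPE;
  TYPE t_vals IS TABLE OF dst_t.val%TYPE;
  l_ids  t_ids;
  l_vals t_vals;
  CURSOR c IS SELECT id, val FROM src_t;
BEGIN
  OPEN c;
  LOOP
    -- Fetch up to 500 rows per round trip instead of one at a time.
    FETCH c BULK COLLECT INTO l_ids, l_vals LIMIT 500;
    EXIT WHEN l_ids.COUNT = 0;
    -- One context switch executes the UPDATE for the whole batch.
    FORALL i IN 1 .. l_ids.COUNT
      UPDATE dst_t SET val = l_vals(i) WHERE id = l_ids(i);
  END LOOP;
  CLOSE c;
END;
/
```

As the reply below argues, a single pure-SQL statement is usually faster still; BULK COLLECT/FORALL is the fallback when procedural logic is unavoidable.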
Thanks
Praful
I understand you were trying to post something helpful to people, but sorry, this list is appalling.
1) Use Bulk Processing feature BULK COLLECT with LIMIT and FOR ALL for DML instead of row by row processing
No, use pure SQL.
2) Use Data Caching Technique/Options to cache static data
No, use pure SQL, and the database and operating system will handle caching.
3) Use Pipe Line Table Functions whenever possible
No, use pure SQL
4) Use Global Temporary Table, Materialized view to process complex records
No, use pure SQL
5) Try avoiding multiple network trips for every row between two database using dblink, Use Global temporary table or set operator to reduce network trip
No, use pure SQL
6) Use EXTERNAL Table to build interface rather then creating custom table and program to Load and validate the data
Makes no sense.
7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
What about using the execution trace?
8) Follow Oracle PL/SQL Best Practices
Which are?
9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
You mean design your database and queries properly? And table scanning is not always bad.
10) Avoid costly Full Table Scan on Big Transaction tables with Huge data volume,
It depends if that is necessary or not.
11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan. There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
12) Review Join condition on existing query explain plan
Well, if you don't have your join conditions right then your query won't work, so that's obvious.
13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
No. Oracle recommends you do not use hints for query optimization (it says so in the documentation). Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general. Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
14) Avoid applying SQL functions on index columns
Why? If there's a need for a function based index, then it should be used.
15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
See 13.
In short, there are no silver bullets for dealing with performance. Each situation is different and needs to be evaluated on its own merits. -
Hello everybody,
I have a problem with a search application: users can search in different tables, and those tables have an average of 15 million records. The users can add "search lines".
Here is an example of a search line: typename = 'Project' attributename = 'name' value LIKE '%demo%'
When a user adds 3 or more search lines the application becomes slow.
It's running on 10.2.0.10, and the hardware is not that important here, because it will change all the time, as the search application will be installed on the intranets of different companies.
The problem arises when a lot of joins happen. I tried materialized views; they are faster, but the queries will be different all the time, so when a lot of users query and a lot of materialized views are made, it may slow down the server.
Here is the example SQL code that the application dynamically creates; does anyone have a suggestion to optimize this?
All the tables used in this query have indexes on all their columns; maybe too many indexes slow down the query?
SELECT q2.tbl_id
FROM
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Project'
AND attributename = 'name'
AND LOWER(VALUE) LIKE '%demo%')
q1,
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Section'
AND attributename = 'name'
AND LOWER(VALUE) LIKE '%section%')
q2,
(SELECT tbl_search_attstring.tbl_id
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Dir'
AND attributename = 'default label'
AND LOWER(VALUE) LIKE '%a%')
q3,
tbl_relationship rel
WHERE q1.tbl_id = rel.parent
AND q3.tbl_id = rel.parent
AND rel.child = q2.tbl_id
Thanks in advance!
Hi,
My suggestion is that you should create your query dynamically in PL/SQL.
On the other hand, please try this:
SELECT q2.tbl_id
FROM
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename IN ('Project','Dir')
AND attributename IN ('name','default label')
AND (LOWER(VALUE) LIKE '%demo%' OR LOWER(VALUE) LIKE '%a%'))
q13,
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Section'
AND attributename = 'name'
AND LOWER(VALUE) LIKE '%section%')
q2,
tbl_relationship rel
WHERE q13.tbl_id = rel.parent
AND rel.child = q2.tbl_id
BR...
efe -
Performance tuning with HTTP compression
We are currently using Oracle 11g and IE8. The 11g UI has been pretty slow, and when I looked up performance tuning, one of the methods was HTTP compression, as stated in the link below:
http://blogs.oracle.com/pa/entry/obiee_11g_user_interface_ui
The above information is actually excerpted from Oracle Support note 1312299.1.
Now I have made the changes as suggested, but I am wondering how I can test to make sure the changes have actually improved the performance. It doesn't mention any testing or verification methods.
Just curious, what is the compression ratio?
I mean, by how much does the volume come down when the records are moved from the F to the E tables?
Also, is the time spent on compression of the cube, or on the aggregates, or on the deletion of records in the F table?
What is the ratio of aggregate volume to cube volume?
You might want to set a trace on the compression session to get answers on where it is spending most of its time.
Compression is equivalent to executing a query on the F table, summarising the results with the request ID column, appending the results to the E table, and deleting the compressed request from the F table.
Thanks. -
JDev10g: General performance decrease with JClients like Oracle BC Browser
Hi,
I'm observing a significant performance decrease of JClient applications (homegrown and the Oracle Business Components Browser) in JDev10g compared with JDev9i (9033).
Can anyone confirm my observation?
Just create a default Dept-Emp business components package and start the Business Components Browser. Scroll to some data (it takes some time...) and resize the window (it takes even more time)!
My guess: JDev9i-style data binding for JClients is simulated in JDev10g with ADF data binding?? My own application is a simple "recompile" migration of a JDev9i project to JDev10g. Does this apply to the Oracle Business Components Browser too?
My own application still runs faster with JDev9i!
What is the problem? Any hints are welcome!
Thanks,
Markus
Excellent blog. Thank you.
Small clarification on Step **6) Oracle Home Directory, ...a) Resize the Root Partition**
Ubuntu 11.10 has GParted available as an Ubuntu software download; DON'T use that while trying the above step. Instead download the ISO file from http://sourceforge.net/projects/gparted/files/gparted-live-stable/ gparted-live-0.12.0-5.iso (124.6 MB).
Burn that ISO file to a blank DVD, reboot Ubuntu, and during startup select the Boot from DVD option if not already selected. This will take you to the boot menu options of GParted Live; select the first menu option, which allows you to do further actions such as resizing.
And once you have chosen and executed step a), do NOT also run step b), that is "Setup External Storage".
I hope this minor clarification can avoid some confusion.
Regards
Madhusudhan Rao
Edited by: MadhusudhanRao on Mar 24, 2012 11:30 PM -
Cm:select performance problem with multiple likes query clause
I have a query like <br>
<b>listItem like '*abc.xml*' && serviceId like '*xyz.xml*'</b><br>
Can we have the two like clauses mentioned above in the cm:select? The above executes successfully but takes too much time to process. <br><br>
Can we simplify the above-mentioned query, or is there any other solution? Please help me with this issue.<br><br>
Thanks & Regards,<br>
Murthy Nalluri
A few notes:
1. You seem to have either a VPD policy active or you're using views that add some more predicates to the query, according to the plan posted (the access on the PK_OPERATOR_GROUP index). Could this make any difference?
2. The estimates of the optimizer are really very accurate - actually astonishing - compared to the tkprof output, so the optimizer seems to have a very good picture of the cardinalities and therefore the plan should be reasonable.
3. Did you gather index statistics as well (using COMPUTE STATISTICS when creating the index or "cascade=>true" option) when gathering the statistics? I assume you're on 9i, not 10g according to the plan and tkprof output.
4. Looking at the amount of data that needs to be processed it is unlikely that this query takes only 3 seconds, the 20 seconds seems to be OK.
If you are sure that for a similar amount of underlying data the query took only 3 seconds in the past it would be very useful if you - by any chance - have an execution plan at hand of that "3 seconds" execution.
One thing that I could imagine is that due to the monthly data growth that you've mentioned one or more of the tables have exceeded the "2% of the buffer cache" threshold and therefore are no longer treated as "small tables" in the buffer cache. This could explain that you now have more physical reads than in the past and therefore the query takes longer to execute than before.
I think that this query could only be executed in 3 seconds if it is somewhere using a predicate that is more selective and could benefit from an indexed access path.
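For point 3, index statistics can be gathered together with the table statistics; a hedged sketch (the schema and table name are hypothetical):

```sql
-- cascade => TRUE gathers statistics on the table's indexes as well;
-- on 10g the default is governed by the DBMS_STATS.AUTO_CASCADE setting.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'MY_TABLE',
    cascade => TRUE);
END;
/
```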
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
SQL Performance Tuning process
Hi All,
Recently an interviewer asked me a question: a user complains of a performance issue during query execution on the production database; how will you start your analysis (step by step) of why there is a performance issue, considering the points below?
1. On the production database you do not have system table / Oracle data dictionary access (very limited access; you only have table/view read permission).
2. Without disturbing the DBA.
I answered:
1) By generating an explain plan, to see what the bottleneck is for that query (whether any query optimization is required, using hints / query rewrites etc.)
2) By looking at when the table / index was last analyzed.
3) By asking for an AWR report from the DBA, etc.
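For answer 1, an explain plan can usually still be produced without data-dictionary access, using a private PLAN_TABLE and DBMS_XPLAN (a hedged sketch; the query itself is hypothetical):

```sql
-- EXPLAIN PLAN writes the optimizer's plan into PLAN_TABLE
-- without executing the statement.
EXPLAIN PLAN FOR
  SELECT * FROM some_table WHERE some_col = :b1;

-- DBMS_XPLAN.DISPLAY formats the most recent plan from PLAN_TABLE.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```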
He was not satisfied. I request you all: can you please help me on this, considering all those constraints?
Thanks in Advance.
RK885137 wrote:
Hi All,
Recently an interviewer asked me a question: a user complains of a performance issue during query execution on the production database; how will you start your analysis (step by step) of why there is a performance issue, considering the points below?
1. On the production database you do not have system table / Oracle data dictionary access (very limited access; you only have table/view read permission).
2. Without disturbing the DBA.
I answered:
1) By generating an explain plan, to see what the bottleneck is for that query (whether any query optimization is required, using hints / query rewrites etc.)
2) By looking at when the table / index was last analyzed.
3) By asking for an AWR report from the DBA, etc.
He was not satisfied. Can you please help me on this, considering all those constraints?
Thanks in Advance.
RK
He asked you a question that you could not answer specifically; this sort of thing is solved on the spot. Your answers were quite normal for the question. You ask why he was not satisfied - what were his reasons?
Ramin Hashimzade -
Hi:
Can someone please give some hints for better performance of the following query:
Select a.Hedged_Trade_ID, min(a.Trade_Date) Trade_Date, a.DealStartDate, a.MaturityDate
FROM
( Select distinct a.Hedged_Trade_ID,
a.Trade_Date,
b.Trade_Date DealStartDate,
b.Maturity_Date MaturityDate
From CMS_FUTURES_TRANS a, CMS_PAS_ACCT_TRADE b
Where trunc(a.LAST_UPDATE) = to_date(paramRunDate, 'mm/dd/yyyy')
AND a.Trade_Date < to_date(paramRunDate, 'mm/dd/yyyy')
AND b.org_trade_id = to_number(decode(substr(a.Hedged_Trade_Id,1,1), 'U', 0, a.Hedged_Trade_Id))
AND b.fgic_company = paramCompany
UNION
Select distinct a.Hedged_Trade_ID,
a.Trade_Date,
b.Trade_Date DealStartDate,
b.Maturity_Date MaturityDate
From CMS_SWAP_ALLOC a, CMS_PAS_ACCT_TRADE b
Where trunc(a.LAST_UPDATE) = to_date(paramRunDate, 'mm/dd/yyyy')
AND a.Trade_Date < to_date(paramRunDate, 'mm/dd/yyyy')
AND b.org_trade_id = to_number(decode(substr(a.Hedged_Trade_Id,1,1), 'U', 0, a.Hedged_Trade_Id))
AND b.fgic_company = paramCompany
UNION
Select distinct a.Hedged_Trade_ID,
a.Trade_Date,
to_date('01/01/2001', 'mm/dd/yyyy') DealStartDate,
to_date('01/01/9999', 'mm/dd/yyyy') MaturityDate
From CMS_FUTURES_TRANS a
Where trunc(a.LAST_UPDATE) = to_date(paramRunDate, 'mm/dd/yyyy')
AND a.Trade_Date < to_date(paramRunDate, 'mm/dd/yyyy')
AND a.hedged_trade_id IN ( Select unassigned_id
From cms_fas_company
Where company_id = paramCompany)
UNION
Select distinct a.Hedged_Trade_ID,
a.Trade_Date,
to_date('01/01/2001', 'mm/dd/yyyy') DealStartDate,
to_date('01/01/9999', 'mm/dd/yyyy') MaturityDate
From CMS_SWAP_ALLOC a
Where trunc(a.LAST_UPDATE) = to_date(paramRunDate, 'mm/dd/yyyy')
AND a.Trade_Date < to_date(paramRunDate, 'mm/dd/yyyy')
AND a.hedged_trade_id IN ( Select unassigned_id
From cms_fas_company
Where company_id = paramCompany)
UNION
Select distinct to_char(a.Org_Trade_id) Hedged_Trade_ID,
a.History_Date Trade_Date,
b.Trade_Date DealStartDate,
b.Maturity_Date MaturityDate
From CMS_PAS_ACCT_TRADE_HIST a, CMS_PAS_ACCT_TRADE b
Where trunc(a.LAST_UPDATE) = to_date(paramRunDate + 1, 'mm/dd/yyyy')
AND a.History_Date < to_date(paramRunDate, 'mm/dd/yyyy')
AND b.org_trade_id = a.org_trade_id
AND b.fgic_company = paramCompany
UNION
Select distinct c.Hedged_Trade_id,
DECODE(ABS(to_date(a.History_Date,'mm/dd/yyyy') - to_date(c.Trade_Date,'mm/dd/yyyy')), to_date(a.History_Date,'mm/dd/yyyy') - to_date(c.Trade_Date,'mm/dd/yyyy'), to_date(a.History_Date,'mm/dd/yyyy'), to_date(c.Trade_Date,'mm/dd/yyyy')) Trade_Date,
b.Trade_Date DealStartDate,
b.Maturity_Date MaturityDate
From CMS_PAS_ACCT_TRADE_HIST a, CMS_PAS_ACCT_TRADE b, CMS_SWAP_ALLOC c
Where c.swap_trade_id = a.org_trade_id
AND b.org_trade_id = to_number(decode(substr(c.hedged_trade_id,1,1), 'U', 0, c.hedged_trade_id))
AND b.fgic_company = paramCompany
AND trunc(a.LAST_UPDATE) = to_date(paramRunDate + 1, 'mm/dd/yyyy')
AND trunc(a.History_Date) < to_date(paramRunDate, 'mm/dd/yyyy')
AND c.trade_date = ( Select MAX(trade_date)
from CMS_SWAP_ALLOC d
Where d.swap_trade_id = c.swap_trade_id
AND d.trade_date <= to_date(paramRunDate, 'mm/dd/yyyy')) ) a
Group By a.Hedged_Trade_ID, a.DealStartDate, a.MaturityDate;
Overall I suggest taking each one of your selects and running an explain plan to confirm they are behaving well in your database.
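One pattern in the query above that is often worth rewriting: trunc(a.LAST_UPDATE) = <date> prevents a plain index on LAST_UPDATE from being used, while an equivalent half-open range predicate does not (a hedged sketch of the fragment):

```sql
-- Equivalent to trunc(a.LAST_UPDATE) = TO_DATE(paramRunDate,'mm/dd/yyyy'),
-- but leaves a plain index on LAST_UPDATE usable.
WHERE a.LAST_UPDATE >= TO_DATE(paramRunDate, 'mm/dd/yyyy')
  AND a.LAST_UPDATE <  TO_DATE(paramRunDate, 'mm/dd/yyyy') + 1
```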
How many rows does this select return? Big IN lists can be bad.
a.hedged_trade_id in (select unassigned_id
from cms_fas_company
where company_id = paramcompany)
This can be rewritten as:
and exists (select 1
from cms_fas_company
where company_id = paramcompany
and unassigned_id = a.hedged_trade_id
) -
Can anyone send a tutorial for performance tuning?
Can anyone send a tutorial for performance tuning? I'd like to check my coding.
1. Unused/Dead code
Avoid leaving unused code in the program. Either comment out or delete the unused code. Use Program --> Check --> Extended Program Check to find variables which are not used statically.
2. Subroutine Usage
For good modularization, the decision of whether or not to execute a subroutine should be made before the subroutine is called. For example:
This is better:
IF f1 NE 0.
PERFORM sub1.
ENDIF.
FORM sub1.
ENDFORM.
Than this:
PERFORM sub1.
FORM sub1.
IF f1 NE 0.
ENDIF.
ENDFORM.
3. Usage of IF statements
When coding IF tests, nest the testing conditions so that the outer conditions are those which are most likely to fail. For logical expressions with AND , place the mostly likely false first and for the OR, place the mostly likely true first.
Example - nested IF's:
IF (least likely to be true).
IF (less likely to be true).
IF (most likely to be true).
ENDIF.
ENDIF.
ENDIF.
Example - IF...ELSEIF...ENDIF :
IF (most likely to be true).
ELSEIF (less likely to be true).
ELSEIF (least likely to be true).
ENDIF.
Example - AND:
IF (least likely to be true) AND
(most likely to be true).
ENDIF.
Example - OR:
IF (most likely to be true) OR
(least likely to be true).
4. CASE vs. nested Ifs
When testing fields "equal to" something, one can use either nested IFs or the CASE statement. CASE is better for two reasons: it is easier to read, and beyond about five nested IFs the CASE performs more efficiently.
5. MOVE statements
When records a and b have the exact same structure, it is more efficient to MOVE a TO b than to MOVE-CORRESPONDING a TO b.
MOVE BSEG TO *BSEG.
is better than
MOVE-CORRESPONDING BSEG TO *BSEG.
6. SELECT and SELECT SINGLE
When using the SELECT statement, study the key and always provide as much of the left-most part of the key as possible. If the entire key can be qualified, code a SELECT SINGLE not just a SELECT. If you are only interested in the first row or there is only one row to be returned, using SELECT SINGLE can increase performance by up to three times.
7. Small internal tables vs. complete internal tables
In general it is better to minimize the number of fields declared in an internal table. While it may be convenient to declare an internal table using the LIKE command, in most cases, programs will not use all fields in the SAP standard table.
For example:
Instead of this:
data: t_mara like mara occurs 0 with header line.
Use this:
data: begin of t_mara occurs 0,
matnr like mara-matnr,
end of t_mara.
8. Row-level processing and SELECT SINGLE
Similar to the processing of a SELECT-ENDSELECT loop, when calling multiple SELECT-SINGLE commands on a non-buffered table (check Data Dictionary -> Technical Info), you should do the following to improve performance:
o Use the SELECT into <itab> to buffer the necessary rows in an internal table, then
o sort the rows by the key fields, then
o use a READ TABLE WITH KEY ... BINARY SEARCH in place of the SELECT SINGLE command. Note that this only makes sense when the table you are buffering is not too large (this decision must be made on a case by case basis).
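The three steps above might look like this (the table KNA1, the select-option s_kunnr, and the work area wa are assumptions for illustration):

```abap
DATA: t_kna1 LIKE kna1 OCCURS 0 WITH HEADER LINE.

* 1. Buffer the needed rows in a single database access
SELECT * FROM kna1 INTO TABLE t_kna1
  WHERE kunnr IN s_kunnr.

* 2. Sort the buffer by the key fields
SORT t_kna1 BY kunnr.

* 3. Replace each SELECT SINGLE with a binary-search READ
READ TABLE t_kna1 WITH KEY kunnr = wa-kunnr BINARY SEARCH.
```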
9. READing single records of internal tables
When reading a single record in an internal table, the READ TABLE WITH KEY is not a direct READ. This means that if the data is not sorted according to the key, the system must sequentially read the table. Therefore, you should:
o SORT the table
o use READ TABLE WITH KEY BINARY SEARCH for better performance.
10. SORTing internal tables
When SORTing internal tables, specify the fields to be sorted by.
SORT ITAB BY FLD1 FLD2.
is more efficient than
SORT ITAB.
11. Number of entries in an internal table
To find out how many entries are in an internal table use DESCRIBE.
DESCRIBE TABLE ITAB LINES CNTLNS.
is more efficient than
LOOP AT ITAB.
CNTLNS = CNTLNS + 1.
ENDLOOP.
12. Performance diagnosis
To diagnose performance problems, it is recommended to use the SAP transaction SE30, ABAP/4 Runtime Analysis. The utility allows statistical analysis of transactions and programs.
13. Nested SELECTs versus table views
Since Release 4.0, Open SQL allows both inner and outer table joins. A nested SELECT loop can accomplish the same result, but its performance is very poor in comparison to a join. Hence, to improve performance (by up to a factor of 25) and reduce network load, either create a view in the Data Dictionary and select from that view, or code the SELECT using a join.
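As a sketch, an inner join replacing a nested SELECT over VBAK and VBAP (the field list and the 'TA' order type are only assumptions):

```abap
DATA: BEGIN OF t_orders OCCURS 0,
        vbeln LIKE vbak-vbeln,
        auart LIKE vbak-auart,
        posnr LIKE vbap-posnr,
        matnr LIKE vbap-matnr,
      END OF t_orders.

* One round trip instead of a SELECT loop per VBAK row
SELECT vbak~vbeln vbak~auart vbap~posnr vbap~matnr
  INTO TABLE t_orders
  FROM vbak INNER JOIN vbap
    ON vbak~vbeln = vbap~vbeln
  WHERE vbak~auart = 'TA'.
```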
14. If nested SELECTs must be used
As mentioned previously, performance can be dramatically improved by using views instead of nested SELECTs, however, if this is not possible, then the following example of using an internal table in a nested SELECT can also improve performance by a factor of 5x:
Use this:
form select_good.
data: t_vbak like vbak occurs 0 with header line.
data: t_vbap like vbap occurs 0 with header line.
select * from vbak into table t_vbak up to 200 rows.
select * from vbap into table t_vbap
for all entries in t_vbak
where vbeln = t_vbak-vbeln.
endform.
Instead of this:
form select_bad.
select * from vbak up to 200 rows.
select * from vbap where vbeln = vbak-vbeln.
endselect.
endselect.
endform.
Although using "SELECT...FOR ALL ENTRIES IN..." is generally very fast, you should be aware of the three pitfalls of using it:
Firstly, duplicate rows are automatically removed from the result set. Therefore, if you wish to ensure that no qualifying records are discarded, design the field list of the SELECT so that the retrieved rows contain no duplicates (normally, this means including all of the fields that comprise the table's primary key).
Secondly, if you were able to code "SELECT ... FROM <database table> FOR ALL ENTRIES IN TABLE <itab>" and the internal table <itab> is empty, then all rows from <database table> will be retrieved.
Thirdly, if the internal table supplying the selection criteria (i.e. internal table <itab> in the example "...FOR ALL ENTRIES IN TABLE <itab> ") contains a large number of entries, performance degradation may occur.
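The second pitfall is usually handled with an explicit emptiness check before the SELECT, reusing the tables from the example above:

```abap
* Guard against an empty driver table: without this check,
* FOR ALL ENTRIES would retrieve every row of VBAP
IF NOT t_vbak[] IS INITIAL.
  SELECT * FROM vbap INTO TABLE t_vbap
    FOR ALL ENTRIES IN t_vbak
    WHERE vbeln = t_vbak-vbeln.
ENDIF.
```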
15. SELECT * versus SELECTing individual fields
In general, use a SELECT statement specifying a list of fields instead of SELECT * to reduce network traffic and improve performance. For tables with only a few fields the improvement may be minor, but many SAP tables contain more than 50 fields when the program needs only a few; in that case the performance gain can be substantial. For example:
Use:
select vbeln auart vbtyp from vbak
into (vbak-vbeln, vbak-auart, vbak-vbtyp)
where ...
Instead of using:
select * from vbak where ...
16. Avoid unnecessary statements
There are a few cases where one command is better than two. For example:
Use:
append <tab_wa> to <tab>.
Instead of:
<tab> = <tab_wa>.
append <tab> (modify <tab>).
And also, use:
if not <tab>[] is initial.
Instead of:
describe table <tab> lines <line_counter>.
if <line_counter> > 0.
17. Copying or appending internal tables
Use this:
<tab2>[] = <tab1>[]. (if <tab2> is empty)
Instead of this:
loop at <tab1>.
append <tab1> to <tab2>.
endloop.
However, if <tab2> is not empty and should not be overwritten, then use:
append lines of <tab1> [from index1] [to index2] to <tab2>.
-
Performance tuning SELECTs: Can I force a TRUE db read every time?
Good day everyone!
I've been programming in ABAP for almost 6 years now, and I'd like to begin learning more about performance tuning with respect to database performance.
More specifically, I'm testing some things in a particular SELECT in a report program we have that is timing out in the foreground because of the SELECT. When I first run the program, the SELECT goes against the database, as we all know. Subsequent runs, however, use the buffered data, so the response is a lot quicker and doesn't really reflect that first, initial database read.
Am I correct in assuming that I should be testing my various approaches and collecting performance runtimes against that initial, "true" database read? If that's the case, is there any way I can force the system to actually read the database instead of the buffered data? For those experienced with this kind of performance analysis and tuning, what's the best approach for someone very new to this area such as myself?
Thank you,
Dave
Hi Dave and Rob,
Just my two cents (and yes, I know this is already answered, but..).
I think you might be confusing 2 things: one is SAP buffering, and another one is caching at other levels (database, operating system, etc).
From what I understood Rob was talking mainly about SAP buffering. In that context it is true that if there is a first execution that loads the buffers (for example, some not so small fully buffered tables) then that is an atypical execution, and should be discarded. In real life you will never have execution times like those, except maybe on the very first execution on a monday morning.
Another thing is database caching. If you execute a report twice with exactly the same parameters then you might not be actually making physical reads in the second execution. This second execution will be very fast, but that will not be simulating real life: no user wants a report to be fast the second time you execute it with exactly the same parameters.
To avoid this in Oracle you can flush the so-called SGA, but that is not so useful and it will probably not get you closer to what happens in real life.
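For reference, on Oracle 10g and later the instance caches can be flushed from a SYSDBA session; this is a sketch for test systems only, not something to run in production:

```sql
-- Flush the buffer cache (forces physical reads on the next run)
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- Flush the shared pool (forces hard parses of SQL on the next run)
ALTER SYSTEM FLUSH SHARED_POOL;
```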
So what to do? In doubt, measure it several times, with different parameters, and probably exclude the extreme values.
Regards,
Rui Dantas -
I want to do 1Z0-054 11g Performance Tuning
I want to do 1Z0-054 11g Performance Tuning
for this first needed 1Z0-007 or 1Z0-047 or 1Z0-051, then 1Z0-052 & 1Z0-054
1) Which should be best in 1Z0-007 or 1Z0-047 or 1Z0-051
2) Recommended books... where will get course materials.
Thanks
Harsh857317 wrote:
I want to do 1Z0-054 11g Performance Tuning
for this first needed 1Z0-007 or 1Z0-047 or 1Z0-051, then 1Z0-052 & 1Z0-054
1) Which should be best in 1Z0-007 or 1Z0-047 or 1Z0-051
2) Recommended books... where will get course materials.
Thanks
Harsh
I always think it is useful to be precise about these things:
You can take the exam +1Z0-054 11g Performance Tuning+ whenever you like. Simply book it, turn up and take it.
If you pass, you've passed the exam! If you don't, it is an expensive re-take.
However .....
While you've passed the exam, you would not be an 'Oracle Database 11g Performance Tuning Certified Expert' until Oracle confirms that certification to you, which requires that you have satisfied all the requirements Oracle has detailed here:-
http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=287
This indicates one of two additional pre-requisites to the exam pass:-
: Either (1) : A prior DBA 11g Certification.
: Or (2): Verification of your attendance at the Oracle University 'Oracle Database 11g: Performance Tuning' training course.
Above you have sort of indicated the 11g DBA OCP credential requirements (omitting the mandatory training requirement) ( http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=198)
The journey to 'Oracle Database 11g Performance Tuning Certified Expert' is a long one, and I suggest most people should look at the basics first. Begin by becoming familiar with http://certification.oracle.com.
If you wish to continue, I would consider focusing on 1Z0-051 as a first exam, or possibly attempting to find suitable Workforce Development Program training (assuming Oracle University is too expensive). Please be aware that, IMHO, Oracle is not responsible for WDP institutes and I suspect some are very bad.
https://workforce.oracle.com/pls/wdp/new_home.main