SQL Performance tuning
Hello everybody,
I have a problem with a search application. Users can search in different tables, and those tables have an average of 15 million records. The users can add "search lines".
Here is an example of a search line: typename = 'Project', attributename = 'name', value LIKE '%demo%'
When a user adds 3 or more search lines, the application becomes slow.
It's running on 10.2.0.10, and the hardware is not that important here because it will change all the time: the search application will be installed on the intranet of different companies.
The problem arises when a lot of joins happen. I tried materialized views; they are faster, but the queries will be different all the time, so when a lot of users query and a lot of materialized views are created, it may slow down the server.
Here is example SQL code that the application dynamically creates; does anyone have a suggestion to optimize this?
All the tables used in this query have indexes on all their columns. Maybe too many indexes slow down the query?
SELECT q2.tbl_id
FROM
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Project'
AND attributename = 'name'
AND LOWER(VALUE) LIKE '%demo%')
q1,
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Section'
AND attributename = 'name'
AND LOWER(VALUE) LIKE '%section%')
q2,
(SELECT tbl_search_attstring.tbl_id
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Dir'
AND attributename = 'default label'
AND LOWER(VALUE) LIKE '%a%')
q3,
tbl_relationship rel
WHERE q1.tbl_id = rel.parent
AND q3.tbl_id = rel.parent
AND rel.child = q2.tbl_id
Thanks in advance!
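One rewrite worth benchmarking is to drive the query from tbl_relationship and turn each "search line" into an EXISTS subquery, so the three inline views don't have to be fully materialized and joined. A sketch using the table and column names from the post (attributename and value are assumed to live on tbl_search_attstring, as the original query implies):

```sql
SELECT rel.child AS tbl_id
FROM tbl_relationship rel
WHERE EXISTS (SELECT 1                      -- 'Project' search line on the parent
                FROM tbl_search_attstring s
                JOIN tbl_search_atttype t ON t.typeid = s.typeid
               WHERE s.tbl_id = rel.parent
                 AND t.typename = 'Project'
                 AND s.attributename = 'name'
                 AND LOWER(s.value) LIKE '%demo%')
  AND EXISTS (SELECT 1                      -- 'Dir' search line on the same parent
                FROM tbl_search_attstring s
                JOIN tbl_search_atttype t ON t.typeid = s.typeid
               WHERE s.tbl_id = rel.parent
                 AND t.typename = 'Dir'
                 AND s.attributename = 'default label'
                 AND LOWER(s.value) LIKE '%a%')
  AND EXISTS (SELECT 1                      -- 'Section' search line on the child
                FROM tbl_search_attstring s
                JOIN tbl_search_atttype t ON t.typeid = s.typeid
               WHERE s.tbl_id = rel.child
                 AND t.typename = 'Section'
                 AND s.attributename = 'name'
                 AND LOWER(s.value) LIKE '%section%');
```

The leading-wildcard LIKE predicates will still defeat ordinary B-tree indexes, though, so the per-attribute probes remain expensive whatever the join shape.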
Hi,
My suggestion is that you build your query dynamically in PL/SQL.
In the meantime, please try this:
SELECT q2.tbl_id
FROM
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename IN ('Project', 'Dir')
AND attributename IN ('name', 'default label')
AND (LOWER(VALUE) LIKE '%demo%' OR LOWER(VALUE) LIKE '%a%'))
q13,
(SELECT tbl_search_attstring.tbl_id,
tbl_search_atttype.typename
FROM tbl_search_attstring,
tbl_search_atttype
WHERE tbl_search_attstring.typeid = tbl_search_atttype.typeid
AND typename = 'Section'
AND attributename = 'name'
AND LOWER(VALUE) LIKE '%section%')
q2,
tbl_relationship rel
WHERE q13.tbl_id = rel.parent
AND rel.child = q2.tbl_id
BR...
efe
Similar Messages
-
Help ! SQL Performance Tuning
Hi,
I have the following three SQL statements. I am using Oracle 8i.
====================================================================================================================
Statement1 : Insert
Insert Into DBSchema.DstTableName( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
( SELECT DbSchema.Seq.nextval, srColP, srColKey, srCol1, srCol2, nvl(srCol3,0), nvl(srCol4,0), SYSDATE
From
SrcTableName SRC
Where
srcColP IS NOT NULL AND
NOT EXISTS
(SELECT 1
From
DBSchema.DstTableName Dst
Where
SRC.srcColP = DST.dstColP AND SRC.srcColKey = DST.dstColKey ) )
====================================================================================================================
Statement2 : Update
Update DBSchema.DstTableName dst
SET ( dstCol1,dstCol2,dstCol3,dstCol4, dstCol5)
=
( SELECT srCol1, srCol2, nvl(srCol3,0), nvl(srCol4,0), SYSDATE
From
SrcTableName src
Where
src.srcColP = dst.dstColP AND SRC.srcColKey = DST.dstColKey )
WHERE EXISTS (
SELECT
1
From
SrcTableName SRC
Where
SRC.srcColP = DST.dstColP AND SRC.srcColKey = DST.dstColKey ) ;
====================================================================================================================
Statement3 : Delete
Delete
FROM DBSchema.DstTableName DST
Where Exists (
SELECT
1
From
SrcTableName SRC
Where
src.srcColP = dst.dstColP )
AND NOT EXISTS (
SELECT
1
From
SrcTableName SRC
Where
src.srcColP = dst.dstColP AND SRC.srcColKey = DST.dstColKey ) ;
====================================================================================================================
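As an aside: on Oracle 9i and later, Statement1 and Statement2 can usually be collapsed into a single MERGE that scans the source once (8i itself has no MERGE). A sketch using the column names above, keeping each statement's own column mapping; Statement3 still has to run as a separate DELETE:

```sql
MERGE INTO DBSchema.DstTableName dst
USING (SELECT srcColP, srcColKey, srCol1, srCol2,
              NVL(srCol3, 0) AS srCol3, NVL(srCol4, 0) AS srCol4
         FROM SrcTableName
        WHERE srcColP IS NOT NULL) src
ON (src.srcColP = dst.dstColP AND src.srcColKey = dst.dstColKey)
WHEN MATCHED THEN UPDATE
  SET dst.dstCol1 = src.srCol1,        -- same mapping as Statement2
      dst.dstCol2 = src.srCol2,
      dst.dstCol3 = src.srCol3,
      dst.dstCol4 = src.srCol4,
      dst.dstCol5 = SYSDATE
WHEN NOT MATCHED THEN INSERT           -- same mapping as Statement1
      (dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6)
VALUES (DBSchema.Seq.NEXTVAL, src.srcColP, src.srcColKey,
        src.srCol1, src.srCol2, src.srCol3, src.srCol4, SYSDATE);
```

Note that MERGE cannot update the columns used in the ON clause, which is fine here since dstColP and dstColKey are never changed.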
For the above three statements I have written the following procedure with a cursor.
Equivalent Cursor:
PROCEDURE DEMOPROC
is
loop_Count integer := 0;
insert_Count integer := 0;
CURSOR c1
IS
SELECT src.srcCol1,
src.srcCol2,
src.srcCol3,
src.srcCol4,
src.srcCol5,
src.srcCol6,
src.srcCol7,
src.srcCol8,
src.srcCol9,
src.srcColKey,
src.srcColP
FROM
SrcTableName SRC
Where src.srcColP IS NOT NULL
AND NOT EXISTS
(SELECT 1
From
DBSchema.DstTableName Dst
Where
src.srcColP = DST.dstColP AND src.srcColKey = DST.dstColKey ) ;
BEGIN
FOR r1 in c1 LOOP
Insert Into DBSchema.DstTableName( dstCol1, dstColP, dstColKey, dstCol2, dstCol3, dstCol4, dstCol5, dstCol6 )
values(DBSchema.Seq.nextval, r1.srcColP, r1.srcColKey, r1.srcCol1, r1.srcCol2, nvl(r1.srcCol3,0), nvl(r1.srcCol4,0), SYSDATE);
Update DBSchema.DstTableName dst
SET dst.dstCol1=r1.srcCol1 , dst.dstCol2=r1.srcCol2,
dst.dstCol3=nvl(r1.srcCol3,0),
dst.dstCol4=nvl(r1.srcCol4,0),
dst.dstCol5=SYSDATE
Where
r1.srcColP = dst.dstColP
AND
r1.srcColKey = DST.dstColKey ;
Delete
FROM DBSchema.DstTableName DST
Where
r1.srcColP = dst.dstColP ;
insert_Count := insert_Count + 1 ;
/* commit on a pre-defined interval */
if loop_Count > 999
then begin
commit;
loop_Count := 0;
end;
else loop_Count := loop_Count + 1;
end if;
end loop;
/* once the loop ends, commit and display the total number of records inserted */
commit;
dbms_output.put_line('total rows processed: '||TO_CHAR(insert_Count)); /*display insert count*/
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error '||TO_CHAR(SQLCODE)||': '||SQLERRM);
END;
====================================================================================================================
I am not sure whether this cursor is right or not; I have to verify it.
The delete and insert statements have the same WHERE NOT EXISTS clause in the original, so I included that in my cursor declaration, but I am not sure whether the update will work with that or not.
I have to use the three statements mentioned above for a few source and destination tables, each having many rows. How do I tune this?
What else can be done to improve the performance of the three statements mentioned above?
Any help will be highly appreciated.
Thanks !
Regards,
Hi Tom,
Thanks for replying.
I tried the three statements separately.
As described in my problem statement, I am moving data from one table to another. There are 50 tables like this. One of my procedures reads the source and destination table, creates these three statements dynamically (builds a PL/SQL block) and runs it.
As you have seen from the three statements above, I am not able to write the cursor properly for them. Someone suggested I write a cursor, so I started. But as you can see, my cursor won't satisfy all the "where not exists" and "where exists" conditions satisfactorily.
I only tried the insert inside the cursor loop and compared it with the plain insert statement. My procedure with the cursor was slower than the insert. But since I didn't try the three things together (update, delete and insert), I guess that in theory the right cursor would select data from the source table only once, which would improve performance. But I can't get that to work. The other way to solve this is to write two procedures, one having the insert and delete in it (as the conditions are the same in both) and the other having the update statement in it. But I guess this won't increase the performance that much?
Do you have any other ideas for writing the 3 DML statements above?
Any help would be highly appreciated.
Thanks! -
SQL Performance tuning with wildcard Like condition
Hi,
I have performance issue with SQL query.
When I use "where emp_name like '%im%' " the query takes longer than when I use "where emp_name like 'im%' ".
With the former condition the query takes 40 sec; with the latter it takes around 1.5 sec.
Both return almost the same number of rows. We have a function-based index created on the emp_name column.
With a wildcard at both ends the query goes for a full table scan.
Can anyone please suggest a way to reduce the query response time?
I even tried using hints, but it still goes for a full table scan instead of using the index.
Hi Mandark,
<I've rearranged your post>
When I am using "where emp_name like '%im%' " query is taking longer time than when I use "where emp_name like 'im%' " .
With wildcard at both ends query goes for full table scan.
I even tried using hints but still it is going for full table scan instead of using index.
With former condition query takes 40 sec , with later it takes around 1.5 sec.
Both return almost the same number of rows. We have a function-based index created on the emp_name column.
You are never going to be able to speed things up with a double wild card - or even one wild card at the beginning
(unless you have some weird index reversing the string).
With the double wild-card, the system has to search through the string character by character to see
if there are any letter "i's" and then see if that letter is followed by an "m".
That's using your standard B-tree index (see below).
Can anyone please suggest a way to reduce the query response time?
Yes, I think so - there is full-text indexing - see here:
http://www.dba-oracle.com/oracle_tips_like_sql_index.htm and
http://www.oracle-base.com/articles/9i/full-text-indexing-using-oracle-text-9i.php
AFAIK, it's an extra-cost option - but you can have fun finding out all that for yourself ;)
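A minimal sketch of the full-text approach, assuming Oracle Text is installed and emp_name is a plain VARCHAR2 column; the employees table and index name are made up:

```sql
-- Requires Oracle Text (the CTXSYS schema must exist)
CREATE INDEX emp_name_ctx ON employees (emp_name)
  INDEXTYPE IS CTXSYS.CONTEXT;

-- CONTAINS can use the Text index even with a leading wildcard
SELECT *
  FROM employees
 WHERE CONTAINS(emp_name, '%im%') > 0;
```

Note that a CONTEXT index is not maintained synchronously by default, so it has to be synchronized (e.g. with CTX_DDL.SYNC_INDEX) after DML on the table.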
HTH,
Paul... -
SQL Performance Tuning process
Hi All,
Recently an interviewer asked me a question: a user complains about a performance issue during query execution on the production database; how will you start your analysis (step by step) and find out why there is a performance issue, considering the points below?
1. On the production database you do not have system table / Oracle data dictionary access (very limited access; you only have table/view read permission).
2. Without disturbing the DBA.
I answered:
1) By generating an explain plan to see what the bottleneck for that query is (and whether any query optimization is required, using hints, query rewrite etc.).
2) By looking at when the table / index was last analyzed.
3) By asking the DBA for an AWR report, etc.
He was not satisfied. Can you all please help me with this, considering all those constraints?
Thanks in Advance.
RK885137 wrote:
Recently an interviewer asked me a question ... He was not satisfied.
He asked you a question that cannot really be answered specifically; such things are solved on the spot, and your answers are quite normal for the question. You ask why he was not satisfied - who knows what his reasons were?
Ramin Hashimzade -
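For what it's worth, one check that fits the stated constraints is the USER_* dictionary views: they are normally visible to every schema owner even when DBA_* and V$ views are locked down, and they answer the "when was this last analyzed" question directly. A sketch:

```sql
-- Visible with ordinary schema-owner privileges; no DBA views needed
SELECT table_name, num_rows, last_analyzed
  FROM user_tables
 ORDER BY last_analyzed NULLS FIRST;   -- never-analyzed tables first

SELECT index_name, table_name, last_analyzed
  FROM user_indexes;
```

Stale or missing statistics on the tables the slow query touches are one of the first things worth ruling out before escalating to the DBA.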
Performance tuning in PL/SQL code
Hi,
I am working on already existing PL/SQL code, written by someone else, for validation and conversion of data from a temporary table to a base table. The table usually has 3.5 million rows, and the procedure takes around 2.5 - 3 hrs to complete.
Can I enhance the PL/SQL code for better performance ? or, is this OK to take so long to process these many rows?
Thanks!
Yogini
Can I enhance the PL/SQL code for better performance? Probably you can enhance it.
Or, is this OK to take so long to process these many rows? It should take a few minutes, not several hours.
But please provide some more details like your database version etc.
I suggest TRACING the session that executes the PL/SQL code, with WAIT events, so you'll see where and on what the time is spent; you'll identify your problem statements very quickly (after you or your DBA have TKPROF'ed the trace file).
SQL> alter session set events '10046 trace name context forever, level 12';
SQL> execute your PL/SQL code here
SQL> exit
This will give you a .trc file in your udump directory on the server.
http://www.oracle-base.com/articles/10g/SQLTrace10046TrcsessAndTkprof10g.php
Also this informative thread can give you more ideas:
HOW TO: Post a SQL statement tuning request - template posting
as well as doing a search on 10046 at AskTom, http://asktom.oracle.com will give you more examples.
and reading Oracle's Performance Tuning Guide: http://www.oracle.com/pls/db102/to_toc?pathname=server.102%2Fb14211%2Ftoc.htm&remark=portal+%28Getting+Started%29 -
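If the trace then shows the time going into row-by-row DML rather than into one slow SQL statement, the usual fix is bulk binds. A minimal sketch, assuming Oracle 9iR2 or later; src_table and dst_table are hypothetical stand-ins for the real temporary and base tables:

```sql
DECLARE
  TYPE t_rows IS TABLE OF src_table%ROWTYPE;
  l_rows t_rows;
  CURSOR c IS SELECT * FROM src_table;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;  -- fetch in batches of 1000
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT                 -- one bulk bind per batch
      INSERT INTO dst_table VALUES l_rows(i);     -- assumes dst_table matches
  END LOOP;                                       -- src_table's column layout
  CLOSE c;
  COMMIT;
END;
/
```

Validation logic would go between the fetch and the FORALL; the point is that the DML is executed once per batch instead of once per row.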
We have to investigate a reporting solution where things are getting slow (it may be hardware, database design or network matters).
I have read a lot in MSDN and some books about performance tuning on SQL Server 2008 R2 (and others) but, frankly, I feel a little lost in all that stuff.
I am looking for practical steps in order to do the tuning - a recipe, a success story...
My (brain storm) Methodology should follow these steps:
Resource bottlenecks: CPU, memory, and I/O bottlenecks
tempdb bottlenecks
A slow-running user query : Missing indexes, statistics,...
Use performance counters: there are many; can someone give us a list of the most important ones?
How to fine-tune the SQL Server configuration
SSRS, SSIS configuration?
And then make the recommendations.
Thanks
"there is no Royal Road to Mathematics, in other words, that I have only a very small head and must live with it..."
Edsger W. Dijkstra
Hello,
There is no clearly defined step-by-step recipe for performance tuning. Your first goal is to find the cause, drilling down to the factor causing the slowness of SQL Server: it can be a poorly written query, missing indexes, outdated stats, a RAM crunch,
a CPU crunch, and so on and so forth.
I generally refer to the doc below for SQL Server tuning:
http://technet.microsoft.com/en-us/library/dd672789(v=sql.100).aspx
For SSIS tuning I refer to the docs below:
http://technet.microsoft.com/library/Cc966529#ECAA
http://msdn.microsoft.com/en-us/library/ms137622(v=sql.105).aspx
When I face an issue I generally look at wait stats; they give you an idea of what resource a query was waiting on.
-- By Jonathan Kehayias
SELECT TOP 10
wait_type ,
max_wait_time_ms wait_time_ms ,
signal_wait_time_ms ,
wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms ,
100.0 * wait_time_ms / SUM(wait_time_ms) OVER ( )
AS percent_total_waits ,
100.0 * signal_wait_time_ms / SUM(signal_wait_time_ms) OVER ( )
AS percent_total_signal_waits ,
100.0 * ( wait_time_ms - signal_wait_time_ms )
/ SUM(wait_time_ms) OVER ( ) AS percent_total_resource_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0 -- remove zero wait_time
AND wait_type NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT',
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT',
'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
ORDER BY wait_time_ms DESC
Use the link below to analyze wait stats:
http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
HTH
PS: for reporting services you can post in SSRS forum
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers -
Performance Tuning on PL/sql
Dear all,
I am not sure whether I am right to post this enquiry here, please correct me if I am wrong.
I have a SQL statement that runs for a long time and caused an 'out of temp space' error on the Oracle server. How can I tune my SQL for better performance using the 'EXPLAIN PLAN' command? And how can I find detailed information about the columns in the 'PLAN_TABLE' table?
Best Regards,
Marius
Read the "Database Performance Tuning Guide and Reference" for your version of Oracle (available online, right here).
Richard -
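To make the EXPLAIN PLAN part concrete, a typical session looks like the sketch below; the statement itself is a made-up placeholder. DBMS_XPLAN.DISPLAY formats PLAN_TABLE for you (on 9i and later), so you rarely need to query its columns by hand:

```sql
-- Load the plan for your statement into PLAN_TABLE
EXPLAIN PLAN FOR
SELECT *                       -- placeholder: put your real statement here
  FROM some_big_table
 WHERE some_col = :b1
 ORDER BY some_col;

-- Pretty-print it; DESC plan_table shows the raw columns if you need them
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

For an 'out of temp space' error in particular, look for large SORT or HASH JOIN steps in the plan output; a big TempSpc estimate there usually points at the culprit.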
HOW TO: Post a SQL statement tuning request - template posting
This post is not a question but, similar to Rob van Wijk's "When your query takes too long ..." post, should help to improve the quality of the requests for SQL statement tuning here on OTN.
On the OTN forum, tuning requests about single SQL statements are posted very often, but the information provided is rather limited, and therefore it's not that simple to provide meaningful advice. Instead of writing the same requests for additional information over and over again, I thought I'd put together a post that describes what a "useful" post for such a request should look like and what information it should cover.
I've also prepared very detailed step-by-step instructions how to obtain that information on my blog, which can be used to easily gather the required information. It also covers again the details how to post the information properly here, in particular how to use the \ tag to preserve formatting and get a fixed font output:
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
So again: This post here describes how a "useful" post should look like and what information it ideally covers. The blog post explains in detail how to obtain that information.
In the future, rather than requesting the same additional information and explaining how to obtain it, I'll simply refer to this HOW TO post and the corresponding blog post which describes in detail how to get that information.
*Very important:*
Use the \ tag to enclose any output that should have its formatting preserved as shown below.
So if you want to use fixed font formatting that preserves the spaces etc., do the following:
\ This preserves formatting
\
And it will look like this:
This preserves formatting
. . .
Your post should cover the following information:
1. The SQL and a short description of its purpose
2. The version of your database with 4-digits (e.g. 10.2.0.4)
3. Optimizer related parameters
4. The TIMING and AUTOTRACE output
5. The EXPLAIN PLAN output
6. The TKPROF output snippet that corresponds to your statement
7. If you're on 10g or later, the DBMS_XPLAN.DISPLAY_CURSOR output
The above mentioned blog post describes in detail how to obtain that information.
Your post should have a meaningful subject, e.g. "SQL statement tuning request", and the message body should look similar to the following:
*-- Start of template body --*
The following SQL statement has been identified to perform poorly. It currently takes up to 10 seconds to execute, but it's supposed to take a second at most.
This is the statement:
select
*
from
t_demo
where
type = 'VIEW'
order by
id;
It should return data from a table in a specific order.
The version of the database is 11.1.0.7.
These are the parameters relevant to the optimizer:
SQL>
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.1.0.7
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL>
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL>
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL>
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL>
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL>
SQL> select
2 sname
3 , pname
4 , pval1
5 , pval2
6 from
7 sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 01-30-2009 16:25
SYSSTATS_INFO DSTOP 01-30-2009 16:25
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 494,397
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Here is the output of EXPLAIN PLAN:
SQL> explain plan for
2 -- put your statement here
3 select
4 *
5 from
6 t_demo
7 where
8 type = 'VIEW'
9 order by
10 id;
Explained.
Elapsed: 00:00:00.01
SQL>
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 1390505571
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 60 | 0 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_DEMO | 1 | 60 | 0 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_DEMO | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("TYPE"='VIEW')
14 rows selected.
Here is the output of SQL*Plus AUTOTRACE including the TIMING information:
SQL> rem Set the ARRAYSIZE according to your application
SQL> set autotrace traceonly arraysize 100
SQL> select
2 *
3 from
4 t_demo
5 where
6 type = 'VIEW'
7 order by
8 id;
149938 rows selected.
Elapsed: 00:00:02.21
Execution Plan
Plan hash value: 1390505571
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 60 | 0 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_DEMO | 1 | 60 | 0 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | IDX_DEMO | 1 | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("TYPE"='VIEW')
Statistics
0 recursive calls
0 db block gets
149101 consistent gets
800 physical reads
196 redo size
1077830 bytes sent via SQL*Net to client
16905 bytes received via SQL*Net from client
1501 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
149938 rows processed
SQL>
SQL> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
The TKPROF output for this statement looks like the following:
TKPROF: Release 11.1.0.7.0 - Production on Mo Feb 23 10:23:08 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Trace file: orcl11_ora_3376_mytrace1.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
select
*
from
t_demo
where
type = 'VIEW'
order by
id
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1501 0.53 1.36 800 149101 0 149938
total 1503 0.53 1.36 800 149101 0 149938
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 88
Rows Row Source Operation
149938 TABLE ACCESS BY INDEX ROWID T_DEMO (cr=149101 pr=800 pw=0 time=60042 us cost=0 size=60 card=1)
149938 INDEX RANGE SCAN IDX_DEMO (cr=1881 pr=1 pw=0 time=0 us cost=0 size=0 card=1)(object id 74895)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1501 0.00 0.00
db file sequential read 800 0.05 0.80
SQL*Net message from client 1501 0.00 0.69
********************************************************************************
The DBMS_XPLAN.DISPLAY_CURSOR output:
SQL> -- put your statement here
SQL> -- use the GATHER_PLAN_STATISTICS hint
SQL> -- if you're not using STATISTICS_LEVEL = ALL
SQL> select /*+ gather_plan_statistics */
2 *
3 from
4 t_demo
5 where
6 type = 'VIEW'
7 order by
8 id;
149938 rows selected.
Elapsed: 00:00:02.21
SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
SQL_ID d4k5acu783vu8, child number 0
select /*+ gather_plan_statistics */ * from t_demo
where type = 'VIEW' order by id
Plan hash value: 1390505571
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
| 0 | SELECT STATEMENT | | 1 | | 149K|00:00:00.02 | 149K| 1183 |
| 1 | TABLE ACCESS BY INDEX ROWID| T_DEMO | 1 | 1 | 149K|00:00:00.02 | 149K| 1183 |
|* 2 | INDEX RANGE SCAN | IDX_DEMO | 1 | 1 | 149K|00:00:00.02 | 1880 | 383 |
Predicate Information (identified by operation id):
2 - access("TYPE"='VIEW')
20 rows selected.
I'm looking forward to suggestions on how to improve the performance of this statement.
*-- End of template body --*
I'm sure that if you follow these instructions, obtain the information described, and post it using proper formatting (don't forget about the \ tag), you'll receive meaningful advice very soon.
So, just to make sure you didn't miss this point: Use proper formatting!
If you think I missed something important in this sample post let me know so that I can improve it.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/
Alex Nuijten wrote:
...you missed the proper formatting of the Autotrace section ;-)
Alex,
can't reproduce, does it still look unformatted? Or are you simply kidding? :-)
Randolf
PS: Just noticed that it actually sometimes doesn't show the proper formatting although the code tags are there. Changing to the \ tag helped in this case, but it seems to be odd.
Edited by: Randolf Geist on Feb 23, 2009 11:28 AM
Odd behaviour of forum software -
[ADF-11.1.2] Proof of view performance tuning in oracle adf
Hello,
Take an example of : http://www.gebs.ro/blog/oracle/adf-view-object-performance-tuning-analysis/
It tells me perfectly how to tune VO to achieve performance, but how to see it working ?
For example: I Set Fetch size of 25, 'in Batch of' set to 1 or 26 I see following SQL Statement in Log
[1028] SELECT Company.COMPANY_ID, Company.CREATED_DATE, Company.CREATED_BY, Company.LAST_MODIFY_DATE, Company.LAST_MODIFY_BY, Company.NAME FROM COMPANY Company
as if it is fetching all the records from the table at once, no matter what the batch size is. If I am seeing 50 records in the UI at a time, then I would expect at least 2 SELECT statements fetching 26 records each if I set the batch size to 26... or at least 50 SELECT statements for a batch size of '1'.
Please tell me how to see view performance tuning working. How can one say that setting batch size = '1' is bad for performance?
Anandsagar,
why don't you just read up on http://download.oracle.com/docs/cd/E21764_01/core.1111/e10108/adf.htm#CIHHGADG
There are more factors influencing performance than just the query. Btw, indexing also helps to tune query performance.
Frank -
hi,
I have to do performance tuning for one program; it is taking 67000 secs for execution in the background and 1000 secs for some variants. It is an ALV report.
Please suggest how to proceed to change the code.
Performance tuning for data selection statements:
<b>http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm</b>
Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
CATT - Computer Aided Testing Too
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
ECATT - Extended Computer Aided testing tool.
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Just refer to these links...
performance
Performance
Performance Guide
performance issues...
Performance Tuning
Performance issues
performance tuning
performance tuning
You can go to transaction SE30 to get a runtime analysis of your program. Also try transaction SCI, which is the SAP Code Inspector.
Hello All,
We have created some reports using Interactive Reporting Studio. The volume of data in that Oracle database is huge, and some tables of the relational database have over 3-4 crore rows each. We created the .oce connection file using the 'Oracle Net' option; the Oracle client version is 10g. We earlier created pivots, charts and reports in those .bqy files but had to delete those wherever possible to decrease the processing time for generating those reports.
But deleting those from the file and retaining just the result section (the bare minimum part of the file) has not yet fully solved the performance issue. Even now, for some reports, the system gives an 'Out of Memory' error at processing time. The client PCs from which the reports are generated have 1 - 1.5 GB of memory. For some reports it even takes 1-2 hours to save the results after processing. In some cases the PCs hang during processing. When we extract the query of those reports as SQL and run it in TOAD/SQL*Plus, it does not take nearly as much time as in IR.
Would you please help us out with the aforesaid issue ASAP? Please share your views/tips/suggestions etc. with respect to performance tuning for IR. All replies would be highly appreciated.
Regards,
Raj
SQL*Plus & Toad are tools that send SQL and spool results; IR is a tool that sends a request to the database to run SQL and then fiddles with the results before the user is even told data has been received. You need to minimize the time IR spends manipulating results into objects the user isn't even asking for.
When a request is made to the database, Hyperion will wait until all of the results have been received. Once ALL of the results have been received, IR will make multiple passes to apply the sorts, filters and computed items existing in the Results section. For some unknown reason, those three steps are performed less efficiently than they would be in a table section. Only after all of the computed items have been calculated, all filters applied and all sorts sorted will IR start to calculate any reports, charts and pivots. After all that is done, the report stops processing and the data has been "returned".
To increase performance, you need to fine-tune your IR services and your BQY docs. Replicate your DAS on your server - it can only transfer 2 GB before it dies and restarts, and your requested document hangs. You can replicate the DAS multiple times, and should do so, to make sure there are enough resources available for any concurrent users to make the necessary requests and have data delivered to them.
To tune your bqy documents...
1) Your Results section MUST be free of any sorts, filters, or computed items. Create a staging table and put any sorts or local filters there. Move as many of your computed items as possible to your database request line and ask the database to make the calculation (either directly or through stored procedures) so you are not at the mercy of the client machine. Any computed items that cannot be moved to the request line need to be put on your new staging table.
2) Ask the users to choose filters. Programmatically build dynamic filters based on what the user is looking for. The goal is to cast a net only as big as the user needs so you are not bringing back unnecessary data. Otherwise, you will bring your server and client machines to a grinding halt.
3) Halt any report pagination. Build your reports from their own tables and put a dummy filter on the table that forces 0 rows into it until the report is invoked. Hyperion will paginate every report BEFORE it even tells the user it has results, so this prevents the user from waiting an hour while thousands of pages are paginated across multiple reports.
4) Halt any object rendering until requested. Same as above - create a system programmatically for the user to tell the BQY what they want, so they are not waiting forever for a pivot and two reports to compile and paginate when all they want is a chart.
5) Save compressed documents.
6) Unless the document can be run as a job, there should be NO results stored with it; but if you do save results with the document, store the calculations too, so you at least don't have to wait for them to be applied again.
7) Remove all duplicate images and keep the image file size small.
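Points 2 and 3 above can be sketched roughly as follows; a minimal, hypothetical illustration in Python (the function, column names and bind-variable style are assumptions for illustration, not part of the IR object model or any real schema):

```python
# Sketch of points 2 and 3: build a WHERE clause only from the filters
# the user actually chose, and fall back to a dummy predicate that
# forces 0 rows until the user has supplied something.
# All names here are illustrative, not a real API.

def build_where(filters):
    """filters: dict of column -> value chosen by the user."""
    if not filters:
        # dummy filter: keeps the table at zero rows until invoked
        return "WHERE 1 = 0", []
    clauses, binds = [], []
    for i, (column, value) in enumerate(filters.items(), start=1):
        clauses.append(f"{column} = :{i}")  # bind variables, not literals
        binds.append(value)
    return "WHERE " + " AND ".join(clauses), binds

where, binds = build_where({"region": "EMEA", "year": 2009})
print(where)   # WHERE region = :1 AND year = :2
print(binds)   # ['EMEA', 2009]
```

The same idea applies whether the clause is pushed to the database request line or kept as a local filter: the net is only as big as the user asked for, and the `1 = 0` dummy predicate keeps a report table empty until the report is actually invoked.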
Hope this helps!
PS: I forgot to mention - aside from Results sections, in documents where the results are NOT saved, additional table sections take up very, very small amounts of file size, and, as long as there are no excessively large images, the same is true for reports, pivots and charts. Additionally, file size only matters when the user is requesting the document. It is never an issue while the user is processing the report, because the document has already been delivered to them and cached (in Workspace and in the web client).
Edited by: user10899957 on Feb 10, 2009 6:07 AM -
hi,
I have developed a report program. It is taking too much time to fetch the records, so what steps should I consider to improve the performance? Urgent, please.
Hi,
Check this links
Performance tuning for Data Selection Statement & Others
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
http://www.sapdevelopment.co.uk/perform/performhome.htm
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
1. Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
http://www.cba.nau.edu/haney-j/CIS497/Assignments/Debugging.doc
http://help.sap.com/saphelp_erp2005/helpdata/en/b3/d322540c3beb4ba53795784eebb680/frameset.htm
2. Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
3. SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
4. Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
5. Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
6. Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
http://sap.genieholdings.com/abap/performance.htm
http://www.dbis.ethz.ch/research/publications/19.pdf
Reward Points if it is Useful.
Thanks,
Manjunath MS -
Oracle Application Performance Tuning
Hello all,
Please forgive me if I am asking this in the wrong section.
I am a software engineer with 2 years of experience. I have been working as a performance tuning resource for Oracle EBS in my company. The work has mostly involved SQL tuning, dealing with indexes, etc. In this time I took an interest in DBA work, and I completed my OCA in Oracle 10g. Now my company is giving me an opportunity to move into an Application DBA team, and I am totally confused about it.
Becoming an Apps DBA will mean that the effort I put into the 10g certification will be of no use; there are other papers for Apps DBA certification. Should I stay put in performance tuning and wait for a core DBA chance, or should I accept the Apps DBA opportunity?
Also, does my experience in SQL performance tuning hold any value as such, without exposure to DBA activities?
Kindly guide me in this regard, as I am very confused at this juncture.
Regards,
Jithin
Jithin wrote:
Hello bigdelboy , Thank you for your reply.
Yes, By oracle Apps DBA I meant Oracle EBS.
Clearing 1Z0-046 is an option. However, my doubt is: will clearing both the Admin I and Admin II exams be of any help without practical experience? The EBS DBA work involves support, patching, cloning, new node setup, etc.
Also, is my performance tuning experience going to help me in any way in the journey forward as a DBA/ EBS DBA?
Thank you for your valuable time.
Regards,
Jithin Sarath
The way I read it is this.
People notice you are capable of understanding and performance tuning SQL. And you must have some knowledge of Oracle EBS. And in fact you also have a DBA OCA.
So there is a 98% + chance you can be made into a good Oracle Apps DBA (and core DBA). If I was in their position I'd want you on my team too; this is the way we used to bring on DBA's in the old days before certification and it has a lot of merit still. Okay you can only do limited tasks at first ... but the number of taks you can do will increase dramatically over a number of months.
I would imagine the Oracle Apps DBA team will be delighted to have someone on board who can performance-tune SQL, which can sometimes seem more like an art than a science. The patching etc. can be taught in small doses. If they are a good team they won't try to give you everything at once; they'll get you to learn one procedure at a time.
And remember: if in doubt, ask; and if you don't understand, ask again. And be safe rather than sorry on actions that could be critical.
If you're worried about Linux: Linux Recipes for Oracle DBAs (Apress) might be a good book to read, but could be expensive in India.
Likewise, Oracle Applications DBA Field Guide may suit, and even RMAN Recipes for Oracle Database 11g: A Problem-Solution Approach may have some value if you need to get up to speed quickly in those areas.
These are not perfect, but they sometimes consolidate the information neatly; however, only buy them if you feel they are cheap enough. You may well buy them and feel disappointed. (These all happen to be from Apress, who have produced a number of good books ... they've also published some rubbish as well.)
And go over the 2-day DBA examples and the Linux install examples on OTN as well ... they are free, compared to the books I mentioned.
Rgds -bigdelboy. -
Hi,
Our organization has also recently encountered an Oracle DB performance issue, and I am searching for information to help us diagnose the problem and find its root cause.
I had read through this old article, where I found a number of points being highlighted that were a good start for us. On this point, we would like to understand in detail what AIX 5.3 with asynchronous I/O turned on has to do with Oracle DB performance. Unfortunately the URL given in the article is no longer valid (possibly because it dates back to 2006): I tried http://www.oracle.com/technology/pub/articles/lewis_cbo.html, but the page is redirected to http://www.oracle.com/technetwork/articles/index.html.
May I know, would you be able to provide me the new URL path for the mentioned AIX 5.3 asynchronous I/O information?
Our environment:
1) OS AIX 5.3
2) Oracle DB 10g
(If I provide the AWR report, would you be in a better position to help us?)
Thank you in advance.
Best Regards,
KaiLoon Kok
Edited by: 991132 on Feb 28, 2013 11:19 PM
Hi,
performance tuning is a very broad topic and the causes of degradation may be many.
There are several considerations that you have to make, such as:
- which kind of application is running on the DB
- which kind of optimizer is used
- the striping of the data files across the disks
- the DB block size
- the waits that the DB encounters
- many other things, like SORT_AREA_SIZE, HASH_AREA_SIZE, etc.
To begin your analysis, use the utlbstat.sql and utlestat.sql scripts provided in the $ORACLE_HOME/rdbms/admin directory and check the condition of the DB.
Increasing the SGA, as you have done, may not be the solution; sometimes it may instead decrease overall performance.
Also check in which cases the DB is slow, e.g. executing some particular procedure, executing a particular query, ....
Try to characterize the DB's behaviour better and then resubmit your problem.
Bye Max
-
Hi All,
I have been given a task to tune an Oracle 10g database. I am really new to memory tuning, although I have done some SQL tuning earlier. My server is in a remote location and I cannot log in to the Enterprise Manager GUI. I will be using SQL Developer or PL/SQL Developer for this. My application is a web-based application.
I have the following queries in this respect:
- How should I start? Should I use tkprof or AWR?
- How do I enable these tools?
- How do I view their reports?
- What should I check in these reports?
- Will just increasing RAM improve performance, or should we also increase the hard disk?
- What are CPU cost and I/O?
Please help.
Thanks & Regards.
Here is something you might try as a starting point:
Capture the output of the following (to a table, send to Excel, or spool to a file):
SELECT
STAT_NAME,
VALUE
FROM
V$OSSTAT
ORDER BY
STAT_NAME;
SELECT
STAT_NAME,
VALUE
FROM
V$SYS_TIME_MODEL
ORDER BY
STAT_NAME;
SELECT
EVENT,
TOTAL_WAITS,
TOTAL_TIMEOUTS,
TIME_WAITED
FROM
V$SYSTEM_EVENT
WHERE
WAIT_CLASS != 'Idle'
ORDER BY
EVENT;
Wait a known amount of time (5 minutes or 10 minutes).
Execute the above SQL statements again.
Subtract the starting values from the ending values, and post the results for any items where the difference is greater than 0. The Performance Tuning Guide (especially the 11g version) will help you understand what each item means.
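The subtract-and-compare step above can be sketched as follows; a minimal illustration assuming each snapshot has been captured into a dictionary keyed by statistic name (the statistic names mirror V$SYS_TIME_MODEL, but the sample values are made up):

```python
# Sketch: diff two performance snapshots taken a known interval apart.
# Each snapshot maps a statistic name to its cumulative value, as
# captured from views such as V$SYS_TIME_MODEL or V$SYSTEM_EVENT.

def snapshot_delta(start, end):
    """Return only the statistics that changed during the interval."""
    deltas = {}
    for name, end_value in end.items():
        diff = end_value - start.get(name, 0)
        if diff > 0:  # post only items where the difference is > 0
            deltas[name] = diff
    return deltas

start = {"DB time": 1_500_000, "DB CPU": 900_000,
         "sql execute elapsed time": 800_000}
end = {"DB time": 2_400_000, "DB CPU": 1_100_000,
       "sql execute elapsed time": 800_000}

print(snapshot_delta(start, end))
# {'DB time': 900000, 'DB CPU': 200000}
```

Because the V$ views hold cumulative counters since instance startup, only the deltas over a known interval are meaningful; statistics that did not move are dropped, which keeps the output far shorter than a full Statspack report.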
To repeat what Ed stated, do not randomly change parameters (even if someone claims that they have successfully made the parameter change 100s of times).
You could also try a Statspack report, but it might be better to start with something which produces less than 70 pages of output.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.