Performance Tuning Issues: UNION and Stored Outlines
Hi,
I have two questions,
Firstly I have read this:
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_1016.htm#i35699
What I understand is that using UNION ALL is better than UNION.
The ALL in UNION ALL is logically valid because of this exclusivity. It allows the plan to be carried out without an expensive sort to rule out duplicate rows for the two halves of the query.
Can someone explain the above sentence to me?
Secondly, my Oracle Database 10g is set to FIRST_ROWS_1. How can stored outlines help in reducing I/O cost and response time in general? Please explain.
Thank you,
Adith
Union ALL and Union
SQL> select 1, 2 from dual
union
select 1, 2 from dual;
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 6 (67)| 00:00:01 |
| 1 | SORT UNIQUE | | 2 | 6 (67)| 00:00:01 |
| 2 | UNION-ALL | | | | |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
| 4 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
11 rows selected.
SQL>select 1, 2 from dual
union all
select 1, 2 from dual;
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 (50)| 00:00:01 |
| 1 | UNION-ALL | | | | |
| 2 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | 2 (0)| 00:00:01 |
10 rows selected.
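The plans above show the extra SORT UNIQUE step that UNION pays for. To see the difference in the result set itself, not just in the plan, compare:

```sql
-- UNION removes duplicates (the SORT UNIQUE step), so this returns one row:
SELECT 1 AS a FROM dual
UNION
SELECT 1 AS a FROM dual;

-- UNION ALL keeps both rows and skips the sort, so this returns two rows:
SELECT 1 AS a FROM dual
UNION ALL
SELECT 1 AS a FROM dual;
```

When you know the two halves cannot produce duplicates (or duplicates are acceptable), UNION ALL gives the same answer without the sort.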
Adith
Similar Messages
-
Performance tuning in BPEL and ESB
Hi,
Can anyone tell me how to do performance tuning in BPEL and ESB?
How to create WEB SERVICES in BPEL?

Hi,
Performance tuning in BPEL and ESB.
This is a very big topic; I can give you two points here:
1. In BPEL we should avoid the use of duplicate variables. The best way to do this is, whenever we create a new variable, to ask whether we can reuse a variable from inside the process. For example, when creating the input/output variable in an Invoke activity, we should check whether some existing variable can be used instead of creating a new one.
2. All the DB-related operations should be performed in one single composite.
How to create WEB SERVICES in BPEL
Not sure what you want to ask here, as BPEL is itself a webservice.
-Yatan -
How much performance is impacted if the Stored outline is used globally?
Hi,
We are having a problem with one of our queries and are trying to use a stored outline to freeze its execution plan. The vendor is telling us that it should be set globally (ALTER SYSTEM, not SESSION), but we disagree because this would have a negative effect on our overall db performance. We asked to enable it per session only, using a LOGON trigger filtered by program/username, instead of system-wide, like below:
Vendor preference: ALTER SYSTEM SET CREATE_STORED_OUTLINES=TRUE
We prefer: ALTER SESSION SET CREATE_STORED_OUTLINES=TRUE
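As a minimal sketch of the session-scoped approach (the trigger name and username filter are illustrative, and this assumes the owner has the privileges needed to create a database logon trigger):

```sql
-- Hypothetical logon trigger: capture stored outlines only for sessions
-- of the application user, instead of setting it system-wide.
CREATE OR REPLACE TRIGGER trg_outline_logon  -- name is illustrative
AFTER LOGON ON DATABASE
BEGIN
  -- APP_USER is a hypothetical application schema; adjust the filter
  -- to the program/username combination you actually want to target.
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APP_USER' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET CREATE_STORED_OUTLINES = TRUE';
  END IF;
END;
/
```

This limits outline capture to the sessions that actually run the problem query, leaving the rest of the instance untouched.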
BTW, we are on HP UX 10.2.0.3. Any recommendations or suggestions would be greatly appreciated. Thank you so much.
Rich.

No Oracle version number.
No information as to the vendor or the product.
No information indicating why a stored outline might be of value in one or many cases.
And most importantly ... no evidence of testing to see if it really makes things better or worse.
Throw this into a test environment and validate your prejudices. There is no way we can possibly
know and there are no general rules when it comes to tuning other than the fact that only testing
on your hardware with your system has value. -
Hi folks,
I am having a problem with performance tuning... Below is a sample query:
SELECT /*+ PARALLEL (K 4) */ DISTINCT ltrim(rtrim(ibc_item)), substr(IBC_BUSINESS_CLASS, 1, 1)
FROM AAA K
WHERE ltrim(rtrim(ibc_item)) NOT IN
(select /*+ PARALLEL (II 4) */ DISTINCT ltrim(rtrim(THIRD_MAINKEY)) FROM BBB II
WHERE SECOND_MAINKEY = 3
UNION
SELECT /*+ PARALLEL (III 4) */ DISTINCT ltrim(rtrim(BLN_BUSINESS_LINE_NAME)) FROM CCC III
WHERE BLN_BUSINESS_LINE = 3)
The above query has a cost of 460 million. I tried creating an index, but Oracle is not using it, as a full-table scan looks better. (I too feel the full-table scan is best, since 90% of the rows in the table are used.)
After using the parallel hint, the cost goes down to 100 million.
Is there any way to decrease the cost further?
Thanks in advance for your help!

Be aware too, Nalla, that the PARALLEL hint will rule out the use of an index if Oracle adheres to it.
This is what I would try:
SELECT /*+ PARALLEL (K 4) */ DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = TRIM(K.ibc_item))
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = TRIM(K.ibc_item))

But I don't like this at all: TRIM(K.ibc_item). Also, you never need to use DISTINCT with NOT IN or NOT EXISTS.
Try this:
SELECT DISTINCT TRIM(ibc_item), substr(IBC_BUSINESS_CLASS, 1,1)
FROM AAA K
WHERE NOT EXISTS (
SELECT 1
FROM BBB II
WHERE SECOND_MAINKEY = 3
AND TRIM(THIRD_MAINKEY) = K.ibc_item)
AND NOT EXISTS (
SELECT 1
FROM CCC III
WHERE BLN_BUSINESS_LINE = 3
AND TRIM(BLN_BUSINESS_LINE_NAME) = K.ibc_item)

This may not work though, since you may have whitespace in K.ibc_item. -
Performance tuning: lite sessions and local ServletContext
I have been doing some research on iPlanet performance tuning. In our
current production environment (iAS6.0 SP1B, iWS4.1 SP2 on Solaris), since
we don't use clustering there should be a couple of performance improvements
we can make immediately:
1. Use lite sessions (<session-impl>lite</session-impl> in ias-web.xml) - I
believe that if you use lite sessions, the session data is stored in the kjs
process space as opposed to the kxs process space. This, of course, means
that if a kjs dies, the users on it will lose their session information, but
it will provide a performance improvement by reducing kxs/kjs communication.
2. Use local ServletContexts (<distributable>false</distributable> in
web.xml) - This should cause the ServletContext to only be stored in the
originating JVM. So again, if a kjs dies, the user will lose their
ServletContext but again we will get a performance improvement by reducing
kxs/kjs communication.
What I want to understand is how our load balancing configuration will
affect our production environment if we use this configuration. Right now
we use sticky load balancing on all our servlets but we don't have our JSPs
registered and therefore sticky load balancing cannot always be trusted to
return users to the iAS they came from. We make up for this by using
hardware load balancing that keeps the majority of our users sticky.
However, using lite sessions and local ServletContexts will require that a
user not only stick to an iAS, but to a specific kjs as well. Using sticky
load balancing would ensure that, but since we also rely on our hardware
load balancers, could they create a problem? If a user gets sent back to
the iAS they came from by our hardware load balancers, will the kxs process
be smart enough to return them to the kjs they came from? If so, then I
think that means that we can safely switch to lite sessions and local
ServletContexts, but if not, I think many users will lose their sessions.
Thanks,
Linc

Please follow through this link for your answers:
http://developer.iplanet.com/viewsource/char_tuningias/index.jsp
Thanks
Shital Patel
-
MS SQL Server 7 - Performance of Prepared Statements and Stored Procedures
Hello All,
Our team is currently tuning an application running on WL 5.1 SP 10 with a MS
SQL Server 7 DB that it accesses via the WebLogic jConnect drivers. The application
uses Prepared Statements for all types of database operations (selects, updates,
inserts, etc.) and we have noticed that a great deal of the DB host's resources
are consumed by the parsing of these statements. Our thought was to convert many
of these Prepared Statements to Stored Procedures with the idea that the parsing
overhead would be eliminated. In spite of all this, I have read that because
of the way that the jConnect drivers are implemented for MS SQL Server, Prepared
Statements are actually SLOWER than straight SQL because of the way that parameter
values are converted. Does this also apply to Stored Procedures??? If anyone
can give me an answer, it would be greatly appreciated.
Thanks in advance!

Joseph Weinstein <[email protected]> wrote:
Hi. Stored procedures may help, but you can also try MS's new free type-4
driver,
which does use DBMS optimizations to make PreparedStatements run faster.
Joe
Thanks Joe! I also wanted to know if setting the statement cache (assuming that
this feature is available in WL 5.1 SP 10) will give a boost for both Prepared Statements
and stored procs called via Callable Statements. Pretty much all of the Prepared
Statements that we are replacing are executed from entity bean transactions.
Thanks again -
Performance tuning issues -- please help
Hi tuning gurus,
This query works fine for a small number of rows, e.g.:
where ROWNUM <= 10 )
where rnum >= 1;
but takes a lot of time as we increase the ROWNUM window, e.g.:
where ROWNUM <= 10000 )
where rnum >= 9990;
Results are posted below.
Please suggest.
oracle version -Oracle Database 10g Enterprise Edition
Release 10.2.0.1.0 - Prod
os version red hat enterprise linux ES release 4
Also, statistics differ when we use the table and its view.
Results for the view v$mail:
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from v$mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.84
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
294 recursive calls
0 db block gets
8715 consistent gets
8669 physical reads
0 redo size
7060 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from v$mail;
Elapsed: 00:00:00.17
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
8 recursive calls
0 db block gets
2171 consistent gets
2057 physical reads
260 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Results for the original table mail:
[select * from
( select a.*, ROWNUM rnum from
( SELECT M.MAIL_ID, MAIL_FROM, M.SUBJECT
AS S1,CEIL(M.MAIL_SIZE) AS MAIL_SIZE,
TO_CHAR(MAIL_DATE,'dd Mon yyyy hh:mi:ss
am') AS MAIL_DATE1, M.ATTACHMENT_FLAG,
M.MAIL_TYPE_ID, M.PRIORITY_NO, M.TEXT,
COALESCE(M.MAIL_STATUS_VALUE,0),
0 as email_address,LOWER(M.MAIL_to) as
Mail_to, M.Cc, M.MAIL_DATE AS MAIL_DATE,
lower(subject) as subject,read_ipaddress,
read_datetime,Folder_Id,compose_type,
interc_count,history_id,pined_flag,
rank() over (order by mail_date desc)
as rnk from mail M WHERE M.USER_ID=6 AND M.FOLDER_ID =1) a
where ROWNUM <= 10000 )
where rnum >=9990;]
result :
11 rows selected.
Elapsed: 00:00:03.21
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=14735 Card=10000 B
ytes=142430000)
1 0 VIEW (Cost=14735 Card=10000 Bytes=142430000)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=14735 Card=14844 Bytes=211230120)
4 3 WINDOW (SORT) (Cost=14735 Card=14844 Bytes=9114216)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'MAIL' (TABLE) (C
ost=12805 Card=14844 Bytes=9114216)
6 5 INDEX (RANGE SCAN) OF 'FOLDER_USERID' (INDEX) (C
ost=43 Card=14844)
Statistics
1 recursive calls
119544 db block gets
8686 consistent gets
8648 physical reads
0 redo size
13510 bytes sent via SQL*Net to client
4084 bytes received via SQL*Net from client
41 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select count(*) from mail;
Elapsed: 00:00:00.34
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=494 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'FOLDER_USERID' (INDEX) (Cost=
494 Card=804661)
Statistics
1 recursive calls
0 db block gets
2183 consistent gets
2062 physical reads
72 redo size
352 bytes sent via SQL*Net to client
504 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Thanks n regards
PS: sorry, I could not preserve the formatting.
Message was edited by:
Cool_Jr.DBA

Just to answer the OP's fundamental question:
The query starts off quick (rows between 1 and 10)
but gets increasingly slower as the start of the
window increases (eg to row 1000, 10,000, etc).
The original (unsorted) query would get first rows
very quickly, but each time you move the window, it
has to fetch and discard an increasing number of rows
before it finds the first one you want. So the time
taken is proportional to the rownumber you have
reached.
With Charles's correction (which is unavoidable), the
entire query has to be retrieved and sorted
before the rows you want can be returned. That's
horribly inefficient. This technique works for small
sets (eg 10 - 1000 rows) but I can't tell you how
wrong it is to process data in this way especially if
you are expecting lacs (that's 100,000s isn't
it) of rows returned. You are pounding your database
simply to give you the option of being able to go
back as well as forwards in your query results. The
time taken is proportional to the total number of
rows (so the time to get to the end of the entire set
is proportional to the square of the total
number of rows).
If you really need to page back and forth
through large sets, consider one of the following
options:
1) saving the set (eg as a materialised view or in a
temp table - and include "row number" as an indexed
column)
2) retrieve ALL the rowids into an array/collection
in a single pass, then go get 10 rows by rowid for
each page
3) assuming you can sort by a unique identifier, use
that (instead of rownumber) to remember the first row
in each page; use a range scan on the index on that
UID to get back the rows you want quickly (doing this
with a non-unique sort key is quite a bit harder)
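Option 3 can be sketched as follows (table and column names are taken from the thread; this assumes MAIL_ID is a unique, indexed key you can sort by):

```sql
-- Keyset ("seek") pagination: remember the last key of the previous page
-- and range-scan from there, instead of counting and discarding rows.

-- First page:
SELECT *
FROM (SELECT mail_id, subject
      FROM mail
      WHERE user_id = 6
      ORDER BY mail_id)
WHERE ROWNUM <= 10;

-- Next page: bind the last MAIL_ID seen (:last_mail_id); the index range
-- scan starts past it, so cost stays flat regardless of page number.
SELECT *
FROM (SELECT mail_id, subject
      FROM mail
      WHERE user_id = 6
        AND mail_id > :last_mail_id
      ORDER BY mail_id)
WHERE ROWNUM <= 10;
```

Note the inline view: in Oracle, ROWNUM is applied before ORDER BY, so the sort must happen in the subquery.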
Remember also that if someone else inserts into your
table while you are paging around, some of these
methods can give confusing results - because every
time you start a new query, you get a new
read-consistent point.
Anyway, try to redesign so you don't need to page
through lacs of rows....
HTH
Regards, Nigel

You are correct regarding the OP's original SQL statement that:
"the entire query has to be retrieved and sorted before the rows you want can be returned"
However, that is not the case with the SQL statement that I posted. The problem with the SQL statement I posted is that Oracle insists on performing full tablescans on the table. The following is a full test run with 2,000,000 rows in a table, including an analysis of the problem, and a method of working around the problem:
CREATE TABLE T1 (
MAIL_ID NUMBER(10),
USER_ID NUMBER(10),
FOLDER_ID NUMBER(10),
MAIL_DATE DATE,
PRIMARY KEY(MAIL_ID));
CREATE INDEX T1_USER_FOLDER ON T1(USER_ID,FOLDER_ID);
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID);
INSERT INTO
T1
SELECT
ROWNUM MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
INSERT INTO
T1
SELECT
ROWNUM+1000000 MAIL_ID,
DBMS_RANDOM.VALUE(1,30) USER_ID,
DBMS_RANDOM.VALUE(1,5) FOLDER_ID,
TRUNC(SYSDATE-365)+ROWNUM/10000 MAIL_DATE
FROM
DUAL
CONNECT BY
LEVEL<=1000000;
COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
|* 1 | HASH JOIN | | 1 | 8801 | 10 |00:00:15.62 | 13610 | 1010K| 1010K| 930K (0)|
|* 2 | VIEW | | 1 | 8801 | 10 |00:00:00.34 | 6805 | | | |
|* 3 | WINDOW SORT PUSHED RANK| | 1 | 8801 | 910 |00:00:00.34 | 6805 | 74752 | 74752 |65536 (0)|
|* 4 | TABLE ACCESS FULL | T1 | 1 | 8801 | 8630 |00:00:00.29 | 6805 | | | |
| 5 | TABLE ACCESS FULL | T1 | 1 | 2000K| 2000K|00:00:04.00 | 6805 | | | |
Predicate Information (identified by operation id):
1 - access("MAIL_ID"="M"."MAIL_ID")
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("MAIL_DATE") DESC )<=909)
4 - filter(("USER_ID"=6 AND "FOLDER_ID"=1))

The above performed two tablescans of the T1 table and required 15.6 seconds to complete, which was not the desired result. Now, to create an index that will be helpful for the query, and provide Oracle an additional hint:
(http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html "Pagination in Getting Rows N Through M" shows a similar approach)
DROP INDEX T1_USER_FOLDER_MAIL;
CREATE INDEX T1_USER_FOLDER_MAIL ON T1(USER_ID,FOLDER_ID,MAIL_DATE DESC,MAIL_ID);
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',CASCADE=>TRUE)
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 900 AND 909) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.01 | 47 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.01 | 7 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 909 |00:00:00.01 | 7 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 910 |00:00:00.01 | 7 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
Predicate Information (identified by operation id):
2 - filter(("RN">=900 AND "RN"<=909))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=909)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")

The above made use of both indexes and completed in 0.01 seconds.
SELECT /*+ ORDERED */
MI.MAIL_ID,
TO_CHAR(M.MAIL_DATE,'DD MON YYYY HH:MI:SS AM') AS MAIL_DATE1,
M.MAIL_DATE AS MAIL_DATE,
M.FOLDER_ID,
M.MAIL_ID,
M.USER_ID
FROM
(SELECT /*+ FIRST_ROWS(10) */
MAIL_ID
FROM
(SELECT
MAIL_ID,
ROW_NUMBER() OVER (ORDER BY MAIL_DATE DESC) RN
FROM
CUSTAPP.T1
WHERE
USER_ID=6
AND FOLDER_ID=1)
WHERE
RN BETWEEN 8600 AND 8609) MI,
CUSTAPP.T1 M
WHERE
MI.MAIL_ID=M.MAIL_ID;
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 1 | NESTED LOOPS | | 1 | 11 | 10 |00:00:00.11 | 81 | | | |
|* 2 | VIEW | | 1 | 11 | 10 |00:00:00.11 | 41 | | | |
|* 3 | WINDOW NOSORT STOPKEY | | 1 | 8711 | 8609 |00:00:00.09 | 41 | 267K| 267K| |
|* 4 | INDEX RANGE SCAN | T1_USER_FOLDER_MAIL | 1 | 8711 | 8610 |00:00:00.05 | 41 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T1 | 10 | 1 | 10 |00:00:00.01 | 40 | | | |
|* 6 | INDEX UNIQUE SCAN | SYS_C0023476 | 10 | 1 | 10 |00:00:00.01 | 30 | | | |
Predicate Information (identified by operation id):
2 - filter(("RN">=8600 AND "RN"<=8609))
3 - filter(ROW_NUMBER() OVER ( ORDER BY "T1"."SYS_NC00005$")<=8609)
4 - access("USER_ID"=6 AND "FOLDER_ID"=1)
6 - access("MAIL_ID"="M"."MAIL_ID")

The above made use of both indexes and completed in 0.11 seconds.
As the above shows, it is possible to efficiently retrieve the desired records very rapidly without having to leave the cursor open.
If this SQL statement will be used in a web browser, it probably does not make sense to leave the cursor open. If the SQL statement will be used in an application that maintains state, and the user is expected to always page from the first row toward the last, then leaving the cursor open and reading rows as needed makes sense.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Performance tuning issue of 8.1.7's PL/SQL
My trouble sample code is listed below. I know I can fix this problem easily in 9i, but, you know.
My procedure is called with a parameter, data_segmentseqno, whose value may be 'segment1' or 'segment1,segment2,segment3'. In the first case the procedure works and finds what I need, but it fails in case 2.
After checking the session in DBA Studio, I found it was parsed to 'SELECT .. FROM .. WHERE E.SEGMENTSEQNO IN ( :1 )'; the Oracle engine thinks it has only one parameter, not three. So, what should I do when I get a parameter that includes multiple segments?
Can somebody help me, or is the only way to solve this problem to use a cursor instead of BULK COLLECT, in Oracle 8.1.7?
create or replace package body RoundRobin is
procedure dispatchRoundRobin(
data_segmentseqno in varchar2
) is
type Cust_type is table of varchar2(18);
Cust_data Cust_type;
begin
/* HERE IS MY TROUBLE:
HOW SHOULD I DO THIS FOR MULTIPLE SEGMENTSEQNO VALUES? */
SELECT rowid BULK COLLECT INTO Cust_data
FROM dispatchedrecord e
where e.segmentseqno in ( data_segmentseqno ) ;
exception
when others then
dbms_output.put_line('Error'||sqlerrm);
end dispatchRoundRobin;

Hello
You are using a single bind variable to represent multiple values. In this case you are asking Oracle to see if e.segmentseqno is equal to 'segment1,segment2,segment3', which it isn't. What you need to do is either use separate bind variables for each value you want to test, i.e.:
WHERE e.segmentseqno IN (data_segmentseqno, data_segmentseqno2, data_segmentseqno3)

Which isn't going to be very useful unless you have a fixed number of values that are always used.
Another alternative would be to use dynamic SQL to form the where clause and put the values into the where clause directly
EXECUTE IMMEDIATE 'SELECT rowid FROM dispatchedrecord e
where e.segmentseqno in ('|| data_segmentseqno||')' BULK COLLECT INTO Cust_data;

But this isn't ideal either, as you really should use bind variables for these values rather than literals.
I'm not sure whether using a collection here for the list of segment values would help or not. I haven't used collections much in SQL statements, maybe someone else will have a better idea...
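To sketch the collection idea (type and variable names are illustrative; this needs a schema-level type, and while TABLE(CAST(...)) and COLUMN_VALUE should be available in 8.1.7, test it there before relying on it):

```sql
-- Schema-level collection type (required to use TABLE() inside SQL):
CREATE TYPE seg_tab AS TABLE OF VARCHAR2(30);
/

DECLARE
  TYPE cust_type IS TABLE OF VARCHAR2(18);
  cust_data cust_type;
  v_segs    seg_tab := seg_tab();
  v_rest    VARCHAR2(4000) := 'segment1,segment2,segment3';  -- the parameter
  v_pos     PLS_INTEGER;
BEGIN
  -- Split the comma-separated parameter into the collection:
  WHILE v_rest IS NOT NULL LOOP
    v_pos := INSTR(v_rest, ',');
    v_segs.EXTEND;
    IF v_pos = 0 THEN
      v_segs(v_segs.COUNT) := v_rest;
      v_rest := NULL;
    ELSE
      v_segs(v_segs.COUNT) := SUBSTR(v_rest, 1, v_pos - 1);
      v_rest := SUBSTR(v_rest, v_pos + 1);
    END IF;
  END LOOP;

  -- The IN-list is now a real set of values, not one concatenated literal,
  -- so no dynamic SQL and no literal injection is needed:
  SELECT rowid BULK COLLECT INTO cust_data
  FROM dispatchedrecord e
  WHERE e.segmentseqno IN
        (SELECT column_value FROM TABLE(CAST(v_segs AS seg_tab)));
END;
/
```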
HTH
David -
Performance tuning - Gather stats and statID created
SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE('HR', 'SAVED_STATS');
SQL> SELECT TABLE_NAME, NUM_ROWS, BLOCKS,EMPTY_BLOCKS, AVG_SPACE, USER_STATS, GLOBAL_STATS
2 FROM USER_TABLES
3 WHERE TABLE_NAME = 'MYCOUNTRIES';
TABLE_NAME   NUM_ROWS  BLOCKS  EMPTY_BLOCKS  AVG_SPACE  USE  GLO
MYCOUNTRIES         0       0             0          0  NO   YES
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR',TABNAME=>'MYCOUNTRIES',ESTIMATE_PERCENT=>10,STATOWN=>'HR',STATTAB=>'SAVED_STATS',STATID=>'PREVIOUS1');
TABLE_NAME NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE USE GLO
MYCOUNTRIES 25 5 0 0 NO YES
SQL> select statid, type, count(*)
2 from saved_stats
3 group by statid, type;
STATID T COUNT(*)
PREVIOUS1 C 3
PREVIOUS1 T 1
Qn) Are the stored statistics keyed by statid? i.e., every time I re-gather stats, should I specify a separate statid name so that a new version of the stats is created/stored under a separate statid?

Yes.
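For example (statid values are just labels you choose; 'PREVIOUS2' here is hypothetical):

```sql
-- Gather fresh statistics; the outgoing version is saved in the stat
-- table under the new statid before being replaced:
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>'HR', TABNAME=>'MYCOUNTRIES', ESTIMATE_PERCENT=>10, STATOWN=>'HR', STATTAB=>'SAVED_STATS', STATID=>'PREVIOUS2');

-- Later, restore a saved version from the stat table if the new stats misbehave:
EXEC DBMS_STATS.IMPORT_TABLE_STATS(OWNNAME=>'HR', TABNAME=>'MYCOUNTRIES', STATTAB=>'SAVED_STATS', STATID=>'PREVIOUS1', STATOWN=>'HR');
```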
From http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461
statid Identifier (optional) to associate with these statistics within stattab -
Performance tuning on BKPF and BSEG for my code.
Please provide alternative code for the following so that processing is faster.
My select queries are as follows. They take a lot of time and the system gets loaded when the job is scheduled.
select BUKRS
BELNR
GJAHR
BLART
BLDAT
BUDAT
TCODE
XBLNR
STBLG
WAERS
KURSF
AWKEY
STGRD
into CORRESPONDING FIELDS OF TABLE
IT_BKPF from bkpf where bukrs = p_bukrs
and gjahr in s_gjahr
AND BLART NE 'SA'
and budat in s_date.
select BELNR
KOART
SHKZG
MWSKZ
DMBTR
KTOSL
SGTXT
VBELN
HKONT
KUNNR
MATNR
MENGE
FROM BSEG INTO CORRESPONDING FIELDS OF TABLE it_bseg
for all entries in it_bkpf
where bukrs = it_bkpf-bukrs
and belnr = it_bkpf-belnr
and gjahr = it_bkpf-gjahr
and MWSKZ in s_MWSKZ .
Please help.

Hi,
Declare the internal table with the same fields as those you are selecting from the database table, remove CORRESPONDING FIELDS OF, and also check that your it_bkpf table is not initial before using FOR ALL ENTRIES.
For example:
select BUKRS
BELNR
GJAHR
BLART
BLDAT
BUDAT
TCODE
XBLNR
STBLG
WAERS
KURSF
AWKEY
STGRD
into TABLE
IT_BKPF from bkpf where bukrs = p_bukrs
and gjahr in s_gjahr
AND BLART NE 'SA'
and budat in s_date.
IF IT_BKPF[] IS NOT INITIAL.
select BELNR
KOART
SHKZG
MWSKZ
DMBTR
KTOSL
SGTXT
VBELN
HKONT
KUNNR
MATNR
MENGE
FROM BSEG INTO TABLE it_bseg
for all entries in it_bkpf
where bukrs = it_bkpf-bukrs
and belnr = it_bkpf-belnr
and gjahr = it_bkpf-gjahr
and MWSKZ in s_MWSKZ .
ENDIF.
reward if useful. -
Performance Tuning Issues ( How to Optimize this Code)
_How to Optimize this Code_
FORM MATL_CODE_DESC.
SELECT * FROM VBAK WHERE VKORG EQ SAL_ORG AND
VBELN IN VBELN AND
VTWEG IN DIS_CHN AND
SPART IN DIVISION AND
VKBUR IN SAL_OFF AND
VBTYP EQ 'C' AND
KUNNR IN KUNNR AND
ERDAT BETWEEN DAT_FROM AND DAT_TO.
SELECT * FROM VBAP WHERE VBELN EQ VBAK-VBELN AND
MATNR IN MATNR.
SELECT SINGLE * FROM MAKT WHERE MATNR EQ VBAP-MATNR.
IF SY-SUBRC EQ 0.
IF ( VBAP-NETWR EQ 0 AND VBAP-UEPOS NE 0 ).
IF ( VBAP-UEPVW NE 'B' AND VBAP-UEPVW NE 'C' ).
MOVE VBAP-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-FREE_MATL.
MOVE VBAP-KWMENG TO ITAB1-FREE_QTY.
MOVE VBAP-KLMENG TO ITAB1-KLMENG.
MOVE VBAP-VRKME TO ITAB1-FREE_UNIT.
MOVE VBAP-WAVWR TO ITAB1-FREE_VALUE.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAP-UEPOS TO ITAB1-UEPOS.
ENDIF.
ELSE.
MOVE VBAK-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAK-ERDAT TO ITAB1-SAL_ORD_DATE.
MOVE VBAK-KUNNR TO ITAB1-CUST_NUM.
MOVE VBAK-KNUMV TO ITAB1-KNUMV.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK EQ 'A' AND
KMPRS = 'X'.
IF SY-SUBRC EQ 0.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK IN ('C','D') AND
KMPRS = 'X' AND
KRECH IN ('A','B').
IF SY-SUBRC EQ 0.
IF KONV-KRECH EQ 'A'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = ( KONV-KBETR / 10 ).
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1 '%'
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ELSEIF KONV-KRECH EQ 'B'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = KONV-KBETR.
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ENDIF.
ELSE.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
CLEAR : G_KBETR, G_KSCHL,G_KBETR1.
MOVE VBAP-KWMENG TO ITAB1-QTY.
MOVE VBAP-VRKME TO ITAB1-QTY_UNIT.
IF VBAP-UMVKN NE 0.
ITAB1-KLMENG = ( VBAP-UMVKZ / VBAP-UMVKN ) * VBAP-KWMENG.
ENDIF.
IF ITAB1-KLMENG NE 0.
VBAP-NETWR = ( VBAP-NETWR / VBAP-KWMENG ).
MOVE VBAP-NETWR TO ITAB1-INV_PRICE.
ENDIF.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-MATNR.
MOVE MAKT-MAKTX TO ITAB1-MAKTX.
ENDIF.
SELECT SINGLE * FROM VBKD WHERE VBELN EQ VBAK-VBELN AND
BSARK NE 'DFUE'.
IF SY-SUBRC EQ 0.
ITAB1-INV_PRICE = ITAB1-INV_PRICE * VBKD-KURSK.
APPEND ITAB1.
CLEAR ITAB1.
ELSE.
CLEAR ITAB1.
ENDIF.
ENDIF.
ENDSELECT.
ENDSELECT.
ENDFORM. " MATL_CODE_DESC

Hi Vijay,
You could start by using INNER JOINS:
SELECT ......
FROM ( VBAK
INNER JOIN VBAP
ON VBAP~VBELN = VBAK~VBELN
INNER JOIN MAKT
ON MAKT~MATNR = VBAP~MATNR AND
MAKT~SPRAS = SYST-LANGU )
INTO TABLE itab
WHERE VBAK~VBELN IN VBELN
AND VBAK~VTWEG IN DIS_CHN
AND VBAK~SPART IN DIVISION
AND VBAK~VKBUR IN SAL_OFF
AND VBAK~VBTYP EQ 'C'
AND VBAK~KUNNR IN KUNNR
AND VBAK~ERDAT BETWEEN DAT_FROM AND DAT_TO
AND VBAP~NETWR EQ 0
AND VBAP~UEPOS NE 0.
Regards,
John. -
Performance tuning issues........
Please guide me to an alternate option for the below set of code:
LOOP AT ITAB1 WHERE DISC LT 0.
SELECT * FROM KONV WHERE KNUMV EQ ITAB1-KNUMV AND
KPOSN EQ ITAB1-POSNR AND
KSTEU EQ 'C'.
IF SY-SUBRC EQ 0.
ITAB1-FREE_INDI = 'Y'.
EXIT.
ENDIF.
ENDSELECT.
MODIFY ITAB1 TRANSPORTING FREE_INDI.
ENDLOOP.
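One common alternative is to read KONV once with FOR ALL ENTRIES and then flag the rows from the buffered result, instead of issuing a SELECT per loop pass. This is only a sketch: it assumes ITAB1 has a header line as in the original, that the field names (KNUMV, POSNR, DISC, FREE_INDI) match your structure, and it fetches KONV rows for all of ITAB1 rather than only the DISC LT 0 subset.

```
* Sketch: one FOR ALL ENTRIES read replaces the SELECT ... ENDSELECT
* inside the loop; field names are assumed from the original code.
DATA: BEGIN OF LT_KONV OCCURS 0,
        KNUMV LIKE KONV-KNUMV,
        KPOSN LIKE KONV-KPOSN,
      END OF LT_KONV.

* FOR ALL ENTRIES must never run with an empty driver table,
* otherwise the WHERE clause is ignored and all rows are read.
IF NOT ITAB1[] IS INITIAL.
  SELECT KNUMV KPOSN FROM KONV
    INTO TABLE LT_KONV
    FOR ALL ENTRIES IN ITAB1
    WHERE KNUMV EQ ITAB1-KNUMV
      AND KPOSN EQ ITAB1-POSNR
      AND KSTEU EQ 'C'.
  SORT LT_KONV BY KNUMV KPOSN.
ENDIF.

LOOP AT ITAB1 WHERE DISC LT 0.
  READ TABLE LT_KONV WITH KEY KNUMV = ITAB1-KNUMV
                              KPOSN = ITAB1-POSNR
                     BINARY SEARCH.
  IF SY-SUBRC EQ 0.
    ITAB1-FREE_INDI = 'Y'.
    MODIFY ITAB1 TRANSPORTING FREE_INDI.
  ENDIF.
ENDLOOP.
```

The SORT plus BINARY SEARCH keeps the per-row lookup cheap even for large result sets.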
How to merge these into one loop:
LOOP AT ITAB1.
IF ITAB1-FREE_MATL NE ''.
ITAB1-FREE_INDI = 'Y'.
MODIFY ITAB1.
GTEST = ITAB1-POSNR - 10.
READ TABLE ITAB1 WITH KEY SAL_ORD_NUM = ITAB1-SAL_ORD_NUM
POSNR = GTEST.
ITAB1-FREE_MATL = 'X'.
MODIFY ITAB1 TRANSPORTING FREE_MATL WHERE
SAL_ORD_NUM = ITAB1-SAL_ORD_NUM AND POSNR EQ GTEST.
CLEAR GTEST.
ENDIF.
ENDLOOP.
LOOP AT ITAB1 WHERE FREE_INDI EQ 'Y'.
IF ITAB1-UEPOS EQ G_UEPOS.
CLEAR ITAB1-FREE_INDI.
MODIFY ITAB1.
ENDIF.
MOVE ITAB1-UEPOS TO G_UEPOS.
ENDLOOP.
Thanks & Regards,
Vijay...
_How to Optimize this Code_
FORM MATL_CODE_DESC.
SELECT * FROM VBAK WHERE VKORG EQ SAL_ORG AND
VBELN IN VBELN AND
VTWEG IN DIS_CHN AND
SPART IN DIVISION AND
VKBUR IN SAL_OFF AND
VBTYP EQ 'C' AND
KUNNR IN KUNNR AND
ERDAT BETWEEN DAT_FROM AND DAT_TO.
SELECT * FROM VBAP WHERE VBELN EQ VBAK-VBELN AND
MATNR IN MATNR.
SELECT SINGLE * FROM MAKT WHERE MATNR EQ VBAP-MATNR.
IF SY-SUBRC EQ 0.
IF ( VBAP-NETWR EQ 0 AND VBAP-UEPOS NE 0 ).
IF ( VBAP-UEPVW NE 'B' AND VBAP-UEPVW NE 'C' ).
MOVE VBAP-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-FREE_MATL.
MOVE VBAP-KWMENG TO ITAB1-FREE_QTY.
MOVE VBAP-KLMENG TO ITAB1-KLMENG.
MOVE VBAP-VRKME TO ITAB1-FREE_UNIT.
MOVE VBAP-WAVWR TO ITAB1-FREE_VALUE.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAP-UEPOS TO ITAB1-UEPOS.
ENDIF.
ELSE.
MOVE VBAK-VBELN TO ITAB1-SAL_ORD_NUM.
MOVE VBAK-VTWEG TO ITAB1-VTWEG.
MOVE VBAK-ERDAT TO ITAB1-SAL_ORD_DATE.
MOVE VBAK-KUNNR TO ITAB1-CUST_NUM.
MOVE VBAK-KNUMV TO ITAB1-KNUMV.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK EQ 'A' AND
KMPRS = 'X'.
IF SY-SUBRC EQ 0.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
SELECT SINGLE * FROM KONV WHERE KNUMV EQ VBAK-KNUMV AND
KSTEU = 'C' AND
KHERK IN ('C','D') AND
KMPRS = 'X' AND
KRECH IN ('A','B').
IF SY-SUBRC EQ 0.
IF KONV-KRECH EQ 'A'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = ( KONV-KBETR / 10 ).
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1 '%'
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ELSEIF KONV-KRECH EQ 'B'.
MOVE : KONV-KSCHL TO G_KSCHL.
G_KBETR = KONV-KBETR.
MOVE G_KBETR TO G_KBETR1.
CONCATENATE G_KSCHL G_KBETR1
INTO ITAB1-REMARKS SEPARATED BY SPACE.
ENDIF.
ELSE.
ITAB1-REMARKS = 'Manual Price Change'.
ENDIF.
CLEAR : G_KBETR, G_KSCHL,G_KBETR1.
MOVE VBAP-KWMENG TO ITAB1-QTY.
MOVE VBAP-VRKME TO ITAB1-QTY_UNIT.
IF VBAP-UMVKN NE 0.
ITAB1-KLMENG = ( VBAP-UMVKZ / VBAP-UMVKN ) * VBAP-KWMENG.
ENDIF.
IF ITAB1-KLMENG NE 0.
VBAP-NETWR = ( VBAP-NETWR / VBAP-KWMENG ).
MOVE VBAP-NETWR TO ITAB1-INV_PRICE.
ENDIF.
MOVE VBAP-POSNR TO ITAB1-POSNR.
MOVE VBAP-MATNR TO ITAB1-MATNR.
MOVE MAKT-MAKTX TO ITAB1-MAKTX.
ENDIF.
SELECT SINGLE * FROM VBKD WHERE VBELN EQ VBAK-VBELN AND
BSARK NE 'DFUE'.
IF SY-SUBRC EQ 0.
ITAB1-INV_PRICE = ITAB1-INV_PRICE * VBKD-KURSK.
APPEND ITAB1.
CLEAR ITAB1.
ELSE.
CLEAR ITAB1.
ENDIF.
ENDIF.
ENDSELECT.
ENDSELECT.
ENDFORM. " MATL_CODE_DESC
Edited by: Vijay kumar on Jan 8, 2008 6:50 PM -
App. Server performance Tuning issue
Hi.
Platform:
iAS : Oracle Application server Version 10.1.2.0.2
Installation Type : Forms and Reports Services
OS : Windows server 2003 SP2
DB server : Oracle 10g ver.10.2.0.4.0 win.server 2003
programs developed with developer suite 10g.
Problem:
When the user login screen pops up in the internet browser and the user types the user id and password, then presses Enter, iAS takes between 12 and 15 seconds to connect to the DB. Once connected, the application works fine, but...
Is there any way to speed up the first connection?
I will appreciate whatever help.
Thanks in advance
Check the order of connection mechanisms in your SQLNET.ORA file. If LDAP is first in the list, the login process will look for OID and, if it finds one running, will look for your user there. If there is no OID running, it may wait for a timeout before moving on to the next listed connection mechanism. In general, if you're not using OID to authenticate to your database, make sure LDAP is not first in the list in SQLNET.ORA.
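As a sketch, a sqlnet.ora that tries local tnsnames resolution before LDAP would look like the fragment below; the parameter is NAMES.DIRECTORY_PATH, and you should adjust the list to the mechanisms you actually use (drop LDAP entirely if you never resolve names through OID):

```
# sqlnet.ora -- name resolution order: try tnsnames.ora first,
# fall back to LDAP (OID) only if the alias is not found there
NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)
```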
TGF -
Performance tuning in oracle 10g
Hi guys,
I hope all are well. Have a nice day!
I want to discuss a performance tuning issue.
I recently joined a new project whose goal is to improve the efficiency of the application. The environment uses Oracle PL/SQL, so what steps should I take to improve the efficiency of the application,
and how should I work through the process of improvement?
Kindly help me.
Generate Statspack/AWR reports.
HOW To Make TUNING request
https://forums.oracle.com/forums/thread.jspa?threadID=2174552#9360003 -
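For reference, on 10g an AWR report can be generated from SQL*Plus with the script shipped under ORACLE_HOME (this assumes you are licensed for the Diagnostics Pack; otherwise use Statspack's spreport.sql instead):

```
-- Run as a DBA user in SQL*Plus; the script prompts for report type
-- (HTML or text), the snapshot range, and the report file name.
@?/rdbms/admin/awrrpt.sql
```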
Profitability analysis activated in POS Interface, performance/tuning issue?
We are about to go live with an SAP Retail system. All purchases made by customers in stores are sent into SAP via an IDoc through the so-called POS (point of sale) interface.
A receipt received via an IDoc creates material documents, invoice documents, accounting documents, controlling documents, profit center documents and profitability analysis documents.
Each day we receive all receipts from each store, collected in one IDoc per store.
With profitability analysis deactivated, an average store is posted in about 40 seconds with sales from an average day. With profitability analysis activated, the average time per store is almost 75 seconds.
How can simple postings to profitability analysis increase the posting time by almost 50%? Is this a performance/tuning issue?
Best regards
Carl-Johan
Points will be assigned generously for info that leads to better performance!
Which CO document does the system create: a CCA document?
On which cost center? A PCA document?
What is the CE category of the cost element used for posting the variance?