Performance of WITH query.
Hi All,
I have a WITH query (inline view) and when I run it against my pre-production database (millions of rows) it hangs and never comes back. However, when I run it on my development database (just a few rows), it comes back fairly quickly. So clearly, the number of rows is the issue here.
The question is: what should I check on the Oracle database side to find out what is going on? I have checked the PGA and the temporary segments (which seem fine), but I am failing to see what other components I should check.
What does a WITH query impact at the database level?
Many thanks for your help.
Chiwatel wrote:
Hi All,
I have a WITH query (inline view)
Actually, a WITH query is called "Subquery Factoring":
http://docs.oracle.com/cd/B14117_01/server.101/b10759/statements_10002.htm#i2077142
and when I run it against my pre-production database (millions of rows) it hangs and never comes back. However, when I run it on my development database (just a few rows), it comes back fairly quickly. So clearly, the number of rows is the issue here.
Yep, it would seem that way.
The question is: what should I check on the Oracle database side to find out what is going on? I have checked the PGA and the temporary segments (which seem fine), but I am failing to see what other components I should check.
Read the two threads linked to in this FAQ: {message:id=9360003} and post the relevant details.
What does a WITH query impact at the database level?
It can improve queries where the subquery is used more than once in the query, as well as simply making some queries easier for the developer to read.
The optimizer will decide whether it's best to materialize the results of the query as an internal temporary table to reference in the main query, or whether to embed the query as a sub query, depending on the statistics, cardinality, selectivity etc. as any other query is optimized.
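For completeness: when testing, that optimizer choice can be overridden with the MATERIALIZE and INLINE hints placed inside the factored subquery. Both hints are undocumented, so treat this as a diagnostic sketch only; the table and column names below are illustrative, not from the original post:

```sql
-- Force the factored subquery to be materialized as a temporary table
WITH dept_totals AS (
  SELECT /*+ MATERIALIZE */ department_id, SUM(salary) AS total_sal
  FROM   employees
  GROUP  BY department_id
)
-- The CTE is referenced twice, which is the case where materialization usually pays off
SELECT d.department_id, d.total_sal
FROM   dept_totals d
WHERE  d.total_sal > (SELECT AVG(total_sal) FROM dept_totals);

-- Swap MATERIALIZE for /*+ INLINE */ to force the opposite behaviour
```

Comparing the two resulting plans against the pre-production data is usually the quickest way to see whether the materialization step itself is what hangs.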
Similar Messages
-
Performance problem with query on bkpf table
hi good morning all ,
I have a performance problem with the below query on the BKPF table.
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat.
Is there any possibility to improve the performance by using an index?
Please help me.
Thanks in advance.
Regards,
Srinivas
Hi,
If you can add BUKRS as an input field, or if you have BUKRS as part of any other internal table to filter out the data, you can use:
for ex:
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat
and bukrs in s_bukrs.
or
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
for all entries in itab
WHERE budat IN s_budat
and bukrs = itab-bukrs.
Just see if it is possible to do any one of the above. It has to be verified against your requirement. -
Performance issue with query when generated from an ODS
I am generating a query from an ODS. The run time is very high. How do I improve the performance of the query?
Hi Baruah,
Steps:
1. Build the Secondary Index.
2. Divide the data into two ODSs, one for historical and one for present data, then build a MultiProvider on top and build the query on the MultiProvider.
3. Build indexes at the table level (ODS table level).
We cannot get much faster performance out of ODSs, especially ones with huge data volumes...
The above are just a few of the options...
Hope you understood.
Regards,
Ravi Kanth -
11g
Hi there experts,
I have an issue with performance with a simple SQL which I thought could not be tuned, but I just wanted to check with the experts here. We are running a query to get a person's ID based on his logged-in email address from a Parties table which is huge (millions of records). The query takes about 30 seconds to return a value. I was wondering whether there is a way to optimize this.
The query is
{code}
select party.party_id
from parties party, users users
where
lower(party.email_address) = lower(:USER_EMAIL)
and party.system_reference = to_char(users.person_id)
and users.active_flag ='Yes';
{code}
The emails are stored in mixed upper and lower case, hence the LOWER functions.
Is creating a function-based index the only way?
Thanks,
Ryan
Hi Everyone,
Thanks, and apologies, this is my first post on tuning as such. Here is the explain plan generated through SQL Developer. It showed the output in XML.
By the way, looks like the {code} tag does not work?
{code}
SELECT STATEMENT  (cost 84903)
  HASH JOIN  (cost 84903)
    Access Predicates:
      PARTY.ORIG_SYSTEM_REFERENCE=TO_CHAR(PERSON_ID)
    TABLE ACCESS STORAGE FULL  PER_USERS  (cost 1059)
      Access Predicates:
        ACTIVE_FLAG='Y' AND (BUSINESS_GROUP_ID=0 OR BUSINESS_GROUP_ID=1
          OR BUSINESS_GROUP_ID=DECODE(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID'),NULL,BUSINESS_GROUP_ID,TO_NUMBER(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID'))))
      Filter Predicates:
        ACTIVE_FLAG='Y' AND (BUSINESS_GROUP_ID=0 OR BUSINESS_GROUP_ID=1
          OR BUSINESS_GROUP_ID=DECODE(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID'),NULL,BUSINESS_GROUP_ID,TO_NUMBER(SYS_CONTEXT('FND_VPD_CTX','FND_ENTERPRISE_ID'))))
    TABLE ACCESS STORAGE FULL  HZ_PARTIES  (cost 83843)
      Access Predicates:
        LOWER(PARTY.EMAIL_ADDRESS)='[email protected]'
      Filter Predicates:
        LOWER(PARTY.EMAIL_ADDRESS)='[email protected]'
{code}
Purvesh, around 50% are 'Yes'
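For reference, the usual sketch for getting off the full scan on the e-mail predicate is a function-based index on the lowered address (table and column names are taken from the posted query; adjust if the real table is HZ_PARTIES as the plan suggests):

```sql
-- Function-based index matching the LOWER(email_address) predicate
CREATE INDEX parties_lower_email_idx
  ON parties (LOWER(email_address));

-- Re-gather statistics so the optimizer can cost the new index
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PARTIES', cascade => TRUE);
```

Note that with roughly 50% of users active, an index on active_flag alone would not help much; the e-mail predicate is the selective one, so that is the one worth indexing.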
Thanks! -
Performance issues with query input variable selection in ODS
Hi everyone
We've upgraded from BW 3.0B to NW04s BI using SP12.
There is a problem encountered with input variable selection. This happens regardless of using BEx (new or old 3.x) or using RSRT. When using the F4 search help (or "Select from list" in BEx context) to list possible values, this takes forever for large ODS (containing millions of records).
Using ST01 and SM50 to trace the code in the same query, we see a difference here:
<u>NW04s BI SQL command</u>
SELECT
"P0000"."COMP_CODE" AS "0000000032" ,"T0000"."TXTMD" AS "0000000032_TXTMD"
FROM
( "/BI0/PCOMP_CODE" "P0000" ) LEFT OUTER JOIN "/BI0/TCOMP_CODE" "T0000" ON "P0000"."COMP_CODE" = "T0000"."COMP_CODE"
WHERE
"P0000"."OBJVERS" = 'A' AND "P0000"."COMP_CODE" IN ( SELECT "O"."COMP_CODE" AS "KEY" FROM
"/BI0/APY_PP_C100" "O" )
ORDER BY
"P0000"."COMP_CODE" ASC#
<u>BW 3.0B SQL command:</u>
SELECT ROWNUM < 500 ....
In 3.0B, rownum is limited to 500 and this results in a speedy, though limited query. In the new NW04s BI, this renders the selection screen unusable as ABAP dumps for timing out will occur first due to the large data volume searched using sequential read.
It will not be feasible to create indexes for every single query selection parameter (issues with performance when loading, space required, etc.). Is there a reason why SAP seems to have fallen back on less effective code for this?
I have tried to change the number of selected rows to <500 in BEx settings but one must reach a responsive screen in order to get to that setting and it is not always possible or saved for the next run.
Anyone with similar experience, or who can provide help on this?
There is a reason why the F4 help on ODS was faster in BW 3.x.
In BW 3.x the ODS did not support the read mode "Only values in
InfoProvider". Comparing the different SQL statements, I propose
changing the F4 mode in the InfoProvider-specific properties to
"About master data". This is the fastest F4 mode.
As an alternative you can define indexes on your ODS to speed up F4.
So you would need a non-unique index on InfoObject 0COMP_CODE in your ODS.
Check below for insights
https://forums.sdn.sap.com/click.jspa?searchID=6224682&messageID=2841493
Hope it Helps
Chetan
@CP.. -
Performance Problem with Query load
Hello,
after the upgrade to SPS 23 we have some problems with loading a query. Before the upgrade the query ran in 1-3 minutes, and now it takes more than 40 minutes.
Does anyone have an idea?
Regards
Marco
Hi,
Suggest executing the Query in RSRT transaction by choosing the option ' Execute+Debugger' to further analyze where exactly the query is taking time.
Make sure to choose the appropriate 'Query Display' option (List / BEx Analyzer / HTML) before executing the query in debugger mode, since the display option also affects the query run time.
Hope this info helps!
Bala Koppuravuri -
Performance problem with query
Hi,
I have a query.
SELECT
CDW_DIM_DATUM_VW.DATUM
,sum(CDW_FT031_DISCO_VW.RATE_CHARGEABLE_DURATION/60)
FROM
CDW_FT031_DISCO_VW,
CDW_DIM_DATUM_VW
WHERE CDW_DIM_DATUM_VW.DATUM >= to_date('20110901','yyyymmdd')
AND CDW_DIM_DATUM_VW.DATUM < to_date('20110911','yyyymmdd')
AND CDW_DIM_DATUM_VW.DATUM_KEY=CDW_FT031_DISCO_VW.DATUM_KEY
GROUP BY
CDW_DIM_DATUM_VW.DATUM
This query normally takes 2 hours on production, when it should take about 2 minutes.
On checking the explain plan I understood that it is doing a full table scan on cdw_ft031, which is the table used in the view (cdw_ft031_disco_vw) and which has got 30 lakh records.
I analyzed tables and indexes. I rebuilt all indexes. I used hints also, but even then it is doing a full table scan.
There are indexes created on columns refered in where clause.
Kindly help me.
As the sources are VIEWS, your code is not even showing the full code (besides not showing the tables, indexes, etc.).
"which has got 30 lakh records"
From Wikipedia: a lakh is a unit in the Indian numbering system equal to one hundred thousand. So if you want us to understand what your problem is, give us as much information as you have, in a language the world understands.
I guess "CDW_DIM_DATUM_VW" is a "time dimension" so it has 1 row per date? Why is it a view?
Then the only join is to CDW_FT031_DISCO_VW, where "FT" stands for "Fact"? (Why is it a view?)
What you want is that it picks the index (which it has on DATUM_KEY??) instead of a full table scan?
Is cdw_ft031 partitioned? (You often partition a fact table by date, and then a "full table scan" actually stands for a "full partition scan".)
So your fact table has 3 million rows? And the query takes 2 hours? My laptop would do that in less time without any index... something is completely wrong.
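When you repost, include the actual plan rather than a description of it; a common minimal sketch, run as the same user from SQL*Plus:

```sql
-- Capture the plan for the problem statement
EXPLAIN PLAN FOR
SELECT cdw_dim_datum_vw.datum,
       SUM(cdw_ft031_disco_vw.rate_chargeable_duration / 60)
FROM   cdw_ft031_disco_vw, cdw_dim_datum_vw
WHERE  cdw_dim_datum_vw.datum >= TO_DATE('20110901', 'yyyymmdd')
AND    cdw_dim_datum_vw.datum <  TO_DATE('20110911', 'yyyymmdd')
AND    cdw_dim_datum_vw.datum_key = cdw_ft031_disco_vw.datum_key
GROUP  BY cdw_dim_datum_vw.datum;

-- Then display it in readable form
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```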
I discourage joining views, but based on what you gave us it is hard to give advice. -
Hello Guys,
I am having a performance problem with a query. When I run the query with initial variables it displays the report quickly, but when I drill down with filter values it takes 10 minutes to display the report. Can anybody suggest possible solutions for the performance improvement?
Regards
Priya
Hi Priya,
First, you have to check what is causing the performance issue. You can do this by running the query in transaction RSRT. Execute the query in debug mode with the option "Display Statistics Data". You can navigate the query as you would normally. After that, check the statistics information and see what causes the performance issue. My guess is that you need to build an aggregate.
If the Data Manager time is high (a large % of the total runtime) and the ratio of the number of records selected vs. the number of records transferred is high (e.g. > 10), then try to build an aggregate to help performance. To check for aggregate suggestions, run RSRT again with the option "Display Aggregates Found". It will show you which characteristics and characteristic selections would help (note that the suggestion might not always be the optimal one).
If OLAP Data Transfer time is high, then try optimizing the query design (e.g. try reducing the amount of restricted KFs or try calculating some KFs during the data flow instead of calculating them in the query).
Hope this helps. -
Performance issues with respect scheme registration,select & insert query
I am facing performance issues with schema registration and with select and insert queries on version 10.2.0.3. It is taking around 45 minutes to register a schema, and around 5 minutes to insert a single document into XML DB, whereas it took less than a minute to insert a single document into XML DB on version 9.2.0.6. I would like to know the cause and a solution to resolve this issue. Please help me out on this as it is very urgent for me.
Since it appears that this is an XML DB specific question, you're probably better off posting in the XML DB. The folks over there have much more experience with the ins and outs of that particular product.
Justin -
Performance problem with report query
Hi,
I am encountering a performance issue with a page returning a report.
I have a page that has a region which joins 2 tables. One table has about 220,000 rows, while the other contains roughly 60,000 rows. In the region source of the report region, the query includes join conditions with local variables. For example, the page is page 70, and some join conditions are:
and a.id=:P70_ID
and a.name like :P70_NAME
I run the query that returns a large number of rows from sqlplus, and it takes less than 30 sec to complete.
When I run the page, the report took about 3 minutes to return.
In this case, :P70_NAME is initialized to '%' on the page.
I then tried to substitute variable value directly in the query:
and a.id=1000
and a.name like '%'
this time the report returned in about 30 sec.
I then tried another thing which specified the region as "PL/SQL Function returning sql query", and modified the region as follows:
l_sql := '.......';
l_sql := l_sql || 'and a.id=' || v('P70_ID')
and similar substituting :P70_NAME to v('P70_NAME') and append its value to the l_sql string.
The report query page also returned in 30 sec.
Is there any known performance issue with using bind variables (:PXX_XXX) in the report region?
If you are able, flush the shared_pool, run your
report then query the v$sql_area or v$sql_text tables.
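A minimal sketch of that check (the sql_text filter is illustrative; match on any distinctive text from the report query):

```sql
-- Find the report's statement and its runtime statistics in the shared pool
SELECT sql_id, executions,
       ROUND(elapsed_time / 1e6, 1) AS elapsed_s,
       buffer_gets, sql_text
FROM   v$sql
WHERE  sql_text LIKE '%P70_NAME%'
ORDER  BY elapsed_time DESC;
```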
Or do a Google query and look up Cary Millsap's piece on enabling extended trace... there is your sure-fire way of finding the problem SQL. I am still learning HTML DB, but is there a way to alter the session to enable trace in some pre-query block? -
Performance issue with this query.
Hi Experts,
This query is fetching 500 records.
SELECT
RECIPIENT_ID ,FAX_STATUS
FROM
FAX_STAGE WHERE LOWER(FAX_STATUS) like 'moved to%'
Execution Plan
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 159K| 10M| 2170 (1)|
| 1 | TABLE ACCESS BY INDEX ROWID| FAX_STAGE | 159K| 10M| 2170 (1)|
| 2 | INDEX RANGE SCAN | INDX_FAX_STATUS_RAM | 28786 | | 123 (0)|
Note
- 'PLAN_TABLE' is old version
Statistics
1 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
937 bytes sent via SQL*Net to client
375 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
19 rows processed
Total number of records in the table.
SELECT COUNT(*) FROM FAX_STAGE; -- 3679418
The number of distinct values is low for this column.
SELECT DISTINCT FAX_STATUS FROM FAX_STAGE;
Completed
BROKEN
Broken - New
moved to - America
MOVED to - Australia
Moved to Canada and australia
There is a function-based index on FAX_STAGE(LOWER(FAX_STATUS)).
Stats are up to date.
Still the cost is high.
How can I improve the performance of this query?
Please help me.
Thanks in advance.
With no heavy activity on your fax_stage table a bitmap index might do better - see CREATE INDEX.
I would try an FTS (Full Table Scan) first, as 6 distinct values vs. 3,679,418 rows is low cardinality for sure, so using an index is not very helpful in this case (maybe too Exadata-oriented).
There are a lot of web pages where you can read that full table scans are not always evil and indexes are not always good, or vice versa: Ask Tom "How to avoid the full table scan".
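To test that quickly without dropping the existing index, a hint-forced full scan can be timed against the indexed run (the alias is illustrative; table and column names come from the posted query):

```sql
-- Force a full table scan on fax_stage and compare elapsed time
-- against the default (index range scan) execution
SELECT /*+ FULL(f) */ recipient_id, fax_status
FROM   fax_stage f
WHERE  LOWER(fax_status) LIKE 'moved to%';
```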
Regards
Etbin -
Performance issues with pipelined table functions
I am testing pipelined table functions to be able to re-use the <font face="courier">base_query</font> function. Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something? The <font face="courier">processor</font> function is from [url http://www.oracle-developer.net/display.php?id=429]improving performance with pipelined table functions[/url].
Edit: The underlying query returns 500,000 rows in about 3 minutes. So there are are no performance issues with the query itself.
Many thanks in advance.
CREATE OR REPLACE PACKAGE pipeline_example
IS
TYPE resultset_typ IS REF CURSOR;
TYPE row_typ IS RECORD (colC VARCHAR2(200), colD VARCHAR2(200), colE VARCHAR2(200));
TYPE table_typ IS TABLE OF row_typ;
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ;
c_default_limit CONSTANT PLS_INTEGER := 100;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY);
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ);
END pipeline_example;
CREATE OR REPLACE PACKAGE BODY pipeline_example
IS
FUNCTION base_query (argA IN VARCHAR2, argB IN VARCHAR2)
RETURN resultset_typ
IS
o_resultset resultset_typ;
BEGIN
OPEN o_resultset FOR
SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB;
RETURN o_resultset;
END base_query;
FUNCTION processor (
p_source_data IN resultset_typ,
p_limit_size IN PLS_INTEGER DEFAULT c_default_limit)
RETURN table_typ
PIPELINED
PARALLEL_ENABLE(PARTITION p_source_data BY ANY)
IS
aa_source_data table_typ;-- := table_typ ();
BEGIN
LOOP
FETCH p_source_data
BULK COLLECT INTO aa_source_data
LIMIT p_limit_size;
EXIT WHEN aa_source_data.COUNT = 0;
/* Process the batch of (p_limit_size) records... */
FOR i IN 1 .. aa_source_data.COUNT
LOOP
PIPE ROW (aa_source_data (i));
END LOOP;
END LOOP;
CLOSE p_source_data;
RETURN;
END processor;
PROCEDURE with_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT /*+ PARALLEL(t, 5) */ colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN '1' END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM TABLE (processor (base_query (argA, argB),100)) t
GROUP BY colC
ORDER BY colC;
END with_pipeline;
PROCEDURE no_pipeline (argA IN VARCHAR2,
argB IN VARCHAR2,
o_resultset OUT resultset_typ)
IS
BEGIN
OPEN o_resultset FOR
SELECT colC,
SUM (CASE WHEN colD > colE AND colE != '0' THEN colD / ColE END)de,
SUM (CASE WHEN colE > colD AND colD != '0' THEN colE / ColD END)ed,
SUM (CASE WHEN colD = colE AND colD != '0' THEN 1 END) de_one,
SUM (CASE WHEN colD = '0' OR colE = '0' THEN '0' END) de_zero
FROM (SELECT colC, colD, colE
FROM some_table
WHERE colA = ArgA AND colB = argB)
GROUP BY colC
ORDER BY colC;
END no_pipeline;
END pipeline_example;
ALTER PACKAGE pipeline_example COMPILE;
Edited by: Earthlink on Nov 14, 2010 9:47 AM
Edited by: Earthlink on Nov 14, 2010 11:31 AM
Edited by: Earthlink on Nov 14, 2010 11:32 AM
Edited by: Earthlink on Nov 20, 2010 12:04 PM
Edited by: Earthlink on Nov 20, 2010 12:54 PM
Earthlink wrote:
Contrary to my understanding, the <font face="courier">with_pipeline</font> procedure runs 6 times slower than the legacy <font face="courier">no_pipeline</font> procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Performance Issues with large XML (1-1.5MB) files
Hi,
I'm using an XML Schema-based object-relational storage for my XML documents, which are typically 1-1.5 MB in size, and I am having serious performance issues with XPath queries.
When I do XPath query against an element of SQLType varchar2, I get a good performance. But when I do a similar XPath query against an element of SQLType Collection (Varray of varchar2), I get a very ordinary performance.
I have also created indexes on extract() and analyzed my XMLType table and indexes, but I have no performance gain. Also, I have tried all sorts of storage options available for Collections ie. Varray's, Nested Tables, IOT's, LOB's, Inline, etc... and all these gave me same bad performance.
I even tried creating XMLType views based on XPath queries but the performance didn't improve much.
I guess I'm running out of options and patience as well.;)
I would appreciate any ideas/suggestions, please help.....
Thanks;
Ramakrishna Chinta
Are you having similar symptoms as I am? http://discussions.apple.com/thread.jspa?threadID=2234792&tstart=0
-
Performance issue with HRALXSYNC report..
HI,
I'm facing a performance issue with the HRALXSYNC report. As this is a standard report, can anybody suggest how to optimize it?
Thanks in advance.
Saleem Javed
Moderator message: Please Read before Posting in the Performance and Tuning Forum, also look for existing SAP notes and/or send a support message to SAP.
Edited by: Thomas Zloch on Aug 23, 2011 4:17 PM
Sreedhar,
Thanks for your quick response. Indexes were not created for the VBPA table. Basis people tested by creating indexes and reported that it takes more time with indexes than with the regular query optimizer. This is happening in the function forward_ag_selection.
select vbeln lifnr from vbpa
appending corresponding fields of table lt_select
where vbeln in ct_vbeln
and posnr eq posnr_initial
and parvw eq 'SP'
and lifnr in it_spdnr.
I don't see any issue with this query. I'll give more info later. -
Performance Issue with VL06O report
Hi,
We are having performance issue with VL06O report, when run with forwarding agent. It is taking about an hour with forwarding agent. The issue is with VBPA table and we found one OSS note, but it is for old versions. ours is ECC 5.0. Can anybody know the solution? If you guys need more information, please ask me.
Thanks,
Surya
Sreedhar,
Thanks for your quick response. Indexes were not created for the VBPA table. Basis people tested by creating indexes and reported that it takes more time with indexes than with the regular query optimizer. This is happening in the function forward_ag_selection.
select vbeln lifnr from vbpa
appending corresponding fields of table lt_select
where vbeln in ct_vbeln
and posnr eq posnr_initial
and parvw eq 'SP'
and lifnr in it_spdnr.
I don't see any issue with this query. I'll give more info later.