Query Performance Issue (First_rows hint)
Dear friends, I am running a query with the default optimizer mode (all_rows), but it is not picking the right execution plan or the right index; if I change the optimizer mode to first_rows, it runs instantly. Table statistics for the tables concerned are up to date. The same query was running fine a few days back.
Any idea about this behavior ?
My culprit query is as follows:
SELECT *
FROM (SELECT *
FROM (SELECT /*+ parallel(h,10) */
MSISDN,
IMSI,
SUBSCRIBER_TYPE,
CASE
WHEN INSTR(IN_SERVICE_NAMES, 'MTCIN') = 0 THEN
NULL /* result expression missing from the original post; NULL assumed */
WHEN INSTR(IN_SERVICE_NAMES, 'MTCIN') <> 0 THEN
SUBSTR(IN_SERVICE_NAMES,
INSTR(IN_SERVICE_NAMES, 'MTCIN') + 5,
2)
END MTC
FROM RDMBKPIPRDAPP.HLS_OUT H
WHERE LOAD_PROC_ID = '20112501'
AND LOADING_COUNT = '1'
AND SUBSCRIBER_TYPE = 'Prepaid')) A
INNER JOIN (SELECT /*+ fu parallel(i,10) */
MSISDN, IN_ID, LTRIM(IN_ID, '0') IN_INID
FROM RDMBKPIPRDAPP.IN_OUT I
WHERE LOAD_PROC_ID = '20112501'
) B ON A.MSISDN = B.MSISDN
WHERE A.MOC <> B.IN_ID
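For readers following along, the extraction that the CASE/INSTR/SUBSTR expression performs can be sketched outside the database. This is only an illustration; the sample service string is invented. Note Oracle's INSTR is 1-based while Python's find() is 0-based, so the slice offsets differ by one but land on the same characters.

```python
def extract_mtc(in_service_names):
    """Mirror of CASE WHEN INSTR(names,'MTCIN')... SUBSTR(names, INSTR+5, 2)."""
    pos = in_service_names.find('MTCIN')   # Oracle INSTR = 0 means "not found"
    if pos == -1:
        return None                        # first CASE branch: marker absent
    # SUBSTR(names, INSTR(...) + 5, 2): the two characters after 'MTCIN'
    return in_service_names[pos + 5 : pos + 7]

print(extract_mtc('ABC;MTCIN07;XYZ'))      # → 07
print(extract_mtc('ABC;XYZ'))              # → None
```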
Regards
Irfan Ahmad
Irfan_Ahmad wrote:
Dear friends, I am running a query with the default optimizer mode (all_rows), but it is not picking the right execution plan or the right index; if I change the optimizer mode to first_rows, it runs instantly. Table statistics for the tables concerned are up to date. The same query was running fine a few days back.
Any idea about this behavior ?
What is the "right" plan, what is the "wrong" plan, and what method do you use to convince yourself that the "right" plan IS the right plan ?
Please use the 'code' tags when reporting the execution plans so that they are easily readable. (See comments at end of note).
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
Similar Messages
-
Hi Gurus,
I am working on performance tuning at the moment and would like some tips on query performance tuning, if anyone can help me in that regard.
The thing is that I have got an idea of the system, and the issues now are with DB space, ABAP dumps, lack of free space in the tablespace, no number range buffering, cubes using too many aggregates, large InfoCubes, large ODS objects, and many others.
So my question is: can anyone tell me how to resolve the issues with the large master data tables and the large ODS? One more important issue is KPIs exceeding their reference values; any idea how to deal with them?
Waiting for your valuable responses.
Thanks in advance
Regards
Amit
Hi Amit
For query performance issues you can go for:
Aggregates: they will help you a lot in making your query faster, because the query does not hit your cube; it hits the aggregates, which have far fewer records than the cube.
Secondly, I would suggest you use CKFs (calculated key figures) in place of formulas, if there are any in the query.
Another thing: avoid the use of navigational attributes as far as possible. If you have to use them, keep them to a minimal level; the reason I say so is that during query execution every navigational attribute adds an extra join to your master data and thus decreases query performance.
Be specific with rows and columns; if you are not sure of a key figure or a characteristic, better put it in the free characteristics.
Use filters if possible.
If you follow these, I am sure your query performance will improve.
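The aggregate point can be illustrated with a toy example (SQLite stand-in; the table names and figures are invented): the same total is answered from a small pre-summarised table instead of the full detail table, which is exactly why an aggregate keeps the query off the cube.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE cube_detail (region TEXT, day TEXT, amount INTEGER)')
con.executemany('INSERT INTO cube_detail VALUES (?, ?, ?)',
                [('EU', d, 10) for d in ('d1', 'd2', 'd3')] +
                [('US', d, 20) for d in ('d1', 'd2')])

# The "aggregate": one row per region, built once; many detail rows collapse.
con.execute('CREATE TABLE agg_region AS '
            'SELECT region, SUM(amount) AS amount FROM cube_detail GROUP BY region')

detail = con.execute("SELECT SUM(amount) FROM cube_detail WHERE region = 'EU'").fetchone()[0]
agg = con.execute("SELECT amount FROM agg_region WHERE region = 'EU'").fetchone()[0]
assert detail == agg == 30   # same answer, read from 1 row instead of 3
```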
Assign points if applicable
Thanks
Puneet
-
SQL query performance issues.
Hi All,
I worked on the query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL for the previous thread.
SQL query performance issues.
Following is the tkprof file.
CURSOR_ID:76 LENGTH:2383 ADDRESS:f6b40ab0 HASH_VALUE:2459471753 OPTIMIZER_GOAL:ALL_ROWS USER_ID:443 (APPS)
insert into cos_temp(
TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
INVOICE_NUMBER, EXT_SALES, EXT_COS,
GROSS_PROFIT, ACCT_DATE,
SHIPMENT_TYPE,
FROM_ORGANIZATION_ID,
FROM_ORGANIZATION_CODE)
select a.trx_date,
g.segment5 dept,
g.segment4 prd,
m.segment1 part,
d.customer_number customer,
b.quantity_invoiced units,
-- substr(a.sales_order,1,6) order#,
substr(ltrim(b.interface_line_attribute1),1,10) order#,
a.trx_number invoice,
(b.quantity_invoiced * b.unit_selling_price) sales,
(b.quantity_invoiced * nvl(price.operand,0)) cos,
(b.quantity_invoiced * b.unit_selling_price) -
(b.quantity_invoiced * nvl(price.operand,0)) profit,
to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
'DRP',
l.ship_from_org_id,
p.organization_code
from ra_customers d,
gl_code_combinations g,
mtl_system_items m,
ra_cust_trx_line_gl_dist c,
ra_customer_trx_lines b,
ra_customer_trx_all a,
apps.oe_order_lines l,
apps.HR_ORGANIZATION_INFORMATION i,
apps.MTL_INTERCOMPANY_PARAMETERS inter,
apps.HZ_CUST_SITE_USES_ALL site,
apps.qp_list_lines_v price,
apps.mtl_parameters p
where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
and a.batch_source_id = 1001 -- Sales order shipped other OU
and a.complete_flag = 'Y'
and a.customer_trx_id = b.customer_trx_id
and b.customer_trx_line_id = c.customer_trx_line_id
and a.sold_to_customer_id = d.customer_id
and b.inventory_item_id = m.inventory_item_id
and m.organization_id
= decode(substr(g.segment4,1,2),'01',5004,'03',5004,
'02',5003,'00',5001,5002)
and nvl(m.item_type,'0') <> '111'
and c.code_combination_id = g.code_combination_id+0
and l.line_id = b.interface_line_attribute6
and i.organization_id = l.ship_from_org_id
and p.organization_id = l.ship_from_org_id
and i.org_information3 <> '5108'
and inter.ship_organization_id = i.org_information3
and inter.sell_organization_id = '5108'
and inter.customer_site_id = site.site_use_id
and site.price_list_id = price.list_header_id
and product_attr_value = to_char(m.inventory_item_id)
call count cpu elapsed disk query current rows misses
Parse 1 0.47 0.56 11 197 0 0 1
Execute 1 3733.40 3739.40 34893 519962154 11 188 0
total 2 3733.87 3739.97 34904 519962351 11 188 1
| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
| 741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
| 741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
| 741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
| 741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
| 3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
| 3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
| 3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
| 1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
| 1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
| 486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
| 486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
| 486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
| 75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
| 486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
| 486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
| 486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
| 1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
| 2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
| 1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
| 1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
| 3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
| 3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
| 3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
| 3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
| 3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
| 3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
| 741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
| 6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
Please help.
Regards
Ashish
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
There is no way the optimizer should choose to process that many rows using nested loops.
Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
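To illustrate the skew point with invented numbers: without a histogram, a uniform model estimates rows / num_distinct matching rows for every value, which can be off by an order of magnitude in either direction on skewed data, and a bad row estimate is exactly what pushes the optimizer toward the wrong join method.

```python
from collections import Counter

rows, num_distinct = 1_000_000, 10
uniform_estimate = rows // num_distinct            # 100,000 for every value

# Actual (skewed) data: one popular value dominates the column.
actual = Counter({'OPEN': 991_000, **{f'S{i}': 1_000 for i in range(9)}})
assert sum(actual.values()) == rows

print(uniform_estimate, actual['OPEN'], actual['S0'])   # 100000 991000 1000
```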
Please post the explain plan and the optimizer* parameter settings. -
Query Performance Issue based on the same Multiprovider
Hi All,
I am facing a performance issue with one of our BEx queries (Query 1), based on a particular MultiProvider.
I have done all kinds of testing and come to the conclusion that the OLAP data transfer time for the query is the largest component, at around 820 seconds.
The surprising part is that another query (Query 2), based on the same MultiProvider, runs absolutely fine, with an OLAP data transfer time of merely 3 seconds.
Another surprise is that Query 2 has even more restricted key figures and calculated key figures, yet it runs smoothly.
Both the queries use cache memory.
Please suggest a solution to this.
Thanks.
Hi Rupesh,
There is not much difference between the 2 queries.
Query 1, which has the performance issue, has the same filters as Query 2, which is working fine.
The only difference is that the Fiscal Week selection is in the Default Values pane for Query 2, whereas it is in the Characteristic Restrictions in Query 1. I doubt that this setting has any effect on the performance.
The restriction in both queries, i.e. the Fiscal Week restriction, is based on a customer exit, and the rows include a hierarchy in both queries.
I assume creating aggregates on hierarchy for the Cube can be a solution. -
Query Performance Issues on a cube sized 64GB.
Hi,
We have a non-time-based cube whose size is 64 GB, so effectively I can't use a time dimension for partitioning. The transaction table has ~850 million records. We have 20+ dimensions, among which 2 dimensions have 50 million records.
I have equally distributed the fact table records among 60 partitions. Each partition size is around 900 MB.
The processing of the cube is not an issue as it completes in 3.5 hours. The issue is with the query performance of the cube.
When an MDX query is submitted, unfortunately, in the majority of cases the storage engine has to scan all the partitions (as our cube is not time-dependent and we can't find a suitable dimension on which to partition the measure group).
I'm aware of the cache warming and usage based aggregation(UBO) techniques.
However, the cube is available for users to perform ad hoc queries, so the benefits of cache warming and UBO may cease to contribute to the performance gain: there is a high probability that each user will look at the data from a different perspective (especially with 20+ dimensions) as the days progress.
Also, we have 15+ average calculations (calculated measures) in the cube, so the storage engine sends all the granular data that the formula engine requests (possibly millions of rows) and then performs the average calculation.
A look at the profiler suggested that a considerable amount of time is spent by the storage engine gathering the records from the 60 partitions.
FYI - our server has 32 GB RAM and 8 cores, and it is exclusive to Analysis Services.
I would appreciate comments from anyone who has worked on a large cube that is not time-dependent, and the steps they took to improve ad hoc query performance for their users.
Thanks
CoolP
Hello CoolP,
Here is a good article regarding how to tune query performance in SSAS; please see:
Analysis Services Query Performance Top 10 Best Practices:
http://technet.microsoft.com/en-us/library/cc966527.aspx
Hope you can find some helpful clues for tuning your SSAS server's query performance. Moreover, there are two ways to improve the query response time for an increasing number of end users:
Adding more power to the existing server (scale up)
Distributing the load among several small servers (scale out)
For detailed information, please see:
http://technet.microsoft.com/en-us/library/cc966449.aspx
Regards,
Elvis Long
TechNet Community Support -
Query Performance Issue (help)
I'm having huge performance issues with the following. The INTERSECT subquery lists the duplicates in table1 and table2, and those results are then deleted from table2. But the duplicate criteria do not look at all fields, only at those in the subquery.
DELETE FROM isw.accounts2
WHERE id_user||''||SYSTEM_ID||''||NM_DATABASE IN (
SELECT id_user||''||SYSTEM_ID||''||NM_DATABASE
FROM (
SELECT id_user, domain_name, system_name, user_description,
user_dn, fl_system_user, dt_user_created,
dt_user_modified, pw_changed, user_disabled,
user_locked, pw_neverexpired, pw_expired,
pw_locked, cd_geid, user_type, nm_database,
cd_altname, fl_lob, cd_account_sid, system_id
FROM isw.accounts -- accounts
WHERE SYSTEM_ID IN (SELECT SYSTEM_ID FROM SYSTEMS
WHERE FL_LOB = 'type' AND
FL_SYSTEM_TYPE = 'Syst')
INTERSECT
SELECT id_user, domain_name, system_name, user_description,
user_dn, fl_system_user, dt_user_created,
dt_user_modified, pw_changed, user_disabled,
user_locked, pw_neverexpired, pw_expired,
pw_locked, cd_geid, user_type, nm_database,
cd_altname, fl_lob, cd_account_sid, system_id
FROM isw.accounts2 --accounts_temp
WHERE SYSTEM_ID IN (SELECT SYSTEM_ID FROM SYSTEMS
WHERE FL_LOB = 'type'
AND FL_SYSTEM_TYPE = 'syst')
)
Edited by: Topher34 on Oct 24, 2008 12:00 PM
Edited by: Topher34 on Oct 24, 2008 12:01 PM
PLAN_TABLE_OUTPUT
Plan hash value: 2030965500
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 623 | 2269 (7)| 00:00:28 |
|* 1 | FILTER | | | | | |
|* 2 | HASH JOIN SEMI | | 1 | 623 | 236 (2)| 00:00:03 |
| 3 | TABLE ACCESS FULL | ACCOUNTS_BAX2 | 1 | 603 | 222 (1)| 00:00:03 |
|* 4 | TABLE ACCESS FULL | SYSTEMS | 15 | 300 | 14 (8)| 00:00:01 |
| 5 | VIEW | | 1 | 117 | 2032 (7)| 00:00:25 |
| 6 | INTERSECTION | | | | | |
| 7 | SORT UNIQUE | | 2145 | 418K| | |
|* 8 | HASH JOIN | | 2145 | 418K| 1793 (8)| 00:00:22 |
|* 9 | TABLE ACCESS FULL| SYSTEMS | 15 | 300 | 14 (8)| 00:00:01 |
|* 10 | TABLE ACCESS FULL| ACCOUNTS_BAX | 2269 | 398K| 1779 (8)| 00:00:22 |
| 11 | SORT UNIQUE | | 1 | 588 | | |
|* 12 | HASH JOIN | | 1 | 588 | 236 (2)| 00:00:03 |
|* 13 | TABLE ACCESS FULL| ACCOUNTS_BAX2 | 1 | 568 | 222 (1)| 00:00:03 |
|* 14 | TABLE ACCESS FULL| SYSTEMS | 15 | 300 | 14 (8)| 00:00:01 |
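One hazard worth noting with the id_user||''||SYSTEM_ID||''||NM_DATABASE key used in the DELETE above (the values below are invented): concatenating columns with no separator can make distinct rows compare equal, so the IN test can match rows that are not actually duplicates, and the expression also prevents index use on the individual columns. A minimal sketch of the false match:

```python
# Two distinct (id_user, system_id, nm_database) rows...
row_a = ('user1', '23', 'db')    # id_user='user1',  system_id='23', nm_database='db'
row_b = ('user12', '3', 'db')    # id_user='user12', system_id='3',  nm_database='db'

# ...mirroring id_user || '' || system_id || '' || nm_database:
key = lambda r: ''.join(r)

assert row_a != row_b            # the rows differ...
assert key(row_a) == key(row_b)  # ...but the concatenated keys collide: 'user123db'
```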
Edited by: Topher34 on Oct 27, 2008 8:08 AM -
SELECT query performance issue
Hello experts!!!
I am facing a performance issue with the SELECT query below. It is taking a long time to execute.
Please suggest how I can improve the performance of this query.
SELECT MBLNR MATNR LIFNR MENGE WERKS BUKRS LGORT BWART INTO CORRESPONDING FIELDS OF TABLE IT_MSEG
FROM MSEG
WHERE MATNR IN S_MATNR
AND LIFNR IN S_LIFNR
AND WERKS IN S_WERKS
AND BUKRS IN S_BUKRS
AND XAUTO = ''
AND BWART IN ('541' , '542' , '543' , '544', '105' , '106').
Thanks in advance.
Regards
Ankur
Hi Ankur,
the MSEG index for material is
Index MSEG~M
MANDT
MATNR
WERKS
LGORT
BWART
SOBKZ
It could be used very efficiently if you supply values for MATNR, WERKS and LGORT.
There is no index on LIFNR. If you want the data for specific vendor(s), you should select from EKKO first; it has index Index EKKO~1
MANDT
LIFNR
EKORG
EKGRP
BEDAT
You can JOIN EKKO and EKBE to get the BSEG key fields GJAHR BELNR BUZEI directly.
I don't know your details, but I think you can get all you need from EKKO and EKBE. You may also consider EKPO, as it has a material index, Index EKPO~1
MANDT
MATNR
WERKS
BSTYP
LOEKZ
ELIKZ
MATKL
Do you really need the (much bigger) MSEG?
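The composite-index point above can be sketched with SQLite as a stand-in (the column list mimics the MSEG~M index; the data and the plan format are SQLite's, not SAP's): an index is only usable when the leading columns are supplied, while a filter on an unindexed column like LIFNR forces a full scan.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE mseg (mandt TEXT, matnr TEXT, werks TEXT, lgort TEXT, lifnr TEXT)')
con.execute('CREATE INDEX mseg_m ON mseg (mandt, matnr, werks, lgort)')

def plan(where):
    """Return the query plan text for a SELECT with the given WHERE clause."""
    rows = con.execute(f'EXPLAIN QUERY PLAN SELECT * FROM mseg WHERE {where}').fetchall()
    return ' '.join(r[-1] for r in rows)

# Leading index columns supplied: the index can be used.
assert 'mseg_m' in plan("mandt = '100' AND matnr = 'M1' AND werks = 'W1'")
# Filter only on the unindexed LIFNR: full table scan.
assert 'SCAN' in plan("lifnr = 'V1'")
```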
Regards,
Clemens -
Oracle Forms6i Query Performance issue - Urgent
Hi All,
I'm using Oracle Forms 6i and Oracle DB 9i.
I'm facing a performance issue in a query form.
The detail block of the form takes a long time to load the data.
Form contains 2 non data blocks
1.HDR - 3 input parameters
2.DETAILS - Grid - Details
HDR input fields
1.Company Code
2.Company ACccount No
3.Customer Name
Details Grid is displayed the details.
Here there are 2 tables involved
1.Table1 - 1 crore records
2.Table2 - 4 crore records
In a form procedure, a cursor is built and fetched directly, and the values are assigned to the form block fields.
Below I've pasted the query:
SELECT
t1.entry_dt,
t2.authoriser_code,
t1.company_code,
t1.company_ac_no,
initcap(t1.customer_name) cust_name,
t2.agreement_no,
t1.customer_id
FROM
table1 t1,
table2 t2
WHERE
(t2.trans_no = t1.trans_no or t2.temp_trans_no = t1.trans_no)
AND t1.company_code = nvl(:hdr.l_company_code,t1.company_code)
AND t1.company_ac_no = nvl(:hdr.l_company_ac_no,t1.company_ac_no)
AND lower(t1.customer_name) LIKE lower(nvl('%'||:hdr.l_customer_name||'%' ,t1.customer_name))
GROUP BY
t2.authoriser_code,
t1.company_code,
t1.company_ac_no,
t1.customer_name,
t2.agreement_no,
t1.customer_id;
Where clause analysis:
1. An OR condition (in table2, two different columns are compared with one column in table1)
2. A LIKE operator
3. All the columns have indexes, but they are not used properly; it is always a full table scan
4. An NVL check
5. If I run the query directly in the database it comes back a little faster; from the front end it is very slow
Input parameters vs. query retrieval limits:
Company code only: the record count will be 50 - 500 records
Company code and company account number: the record count will be 1 - 5 records
Company code, company account number and customer name: the record count will be 1 - 5 records
I have tried the following:
1. Splitting the query using UNION (the OR clause separated) - nested loops cost 850, nested loops cost 750 - index access by rowid cost 160, index access by rowid cost 152, full table access...
2. A dynamic SQL build - DBMS_SQL.DEFINE_COLUMN...
3. Giving only one input parameter - nested loops cost 780, nested loops cost 780 - index access by rowid cost 148, index access by rowid cost 152, full table access...
I am still facing the same issue.
Please help me out on this.
Thanks and Regards,
Oracle1001
Sudhakar P wrote:
the query below takes more than one minute while updating the records through Pro*C.
Execute 562238 161.03 174.15 7 3932677 2274833 562238
Hi Sudhakar,
If the database is capable of executing 562,238 update statements in one minute, then that's pretty good, don't you think?
Your real problem is in the application code which probably looks something like this in pseudocode:
for i in (some set containing 562,238 rows)
loop
<your update statement with all the bind variables>
end loop;
If you transform your code to do a single update statement, you'll gain a lot of seconds.
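Rob's point in miniature, using SQLite as a stand-in (the table and values are invented): the loop issues one statement per row, as in the pseudocode above, while the set-based version does the same work in a single statement, avoiding the per-statement overhead.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)')
con.executemany('INSERT INTO t VALUES (?, 0)', [(i,) for i in range(5)])

# Row-by-row, as in the pseudocode above: one execute per row.
for i in range(5):
    con.execute('UPDATE t SET val = val + 1 WHERE id = ?', (i,))

# Set-based: the same work for every row in one statement.
con.execute('UPDATE t SET val = val + 1')

assert [v for (v,) in con.execute('SELECT val FROM t ORDER BY id')] == [2] * 5
```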
Regards,
Rob. -
Building a new Cube Vs Restricted Key figure in Query - Performance issue
Hi,
I have a requirement to create a OPEX restricted key figure in Query. The problem is that the key figure should be restricted to about 30 GL Accounts and almost 300 Cost centers.
I do not know if this might cause a performance issue in the query. At the moment I am thinking of creating a new OPEX cube, loading only those 30 GL accounts, 300 cost centers and the amount, and including the OPEX cube in the MultiProvider in order to get the OPEX amount in the report.
What's the best solution - creating an OPEX restricted key figure, or an OPEX cube?
thanks,
Bhat
I think you should go for the cube, as all the restricted key figures are calculated at OLAP runtime, so they will definitely affect query performance. There are a lot of cost centers on which you have to restrict, so during query runtime it will take a lot of time to fetch the data from the InfoProvider. It is better to create a cube with the restrictions and include it in the MultiProvider; it will definitely save a lot of time during query execution.
Edited by: shyam agarwal on Feb 29, 2012 10:26 AM -
Query Performance issue in Oracle Forms
Hi All,
I am using Oracle 9i DB and Forms 6i.
In the query form, the query takes a long time to load the data into the form.
There are two tables used here.
1 table(A) contains 5 crore records another table(B) has 2 crore records.
The records fetched range from 1 to 500.
Table A had no index on the main columns; after I created an index on the main columns of table A, the query fetched the data quickly.
But the DBA team doesn't want to create an index on table A because of a tablespace problem.
If we create the index on the main table (A), there will be a performance overhead in production.
Concurrent user capacity is 1500.
Are there any alternative methods to handle this problem?
Regards,
RS
1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
http://en.wikipedia.org/wiki/Crore
I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
3) I don't understand the comment "If we create the index on the main table (A), there will be a performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
Justin -
Help to rewrite the query --performance issue
Hi,
Please help to rewrite the query, since its performance is not good. In particular, the second inline query (the one with the CASE statements in its SELECT clause) is taking most of the cost.
Database: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit
SELECT *
FROM
(SELECT q.*,
COUNT(*) OVER() AS record_count,
ROWNUM AS row_num
FROM
(SELECT ExName.examiner_code,
examiner_name,
:v_year,
:v_month,
count_fb,
NVL(count_entered_fb, 0) count_entered_fb,
NVL(count_sent_fb, 0) count_sent_fb,
NVL(count_edited_fb, 0) count_edited_fb,
NVL(count_complete_fb, 0) count_complete_fb,
NVL(count_withibcardiff_fb, 0) count_withibcardiff_fb
FROM
(SELECT examiner_code,
COUNT(*) AS count_fb
FROM
(SELECT
examiner_code,
paper_code,
assessment_school
FROM
( SELECT DISTINCT ce.examiner_code,
ce.paper_code,
ce.assessment_school
FROM
(SELECT
DISTINCT assessment_school,
paper_code,
examiner_code
FROM candidate_examiner_allocation cea
WHERE cea.element = 'Moderation of IA'
AND cea.year = :v_year
AND cea.month = :v_month
) ce,
subject_group sg,
subject_component sc
WHERE (:v_padded_examiner_code IS NULL
OR ce.examiner_code = :v_padded_examiner_code)
AND (:v_subject_group IS NULL
OR sg.group_number = :v_subject_group)
AND sg.year = :v_year
AND sg.month = :v_month
AND sc.year = :v_year
AND sc.month = :v_month
AND sc.paper_code = ce.paper_code
AND sc.subject = sg.subject
AND sc.lvl = sg.lvl
AND (:v_subject IS NULL
OR sc.subject = :v_subject)
AND (:v_lvl IS NULL
OR sc.lvl = :v_lvl)
) ea
GROUP BY examiner_code
) ExName,
(SELECT examiner_code,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'ENTERED'
THEN 1
ELSE NULL
END) AS count_entered_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'SENT'
THEN 1
ELSE NULL
END) AS count_sent_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'EDITED'
THEN 1
ELSE NULL
END) AS count_edited_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'COMPLETE'
THEN 1
ELSE NULL
END) AS count_complete_fb,
COUNT(
CASE
WHEN UPPER(wfi.status) = 'WITH IBCARDIFF'
THEN 1
ELSE NULL
END) AS count_withibcardiff_fb
FROM ia_instances ia1,
workflow_instance wfi
WHERE wfi.instance_id = ia1.workflow_instance_id
AND ia1.year = :v_year
AND ia1.month = :v_month
GROUP BY ia1.year,
ia1.month,
examiner_code
) iaF,
(SELECT person_code,
title
|| ' '
|| firstname
|| ' '
|| lastname AS examiner_name
FROM person
WHERE :v_examiner_name IS NULL
OR UPPER(title
|| ' '
|| firstname
|| ' '
|| lastname) LIKE :v_search_examiner_name
) P
WHERE ExName.examiner_code = iaF.examiner_code (+)
AND ExName.examiner_code = p.person_code
ORDER BY ExName.examiner_code
) q
) rc
WHERE row_num >= :v_start_row
AND row_num <= (:v_start_row+(:v_max_row-1));
explain plan
line 1: SQLPLUS Command Skipped: set linesize 130
line 2: SQLPLUS Command Skipped: set pagesize 0
PLAN_TABLE_OUTPUT
Plan hash value: 1581970599
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 276 | | 2187 (6)| 00:00:34 |
|* 1 | FILTER | | | | | | |
|* 2 | VIEW | | 1 | 276 | | 2187 (6)| 00:00:34 |
| 3 | WINDOW BUFFER | | 1 | 250 | | 2187 (6)| 00:00:34 |
| 4 | COUNT | | | | | | |
| 5 | VIEW | | 1 | 250 | | 2187 (6)| 00:00:34 |
| 6 | SORT ORDER BY | | 1 | 119 | | 2187 (6)| 00:00:34 |
| 7 | NESTED LOOPS | | 1 | 119 | | 2186 (6)| 00:00:34 |
|* 8 | HASH JOIN OUTER | | 1 | 92 | | 2185 (6)| 00:00:34 |
| 9 | VIEW | | 1 | 20 | | 51 (4)| 00:00:01 |
| 10 | SORT GROUP BY | | 1 | 7 | | 51 (4)| 00:00:01 |
| 11 | VIEW | | 1 | 7 | | 51 (4)| 00:00:01 |
| 12 | SORT UNIQUE | | 1 | 127 | | 51 (4)| 00:00:01 |
| 13 | NESTED LOOPS | | 1 | 127 | | 50 (2)| 00:00:01 |
|* 14 | HASH JOIN | | 1 | 68 | | 44 (3)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID| SUBJECT_COMPONENT | 13 | 520 | | 40 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | SUBJECT_COMPONENT_ASSESS_TYPE | 1059 | | | 9 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | SUBJECT_GROUP_PK | 41 | 1148 | | 3 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | CEA_AUTOMATIC_ALLOCATION_STATS | 5 | 295 | | 6 (0)| 00:00:01 |
| 19 | VIEW | | 679 | 48888 | | 2133 (6)| 00:00:33 |
| 20 | SORT GROUP BY | | 679 | 25123 | | 2133 (6)| 00:00:33 |
|* 21 | HASH JOIN | | 52408 | 1893K| 1744K| 2126 (6)| 00:00:33 |
| 22 | TABLE ACCESS BY INDEX ROWID | IA_INSTANCES | 52408 | 1125K| | 688 (1)| 00:00:11 |
|* 23 | INDEX RANGE SCAN | IND_IA_INSTANCES | 49077 | | | 137 (2)| 00:00:03 |
| 24 | TABLE ACCESS FULL | WORKFLOW_INSTANCE | 1075K| 15M| | 960 (7)| 00:00:15 |
|* 25 | TABLE ACCESS BY INDEX ROWID | PERSON | 1 | 27 | | 1 (0)| 00:00:01 |
|* 26 | INDEX UNIQUE SCAN | PERSON_PK | 1 | | | 0 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter(TO_NUMBER(:V_START_ROW)<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1))
2 - filter("ROW_NUM">=TO_NUMBER(:V_START_ROW) AND "ROW_NUM"<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1))
8 - access("EXNAME"."EXAMINER_CODE"="IAF"."EXAMINER_CODE"(+))
14 - access("SC"."SUBJECT"="SG"."SUBJECT" AND "SC"."LVL"="SG"."LVL")
15 - filter((:V_SUBJECT IS NULL OR "SC"."SUBJECT"=:V_SUBJECT) AND ("SC"."LVL"=:V_LVL OR :V_LVL IS NULL))
16 - access("SC"."YEAR"=TO_NUMBER(:V_YEAR) AND "SC"."MONTH"=:V_MONTH)
17 - access("SG"."YEAR"=TO_NUMBER(:V_YEAR) AND "SG"."MONTH"=:V_MONTH)
filter(:V_SUBJECT_GROUP IS NULL OR "SG"."GROUP_NUMBER"=TO_NUMBER(:V_SUBJECT_GROUP))
18 - access("CEA"."YEAR"=TO_NUMBER(:V_YEAR) AND "CEA"."MONTH"=:V_MONTH AND "SC"."PAPER_CODE"="PAPER_CODE" AND
"CEA"."ELEMENT"='Moderation of IA')
filter("CEA"."ELEMENT"='Moderation of IA' AND (:V_PADDED_EXAMINER_CODE IS NULL OR
"EXAMINER_CODE"=:V_PADDED_EXAMINER_CODE))
21 - access("WFI"."INSTANCE_ID"="IA1"."WORKFLOW_INSTANCE_ID")
23 - access("IA1"."YEAR"=TO_NUMBER(:V_YEAR) AND "IA1"."MONTH"=:V_MONTH)
25 - filter(:V_EXAMINER_NAME IS NULL OR UPPER("TITLE"||' '||"FIRSTNAME"||' '||"LASTNAME") LIKE :V_SEARCH_EXAMINER_NAME)
26 - access("EXNAME"."EXAMINER_CODE"="PERSON_CODE")
53 rows selected
Hi,
please find below the right explain plan.
PLAN_TABLE_OUTPUT
SQL_ID 2ct41vyyzqyh7, child number 0
SELECT * FROM (SELECT q.*, COUNT(*) OVER() AS record_count, ROWNUM AS row_num FROM (SELECT
ExName.examiner_code, examiner_name, :v_year, :v_month, count_fb, NVL(count_entered_fb,
0) count_entered_fb, NVL(count_sent_fb, 0) count_sent_fb, NVL(count_edited_fb, 0) count_edited_fb,
NVL(count_complete_fb, 0) count_complete_fb, NVL(count_withibcardiff_fb, 0) count_withibcardiff_fb FROM
(SELECT examiner_code, COUNT(*) AS count_fb FROM (SELECT
examiner_code, paper_code, assessment_school FROM ( SELECT DISTINCT
ce.examiner_code, ce.paper_code, ce.assessment_school FROM (SELECT
DISTINCT assessment_school,
paper_code, examiner
Plan hash value: 651311258
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | | 2785 (100)| |
|* 1 | FILTER | | | | | | |
|* 2 | VIEW | | 4 | 1104 | | 2785 (7)| 00:00:43 |
| 3 | WINDOW BUFFER | | 4 | 1000 | | 2785 (7)| 00:00:43 |
| 4 | COUNT | | | | | | |
| 5 | VIEW | | 4 | 1000 | | 2785 (7)| 00:00:43 |
| 6 | NESTED LOOPS | | 4 | 476 | | 2785 (7)| 00:00:43 |
| 7 | MERGE JOIN OUTER | | 4 | 368 | | 2781 (7)| 00:00:43 |
| 8 | VIEW | | 4 | 80 | | 72 (3)| 00:00:02 |
| 9 | SORT GROUP BY | | 4 | 28 | | 72 (3)| 00:00:02 |
| 10 | VIEW | | 4 | 28 | | 72 (3)| 00:00:02 |
| 11 | SORT UNIQUE | | 4 | 508 | | 72 (3)| 00:00:02 |
| 12 | NESTED LOOPS | | 4 | 508 | | 71 (2)| 00:00:02 |
|* 13 | HASH JOIN | | 1 | 68 | | 44 (3)| 00:00:01 |
|* 14 | TABLE ACCESS BY INDEX ROWID| SUBJECT_COMPONENT | 13 | 520 | | 40 (0)| 00:00:01 |
|* 15 | INDEX RANGE SCAN | SUBJECT_COMPONENT_ASSESS_TYPE | 1059 | | | 9 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | SUBJECT_GROUP_PK | 41 | 1148 | | 3 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | CEA_AUTOMATIC_ALLOCATION_STATS | 30 | 1770 | | 27 (0)| 00:00:01 |
|* 18 | SORT JOIN | | 576 | 41472 | | 2709 (7)| 00:00:42 |
| 19 | VIEW | | 576 | 41472 | | 2708 (7)| 00:00:42 |
| 20 | SORT GROUP BY | | 576 | 21312 | | 2708 (7)| 00:00:42 |
|* 21 | HASH JOIN | | 52408 | 1893K| 1744K| 2701 (7)| 00:00:41 |
|* 22 | TABLE ACCESS FULL | IA_INSTANCES | 52408 | 1125K| | 1263 (6)| 00:00:20 |
| 23 | TABLE ACCESS FULL | WORKFLOW_INSTANCE | 1075K| 15M| | 960 (7)| 00:00:15 |
|* 24 | TABLE ACCESS BY INDEX ROWID | PERSON | 1 | 27 | | 1 (0)| 00:00:01 |
|* 25 | INDEX UNIQUE SCAN | PERSON_PK | 1 | | | 0 (0)| |
Predicate Information (identified by operation id):
1 - filter(TO_NUMBER(:V_START_ROW)<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1))
2 - filter(("ROW_NUM">=TO_NUMBER(:V_START_ROW) AND "ROW_NUM"<=TO_NUMBER(:V_START_ROW)+(TO_NUMBER(:V_MAX_ROW)-1)))
13 - access("SC"."SUBJECT"="SG"."SUBJECT" AND "SC"."LVL"="SG"."LVL")
14 - filter(((:V_SUBJECT IS NULL OR "SC"."SUBJECT"=:V_SUBJECT) AND ("SC"."LVL"=:V_LVL OR :V_LVL IS NULL)))
15 - access("SC"."YEAR"=TO_NUMBER(:V_YEAR) AND "SC"."MONTH"=:V_MONTH)
16 - access("SG"."YEAR"=TO_NUMBER(:V_YEAR) AND "SG"."MONTH"=:V_MONTH)
filter((:V_SUBJECT_GROUP IS NULL OR "SG"."GROUP_NUMBER"=TO_NUMBER(:V_SUBJECT_GROUP)))
17 - access("CEA"."YEAR"=TO_NUMBER(:V_YEAR) AND "CEA"."MONTH"=:V_MONTH AND "SC"."PAPER_CODE"="PAPER_CODE" AND
"CEA"."ELEMENT"='Moderation of IA')
filter(("CEA"."ELEMENT"='Moderation of IA' AND (:V_PADDED_EXAMINER_CODE IS NULL OR
"EXAMINER_CODE"=:V_PADDED_EXAMINER_CODE)))
18 - access("EXNAME"."EXAMINER_CODE"="IAF"."EXAMINER_CODE")
filter("EXNAME"."EXAMINER_CODE"="IAF"."EXAMINER_CODE")
21 - access("WFI"."INSTANCE_ID"="IA1"."WORKFLOW_INSTANCE_ID")
22 - filter(("IA1"."MONTH"=:V_MONTH AND "IA1"."YEAR"=TO_NUMBER(:V_YEAR)))
24 - filter((:V_EXAMINER_NAME IS NULL OR UPPER("TITLE"||' '||"FIRSTNAME"||' '||"LASTNAME") LIKE :V_SEARCH_EXAMINER_NAME))
25 - access("EXNAME"."EXAMINER_CODE"="PERSON_CODE")
66 rows selected -
Query performance issues - Poor cardinality estimate?
Hi,
I have a query which is taking far longer than the explain plan estimates (an estimate of 1 minute, but the query is still running after several hours).
Plan hash value: 3287246760
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 195 | 3795 (1)| 00:00:46 |
| 1 | VIEW | | 1 | 195 | 3795 (1)| 00:00:46 |
| 2 | WINDOW SORT | | 1 | 151 | 3795 (1)| 00:00:46 |
| 3 | VIEW | | 1 | 151 | 3794 (1)| 00:00:46 |
| 4 | SORT UNIQUE | | 1 | 147 | 3794 (1)| 00:00:46 |
| 5 | WINDOW BUFFER | | 1 | 147 | 3794 (1)| 00:00:46 |
| 6 | SORT GROUP BY PIVOT | | 1 | 147 | 3794 (1)| 00:00:46 |
| 7 | NESTED LOOPS | | | | | |
| 8 | NESTED LOOPS | | 1 | 147 | 3793 (1)| 00:00:46 |
| 9 | NESTED LOOPS | | 3 | 297 | 1503 (1)| 00:00:19 |
|* 10 | HASH JOIN | | 238 | 15470 | 75 (7)| 00:00:01 |
| 11 | MAT_VIEW ACCESS FULL | VENTILATION | 17994 | 404K| 35 (0)| 00:00:01 |
| 12 | VIEW | | 17994 | 738K| 39 (11)| 00:00:01 |
| 13 | SORT UNIQUE | | 17994 | 702K| 39 (11)| 00:00:01 |
| 14 | WINDOW SORT | | 17994 | 702K| 39 (11)| 00:00:01 |
|* 15 | VIEW | | 17994 | 702K| 37 (6)| 00:00:01 |
| 16 | WINDOW SORT | | 17994 | 632K| 37 (6)| 00:00:01 |
| 17 | MAT_VIEW ACCESS FULL | VENTILATION | 17994 | 632K| 35 (0)| 00:00:01 |
| 18 | INLIST ITERATOR | | | | | |
|* 19 | TABLE ACCESS BY INDEX ROWID| LABEVENTS | 1 | 34 | 6 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | LABEVENTS_O5 | 5 | | 3 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | CHARTEVENTS_O5 | 4937 | | 12 (0)| 00:00:01 |
|* 22 | TABLE ACCESS BY INDEX ROWID | CHARTEVENTS | 1 | 48 | 763 (0)| 00:00:10 |
Predicate Information (identified by operation id):
10 - access("ICUS"."SUBJECT_ID"="FVGT48H"."SUBJECT_ID" AND
SYS_EXTRACT_UTC("FVGT48H"."BEGIN_TIME")=SYS_EXTRACT_UTC("ICUS"."BEGIN_TIME"))
15 - filter((INTERNAL_FUNCTION("END_TIME")-INTERNAL_FUNCTION("BEGIN_TIME"))DAY(9) TO
SECOND(9)>INTERVAL'+02 00:00:00' DAY(2) TO SECOND(0))
19 - filter(SYS_EXTRACT_UTC("LE"."CHARTTIME")>=SYS_EXTRACT_UTC("FVGT48H"."BEGIN_TIME") AND
SYS_EXTRACT_UTC("LE"."CHARTTIME")<=SYS_EXTRACT_UTC("FVGT48H"."END_TIME"))
20 - access("ICUS"."ICUSTAY_ID"="LE"."ICUSTAY_ID" AND ("LE"."ITEMID"=50013 OR
"LE"."ITEMID"=50019))
filter("LE"."ICUSTAY_ID" IS NOT NULL)
21 - access("LE"."ICUSTAY_ID"="CE"."ICUSTAY_ID")
I tried removing the nested loops using the NO_USE_NL hint, which gives the following plan:
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 195 | | 22789 (1)| 00:04:34 |
| 1 | VIEW | | 1 | 195 | | 22789 (1)| 00:04:34 |
| 2 | WINDOW SORT | | 1 | 151 | | 22789 (1)| 00:04:34 |
| 3 | VIEW | | 1 | 151 | | 22788 (1)| 00:04:34 |
| 4 | SORT UNIQUE | | 1 | 147 | | 22788 (1)| 00:04:34 |
| 5 | WINDOW BUFFER | | 1 | 147 | | 22788 (1)| 00:04:34 |
| 6 | SORT GROUP BY PIVOT | | 1 | 147 | | 22788 (1)| 00:04:34 |
|* 7 | HASH JOIN | | 1 | 147 | | 22787 (1)| 00:04:34 |
| 8 | VIEW | | 17994 | 738K| | 39 (11)| 00:00:01 |
| 9 | SORT UNIQUE | | 17994 | 702K| | 39 (11)| 00:00:01 |
| 10 | WINDOW SORT | | 17994 | 702K| | 39 (11)| 00:00:01 |
|* 11 | VIEW | | 17994 | 702K| | 37 (6)| 00:00:01 |
| 12 | WINDOW SORT | | 17994 | 632K| | 37 (6)| 00:00:01 |
| 13 | MAT_VIEW ACCESS FULL | VENTILATION | 17994 | 632K| | 35 (0)| 00:00:01 |
|* 14 | HASH JOIN | | 11873 | 1217K| 5800K| 22747 (1)| 00:04:33 |
|* 15 | HASH JOIN | | 86060 | 4790K| | 16141 (2)| 00:03:14 |
| 16 | MAT_VIEW ACCESS FULL | VENTILATION | 17994 | 404K| | 35 (0)| 00:00:01 |
|* 17 | TABLE ACCESS FULL | LABEVENTS | 176K| 5869K| | 16105 (2)| 00:03:14 |
| 18 | INLIST ITERATOR | | | | | | |
| 19 | TABLE ACCESS BY INDEX ROWID| CHARTEVENTS | 104K| 4911K| | 6024 (1)| 00:01:13 |
|* 20 | INDEX RANGE SCAN | CHARTEVENTS_O4 | 104K| | | 220 (1)| 00:00:03 |
Predicate Information (identified by operation id):
7 - access("ICUS"."SUBJECT_ID"="FVGT48H"."SUBJECT_ID" AND
SYS_EXTRACT_UTC("FVGT48H"."BEGIN_TIME")=SYS_EXTRACT_UTC("ICUS"."BEGIN_TIME"))
filter(SYS_EXTRACT_UTC("LE"."CHARTTIME")>=SYS_EXTRACT_UTC("FVGT48H"."BEGIN_TIME") AND
SYS_EXTRACT_UTC("LE"."CHARTTIME")<=SYS_EXTRACT_UTC("FVGT48H"."END_TIME"))
11 - filter((INTERNAL_FUNCTION("END_TIME")-INTERNAL_FUNCTION("BEGIN_TIME"))DAY(9) TO
SECOND(9)>INTERVAL'+02 00:00:00' DAY(2) TO SECOND(0))
14 - access("LE"."ICUSTAY_ID"="CE"."ICUSTAY_ID")
filter(SYS_EXTRACT_UTC("CHARTTIME")<SYS_EXTRACT_UTC("LE"."CHARTTIME"))
15 - access("ICUS"."ICUSTAY_ID"="LE"."ICUSTAY_ID")
17 - filter("LE"."ICUSTAY_ID" IS NOT NULL AND ("LE"."ITEMID"=50013 OR "LE"."ITEMID"=50019))
20 - access("CE"."ITEMID"=185 OR "CE"."ITEMID"=186 OR "CE"."ITEMID"=190 OR "CE"."ITEMID"=3420)
The cardinality estimate looks way off to me - I'm expecting several thousand rows. I have up-to-date statistics.
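One way to see exactly where the estimates diverge (rather than eyeballing the explain plan) is to run the statement with rowsource statistics enabled and then compare the optimizer's E-Rows against the A-Rows actually produced. A sketch - the gather_plan_statistics hint is the same one already present in one of the commented SELECTs in the query:

```sql
-- Run the slow statement once with rowsource statistics collected
-- (add the hint to the outermost SELECT):
-- SELECT /*+ gather_plan_statistics */ ...

-- Then, in the same session, pull the actual plan with both
-- estimated (E-Rows) and actual (A-Rows) cardinalities:
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

The first plan step where A-Rows is orders of magnitude above E-Rows is usually the place to focus; every step above it inherits the bad estimate.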
Can anyone help?
Thanks,
Dan
WITH chf_patients AS (
-- Exclude patients with CHF by ICD9 code
select subject_id,
hadm_id
from mimic2v26.icd9
where code in ('398.91','402.01','402.91','428.0', '428.1', '404.13', '404.93', '428.9', '404.91')
)
, icustays AS (
/* Our ICU Stay population */
SELECT *
FROM MIMIC2V26.ICUSTAY_DETAIL
WHERE ICUSTAY_AGE_GROUP = 'adult'
AND SUBJECT_ID NOT IN (select subject_id from chf_patients)
)
-- AND SUBJECT_ID < 50
--select * from icustays;
-- Combine ventilation periods separated by < 48 hours.
, combine_ventilation as (
select subject_id,
icustay_id,
begin_time,
-- end_time as end_first_vent,
-- lead(begin_time,1) over (partition by icustay_id order by begin_time) as next_begin_time,
-- lead(begin_time,1) over (partition by icustay_id order by begin_time) - begin_time as time_to_next,
case when (lead(begin_time,1) over (partition by icustay_id order by begin_time) - begin_time) < interval '2' day
then lead(end_time,1) over (partition by icustay_id order by begin_time)
else end_time end as end_time
from mimic2devel.ventilation
)
--select * from combine_ventilation;
--select * from combine_ventilation where end_of_ventilation != end_time;
-- Get the first ventilation period which is > 48 hours.
, first_vent_gt_48hrs as (
select distinct subject_id,
first_value(begin_time) over (partition by subject_id order by begin_time) as begin_time,
first_value(end_time) over (partition by subject_id order by begin_time) as end_time
from combine_ventilation where end_time - begin_time > interval '48' hour
)
--select * from first_vent_gt_48hrs;
-- Find the ICU stay when it occurred
, icustay_first_vent_gt_48hrs as (
select fvgt48h.subject_id,
icus.icustay_id,
fvgt48h.begin_time,
fvgt48h.end_time
from first_vent_gt_48hrs fvgt48h
join mimic2devel.ventilation icus on icus.subject_id = fvgt48h.subject_id and fvgt48h.begin_time = icus.begin_time
)
--select /*+gather_plan_statistics*/ * from icustay_first_vent_gt_48hrs;
, pao2_fio2_during_ventilation as (
select /*+ NO_USE_NL(le ifvgt48h) */
le.subject_id,
le.hadm_id,
le.icustay_id,
charttime,
case when itemid = 50019 then 'PAO2'
when itemid = 50013 then 'FIO2'
end as item_type,
-- Some FIO2s are fractional instead of percentage
case when itemid = 50013 and valuenum > 1 then round(valuenum / 100,2)
else round(valuenum,2)
end as valuenum
from mimic2v26.labevents le
join icustay_first_vent_gt_48hrs ifvgt48h on ifvgt48h.icustay_id = le.icustay_id and le.charttime between ifvgt48h.begin_time and ifvgt48h.end_time
where le.itemid = 50019 or le.itemid = 50013
)
--select * from pao2_fio2_during_ventilation;
-- Check that FIO2s have valid range
, vent_data_pivot as (
select * from (
select subject_id, hadm_id, icustay_id, charttime, item_type, valuenum from pao2_fio2_during_ventilation)
pivot ( max(valuenum) as valuenum for item_type in ('FIO2' as fio2, 'PAO2' as pao2) )
)
--select * from vent_data_pivot;
-- Fill in prior FIO2 from chartevents
, get_prior_fio2s as (
select /*+ NO_USE_NL(vdp ce) */
distinct
vdp.subject_id,
vdp.hadm_id,
vdp.icustay_id,
vdp.charttime as pao2_charttime,
vdp.fio2_valuenum,
vdp.pao2_valuenum,
-- ce.itemid,
-- ce.charttime as chart_charttime,
-- ce.value1num,
first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) as most_recent_fio2_raw,
case when first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) > 1
then round(first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) / 100,2)
else round(first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc),2)
end as most_recent_fio2,
first_value(ce.charttime) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) as most_recent_fio2_charttime,
vdp.charttime - first_value(ce.charttime) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) as time_since_fio2,
-- first_value(ce.charttime) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) as most_recent_charttime
case when first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) > 1
then round(vdp.pao2_valuenum/(first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) / 100),2)
else round(vdp.pao2_valuenum/(first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc)),2)
end as pf_ratio,
case when first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) > 1
then
case when vdp.pao2_valuenum/(first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc) / 100) < 200 then 1 else 0 end
else
case when vdp.pao2_valuenum/(first_value(ce.value1num) over (partition by ce.icustay_id, vdp.charttime order by ce.charttime desc)) < 200 then 1 else 0 end
end as pf_ratio_below_thresh
from vent_data_pivot vdp
join mimic2v26.chartevents ce on vdp.icustay_id = ce.icustay_id and ce.charttime < vdp.charttime
where itemid in (190,3420,186,185)
)
--select * from get_prior_fio2s order by icustay_id, charttime;
, pf_data as (
select subject_id,
hadm_id,
icustay_id,
pao2_charttime,
lead(pao2_charttime) over (partition by icustay_id order by pao2_charttime) as next_pao2_charttime,
fio2_valuenum,
pao2_valuenum,
lead(pao2_valuenum) over (partition by icustay_id order by pao2_charttime) as next_pao2_valuenum,
most_recent_fio2_raw,
most_recent_fio2,
most_recent_fio2_charttime,
time_since_fio2,
pf_ratio,
lead(pf_ratio) over (partition by icustay_id order by pao2_charttime) as next_pf_ratio,
pf_ratio_below_thresh,
lead(pf_ratio_below_thresh) over (partition by icustay_id order by pao2_charttime) as next_pf_ratio_below_thresh
from get_prior_fio2s
)
select * from pf_data;
Table structure is available here:
http://mimic.physionet.org/schema/latest/
Can I still get a TKPROF if the query doesn't complete? I'll have a go and post the results shortly.
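On the TKPROF question: yes - the trace file is written continuously while the statement runs, so you can trace the session, cancel the query after a representative interval, and still profile the work done so far. A sketch (the tracefile_identifier value and the trace file name are illustrative):

```sql
ALTER SESSION SET tracefile_identifier = 'pf_query';
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- run the query, cancel it after a few minutes, then:
ALTER SESSION SET events '10046 trace name context off';

-- On the database server, format the partial trace file:
--   tkprof <instance>_ora_<pid>_pf_query.trc pf_query.txt sort=exeela waits=yes
```

Level 8 includes wait events, which is what you want when most of the elapsed time is DB read.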
Thanks,
Dan
BSIS Query performance issue.
HI,
Our BSIS table contains almost 10 million records. During the check printing process, when we query BSIS to get the cost center from the line item, it takes almost 30-60 seconds, which slows down the check printing process.
SELECT SINGLE * FROM BSAK
WHERE BUKRS = COMP AND GJAHR = YR AND AUGBL = PDOC AND BELNR <> PDOC.
SELECT SINGLE * FROM BSIS WHERE
BELNR = BSAK-BELNR AND BUKRS = COMP AND GJAHR = YR
AND ( HKONT LIKE 'A%' OR BLART = 'RE' ).
kostl = bsis-kostl.
To get the cost center, it first picks the FI document number from BSAK using the payment document number, and then reads the cost center from BSIS.
Is there any alternative way of getting the cost center of a document, skipping BSIS?
Thanks,
Hi,
If you are firing this query regularly, then create a secondary index on the BSIS table covering the fields in your WHERE clause.
Performance can improve considerably with the right index alone.
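For reference, the shape of the index being suggested - in SAP this would be defined as a secondary index on BSIS in transaction SE11 rather than with raw DDL, and the index name here is purely illustrative; the fields simply mirror the equality conditions of the SELECT SINGLE above:

```sql
-- BUKRS, GJAHR and BELNR are equality predicates, so they belong in
-- the index; HKONT LIKE 'A%' and BLART = 'RE' are OR-ed together and
-- would be applied as filters on the rows the index returns.
CREATE INDEX bsis_z01 ON bsis (bukrs, gjahr, belnr);
```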
regards,
raj
Query Performance Issue.
Experts,
Need help with tuning the below query, which is taking a long time to execute. The query and its explain plan are given below:
Query:
SELECT TO_CHAR(IDENTIFIER), 0, 0, 0, 0, 0
FROM BCACCT
WHERE IDENTIFIER NOT IN (SELECT ACCTID FROM BCCURRENTBALANCESNAPSHOT)
AND CURBAL != 0
AND CURBAL IS NOT NULL;
Explain Plan:
PLAN_TABLE_OUTPUT
Plan hash value: 1403483523
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 282K| 5796K| 208M(3)|999:59:59 |
|* 1 | FILTER | | | | | |
|* 2 | TABLE ACCESS FULL| BCACCT | 282K| 5796K| 7051(2)| 00:02:07 |
|* 3 | TABLE ACCESS FULL| BCCURRENTBALANCESNAPSHOT | 1 | 17 | 748(3)| 00:00:14 |
Predicate Information (identified by operation id):
1 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "BCCURRENTBALANCESNAPSHOT"
"BCCURRENTBALANCESNAPSHOT" WHERE LNNVL("ACCTID"<>:B1)))
2 - filter("CURBAL"<>0 AND "CURBAL" IS NOT NULL)
3 - filter(LNNVL("ACCTID"<>:B1))
18 rows selected.
The total records in the table are
SELECT count(*) FROM BCCURRENTBALANCESNAPSHOT -----1281981
SELECT count(*) from BCACCT ----1281981
Please let me know what else is required to solve the issue.
Regards
Oracle User
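As a side note on the query itself: with NOT IN, a single NULL in BCCURRENTBALANCESNAPSHOT.ACCTID makes the whole anti-condition unknowable, which is why the optimizer wraps the filter in LNNVL and probes the snapshot table once per candidate row. If ACCTID is guaranteed (ideally declared) NOT NULL, a NOT EXISTS form lets the optimizer use a hash anti-join instead of the correlated FILTER. A hedged rewrite:

```sql
SELECT TO_CHAR(a.identifier), 0, 0, 0, 0, 0
FROM   bcacct a
WHERE  a.curbal != 0
AND    a.curbal IS NOT NULL
AND    NOT EXISTS (SELECT NULL
                   FROM   bccurrentbalancesnapshot s
                   WHERE  s.acctid = a.identifier);
-- NOT EXISTS is only equivalent to NOT IN when ACCTID contains no
-- NULLs; verify that before swapping one for the other.
```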
Edited by: oracle user on Feb 15, 2013 7:05 AM
sb92075 wrote:
neither index above will be used.
False. The optimizer can at least use the composite index, and if ACCTID is NOT NULL, both indexes can be used.
My guess is that you said this because you saw some red flags, such as the function call on one column and the fact that NULLs are not normally indexed. However, the index on IDENTIFIER can be used for the predicate (where no function call is applied), even though it might not be usable for the projection. Also, since one of the columns in the index (CURBAL) must be NOT NULL, we know that every selectable row in BCACCT is indexed. Finally, this is not just a thought experiment - I actually built the tables and indexes and tested the query in an Oracle database, so the answer can be relied upon.
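For anyone who wants to reproduce the test, the index definitions under discussion would look something like this - the names and exact column order are my assumptions, since the thread never shows the actual DDL:

```sql
-- Composite index: CURBAL is NOT NULL, so every selectable
-- row of BCACCT appears in this index.
CREATE INDEX bcacct_id_curbal_ix ON bcacct (identifier, curbal);

-- Single-column index on the subquery side:
CREATE INDEX bcsnap_acctid_ix ON bccurrentbalancesnapshot (acctid);
```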
Query Performance Issue - Max time spent on DB read
Hi Experts,
We have a big query which has two levels and pulls data from 10 underlying cubes, each containing different sales organizations. Due to the design of the query and the amount of data in the cubes, most of the time is spent on DB read. We have checked the trace of the query execution and seen that the system goes and checks, say, Cube A even when it does not contain data for the sales org requested by the user. Please note that we have a mandatory selection for sales org on the Level 1 query.
To resolve this we maintained an entry for SALESORG in the RRKMULTIPROVHINT table, but on running the trace again we see that for the Level 2 query the system is still querying the cubes which do not contain data for the requested sales org, thereby increasing the run time of the query many times over. Can you please help us if you have faced a similar issue?
Regards,
Prashant Shah
I think your query's data selection is expecting data from all the cubes. You have to analyze right from the query definition and the variable selections to know further.
Please set "H" for your MultiProvider in transaction RSDIPROP and choose read mode "H" in RSRT --> Properties.