Query construction issues...
What I'm trying to do is output a single row for each person_id. Additionally, if a person_id has a type_id of 1, I would like a column to output 'YES', otherwise 'NO', and the same goes for a type_id of 0.
Oracle version: 10.2 (10g Release 2)
Content of TableX
PERSON_ID NAME TYPE_ID
200839 Bob 0
200839 Bob 1
200839 Bob
200874 Chris 1
200811 Mike 1
200893 James 0
200877 Nick
200877 Nick 1
200001 Terry
Desired Output:
PERSON_ID NAME ID_0 ID_1
200839 Bob YES YES
200874 Chris NO YES
200811 Mike NO YES
200893 James YES NO
200877 Nick NO YES
200001 Terry NO NO
Current Query
with xtable as
(select 200839 person_id, 'Bob' name, 0 type_id from dual union all
select 200839, 'Bob',1 from dual union all
select 200839, 'Bob',null from dual union all
select 200874, 'Chris',1 from dual union all
select 200811, 'Mike',1 from dual union all
select 200893, 'James',0 from dual union all
select 200877, 'Nick',null from dual union all
select 200877, 'Nick',1 from dual union all
select 200001, 'Terry',null from dual)
select
person_id,
name,
case
when type_id = 0 then
'YES'
else
'NO'
end as id_0,
case
when type_id = 1 then
'YES'
else
'NO'
end as id_1
from
(select person_id,
name,
type_id,
count(*) over (partition by person_id) as total
from xtable
)
I'm pretty much stuck at the query above.
Edited by: user652714 on May 5, 2010 12:31 PM
Once you use the case/decode, you get one row for each Id, name and type combination. You'll then need to group them by ID, Name to get the data output as you need.
with xtable as
(select 200839 person_id, 'Bob' name, 0 type_id from dual union all
select 200839, 'Bob',1 from dual union all
select 200839, 'Bob',null from dual union all
select 200874, 'Chris',1 from dual union all
select 200811, 'Mike',1 from dual union all
select 200893, 'James',0 from dual union all
select 200877, 'Nick',null from dual union all
select 200877, 'Nick',1 from dual union all
select 200001, 'Terry',null from dual)
select person_id,
name,
max(decode(type_id,0,'YES')) T0,
max(decode(type_id,1,'YES')) T1
from xtable
group by person_id, name
SQL> /
PERSON_ID NAME T0 T1
200874 Chris YES
200877 Nick YES
200001 Terry
200893 James YES
200839 Bob YES YES
200811 Mike YES
Adding NVL to the above query, you can get it in the required format. Of course, you can use CASE instead of DECODE too.
with xtable as
(select 200839 person_id, 'Bob' name, 0 type_id from dual union all
select 200839, 'Bob',1 from dual union all
select 200839, 'Bob',null from dual union all
select 200874, 'Chris',1 from dual union all
select 200811, 'Mike',1 from dual union all
select 200893, 'James',0 from dual union all
select 200877, 'Nick',null from dual union all
select 200877, 'Nick',1 from dual union all
select 200001, 'Terry',null from dual)
select person_id,
name,
nvl(max(decode(type_id,0,'YES')),'NO') T0,
nvl(max(decode(type_id,1,'YES')),'NO') T1
from xtable
group by person_id, name
SQL> /
PERSON_ID NAME T0 T1
200874 Chris NO YES
200877 Nick NO YES
200001 Terry NO NO
200893 James YES NO
200839 Bob YES YES
200811 Mike NO YES
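The reply notes that CASE works in place of DECODE. As an illustration (not from the thread), here is the same conditional-aggregation pivot run through Python's sqlite3, with COALESCE standing in for Oracle's NVL; the table and data mirror the example above, though syntax details differ slightly from Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE xtable (person_id INTEGER, name TEXT, type_id INTEGER);
INSERT INTO xtable VALUES
  (200839,'Bob',0),(200839,'Bob',1),(200839,'Bob',NULL),
  (200874,'Chris',1),(200811,'Mike',1),(200893,'James',0),
  (200877,'Nick',NULL),(200877,'Nick',1),(200001,'Terry',NULL);
""")

# MAX over the group is 'YES' when any row matched the CASE test,
# otherwise the aggregate is NULL, which COALESCE turns into 'NO'.
rows = conn.execute("""
    SELECT person_id, name,
           COALESCE(MAX(CASE WHEN type_id = 0 THEN 'YES' END), 'NO') AS id_0,
           COALESCE(MAX(CASE WHEN type_id = 1 THEN 'YES' END), 'NO') AS id_1
      FROM xtable
     GROUP BY person_id, name
     ORDER BY person_id
""").fetchall()

for row in rows:
    print(row)
```

One row per person comes out, with the 'YES'/'NO' flags exactly as in the desired output above.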
Similar Messages
-
Query construction issue...
I have a parameter in my proc called xParam and I'm trying to incorporate it into one of my cursors. Here is what my current cursor looks like:
select *
from tablex
where id in (1,2) and xParam = 'ALL' or
id = 1 and xParam = 'FIRST' or
id = 2 and xParam = 'SECOND'
and com_sec = 3
So basically what I'm trying to do is have the cursor include all rows from tablex where the ids are 1 and 2 if the value passed into xParam is 'ALL', include all rows where id is equal to 1 when the value passed into xParam is 'FIRST', etc.
Oracle: 10g
Here's some sample data to give a better idea:
--tablex
ID COM_SEC FNAME LNAME
1 3 Bob Johnson
2 3 John Smith
1 2 Chris Carter
1 2 Bill Curtis
2 3 David Lee
2 2 Brett Nicks
So if the value passed for xParam is 'ALL' I want to include the entire data from the table; if the value passed for the parameter is 'FIRST' I want the following result set:
ID COM_SEC FNAME LNAME
1 3 Bob Johnson
--Result set if xParam is 'SECOND'
2 3 John Smith
2 3 David Lee
So my question is: is my query built correctly to accomplish this? Will the 'or' conditions in the cursor work the way that I need them to (i.e. produce the desired result set)? The reason I ask is because the OR conditions don't seem to be working correctly in my actual proc.
Edited by: user652714 on Dec 29, 2009 9:15 AM
user652714 wrote:
I have a parameter in my proc called xParam and I'm trying to incorporate it into one of my cursors. Here is what my current cursor looks like:
Discover parentheses. They work the same way as in math ;)
select *
from tablex
where xParam = 'ALL'
or (id = 1 and xParam = 'FIRST')
or (id = 2 and xParam = 'SECOND')
/
Or use CASE:
select *
from tablex
where id = case xParam
when 'ALL' then id
when 'FIRST' then 1
when 'SECOND' then 2
end
/
SY.
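Both suggested fixes can be sanity-checked against the sample data from the question. A minimal sketch using Python's sqlite3 (the com_sec filter from the original post is left out here, as in the reply, and xParam becomes a bind variable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tablex (id INTEGER, com_sec INTEGER, fname TEXT, lname TEXT);
INSERT INTO tablex VALUES
  (1,3,'Bob','Johnson'),(2,3,'John','Smith'),(1,2,'Chris','Carter'),
  (1,2,'Bill','Curtis'),(2,3,'David','Lee'),(2,2,'Brett','Nicks');
""")

def fetch(xparam):
    # Parentheses make each xParam branch self-contained, so OR cannot
    # leak rows between branches (AND binds tighter than OR otherwise).
    return conn.execute("""
        SELECT id, com_sec, fname, lname
          FROM tablex
         WHERE :p = 'ALL'
            OR (id = 1 AND :p = 'FIRST')
            OR (id = 2 AND :p = 'SECOND')
    """, {"p": xparam}).fetchall()

print(len(fetch("ALL")), len(fetch("FIRST")), len(fetch("SECOND")))
```

'ALL' returns all six rows, 'FIRST' only the id = 1 rows, 'SECOND' only the id = 2 rows.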
Edited by: Solomon Yakobson on Dec 29, 2009 9:19 AM
-
Hi Gurus,
I'm working on performance tuning at the moment and would like some tips regarding query performance tuning, if anyone can help me in that regard.
The thing is that I have got an idea about the system, and now the issues are with DB space, ABAP dumps, problems with free space in the tablespace, no number range buffering, cubes using too many aggregates, large InfoCubes, large ODS objects, and many others.
So my question is: can anyone tell me how to resolve the issues with the large master data tables and large ODS objects? One more important issue is KPIs exceeding their reference values, so any idea how to deal with them?
Waiting for your valuable responses.
Thanks in advance
Regards
Amit
Hi Amit
For query performance issues you can go for:
Aggregates: they will help you a lot to make your query faster, because the query doesn't hit your cube; it hits the aggregates, which have very few records compared to your cube.
Secondly, I would suggest you use CKFs (calculated key figures) in place of formulas, if any, in the query.
Another thing is to avoid, to the extent possible, the use of navigational attributes. If you want to use them, use them at a minimal level. The reason I say so is that during query execution, whenever there is a navigational attribute it adds an unnecessary join to your master data and thus decreases query performance.
Be specific with rows and columns; if you are not sure about a key figure or a characteristic, better put it in the free characteristics.
Use filters if possible.
If you follow these, I'm sure your query performance will increase.
Assign points if applicable
Thanks
puneet -
SQL query performance issues.
Hi All,
I worked on the query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL for the previous thread:
SQL query performance issues.
Following is the tkprof file.
CURSOR_ID:76 LENGTH:2383 ADDRESS:f6b40ab0 HASH_VALUE:2459471753 OPTIMIZER_GOAL:ALL_ROWS USER_ID:443 (APPS)
insert into cos_temp(
TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
INVOICE_NUMBER, EXT_SALES, EXT_COS,
GROSS_PROFIT, ACCT_DATE,
SHIPMENT_TYPE,
FROM_ORGANIZATION_ID,
FROM_ORGANIZATION_CODE)
select a.trx_date,
g.segment5 dept,
g.segment4 prd,
m.segment1 part,
d.customer_number customer,
b.quantity_invoiced units,
-- substr(a.sales_order,1,6) order#,
substr(ltrim(b.interface_line_attribute1),1,10) order#,
a.trx_number invoice,
(b.quantity_invoiced * b.unit_selling_price) sales,
(b.quantity_invoiced * nvl(price.operand,0)) cos,
(b.quantity_invoiced * b.unit_selling_price) -
(b.quantity_invoiced * nvl(price.operand,0)) profit,
to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
'DRP',
l.ship_from_org_id,
p.organization_code
from ra_customers d,
gl_code_combinations g,
mtl_system_items m,
ra_cust_trx_line_gl_dist c,
ra_customer_trx_lines b,
ra_customer_trx_all a,
apps.oe_order_lines l,
apps.HR_ORGANIZATION_INFORMATION i,
apps.MTL_INTERCOMPANY_PARAMETERS inter,
apps.HZ_CUST_SITE_USES_ALL site,
apps.qp_list_lines_v price,
apps.mtl_parameters p
where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
and a.batch_source_id = 1001 -- Sales order shipped other OU
and a.complete_flag = 'Y'
and a.customer_trx_id = b.customer_trx_id
and b.customer_trx_line_id = c.customer_trx_line_id
and a.sold_to_customer_id = d.customer_id
and b.inventory_item_id = m.inventory_item_id
and m.organization_id
= decode(substr(g.segment4,1,2),'01',5004,'03',5004,
'02',5003,'00',5001,5002)
and nvl(m.item_type,'0') <> '111'
and c.code_combination_id = g.code_combination_id+0
and l.line_id = b.interface_line_attribute6
and i.organization_id = l.ship_from_org_id
and p.organization_id = l.ship_from_org_id
and i.org_information3 <> '5108'
and inter.ship_organization_id = i.org_information3
and inter.sell_organization_id = '5108'
and inter.customer_site_id = site.site_use_id
and site.price_list_id = price.list_header_id
and product_attr_value = to_char(m.inventory_item_id)
call count cpu elapsed disk query current rows misses
Parse 1 0.47 0.56 11 197 0 0 1
Execute 1 3733.40 3739.40 34893 519962154 11 188 0
total 2 3733.87 3739.97 34904 519962351 11 188 1
| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
| 741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
| 741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
| 741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
| 741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
| 3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
| 3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
| 3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
| 1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
| 1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
| 486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
| 486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
| 486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
| 75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
| 486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
| 486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
| 486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
| 1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
| 2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
| 1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
| 1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
| 3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
| 3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
| 3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
| 3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
| 3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
| 3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
| 741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
| 6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
Please help.
Regards
Ashish
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
There is no way the optimizer should choose to process that many rows using nested loops.
Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
Please post explain plan and optimizer* parameter settings. -
I'm using SQL Server 2008 R2 (10.50.4033) and I'm troubleshooting an issue that a select query against a specific view is taking more than 30 seconds consistently. The issue just starts happening this week and there is no mass changes in data.
The problem only occur if the query is issued from an IIS application but not from SSMS. One thing I noticed is that sys.dm_exec_cached_plans is returning 2 Parse Tree rows for the view - one created when the select query is issued
1st time from the IIS application and another one created when the same select query is issued 1st time from SSMS. The usecounts of the Parse Tree row for the view (the IIS one) is increasing whenever the select query is issued. The
usecounts of the Parse Tree row for the view (the SSMS one) does not increase when the select query is issued again.
There seems to be a correlation between the slowness of the query and the increasing of the usecounts of the Parse Tree row for the view.
I don't know why there is 2 Parse Tree rows for the view. There is also 2 Compiled Plan rows for the select query.
What does the Parse Tree row mean, especially the usecounts column?
>> The issue just started happening this week and there are no mass changes in data.
There might be mass changes in the execution plan for several reasons without mass changes in data.
If you have the old version and a way to check the old execution plan, and can compare it to the new one, then this should be your starting point. In most cases you don't have this option and we need to monitor from scratch.
>> The problem only occur if the query is issued from an IIS application but not from SSMS.
This means that we know exactly what the difference is, and you can compare both execution plans. Once you do, you will find that they are not the same. But this is a very common issue, and we can know that it is a result of different SET options when connecting from different applications. SSMS is an external app like any app that you develop in Visual Studio, but SSMS does not use the .NET default options.
Please check this link, to find the full explanation and solutions:
http://www.sommarskog.se/query-plan-mysteries.html
Take a look at sys.dm_exec_sessions for your ASP.Net application and for your SSMS session.
If you need more specific help, then we need more information and less stories :-)
We need to see the DDL+DML+Query and both execution plans
>> What does the Parse Tree row mean
I am not sure what you mean, but the parse tree represents the logical steps necessary to execute the query that has been requested. You can check this tutorial about the execution plan: https://www.simple-talk.com/sql/performance/execution-plan-basics/ or
this one: http://www.developer.com/db/understanding-a-sql-server-query-execution-plan.html
>> Regarding the usecounts column, or any other column, check this link:
https://msdn.microsoft.com/en-us/library/ms187404.aspx?f=255&MSPPError=-2147217396.
Ronen Ariely
[Personal Site] [Blog] [Facebook] -
40357-invalid string in example record query not issued
hello experts,
I am using Forms 10g. In query mode I get the error: FRM-40357 - invalid string in example record, query not issued.
I used this code in a KEY-NEXT-ITEM trigger:
PROCEDURE KN_FOR_QUERY IS
BEGIN
IF :global.navigation = 'D' AND :global.mode = 'M'
THEN
IF NAME_IN(:SYSTEM.CURSOR_ITEM) IS NOT NULL
THEN
:global.temp_div_code:= :po_m.po_div_code;
:global.temp_po_num := :po_m.po_num;
:global.temp_po_ex_work := :PUR_DELV_D.DELV_EX_WORK;
:global.temp_modi_num:= :po_m.po_modi_num;
IF GET_BLOCK_PROPERTY(:SYSTEM.CURSOR_BLOCK,QUERY_HITS)=0
THEN
-- message('1---'||:SYSTEM.CURSOR_BLOCK);
-- message('2---'||:SYSTEM.CURSOR_BLOCK);
GO_BLOCK(:SYSTEM.CURSOR_BLOCK);
CLEAR_BLOCK(no_validate);
EXECUTE_QUERY;
-- ELSE
-- NEXT_ITEM;
END IF;
-- ELSE
-- mess(GET_ITEM_PROPERTY(:SYSTEM.CURSOR_ITEM,PROMPT_TEXT)||' Must Be Entered For Query...');
END IF;
ELSIF :global.navigation = 'D' and :global.mode = 'Q'
THEN
IF NAME_IN(:SYSTEM.CURSOR_ITEM) IS NOT NULL
THEN
MESS('Press Execute query button');
go_item('tools.execute_query');
ELSE
mess(GET_ITEM_PROPERTY(:SYSTEM.CURSOR_ITEM,PROMPT_TEXT)||' Must Be Entered For Query...');
END IF;
ELSIF :global.navigation = 'D' and :global.mode = 'A'
THEN
IF NAME_IN(:SYSTEM.CURSOR_ITEM) IS NOT NULL
THEN
NEXT_ITEM;
ELSE
mess(GET_ITEM_PROPERTY(:SYSTEM.CURSOR_ITEM,PROMPT_TEXT)||' Must Be Entered...');
END IF;
END IF;
END;
Thanks
Ravi
Hi Ravi,
You may need to debug to find out where and when the error occurs. Please note the following:
Error Message: FRM-40357: Invalid string in example record. Query not issued.
Error Cause: In query mode, you entered an invalid ALPHA or CHAR value in the example record.
Action: Correct the entry and retry the query. Level: >25
Type: Error
Please verify that you are entering a character value where the global variable is assigned; any value between two single quotes is considered a character, not a number.
Amatu Allah -
Query Throttling issue after increasing List view threshold in connected webpart
Dear SharePointers,
I have connnected listview webpart deployed on site. Parent webpart
has 16 fields and over 3,000 listitems exists in list. Child webpart has 7 fields and more than 30,000 Items. The code works well in other sites in same web app and it also works well in other systems. We tested same in other system by increasing/decreasing
data but still we are not able to replicate issue. I know that this issue could also encounter if fields are not indexed in the list. Are there any other pointers which can use to debug this issue.
Milan Chauhan
Regards,
Milan Chauhan
LinkedIn
|
Twitter | Blog
| Email
Hi,
According to your post, a Query Throttling issue occurred after increasing list view threshold in connected web part.
By default, this limit is set to 5,000 items for regular users and 20,000 items for users in an administrator role.
More information is here:
http://msdn.microsoft.com/en-us/library/ff798465.aspx
Please try to enable the Daily Time Window for Large Queries:
1.Go to Central Administration-> Application Management -> Manage web applications.
2.Select the web application that contains the large list.
3.Click General Settings, and then click Resource Throttling.
4.Under Daily Time Window for Large Queries, click to select Enable a daily time window for large queries.
5.Set a start time and duration when most of your users will not be working.
6.Click OK.
Best Regards
Dennis Guo
TechNet Community Support -
Query Performance Issue - Usage of SAP_DROP_EMPTY_FPARTITIONS Program
Hi Experts,
We are facing a query performance issue in our BW production system. Queries on the Sales MultiProvider are taking a lot of time to run. We need to tune the query performance.
We need to drop the empty partitions at the database level. Has anyone of you used the program SAP_DROP_EMPTY_FPARTITIONS to drop the empty partitions? If yes, please provide me with details of your experience using this program. Please let me know whether there are any disadvantages to using this program in the production system.
Kindly treat this as an urgent requirement.
Your help will be appreciated....
Thanks,
Shalaka
Hi Shwetha,
I think that program drops a partition if it contains no records (DEL_CNT) or if the partition's request ID is not in the dimension table (DEL_DIM).
Hope it helps!
(and don't forget to reward the answer, if you want !)
Bye,
Roberto -
SQl query construction in VO issue
All,
I want to create a VO with a query like this:
select columns from table1, table2 where table1.column1 = table2.column
Then I have to create different view criteria on them, so the end query would be:
select columns from table1, table2 where table1.column1 = table2.column where VC query
This is not a proper SQL statement and will throw an error. How do we achieve this?
Thanks
JDev 11.1.1.5
Hi,
When you construct a query and add a VC to it, the resultant query (generated at runtime) would look something like this:
select * from (<your existing query>) -- the QRSLT
where (<your vc where clause>)
So, when you set setNestedSelectForFullSql to false, the QRSLT wrapper is removed and the actual query is used, something like:
<your existing query> where (<your vc where clause>)
Can you debug and see what query is being executed? (by setting -Djbo.debugoutput=console)
-Arun -
Query Rewrite ISSUE (ANSI JOINS do not work, traditional join works ) 11gR2
For some types of queries constructed with ANSI JOINS, materialized views are not being used.
This is currently increasing run times on various reports, since we cannot control the way the queries are generated (the Tableau application generates and runs queries against the STAR schema).
Have tried to debug this behavior using DBMS_MVIEW.EXPLAIN_REWRITE and mv_capabilities function without any success.
The database is configured for query rewrite: REWRITE INTEGRITY, QUERY REWRITE ENABLED and other settings are in place.
Have successfully reproduced the issue using SH Sample schema:
Q1 and Q2 are logically identical the only difference between them being the type of join used:
Q1: ANSI JOIN
Q2: Traditional join
Below is an example that can be validated on SH sample schema.
Any help on this will be highly appreciated.
-- Q1: the query is generated by an app and needs to be rewritten with materialized view
SELECT cntr.country_subregion, cust.cust_year_of_birth, COUNT(DISTINCT cust.cust_first_name)
FROM customers cust
INNER JOIN countries cntr
ON cust.country_id = cntr.country_id
GROUP BY cntr.country_subregion, cust_year_of_birth;
-- Q2: the query with traditional join is rewritten with materialized view
SELECT cntr.country_subregion, cust.cust_year_of_birth, COUNT(DISTINCT cust.cust_first_name)
FROM customers cust, countries cntr
WHERE cust.country_id = cntr.country_id
GROUP BY cntr.country_subregion, cust_year_of_birth;
Tested both queries with the following materialized views:
CREATE MATERIALIZED VIEW MVIEW_TEST_1
ENABLE QUERY REWRITE
AS
SELECT cntr.country_subregion, cust.cust_year_of_birth, COUNT(DISTINCT cust.cust_first_name)
FROM customers cust
INNER JOIN countries cntr
ON cust.country_id = cntr.country_id
GROUP BY cntr.country_subregion, cust_year_of_birth;
CREATE MATERIALIZED VIEW MVIEW_TEST_2
ENABLE QUERY REWRITE
AS
SELECT cntr.country_subregion, cust.cust_year_of_birth, COUNT(DISTINCT cust.cust_first_name)
FROM customers cust, countries cntr
WHERE cust.country_id = cntr.country_id
GROUP BY cntr.country_subregion, cust_year_of_birth;
Explain plans showing that Q1 does not use the materialized view and Q2 uses the materialized view:
SET AUTOTRACE TRACEONLY
--Q1 does not use MVIEW_TEST_1
SQL> SELECT cntr.country_subregion, cust.cust_year_of_birth, COUNT(DISTINCT cust.cust_first_name)
FROM customers cust
INNER JOIN countries cntr
ON cust.country_id = cntr.country_id
GROUP BY cntr.country_subregion, cust_year_of_birth;
511 rows selected.
Execution Plan
Plan hash value: 1218164197
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 425 | 12325 | | 916 (1)| 00:00:11 |
| 1 | HASH GROUP BY | | 425 | 12325 | | 916 (1)| 00:00:11 |
| 2 | VIEW | VM_NWVW_1 | 55500 | 1571K| | 916 (1)| 00:00:11 |
| 3 | HASH GROUP BY | | 55500 | 1842K| 2408K| 916 (1)| 00:00:11 |
|* 4 | HASH JOIN | | 55500 | 1842K| | 409 (1)| 00:00:05 |
| 5 | TABLE ACCESS FULL| COUNTRIES | 23 | 414 | | 3 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL| CUSTOMERS | 55500 | 867K| | 405 (1)| 00:00:05 |
--Q2 uses MVIEW_TEST_2
SQL> SELECT cntr.country_subregion, cust.cust_year_of_birth, COUNT(DISTINCT cust.cust_first_name)
FROM customers cust, countries cntr
WHERE cust.country_id = cntr.country_id
GROUP BY cntr.country_subregion, cust_year_of_birth;
511 rows selected.
Execution Plan
Plan hash value: 2126022771
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 511 | 21973 | 3 (0)| 00:00:01 |
| 1 | MAT_VIEW REWRITE ACCESS FULL| MVIEW_TEST_2 | 511 | 21973 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------Database version 11gR1 (Tested also on 11gR2)
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Thanks for the formatting tips.
Just found an Oracle Bug which explains the above behavior.
Unfortunately the bug will be fixed only in the 12.1 release, so as a workaround we will try to use traditional joins.
For those who have metalink access see [Bug 10145667 : ERRORS TRYING TO REWRITE QUERY WITH EXACT TEXT MATCH TO MVIEW] -
I am trying to see the log file in Manage Sessions for the SQL query in Answers. I see that if we run the same report multiple times, the SQL query shows up only the first time; the second time I run it, it is not showing up. If I do a brand new report with different columns picked, it gives me the SQL then. Where do I set the option to show the SQL query every time I run a report, even if it is the same report run multiple times? Is this a caching issue?
It shouldn't.... Have you unchecked the "Cache" on the physical layer for this table? If you go onto the Advanced tab, is the option "Bypass the Oracle BI cache" checked?
-
BW 3.5, BEx Query designer issue with text of the characteristics
Hi All,
We are currently using the BEx 3.5 Query Designer to design the queries. We have one ODS on which we are querying.
We have 3 different types of customer in this ODS: 0CUSTOMER, 0BBP_CUSTOMER and 0GN_CUSTOMER. The problem is that when we open this ODS in Query Designer, all three show up in the data fields column on the left hand side with the same text name, Customer.
Now our power users have raised the issue that it is very confusing, even though there are 3 different technical names for these characteristics.
2 questions I have:
1) Why is something like this happening? Is it some issue with the patch or something? We will be migrating to the new BI soon, but in the meantime if I could resolve it, that would be best. Or does it even get resolved with the new BI?
2) What is the way in which we can resolve it?
Thanks in advance, and points will be given generously.
Hi BI Consul!
Things like this happen: when it is called Customer it could be different customers. 0CUSTOMER is the standard R/3 customer, 0BBP_CUSTOMER comes from CRM, and most likely 0GN_CUSTOMER is the customer from a different system.
Sure, power users should be told which customer is which, and you could also change the description of the object to match the definition in the other system. You could also just create a Z object and replace the existing confusing object with something meaningful to users.
thanks.
Wond -
Hi,
We are facing a database performance issue while running overnight batches.
I generated tkprof output for that batch and found some SQL queries with high elapsed times. Could anyone please let me know what the issue is? It would also be a great help if anyone could suggest what needs to be done to tune these SQL queries to get better response times.
Waiting for your reply.
Effected SQL List:
INSERT INTO INVTRNEE (TRANS_SESSION, TRANS_SEQUENCE, TRANS_ORG_CHILD,
TRANS_PRD_CHILD, TRANS_TRN_CODE, TRANS_TYPE_CODE, TRANS_DATE, INV_MRPT_CODE,
INV_DRPT_CODE, TRANS_CURR_CODE, PROC_SOURCE, TRANS_REF, TRANS_REF2,
TRANS_QTY, TRANS_RETL, TRANS_COST, TRANS_VAT, TRANS_POS_EXT_TOTAL,
INNER_PK_TECH_KEY, TRANS_INNERS, TRANS_EACHES, TRANS_UOM, TRANS_WEIGHT,
TRANS_WEIGHT_UOM )
VALUES
(:B22 , :B1 , :B2 , :B3 , :B4 , :B5 , :B21 , :B6 , :B7 , :B8 , :B20 , :B19 ,
NULL, :B9 , :B10 , :B11 , 0.0, :B12 , :B13 , :B14 , :B15 , :B16 , :B17 ,
:B18 )
call count cpu elapsed disk query current rows
Parse 722 0.09 0.04 0 0 0 0
Execute 1060 7.96 83.01 11442 21598 88401 149973
Fetch 0 0.00 0.00 0 0 0 0
total 1782 8.05 83.06 11442 21598 88401 149973
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
UPDATE /*+ ROWID(TRFDTLEE) */TRFDTLEE SET TRF_STATUS = :B2
WHERE
ROWID = :B1
call count cpu elapsed disk query current rows
Parse 635 0.03 0.01 0 0 0 0
Execute 49902 14.48 271.25 41803 80704 355837 49902
Fetch 0 0.00 0.00 0 0 0 0
total 50537 14.51 271.27 41803 80704 355837 49902
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
DECLARE
var_trans_session invtrnee.trans_session%TYPE;
BEGIN
-- ADDED BY SHANKAR ON 08/29/97
-- GET THE NEXT AVAILABLE TRANS_SESSION
bastkey('trans_session',0,var_trans_session,'T');
-- MAS001
uk_trfbapuo_auto(var_trans_session,'UPLOAD','T',300);
-- MAS001 end
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 24191.23 24028.57 8172196 10533885 187888 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 24191.23 24028.57 8172196 10533885 187888 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
SELECT INNER_PK_TECH_KEY
FROM
PRDPCDEE WHERE PRD_LVL_CHILD = :B1 AND LOOSE_PACK_FLAG = 'T'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 56081 1.90 2.03 0 0 0 0
Fetch 56081 11.07 458.58 53792 246017 0 56081
total 112163 12.98 460.61 53792 246017 0 56081
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
******************
First off, be aware of the assumptions I'm making. The SQL you presented above strongly suggests (to me at least) that you have cursor FOR loops. If that's the case, you need to review what their purpose is and look to convert them into single-statement DML commands. For example, if you have something like this
DECLARE
  ln_Count     NUMBER;
  ln_SomeValue NUMBER;
BEGIN
  FOR lcr_Row IN (SELECT pk_id, col1, col2 FROM some_table)
  LOOP
    SELECT COUNT(*)
      INTO ln_Count
      FROM target_table
     WHERE pk_id = lcr_Row.pk_id;

    SELECT some_value
      INTO ln_SomeValue
      FROM some_other_table
     WHERE pk_id = lcr_Row.col1;

    IF ln_Count = 0 THEN
      INSERT INTO target_table
        (pk_id, some_other_value, col2)
      VALUES
        (lcr_Row.pk_id, ln_SomeValue, lcr_Row.col2);
    ELSE
      UPDATE target_table
         SET some_other_value = ln_SomeValue
       WHERE pk_id = lcr_Row.pk_id;
    END IF;
  END LOOP;
END;
it could be rewritten as
DECLARE
BEGIN
  MERGE INTO target_table b
  USING (SELECT a.pk_id,
                a.col2,
                c.some_value
           FROM some_table a,
                some_other_table c
          WHERE c.pk_id = a.col1) e
     ON (b.pk_id = e.pk_id)
   WHEN MATCHED THEN
     UPDATE SET b.some_other_value = e.some_value
   WHEN NOT MATCHED THEN
     INSERT (b.pk_id, b.col2, b.some_other_value)
     VALUES (e.pk_id, e.col2, e.some_value);
END;
It's going to take a bit of analysis and work, but the fastest and most scalable way to approach processing data is to use SQL rather than PL/SQL. PL/SQL data processing, i.e. cursor loops, should be an option of last resort.
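SQLite has no MERGE statement, but the same single-statement, set-based idea can be sketched with its INSERT ... ON CONFLICT upsert. This is an illustration with made-up data under the assumed table shapes of the example above, not the poster's actual schema (via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE some_table       (pk_id INTEGER PRIMARY KEY, col1 INTEGER, col2 TEXT);
CREATE TABLE some_other_table (pk_id INTEGER PRIMARY KEY, some_value INTEGER);
CREATE TABLE target_table     (pk_id INTEGER PRIMARY KEY,
                               some_other_value INTEGER, col2 TEXT);
INSERT INTO some_table VALUES (10, 1, 'a'), (20, 2, 'b');
INSERT INTO some_other_table VALUES (1, 100), (2, 200);
INSERT INTO target_table VALUES (10, 0, 'old');
""")

# One set-based statement replaces the whole cursor loop: new keys are
# inserted, existing keys get their some_other_value refreshed.
conn.execute("""
    INSERT INTO target_table (pk_id, some_other_value, col2)
    SELECT a.pk_id, c.some_value, a.col2
      FROM some_table a
      JOIN some_other_table c ON c.pk_id = a.col1
     WHERE TRUE  -- required by SQLite to disambiguate the upsert clause
    ON CONFLICT(pk_id) DO UPDATE SET some_other_value = excluded.some_other_value
""")

result = conn.execute("SELECT * FROM target_table ORDER BY pk_id").fetchall()
print(result)
```

Row 10 already existed, so only its some_other_value is refreshed; row 20 is new and gets inserted, exactly the MATCHED/NOT MATCHED split a MERGE performs.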
HTH
David -
Oracle 11g on Linux : Query Optimization issue
Hi guru,
I am facing a query optimization problem with a GROUP BY query.
Table (10 million Records)
Product(ProductId number,ProductName varchar(100),CategoryId VARCHAR2(38),SubCategoryId VARCHAR2(38))
Index
create index idxCategory on Product (CategoryId,SubCategoryId)
Query1:To find product count for all CategoryId and SubCategoryId
select CategoryId,SubCategoryId,count(*) from Product group by CategoryId,SubCategoryId
Above query is not using index idxCategory and doing table scan which is very costly.
When I fire Query2: select count(*) from Product group by CategoryId,SubCategoryId
then it is properly using index idxCategory and very fast.
Even though I specified a hint in Query1, it is not using the hint.
Can anybody suggest why Oracle is not using the index in Query1, and what should I do so that Query1 will use the index?
Thanks in advance.
user644199 wrote:
I am facing one query optimization related problem in group by query
Query1:To find product count for all CategoryId and SubCategoryId
select CategoryId,SubCategoryId,count(*) from Product group by CategoryId,SubCategoryId
Above query is not using index idxCategory and doing table scan which is very costly.
When I fire Query2: select count(*) from Product group by CategoryId,SubCategoryId
then it is properly using index idxCategory and very fast.
Even I specified hint in Query1 but it is not using hint.
Can anybody suggest why Oracle is not using the index in Query1 and what should I do so that Query1 will use the index?
The most obvious reason that the table needs to be visited would be that the columns "CategoryId" / "SubCategoryId" can be NULL, but then this should apply to both queries. You could try the following to check the NULL issue:
select CategoryId,SubCategoryId,count(*) from Product where CategoryId is not null and SubCategoryId is not null group by CategoryId,SubCategoryId
Does this query use the index?
Can you show us the hint you've used to force the index usage, and the EXPLAIN PLAN output of the two queries including the "Predicate Information" section? Use DBMS_XPLAN.DISPLAY to get a proper output, and use the {code} tag before and after when posting here to format it in a fixed font. Use the "Quote" button in the message editor to see how I used the {code} tag here.
Are the above queries the actual queries used, or did you omit some predicates etc. for simplicity?
By the way, given the VARCHAR2(38) datatype and the ...Id column names: are these columns actually storing number values?
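If NULLability does turn out to be the blocker, a common workaround (a sketch only, using the object names from your post; verify against your own data and plans) is to make sure at least one indexed expression is known to be non-null, so that every row is guaranteed to appear in the index:

```sql
-- Option 1: if the data allows it, declare the columns NOT NULL,
-- so the optimizer knows every row is represented in the index.
ALTER TABLE Product MODIFY (CategoryId NOT NULL, SubCategoryId NOT NULL);

-- Option 2: create an index with a constant appended. The constant
-- is never NULL, so every row appears in the index and an
-- INDEX FAST FULL SCAN becomes a legal plan for the GROUP BY query.
CREATE INDEX idxCategory2 ON Product (CategoryId, SubCategoryId, 0);
```

With either option in place, re-run Query1 and check the plan with DBMS_XPLAN.DISPLAY to confirm the index is used.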
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Hi,
When a test user executes a query he is getting "insufficient authorization".
In RSECADMIN I executed as the test user and got the log below.
Please advise how to resolve the issue.
No Sufficient Authorization for This Subselection (SUBNR)
Following CHANMIDs Are Affected:
17 ( 0TCAKYFNM )
Thanks
Hi there,
Regarding 0TCAKYFNM "containing" information about all key figures: I put it in quotes because this object doesn't actually contain anything. It is simply a standard InfoObject that SAP delivers with the authorization-relevant flag set in transaction RSD1. This object exists for assigning key figure authorizations through transaction RSECADMIN.
Even without the user noticing, this object is checked for every query over every InfoProvider.
If you go to transaction RSD1 for the InfoObject 0TCAKYFNM and click Display on the Business Explorer tab, you'll see the option AuthorizationRelevant marked. This forces, for every query, a key figure check against the authorizations granted to the user through this object.
So if in transaction RSD1 you uncheck AuthorizationRelevant on the Business Explorer tab for 0TCAKYFNM, none of the key figures will be checked against the authorizations the users have for 0TCAKYFNM (since this InfoObject will no longer be authorization relevant).
If you leave the AuthorizationRelevant flag for 0TCAKYFNM as it is (marked), then in RSECADMIN, instead of granting * (all values, i.e. all key figures) for 0TCAKYFNM, you could grant for instance the value 0QUANTITY and assign this authorization to the users. If such a user then executed a query containing only the 0AMOUNT key figure, he/she would get a lack-of-authorization error, since he/she was only authorized to see values for the 0QUANTITY key figure. To summarize: the values you assign for the InfoObject 0TCAKYFNM are the key figures the user is authorized to see in queries, and if you assign * it is all key figures (all values).
Hope this helps,
Regards,
Diogo.