Issue regarding rownum in sql query
Hi All,
When I'm running the query below
select 'OP',
'ORG_CODE_PROVIDER',
rownum as ranking,
x.ORG_CODE_PROVIDER,
z.description,
x.value_count,
round(x.value_count / 200432, 4) * 100 as value_pct,
NULL as BATCH_KEY,
'9BED55A4328EFD71E040D20A143245E3' as BATCH_SET_KEY,
'OVERALL',
'OVERALL'
from (select ORG_CODE_PROVIDER, count(*) as value_count
from STAGING_TST.OP t
group by ORG_CODE_PROVIDER
order by count(*) desc, 1 asc) x,
(select code, description from ref_hd.MV_ORG_CODE_PROVIDER) z
where z.code(+) = x.ORG_CODE_PROVIDER
and rownum <= 10
it is showing me results based on the rownum of block x.
But when I try to insert these records in a table like
insert into QA_TST.OP_STAGE_COL_VAL_FREQ
select 'OP',
'ORG_CODE_PROVIDER',
rownum as ranking,
x.ORG_CODE_PROVIDER,
z.description,
x.value_count,
round(x.value_count / 200432, 4) * 100 as value_pct,
NULL as BATCH_KEY,
'9BED55A4328EFD71E040D20A143245E3' as BATCH_SET_KEY,
'OVERALL',
'OVERALL'
from (select ORG_CODE_PROVIDER, count(*) as value_count
from STAGING_TST.OP t
group by ORG_CODE_PROVIDER
order by count(*) desc, 1 asc) x,
(select code, description from ref_hd.MV_ORG_CODE_PROVIDER) z
where z.code(+) = x.ORG_CODE_PROVIDER
and rownum <= 10
On querying the table I'm getting a totally different result, with the ranking based on a rownum that no longer follows the ordering of block x.
I am not able to understand why this is happening. Why is Oracle not inserting the records that it shows in the select query?
Moreover, how can I fix this issue and get the desired result?
Thanks
Tarun
Hi,
Whenever you post any code, indent it so that how it looks on the screen reflects what it is doing. In particular, make it easy to see what the sub-queries are. Whenever you post formatted text (such as query results, as well as code) on this site, type these 6 characters:
{code} (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing.
I originally posted an inaccurate answer because I couldn't understand your unformatted code.
How ROWNUM is assigned in a join depends on how the optimizer chooses to perform the join. If you want consistent results, then do the join first (in a sub-query), use ORDER BY clause in that sub-query, and use ROWNUM only in the parent query, which should not include a join.
The analytic ROW_NUMBER function is a lot more powerful and versatile than ROWNUM. You might look into using it (though the extra power may not be needed in this particular problem).
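Applied to the query in the question, that advice might look like this (an untested sketch; it reuses the tables, the 200432 constant, and the batch key from the post):

```sql
INSERT INTO QA_TST.OP_STAGE_COL_VAL_FREQ
SELECT 'OP',
       'ORG_CODE_PROVIDER',
       ROWNUM AS ranking,           -- assigned only after the join and ORDER BY below
       j.ORG_CODE_PROVIDER,
       j.description,
       j.value_count,
       ROUND(j.value_count / 200432, 4) * 100 AS value_pct,
       NULL AS batch_key,
       '9BED55A4328EFD71E040D20A143245E3' AS batch_set_key,
       'OVERALL',
       'OVERALL'
FROM  (SELECT x.ORG_CODE_PROVIDER, z.description, x.value_count
       FROM  (SELECT ORG_CODE_PROVIDER, COUNT(*) AS value_count
              FROM   STAGING_TST.OP
              GROUP BY ORG_CODE_PROVIDER) x,
             ref_hd.MV_ORG_CODE_PROVIDER z
       WHERE z.code(+) = x.ORG_CODE_PROVIDER
       ORDER BY x.value_count DESC, x.ORG_CODE_PROVIDER ASC) j  -- join done here
WHERE ROWNUM <= 10;
```

Because the join and the ORDER BY are complete before ROWNUM is applied, the SELECT and the INSERT ... SELECT now rank the same rows the same way.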
Edited by: Frank Kulash on Feb 10, 2011 11:29 AM
Similar Messages
-
How can I use the Rownum/Customized SQL query in a Mapping?
Hi,
* I need to use a Rownum for populating one of the target field? How to create a mapping with Rownum?
* How can I use an Dual table in OWB mapping?
* Can I write Customized SQL query in OWB? How can I achieve this in a Mapping?
Thanks in Advance
Kishan
Hi Niels,
As I'm sure you know, the conundrum is that Reports doesn't know how many total pages there will be in the report until it is all done formatting, which is too late for your needs. So, one classical solution to this problem is to run the report twice, storing the total number of pages in the database using a format trigger, and throwing away the output from the first run when you don't know the total number of pages.
Alternatively, you could define a report layout so that the number of pages in the output is completely predictable based upon, say, the number of rows in the main query. E.g., set a limit of one, two, ... rows per page, and then you'll know how many pages there will be simply because you can count the rows in a separate query.
Hope this helps...
regards,
Stewart -
SQL Query Performance needed.
Hi All,
I am getting a performance issue with my SQL query below. When I fire it, it takes 823.438 seconds, but when I run just the subquery inside the IN clause it takes 8.578 seconds, and the query without the IN condition takes 7.579 seconds.
SELECT BAL.L_ID, BAL.L_TYPE, BAL.L_NAME, BAL.NATURAL_ACCOUNT,
BAL.LOCATION, BAL.PRODUCT, BAL.INTERCOMPANY, BAL.FUTURE1, BAL.FUTURE2, BAL.CURRENCY, BAL.AMOUNT_PTD, BAL.AMOUNT_YTD, BAL.CREATION_DATE,
BAL.CREATED_BY, BAL.LAST_UPDATE_DATE, BAL.LAST_UPDATED_BY, BAL.STATUS, BAL.ANET_STATUS, BAL.COG_STATUS, BAL.comb_id, BAL.MESSAGE,
SEG.SEGMENT_DESCRIPTION FROM ACC_SEGMENTS_V_TST SEG , ACC_BALANCE_STG BAL where BAL.NATURAL_ACCOUNT = SEG.SEGMENT_VALUE AND SEG.SEGMENT_COLUMN = 'SEGMENT99' AND BAL.ACCOUNTING_PERIOD = 'MAY-10' and BAL.comb_id
in
(select comb_id from
(select comb_id, rownum r from
(select distinct(comb_id),LAST_UPDATE_DATE from ACC_BALANCE_STG where accounting_period='MAY-10' order by LAST_UPDATE_DATE )
where rownum <=100) where r >0)
Please help me fine-tune the above. I am using an Oracle 10g database. There are a total of 8000 records. Let me know if any other info is required.
Thanks in advance.
In recent versions of Oracle an EXISTS predicate should produce the same execution plan as the corresponding IN clause.
Follow the advice in the tuning threads as suggested by SomeoneElse.
It looks to me like you could avoid the double pass on ACC_BALANCE_STG by using an analytical function like ROW_NUMBER() and then joining to ACC_SEGMENTS_V_TST SEG, maybe using subquery refactoring to make it look nicer.
e.g. something like (untested)
WITH subq_bal as
(SELECT *
 FROM (SELECT BAL.L_ID, BAL.L_TYPE, BAL.L_NAME, BAL.NATURAL_ACCOUNT,
              BAL.LOCATION, BAL.PRODUCT, BAL.INTERCOMPANY, BAL.FUTURE1, BAL.FUTURE2,
              BAL.CURRENCY, BAL.AMOUNT_PTD, BAL.AMOUNT_YTD, BAL.CREATION_DATE,
              BAL.CREATED_BY, BAL.LAST_UPDATE_DATE, BAL.LAST_UPDATED_BY, BAL.STATUS, BAL.ANET_STATUS,
              BAL.COG_STATUS, BAL.comb_id, BAL.MESSAGE,
              ROW_NUMBER() OVER (ORDER BY LAST_UPDATE_DATE) rn
       FROM acc_balance_stg BAL
       WHERE accounting_period = 'MAY-10')
 WHERE rn <= 100)
SELECT *
FROM subq_bal bal
   , acc_segments_v_tst seg
WHERE bal.NATURAL_ACCOUNT = seg.SEGMENT_VALUE
AND seg.SEGMENT_COLUMN = 'SEGMENT99';
However, the parentheses you use around comb_id make me question what your intention is here in the subquery.
Do you have multiple rows in ACC_BALANCE_STG for the same comb_id and last_update_date?
If so you may want to do a MAX on last_update_date, group by comb_id before doing the analytic restriction.
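That pre-aggregation might look something like this (an untested sketch, using the column names from the thread):

```sql
-- One row per comb_id, ranked by its most recent update:
SELECT comb_id,
       ROW_NUMBER() OVER (ORDER BY MAX(last_update_date)) rn
FROM   acc_balance_stg
WHERE  accounting_period = 'MAY-10'
GROUP  BY comb_id;
```

Filtering on `rn <= 100` in an outer query then keeps the DISTINCT semantics without the extra sort-and-ROWNUM pass.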
Edited by: DomBrooks on Jun 16, 2010 5:56 PM -
Hi,
I am having a doubt regarding a NOT EXISTS sql query.
I want to select rows from table1 where column2 < column3 and the column1 value does not exist in two other tables.
When I tried with the NOT IN clause I got the answer, but I didn't when I tried with NOT EXISTS. Can anyone give me the answer?
select * from TABLE1 where mark2< mark3 AND
name NOT IN (select name from TABLE2 UNION
select name from TABLE3);
Your query can be re-written in the following way by using the NOT EXISTS clause:
SELECT *
FROM table1 t1
WHERE mark2 < mark3
AND NOT EXISTS (
SELECT 1
FROM table2
WHERE name = t1.name)
AND NOT EXISTS (
SELECT 1
FROM table3
WHERE name = t1.name)
Go through http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:442029737684
on when to use NOT IN and NOT EXISTS.
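One practical difference to keep in mind: if TABLE2.name or TABLE3.name can be NULL (an assumption; the thread doesn't say), NOT IN silently matches nothing at all, while NOT EXISTS still behaves as expected. A NULL-guarded NOT IN would be:

```sql
-- Exclude NULLs from the subquery so NOT IN can never compare against NULL:
select * from TABLE1
where mark2 < mark3
and name not in (select name from TABLE2 where name is not null
                 union
                 select name from TABLE3 where name is not null);
```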
Hope this is helpful.
- RK -
Oracle Sql Query issue Running on Different DB Version
Hello All,
I have come into a situation where we are running SQL queries on different DB versions of Oracle and have a performance issue. Let me describe it briefly; I'd really appreciate your prompt response as this is very time-critical.
I have a query which takes around 30 minutes on a 7.3.4 database, whereas the same query on 8i takes 15 seconds; a huge difference. I have run statistics to analyze on 7.3 and the cost is comparatively very high. The query selects data from a table in the same schema, two tables from another DB over a DB link, and two other tables from a further DB over a DB link. So, how can we optimize this and get the same run time on 7.3 as on the 8i DB? I hope my question is clear; eagerly waiting for your replies.
Thanks in Advance.
Message was edited by:
Ram8
Difficult to be sure without any more detailed information, but I suspect that O7 is in effect copying the remote tables to local temp space, then joining; 8i is factoring out a better query to send to the remote DBs, which does as much work as possible on the remote DB before shipping the remaining rows back to local.
You should be able to use EXPLAIN PLAN to identify what SQL is being shipped to the remote DB. If you can't (and it's been quite a while since I tried DB links or O7), then get the remote DBs to yourself and set SQL_TRACE on for the remote instances. Execute the query and then examine the remote trace files. And don't forget to turn off the tracing when you're done.
Of course it could just be that the CBO got better...
HTH - if not, post your query and plans for the local db, and the remote queries.
Regards Nigel -
Where rownum=2 in my sql query is not working . Why ??
Hello,
I am using Oracle 11g, referring to the scott schema's emp table.
I just issued this sql query with the intention of getting the second highest salary of an employee. But I am unable to understand why my query fails.
select rownum,empno,ename,sal from (select empno,ename,sal from emp order by sal desc) where rownum=2;
This query is returning no rows. Can you tell why this query is returning no rows?
888953 wrote:
Because you can use ROWNUM only to limit the number of returned records, not to return a specific record.
So only this has sense (n any number):
ROWNUM <= n
or
ROWNUM = 1 (which is equal to ROWNUM <= 1)
Anything else will not return a row.
As I said,
select * from (select rownum rn,empno,ename,sal from (select empno,ename,sal from emp order by sal desc)) where rn=2;
this query is working fine. So ROWNUM can be used to return a specific record. Please correct me if I am wrong. -
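The reason the plain ROWNUM = 2 query returns nothing is that ROWNUM is assigned as each candidate row passes the WHERE clause: the first row is numbered 1, fails the = 2 test, is discarded, and the next row is again numbered 1, so no row ever carries the value 2. Numbering in an inline view first, as in the working query above, freezes the values; the analytic equivalent (a sketch against scott.emp) is:

```sql
select *
from (select empno, ename, sal,
             row_number() over (order by sal desc) rn
      from emp)
where rn = 2;  -- the row with the second-highest salary
```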
APEX 4.1, SQL Query(Updateable report), Validation issue.
Hi,
I am using APEX 4.1.
I have a SQL Query (Updateable report); we have created validations for the columns in this report.
The validations work properly only for the first row of the report on submitting; the remaining rows do not get validated.
If we checkmark the rows they get validated, but we want the validation to happen without checkmarking, on all the rows, on clicking the submit button.
Can someone help me to fix this issue?
Thanks in advance.
Thanks & regards,
Ravi.
Hi Ravi,
Welcome to Oracle Forums!
Please acquaint yourself with the FAQ and forum etiquette if you haven't already done so.
Always state
<ul>
<li>Apex Version</li>
<li>DB Version and edition</li>
<li>Web server used. I.e. EPG, OHS, ApexListener Standalone or with J2EE container</li>
<li>When asking about forms always state tabular form if it is a tabular form</li>
<li>When asking about reports always state Classic / IR</li>
<li>Always post code snippets enclosed in a pair of {code} tags as explained in FAQ</li>
</ul>
I am using APEX 4.1. I have SQL Query(Updateable report), we have created validation for the columns in this report.
The validations are working properly only for the first row of the report on submitting, the remaining rows are not getting validated.
If we check mark the rows it will get validated, but we want the validation to happen without checkmarking, on all the rows on clicking submit button.
Can someone help me to fix this issue?
>
Post your validation code with some explanations of what the g_fnn are.
Cheers, -
SQL query performance issues.
Hi All,
I worked on the query a month ago and the fix worked for me in the test instance but failed in production. Following is the URL of the previous thread.
SQL query performance issues.
Following is the tkprof file.
CURSOR_ID:76 LENGTH:2383 ADDRESS:f6b40ab0 HASH_VALUE:2459471753 OPTIMIZER_GOAL:ALL_ROWS USER_ID:443 (APPS)
insert into cos_temp(
TRX_DATE, DEPT, PRODUCT_LINE, PART_NUMBER,
CUSTOMER_NUMBER, QUANTITY_SOLD, ORDER_NUMBER,
INVOICE_NUMBER, EXT_SALES, EXT_COS,
GROSS_PROFIT, ACCT_DATE,
SHIPMENT_TYPE,
FROM_ORGANIZATION_ID,
FROM_ORGANIZATION_CODE)
select a.trx_date,
g.segment5 dept,
g.segment4 prd,
m.segment1 part,
d.customer_number customer,
b.quantity_invoiced units,
-- substr(a.sales_order,1,6) order#,
substr(ltrim(b.interface_line_attribute1),1,10) order#,
a.trx_number invoice,
(b.quantity_invoiced * b.unit_selling_price) sales,
(b.quantity_invoiced * nvl(price.operand,0)) cos,
(b.quantity_invoiced * b.unit_selling_price) -
(b.quantity_invoiced * nvl(price.operand,0)) profit,
to_char(to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS'),'DD-MON-RR') acct_date,
'DRP',
l.ship_from_org_id,
p.organization_code
from ra_customers d,
gl_code_combinations g,
mtl_system_items m,
ra_cust_trx_line_gl_dist c,
ra_customer_trx_lines b,
ra_customer_trx_all a,
apps.oe_order_lines l,
apps.HR_ORGANIZATION_INFORMATION i,
apps.MTL_INTERCOMPANY_PARAMETERS inter,
apps.HZ_CUST_SITE_USES_ALL site,
apps.qp_list_lines_v price,
apps.mtl_parameters p
where a.trx_date between to_date('2010/02/01 00:00:00','yyyy/mm/dd HH24:MI:SS')
and to_date('2010/02/28 00:00:00','yyyy/mm/dd HH24:MI:SS')+0.9999
and a.batch_source_id = 1001 -- Sales order shipped other OU
and a.complete_flag = 'Y'
and a.customer_trx_id = b.customer_trx_id
and b.customer_trx_line_id = c.customer_trx_line_id
and a.sold_to_customer_id = d.customer_id
and b.inventory_item_id = m.inventory_item_id
and m.organization_id
= decode(substr(g.segment4,1,2),'01',5004,'03',5004,
'02',5003,'00',5001,5002)
and nvl(m.item_type,'0') <> '111'
and c.code_combination_id = g.code_combination_id+0
and l.line_id = b.interface_line_attribute6
and i.organization_id = l.ship_from_org_id
and p.organization_id = l.ship_from_org_id
and i.org_information3 <> '5108'
and inter.ship_organization_id = i.org_information3
and inter.sell_organization_id = '5108'
and inter.customer_site_id = site.site_use_id
and site.price_list_id = price.list_header_id
and product_attr_value = to_char(m.inventory_item_id)
call count cpu elapsed disk query current rows misses
Parse 1 0.47 0.56 11 197 0 0 1
Execute 1 3733.40 3739.40 34893 519962154 11 188 0
total 2 3733.87 3739.97 34904 519962351 11 188 1
| Rows Row Source Operation
| ------------ ---------------------------------------------------
| 188 HASH JOIN (cr=519962149 pr=34889 pw=0 time=2607.35)
| 741 .TABLE ACCESS BY INDEX ROWID QP_PRICING_ATTRIBUTES (cr=519939426 pr=34889 pw=0 time=2457.32)
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
| 741 ....NESTED LOOPS (cr=50042 pr=7230 pw=0 time=11.37)
| 741 .....NESTED LOOPS (cr=48558 pr=7229 pw=0 time=11.35)
| 741 ......NESTED LOOPS (cr=47815 pr=7223 pw=0 time=11.32)
| 3237 .......NESTED LOOPS (cr=41339 pr=7223 pw=0 time=12.42)
| 3237 ........NESTED LOOPS (cr=38100 pr=7223 pw=0 time=12.39)
| 3237 .........NESTED LOOPS (cr=28296 pr=7139 pw=0 time=12.29)
| 1027 ..........NESTED LOOPS (cr=17656 pr=4471 pw=0 time=3.81)
| 1027 ...........NESTED LOOPS (cr=13537 pr=4404 pw=0 time=3.30)
| 486 ............NESTED LOOPS (cr=10873 pr=4240 pw=0 time=0.04)
| 486 .............NESTED LOOPS (cr=10385 pr=4240 pw=0 time=0.03)
| 486 ..............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL (cr=9411 pr=4240 pw=0 time=0.02)
| 75253 ...............INDEX RANGE SCAN RA_CUSTOMER_TRX_N5 (cr=403 pr=285 pw=0 time=0.38)
| 486 ..............TABLE ACCESS BY INDEX ROWID HZ_CUST_ACCOUNTS (cr=974 pr=0 pw=0 time=0.01)
| 486 ...............INDEX UNIQUE SCAN HZ_CUST_ACCOUNTS_U1 (cr=488 pr=0 pw=0 time=0.01)
| 486 .............INDEX UNIQUE SCAN HZ_PARTIES_U1 (cr=488 pr=0 pw=0 time=0.01)
| 1027 ............TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL (cr=2664 pr=164 pw=0 time=1.95)
| 2063 .............INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2 (cr=1474 pr=28 pw=0 time=0.22)
| 1027 ...........TABLE ACCESS BY INDEX ROWID RA_CUST_TRX_LINE_GL_DIST_ALL (cr=4119 pr=67 pw=0 time=0.54)
| 1027 ............INDEX RANGE SCAN RA_CUST_TRX_LINE_GL_DIST_N1 (cr=3092 pr=31 pw=0 time=0.20)
| 3237 ..........TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS_B (cr=10640 pr=2668 pw=0 time=15.35)
| 3237 ...........INDEX RANGE SCAN MTL_SYSTEM_ITEMS_B_U1 (cr=2062 pr=40 pw=0 time=0.33)
| 3237 .........TABLE ACCESS BY INDEX ROWID OE_ORDER_LINES_ALL (cr=9804 pr=84 pw=0 time=0.77)
| 3237 ..........INDEX UNIQUE SCAN OE_ORDER_LINES_U1 (cr=6476 pr=47 pw=0 time=0.43)
| 3237 ........TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=3239 pr=0 pw=0 time=0.04)
| 3237 .........INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=2 pr=0 pw=0 time=0.01)
| 741 .......TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION (cr=6476 pr=0 pw=0 time=0.10)
| 6474 ........INDEX RANGE SCAN HR_ORGANIZATION_INFORMATIO_FK2 (cr=3239 pr=0 pw=0 time=0.03)
Please help.
Regards
Ashish
| 254644500 ..NESTED LOOPS (cr=519939265 pr=34777 pw=0 time=3819.67)
| 254643758 ...NESTED LOOPS (cr=8921833 pr=29939 pw=0 time=1274.41)
There is no way the optimizer should choose to process that many rows using nested loops.
Either the statistics are not up to date, the data values are skewed, or you have some optimizer parameter set to a non-default value to force index access.
Please post the explain plan and optimizer* parameter settings. -
Issue in creation of group in oim database through sql query.
hi guys,
I am trying to create a group in the OIM database through a SQL query:
insert into ugp(ugp_key,ugp_name,ugp_create,ugp_update,ugp_createby,ugp_updateby) values(786,'dbrole','09-jul-12','09-jul-12',1,1);
it is inserting the group in ugp table but it is not showing in admin console.
After that i also tried with this query:
insert into gpp(ugp_key,gpp_ugp_key,gpp_write,gpp_delete,gpp_create,gpp_createby,gpp_update,gpp_updateby)values(786,1,1,1,'09-jul-12',1,'09-jul-12',1);
But still no use.
and i also tried to assign a user to the group through query:
insert into usg(ugp_key,usr_key,usg_priority,usg_create,usg_update,usg_createby,usg_updateby)values(4,81,1,'09-jul-12','09-jul-12',1,1);
But still the same problem. It is inserting into the DB, but not listing in the admin console.
thanks,
hanuman.
Hanuman Thota wrote:
hi vladimir,
I didn't find this 'ugp_seq'. Is this a table or a column? Where is it?
It is a sequence.
See here for details on oracle sequences:
http://www.techonthenet.com/oracle/sequences.php
Most of the OIM database schema is created with the following script, located in the RCU distribution:
$RCU_HOME/rcu/integration/oim/sql/xell.sql
there you'll find plenty of sequence creation directives like:
create sequence UGP_SEQ
increment by 1
start with 1
cache 20
to create a sequence, and
INSERT INTO UGP (UGP_KEY, UGP_NAME, UGP_UPDATEBY, UGP_UPDATE, UGP_CREATEBY, UGP_CREATE,UGP_ROWVER, UGP_DATA_LEVEL, UGP_ROLE_CATEGORY_KEY, UGP_ROLE_OWNER_KEY, UGP_DISPLAY_NAME, UGP_ROLENAME, UGP_DESCRIPTION, UGP_NAMESPACE)
VALUES (ugp_seq.nextval,'SYSTEM ADMINISTRATORS', sysadmUsrKey , SYSDATE,sysadmUsrKey , SYSDATE, hextoraw('0000000000000000'), 1, roleCategoryKey, sysadmUsrKey, 'SYSTEM ADMINISTRATORS', 'SYSTEM ADMINISTRATORS', 'System Administrator role for OIM', 'Default');
as a sequence usage example.
Regards,
Vladimir -
I am trying to see the log file in Manage Sessions for the SQL query in Answers. I see that if we run the same report multiple times, the SQL query shows up only the first time; the second time I run it, it does not show up. If I do a brand new report with different columns picked, it gives me the SQL then. Where do I set the option to show the SQL query every time I run a report, even if it is the same report run multiple times? Is this a caching issue?
It shouldn't.... Have you unchecked the "Cache" on the physical layer for this table? If you go onto the Advanced tab, is the option "Bypass the Oracle BI cache" checked?
-
Issue with SQL Query with Presentation Variable as Data Source in BI Publisher
Hello All
I have an issue with creating a BIP report based on an OBIEE report which is done using direct SQL. There is one report on the OBIEE dashboard which is written using direct SQL. To create the pixel-perfect version of this report, I am creating a BIP data model using SQL Query as the data source. The physical query that is used to create the OBIEE report has several presentation variables in its where clause.
select TILE4,max(APPTS), 'Top Count' from
SELECT c5 as division,nvl(DECODE (C2,0,0,(c1/c2)*100),0) AS APPTS,NTILE (4) OVER ( ORDER BY nvl(DECODE (C2,0,0,(c1/c2)*100),0)) AS TILE4,
c4 as dept,c6 as month FROM
select sum(case when T6736.TYPE = 'ATM' then T7608.COUNT end ) as c1,
sum(case when T6736.TYPE in ('Call Center', 'LSM') then T7608.CONFIRMED_COUNT end ) as c2,
T802.NAME_LEVEL_6 as c3,
T802.NAME_LEVEL_1 as c4,
T6172.CALENDARMONTHNAMEANDYEAR as c5,
T6172.CALENDARMONTHNUMBERINYEAR as c6,
T802.DEPT_CODE as c7
from
DW_date_DIM T6736 /* z_dim_date */ ,
DW_MONTH_DIM T6172 /* z_dim_month */ ,
DW_GEOS_DIM T802 /* z_dim_dept_geo_hierarchy */ ,
DW_Count_MONTH_AGG T7608 /* z_fact_Count_month_agg */
where ( T802.DEpt_CODE = T7608.DEPT_CODE and T802.NAME_LEVEL_1 = '@{PV_D}{RSD}'
and T802.CALENDARMONTHNAMEANDYEAR = 'July 2013'
and T6172.MONTH_KEY = T7608.MONTH_KEY and T6736.DATE_KEY = T7608.DATE_KEY
and (T6172.CALENDARMONTHNUMBERINYEAR between substr('@{Month_Start}',0,6) and substr('@{Month_END}',8,13))
and (T6736.TYPE in ('Call Center', 'LSM')) )
group by T802.DEPT_CODE, T802.NAME_LEVEL_6, T802.NAME_LEVEL_1, T6172.CALENDARMONTHNAMEANDYEAR, T6172.CALENDARMONTHNUMBERINYEAR
order by c4, c3, c6, c7, c5
))where tile4=3 group by tile4
When I try to view data after creating the data set, I get the following error:
Failed to load XML
XML Parsing Error: mismatched tag. Expected: . Location: http://172.20.17.142:9704/xmlpserver/servlet/xdo Line Number 2, Column 580:
Now when I replace those presentation variables (@{PV1}, @{PV2}) in the query with some hard-coded values, it works fine.
So I know it is the PVs that are causing this error.
How can I work around it?
There is no way to create equivalent report without using the direct sql..
Thanks in advance
I have found a solution to this problem after some more investigation. PowerQuery does not support using a SQL statement as the source for Teradata (possibly the same for other sources as well). This is "by design" according to Microsoft. Hence the problem is not caused by different PowerQuery versions, as mentioned above. When designing the query in PowerQuery in Excel, make sure to use the interface/navigation to create the query and select tables, and NOT a SQL statement. A SQL statement as source works fine on a client machine but not when scheduling it in Power BI in the cloud. I would like the functionality within PowerQuery and Excel to be the same as in Power BI in the cloud. And at least when there is a difference, it would be nice to have documentation or more descriptive errors.
//Jonas -
Hi,
We are facing a database performance issue while running overnight batches.
I generated tkprof output for that batch and found some SQL queries with a high elapsed time. Could anyone please tell me what the issue is? It would also be a great help if anyone could suggest what needs to be done to tune these SQL queries, so as to get a better response time.
Waiting for your reply.
Effected SQL List:
INSERT INTO INVTRNEE (TRANS_SESSION, TRANS_SEQUENCE, TRANS_ORG_CHILD,
TRANS_PRD_CHILD, TRANS_TRN_CODE, TRANS_TYPE_CODE, TRANS_DATE, INV_MRPT_CODE,
INV_DRPT_CODE, TRANS_CURR_CODE, PROC_SOURCE, TRANS_REF, TRANS_REF2,
TRANS_QTY, TRANS_RETL, TRANS_COST, TRANS_VAT, TRANS_POS_EXT_TOTAL,
INNER_PK_TECH_KEY, TRANS_INNERS, TRANS_EACHES, TRANS_UOM, TRANS_WEIGHT,
TRANS_WEIGHT_UOM )
VALUES
(:B22 , :B1 , :B2 , :B3 , :B4 , :B5 , :B21 , :B6 , :B7 , :B8 , :B20 , :B19 ,
NULL, :B9 , :B10 , :B11 , 0.0, :B12 , :B13 , :B14 , :B15 , :B16 , :B17 ,
:B18 )
call count cpu elapsed disk query current rows
Parse 722 0.09 0.04 0 0 0 0
Execute 1060 7.96 83.01 11442 21598 88401 149973
Fetch 0 0.00 0.00 0 0 0 0
total 1782 8.05 83.06 11442 21598 88401 149973
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
UPDATE /*+ ROWID(TRFDTLEE) */TRFDTLEE SET TRF_STATUS = :B2
WHERE
ROWID = :B1
call count cpu elapsed disk query current rows
Parse 635 0.03 0.01 0 0 0 0
Execute 49902 14.48 271.25 41803 80704 355837 49902
Fetch 0 0.00 0.00 0 0 0 0
total 50537 14.51 271.27 41803 80704 355837 49902
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
DECLARE
var_trans_session invtrnee.trans_session%TYPE;
BEGIN
-- ADDED BY SHANKAR ON 08/29/97
-- GET THE NEXT AVAILABLE TRANS_SESSION
bastkey('trans_session',0,var_trans_session,'T');
-- MAS001
uk_trfbapuo_auto(var_trans_session,'UPLOAD','T',300);
-- MAS001 end
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 24191.23 24028.57 8172196 10533885 187888 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 24191.23 24028.57 8172196 10533885 187888 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
SELECT INNER_PK_TECH_KEY
FROM
PRDPCDEE WHERE PRD_LVL_CHILD = :B1 AND LOOSE_PACK_FLAG = 'T'
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 0 0 0 0
Execute 56081 1.90 2.03 0 0 0 0
Fetch 56081 11.07 458.58 53792 246017 0 56081
total 112163 12.98 460.61 53792 246017 0 56081
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
******************
First off, be aware of the assumptions I'm making. The SQL you presented above strongly suggests (to me at least) that you have cursor FOR loops. If that's the case, you need to review what their purpose is and look to convert them into single-statement DML commands. For example, if you have something like this
DECLARE
  ln_Count     NUMBER;
  ln_SomeValue NUMBER;
BEGIN
  FOR lcr_Row IN ( SELECT pk_id, col1, col2 FROM some_table )
  LOOP
    SELECT COUNT(*)
    INTO   ln_Count
    FROM   target_table
    WHERE  pk_id = lcr_Row.pk_id;
    SELECT some_value
    INTO   ln_SomeValue
    FROM   some_other_table
    WHERE  pk_id = lcr_Row.col1;
    IF ln_Count = 0 THEN
      INSERT INTO target_table
        ( pk_id, some_other_value, col2 )
      VALUES
        ( lcr_Row.col1, ln_SomeValue, lcr_Row.col2 );
    ELSE
      UPDATE target_table
      SET    some_other_value = ln_SomeValue
      WHERE  pk_id = lcr_Row.col1;
    END IF;
  END LOOP;
END;
it could be rewritten as
DECLARE
BEGIN
  MERGE INTO target_table t
  USING ( SELECT a.pk_id,
                 a.col2,
                 b.some_value
          FROM   some_table a,
                 some_other_table b
          WHERE  b.pk_id = a.col1 ) e
  ON ( t.pk_id = e.pk_id )
  WHEN MATCHED THEN
    UPDATE SET t.some_other_value = e.some_value
  WHEN NOT MATCHED THEN
    INSERT ( t.pk_id, t.col2, t.some_other_value )
    VALUES ( e.pk_id, e.col2, e.some_value );
END;
It's going to take a bit of analysis and work, but the fastest and most scalable way to approach processing data is to use SQL rather than PL/SQL. PL/SQL data processing, i.e. cursor loops, should be an option of last resort.
HTH
David -
Goods Issue - SQL Query will not sum
I have 3 Goods Issue documents with Document Totals of 10,000 / 20,000 / 30,000 respectively.
I want to make a query that displays BOTH each of the 3 document values (10,000 / 20,000 / 30,000) AND the sum of the 3 documents (60,000), like below.
Doc 1 - 10,000
Doc 2 - 20,000
Doc 3 - 30,000
Total = 60,000
In addition, I would like the ability to choose date range. Basically, something like
SELECT Document_Total
FROM Goods_Issue table
WHERE the_document_date is between 1-SEP-2011 and 30-SEP-2011
AND the_reference_is _________________
I have tried many SQL queries, but they displayed either:
Doc 1 - 10,000
Doc 2 - 20,000
Doc 3 - 30,000
OR
Total = 60,000
Please help.
@Hendry Wijaya @GordonDu @malhaar
Thanks for the help. The SQL query you provided solved 99% of the problem. I just need to make a tweak to the SQL so that it displays the total in the footer. See the screenshot below - I uploaded the pics at imageshack as I didn't find a way to attach pics here.
[See here - Total_at_Footer|http://i129.photobucket.com/albums/p213/whitesnowbear/AAAA/Untitled-2.jpg]
Thanks a bunch. -
Issue in converting sql query to oracle
Hi Friends,
I have a sql query as follows,
Select Name From SysObjects Where XType = 'U' And Name = @Name
I want to convert this to Oracle. I have tried to convert it in some ways, but I'm getting errors. Please can anyone help me fix this issue?
Thanks,
Ram
Most Oracle dictionary views come in DBA_, ALL_ and USER_ flavours. Probably you don't have privileges to view DBA_OBJECTS, in which case try ALL_OBJECTS or <a href="http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_2005.htm">USER_OBJECTS</a>.
Based on what I read, you might need something like
SELECT * FROM all_objects WHERE owner <> 'SYS' AND object_type = 'TABLE'
btw a sure way to wind up Oracle professionals is to use "SQL" to mean another DBMS product and not Structured Query Language. As you may have noticed, we have SQL over here as well. -
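A closer Oracle equivalent of the original SysObjects lookup might be (a sketch; :name is a bind variable standing in for @Name, and Oracle stores unquoted identifiers in upper case by default):

```sql
SELECT table_name
FROM   user_tables               -- XType = 'U' in SysObjects means user tables
WHERE  table_name = UPPER(:name)
```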
Regarding Exceptions of a SQL query
Hi,
We have a problem regarding the sql query shown below:
select max(column1) into var1
from table1
where column2 = 100;
We don't have any rows in table1, but we are not getting the "no data found" exception while executing the query in a procedure.
If we modify the query to
select column1 into var1
from table1
where column2 = 100;
then 'No data found' exception is raised.
Can you please let us know the reason behind this.
Thanks in advance.
Lakshmi
Cut from the Oracle documentation:
If a query with an aggregate function returns no rows or only rows with nulls for the argument to the aggregate function, the aggregate function returns null.
AVG
COUNT
GROUPING
MAX
MIN
STDDEV
Aggregate functions don't raise a "no data found" exception, I believe... Correct me if I am wrong.
Ashok
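The behaviour is easy to demonstrate (a sketch; a false predicate on DUAL guarantees zero input rows):

```sql
SELECT MAX(dummy) FROM dual WHERE 1 = 0;  -- one row, containing NULL
SELECT dummy      FROM dual WHERE 1 = 0;  -- zero rows; SELECT ... INTO raises NO_DATA_FOUND
```

This is why the MAX version of the procedure silently assigns NULL to var1 while the plain SELECT INTO raises the exception.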