Query rewrite for COUNT(DISTINCT)
Hi,
I have a fact table with several dimension keys.
CREATE TABLE FACT (
TIME_SKEY NUMBER,
REGION_SKEY NUMBER,
AC_SKEY NUMBER
);
I need to take COUNT(DISTINCT AC_SKEY) for TIME_SKEY and REGION_SKEY. There are Oracle dimensions defined for time and region which use TIME_SKEY and REGION_SKEY. I have created an MV with query rewrite using COUNT(DISTINCT), but rewrite does not use the dimensions if I query at any other level, and the MV cannot be fast refreshed because it was built using COUNT(DISTINCT).
CREATE MATERIALIZED VIEW AC_MV
NOCACHE
NOLOGGING
NOCOMPRESS
NOPARALLEL
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT
TIME_SKEY ,
REGION_SKEY,
COUNT (DISTINCT AC_SKEY)
FROM FACT
GROUP BY TIME_SKEY, REGION_SKEY;
The query used to retrieve data is as below:
SELECT TIME_SKEY, COUNT(DISTINCT AC_SKEY) OVER (PARTITION BY TIME_SKEY) UNIQ_AC, COUNT(DISTINCT AC_SKEY) OVER () UNIQ_AC1
FROM FACT;
There can be other queries based on time / region dimension.
Can you please help me solve the above issue?
Thanks,
Pritesh
What version of the Oracle database?
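A common workaround (not stated in this thread; a sketch of the usual approach) is to avoid COUNT(DISTINCT) inside the MV altogether: materialize the distinct (TIME_SKEY, REGION_SKEY, AC_SKEY) combinations, which is a plain GROUP BY and hence fast-refresh friendly, and derive the distinct counts from that much smaller summary. The set logic, demonstrated with SQLite standing in for Oracle (all names illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE fact (time_skey INT, region_skey INT, ac_skey INT)")
cur.executemany("INSERT INTO fact VALUES (?, ?, ?)",
                [(1, 10, 100), (1, 10, 100), (1, 10, 101),
                 (1, 11, 100), (2, 10, 102)])

# "MV": one row per distinct key combination -- no COUNT(DISTINCT) inside,
# which is what keeps the Oracle equivalent fast-refreshable.
cur.execute("""CREATE TABLE ac_mv AS
               SELECT time_skey, region_skey, ac_skey
               FROM fact GROUP BY time_skey, region_skey, ac_skey""")

# At the MV's own grain, COUNT(DISTINCT AC_SKEY) is a plain COUNT(*).
base = cur.execute("""SELECT time_skey, region_skey, COUNT(*)
                      FROM ac_mv GROUP BY time_skey, region_skey""").fetchall()

# At a higher level, re-aggregate the small summary instead of the fact table.
direct = cur.execute("""SELECT time_skey, COUNT(DISTINCT ac_skey)
                        FROM fact GROUP BY time_skey""").fetchall()
via_mv = cur.execute("""SELECT time_skey, COUNT(DISTINCT ac_skey)
                        FROM ac_mv GROUP BY time_skey""").fetchall()
print(sorted(direct) == sorted(via_mv))  # True
```

In Oracle the same idea is an MV grouped by all three keys; queries at coarser levels still compute COUNT(DISTINCT), but over far fewer rows.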
Similar Messages
-
I need to display totals for Count Distinct measures. I want to display these above a table view.
We have done this before by creating hidden columns with level-based measures for totals and then displaying the first row of these hidden columns in a narrative view above the table. We have also used MAX(RSUM()) within requests, sometimes.
These solutions won't work, because I need Count Distinct() measures (so simple sums and counts will give inaccurate results) and I may navigate to the request with filters at different levels (so LBMs won't work, either).
The only solution I can think of is to have LBMs for each level and duplicate dashboards that differ only in which variation of this request, with which level's LBMs, is displayed for the totals. That seems like too much of a kludge. There should be a simpler, better way to do this.
I was trying to reproduce your issue with "Sample Sales" - but can't figure out which columns you'd like to see. Can you please post a couple of columns - and which count distinct you need? That would make it easier to reproduce the issue.
I was thinking that it might be difficult to pull it in 1 report (since you can't completely exclude columns in table view). I have two suggestions:
a) did you try to create a separate report and combine it with existing one (same Dashboard page)?
b) did you try Pivot Table and its calculated column feature? I've had some success with it when I needed to combine measures at different levels on the same report (I needed to see daily totals for 3 specific days, monthly values for specific months, and a couple of annual totals). This way you could have it on the same report.
I just tried a), and it worked (again, not sure if this is applicable to your situation). I used "Server Complex Aggregate" in column options. The formula shows: SELECT "D5 Employee"."E01 Employee Name" saw_0, COUNT(DISTINCT "D1 Customer"."C1 Cust Name") saw_1 FROM "Sample Sales" ORDER BY saw_0
Edited by: wildmight on Oct 30, 2009 9:35 AM -
Map Viewer Query Rewriting for Dynamic themes and Materialized Views.
Hi,
I am using a WMS request to render FOI points in my map.
Internally, query rewrite happens in MapViewer for this dynamic theme, and my data-points query gets converted to:
select FROM
( select status, shape from MatView.MyTab where id = '3' )
WHERE MDSYS.SDO_FILTER(shape, MDSYS.SDO_GEOMETRY(2003, 4283, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 3), MDSYS.SDO_ORDINATE_ARRAY(144.948120117188,-37.8162934802451,144.950866699219,-37.8141237016045)), 'querytype=WINDOW') = 'TRUE'
Here the rewritten query is not correct and throws exceptions in the MapViewer log.
How can I make this query be rewritten correctly?
(My original query before rewrite is: select status, shape from MatView.MyTab where id='3')
MatView is a materialized view.
When I used normal tables, the query was rewritten correctly, but for this materialized view it is not.
How can I correct the error?
Does this have something to do with spatial indexing on the materialized view, or with query rewriting for materialized views?
Edited by: 841309 on Mar 10, 2011 11:04 PM
Oops! The materialized view was not accessible from the schema I tried :)
And so when I gave permissions, it formed the correct query.
So if the permission is not there, MapViewer will rewrite the query in a wrong way! New information. -
OBIEE 10G: Totals in Answers are not correct for count distinct fields. Is this a bug?
For example:
Sales fact has receipt no and line no as key. It has data like:
receipt no, line no, value
1, 1, 30
1, 2, 40
2, 1, 10
2, 2, 10
There is also a transactions field defined as a count distinct of receipt no (in the BMM).
In answers, I set to show Total.
without any filters:
receipt no, value, transactions
1, 70, 1
2, 20, 1
total: 90, 2
Transactions is 2, which is correct.
If I apply a filter of transaction value greater than 50,
then transactions in the total will still show 2:
1, 70, 1
total: 70, 2
Is this a bug? It looks like only SUM works correctly in the total.
I did look at the physical query and saw how it calculated the total transactions: it didn't take into account the filter of transaction value greater than 50. I don't know why, though. I also don't know why you would want to count line no; the result would still be 2.
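The underlying issue is that a grand total for a COUNT(DISTINCT) measure cannot be carried over from an unfiltered query: the total must be recomputed with the measure filter applied. A tiny illustration using the thread's receipt data (plain Python, purely illustrative):

```python
# Receipts from the example: (receipt_no, line_no, value)
rows = [(1, 1, 30), (1, 2, 40), (2, 1, 10), (2, 2, 10)]

# Per-receipt value totals: {1: 70, 2: 20}.
value = {}
for r, _, v in rows:
    value[r] = value.get(r, 0) + v

# Unfiltered grand total of distinct receipts is 2 (matches "total: 90, 2").
unfiltered_total = len(value)

# With the filter "transaction value > 50", the total must be recomputed
# over the surviving receipts -- it is NOT still 2:
kept = [r for r, v in value.items() if v > 50]
print(len(kept))  # 1
```

This is the recomputation the generated physical query skipped, which is why the filtered report still showed 2.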
-
ABAP Query Code for Count Function
Hi,
I have an ABAP Query and just created a new field which requires ABAP Code to do the following :-
The report is for Purchase Orders and Purchase Requisitions. 2 Purchase Orders may have the same requisition, so what I want to do is count the number of the same requisitions in the report
e.g.
PO Req Count
45000001 10015 2
45000020 10015 2
Can some one please provide with the full code? Points will be awarded
Thanks
Hi,
There are no errors in the code below, but it returns a value of 0 even though there are duplicates in the table, which suggests to me that the code is not working.
Any help would be appreciated. Thanks
DATA : BEGIN OF itab OCCURS 0,
aufnr LIKE ekkn-aufnr,
END OF itab.
DATA : count(2) TYPE p.
DATA : no_of_records(3) TYPE p.
Clear no_of_records.
select aufnr from ekkn into itab.
endselect.
SORT itab BY aufnr.
LOOP AT itab.
ADD 1 TO count.
AT END OF aufnr.
IF count GE 2.
ADD count TO no_of_records.
ENDIF.
CLEAR count.
ENDAT.
ENDLOOP.
*WRITE :/ 'Good Bye'.
DUPLICATES = NO_of_records. -
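The likely cause of the zero result is that SELECT ... ENDSELECT fills only the header line and never APPENDs to itab, so the internal table stays empty (SELECT aufnr FROM ekkn INTO TABLE itab would populate it in one step). The counting logic itself, sketched in Python with hypothetical data:

```python
from collections import Counter

# Hypothetical PO -> requisition rows, shaped like the example output.
pos = [("45000001", "10015"), ("45000020", "10015"), ("45000033", "10099")]

# Count how many POs share each requisition...
req_count = Counter(req for _, req in pos)

# ...and attach that count to every report row.
report = [(po, req, req_count[req]) for po, req in pos]

# Total rows belonging to duplicated requisitions (the ABAP no_of_records):
duplicates = sum(n for n in req_count.values() if n >= 2)
print(report[0], duplicates)  # ('45000001', '10015', 2) 2
```

This mirrors the AT END OF aufnr block: a group count is accumulated, and only groups of 2 or more contribute to the duplicate total.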
SQL query/report for "count of specific file type"?
I figured this would be a good first post, as I have found every question I have had to date by browsing the forums, but I am stumped with this one. I recently expressed some concern to HR regarding the use of...lets call it "unauthorized content"
on our network. Getting little to no response, I would like to produce a report containing a total count based on specific file type. For example, how many .mp3 files are stored on company equipment, how many torrent files (sadly, this is true), etc.
Having very basic knowledge of SQL, I am puzzled where to start. I am currently using SCCM SP3 and SQL 2008 R2 with both basic reporting and SSRS enabled on the site.
You'll first need to inventory those file types; then here are some reports for you.
Get the pstreports from my SkyDrive, import them, and you can change the file type to mp3 or whatever you need. I monitor mp3 files too.
http://cid-6a8d30f2bc0666d0.office.live.com/browse.aspx/SCCM%20Custom%20Reports
John Marcum | http://myitforum.com/cs2/blogs/jmarcum/| -
Hi Experts,
I have a table like:
Id item1 item2
1 a b
2 x b
3 b a
4 x a
I need to count rows where the exact (item1, item2) combination is (a,b) or (b,a).
Is there any way to get the combination counts?
Regards,
H
with xx as
(select 1 id,'a' item1,'b' item2 from dual union all
select 2 id,'x' item1,'b' item2 from dual union all
select 3 id,'b' item1,'a' item2 from dual union all
select 4 id,'x' item1,'a' item2 from dual union all
select 5 id,'x' item1,'b' item2 from dual union all
select 6 id,'a' item1,'b' item2 from dual union all
select 7 id,'b' item1,'a' item2 from dual)
select count(*) from xx
where (item1 = 'a' and item2 ='b') OR (item1 = 'b' and item2 ='a'); -
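The posted query handles only the literal pair ('a','b'); a general version normalizes each pair so that order doesn't matter - in Oracle that would mean grouping on LEAST(item1, item2) and GREATEST(item1, item2). The idea in miniature (Python, illustrative data):

```python
from collections import Counter

# (id, item1, item2) rows from the question.
rows = [(1, "a", "b"), (2, "x", "b"), (3, "b", "a"), (4, "x", "a")]

# frozenset makes (a,b) and (b,a) the same key, like LEAST/GREATEST in SQL.
pair_counts = Counter(frozenset((i1, i2)) for _, i1, i2 in rows)

print(pair_counts[frozenset(("a", "b"))])  # 2
```

Unlike the hard-coded WHERE clause, this gives the count for every unordered combination in one pass.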
Spatial vs. materialized views/query rewrite
Dear all,
we are trying to use Spatial (Locator) functionality together with performance optimization using materialized views and query rewrite, and it does not seem to work. Does anybody have experience with this?
The problem in more detail:
* There is a spatial attribute (of type GEOMETRY) in our table;
* we define a materialized view on that table;
* we run a query that could be better answered using the materialized view with query rewrite;
* the optimizer does not choose the plan using the materialized view; query rewrite does not take place.
This happens even if neither the materialized view nor the query contains the spatial attribute.
The explanation given by the procedure DBMS_MVIEW.Explain_Rewrite is:
"QSM-01064 query has a fixed table or view
Cause: Query rewrite is not allowed if the query references any fixed tables or views"
We are using Oracle 9iR2, Enterprise Edition, with Locator. Nevertheless, it would also be interesting to know whether there is any improvement in 10g.
A more complicated task, using materialized views to optimize spatial operations (e.g., sdo_relate) would also be very interesting, as spatial joins are very expensive operations.
Thanks in advance for any comments, ideas!
Cheers,
Gergely Lukacs
Hi Dan,
thanks for your rapid response!
A simple example is:
alter session set query_rewrite_integrity=trusted;
alter session set query_rewrite_enabled=true;
set serveroutput on;
/* Creating testtable */
CREATE TABLE TESTTABLE (
KEY1 NUMBER (4) NOT NULL,
KEY2 NUMBER (8) NOT NULL,
KEY3 NUMBER (14) NOT NULL,
NAME VARCHAR2 (255),
X NUMBER (9,2),
Y NUMBER (9,2),
ATTR1 VARCHAR2 (2),
ATTR2 VARCHAR2 (30),
ATTR3 VARCHAR2 (80),
ATTR4 NUMBER (7),
ATTR5 NUMBER (4),
ATTR6 NUMBER (5),
ATTR7 VARCHAR2 (40),
ATTR8 VARCHAR2 (40),
CONSTRAINT TESTTABLE_PK
PRIMARY KEY ( KEY1, KEY2, KEY3 ));
/* Creating materialized view */
CREATE MATERIALIZED VIEW TESTTABLE_MV
REFRESH COMPLETE
ENABLE QUERY REWRITE
AS SELECT DISTINCT ATTR7, ATTR8
FROM TESTTABLE;
/* Creating statistics, just to make sure */
execute dbms_stats.gather_table_stats(ownname=> 'TESTSCHEMA', tabname=> 'TESTTABLE', cascade=>TRUE);
execute dbms_stats.gather_table_stats(ownname=> 'TESTSCHEMA', tabname=> 'TESTTABLE_MV', cascade=>TRUE);
/* Explain rewrite procedure */
DECLARE
Rewrite_Array SYS.RewriteArrayType := SYS.RewriteArrayType();
querytxt VARCHAR2(1500) :=
'SELECT COUNT(*) FROM (
SELECT DISTINCT
ATTR8 FROM
TESTTABLE)';
i NUMBER;
BEGIN
DBMS_MVIEW.Explain_Rewrite(querytxt, 'TESTTABLE_MV', Rewrite_Array);
FOR i IN 1..Rewrite_Array.count
LOOP
DBMS_OUTPUT.PUT_LINE(Rewrite_Array(i).message);
END LOOP;
END;
The message you get is:
QSM-01009 materialized view, string, matched query text
Cause: The query was rewritten using a materialized view, because query text matched the materialized view text.
Action: No action required.
i.e. query rewrite works!
/* Adding geometry column to the testtable -- not to the materialized view, and not to the query! */
ALTER TABLE TESTTABLE
ADD GEOMETRYATTR mdsys.sdo_geometry;
/* Explain rewrite procedure */
DECLARE
Rewrite_Array SYS.RewriteArrayType := SYS.RewriteArrayType();
querytxt VARCHAR2(1500) :=
'SELECT COUNT(*) FROM (
SELECT DISTINCT
ATTR8 FROM
TESTTABLE)';
i NUMBER;
BEGIN
DBMS_MVIEW.Explain_Rewrite(querytxt, 'TESTTABLE_MV', Rewrite_Array);
FOR i IN 1..Rewrite_Array.count
LOOP
DBMS_OUTPUT.PUT_LINE(Rewrite_Array(i).message);
END LOOP;
END;
The messages you get are:
QSM-01064 query has a fixed table or view
Cause: Query rewrite is not allowed if query references any fixed tables or views.
Action: No action required.
QSM-01019 no suitable materialized view found to rewrite this query
Cause: There doesn't exist any materialized view that can be used to rewrite this query.
Action: Consider creating a new materialized view.
i.e. query rewrite does not work!
If this works, the next issue is to use materialized views for optimizing spatial operations, e.g., a spatial join. I can supply you with an example, if necessary (only makes sense, I think, after the first problem is solved).
Thanks in advance for any ideas, comments!
Cheers,
Gergely -
Count Distinct Wtih CASE Statement - Does not follow aggregation path
All,
I have a fact table, a day aggregate and a month aggregate. I have a time hierarchy and the month aggregate is set to the month level, the day aggregate is set to the day level within the time hierarchy.
When using any measures and a field from my time dimension .. the appropriate aggregate is chosen, ie month & activity count .. month aggregate is used. Day & activity count .. day aggregate is used.
However - when I use the count distinct aggregate rule .. the request always uses the lowest common denominator. The way I have found to get this to work is to use a logical table source override in the aggregation tab. Once I do this .. it does use the aggregates correctly.
A few questions
1. Is this the correct way to use aggregate navigation for the count distinct aggregation rule (using the source override option)? If yes, why is this necessary for count distinct .. what is special about it?
2. The main problem I have now is that I need to create a simple count measure that has a CASE statement in it. The only way I see to do this is to select the Based on Dimensions checkbox which then allows me to add a CASE statement into my count distinct clause. But now the aggregation issue comes back into play and I can't do the logical table source override when the based on dimensions checkbox is checked .. so I am now stuck .. any help is appreciated.
K
Ok - I found a workaround (and maybe the preferred solution for my particular issue), which is: using a CASE statement with a COUNT DISTINCT aggregation and still having AGGREGATE AWARENESS.
To get all three of the requirements above to work I had to do the following:
- Create the COUNT DISTINCT as normal (counting on a USERID physically mapped column in my case)
- Now I need to map my fact and aggregates to this column. This is where I got the case statement to work. Instead of trying to put the case statement inside the aggregate definition by using the 'Based on Dimensions' checkbox (which didn't allow for aggregate awareness for some reason), I instead specified the case statement in the Column Mapping section of the fact and aggregate tables.
- Once all the LTS's (facts and aggregates) are mapped .. you still have to define the Logical Table Source overrides in the aggregate tab of the count distinct definition. Add in all the fact and aggregates.
Now the measure will use my month aggregate when i specify month, the day aggregate when i specify day, etc..
If you are just trying to use a Count Distinct (no CASE satement needed) with Aggregate Awareness, you just need to use the Logical Table Source override on the aggregate tab.
There is still a funky issue when using the COUNT aggregate type. As long as you don't map multiple logical table sources to the COUNT column, it works fine and as expected. But if you try to add in multiple sources and aggregate awareness, it randomly starts SUMMING everything, which is very weird. The blog in this thread says to check the 'Based on Dimension' checkbox to fix the problem, but that did not work for me. Still not sure what to do on this one, but it's not currently causing me a problem, so I will ignore it for now ;)
Thanks for all the help
K -
Risky enable star transformations and trusted Query Rewrites?
Hi,
I need some advice/opinions from someone experienced with large scale
data warehousing.
I'm working on a fairly large data warehouse (around 3 TB), and we're
using Oracle 10.1.0.2.0.
So, I found out about MV's and Star Transformations, and that we're not
using them.
Naturally I decided to try them out in our test environment and I was
more than pleased (actually, I nearly wet my pants) with the potential
performance boost we could get for some of our more critical solutions.
However, I also noticed that the production environment has the
following settings:
star_transformation_enabled = false
query_rewrite_integrity = enforced
...which basically disables all the cool stuff. In the testing
environment I used the following:
star_transformation_enabled = true
query_rewrite_integrity = trusted (to make use of func. dep in
dimensions)
I would like to stand on somewhat solid grounds and increase my
understanding before aproaching our DBA's with the suggestion to change
system global settings :)
Basically, my question(s) are:
1. What are the impact of enabling Star Transformations on a system?
Is there any at all, if no previous solution has been built in a way
to
make use of star transformations?
Or could this change result in fine-tuned queries performing badly
since they
suddenly make use of star transformations?
2. Is "query_rewrite_integrity" used by Oracle for other things besides
Materialized Views?
I'm thinking, if the only thing it's used for is to resolve query
rewrites for MV's, then it's safe to change it, because there are no
such MV's.
Note that I'd like to set it to TRUSTED, in order to make real use
of the dependencies declared with CREATE DIMENSION...
I would be happy to know what you think about this.
Any thoughts, opinions are welcome since this is new grounds for me.
Best Regards
R.
The following parameters are deprecated in release 10.2:
LOGMNR_MAX_PERSISTENT_SESSIONS
MAX_COMMIT_PROPAGATION_DELAY
REMOTE_ARCHIVE_ENABLE
SERIAL_REUSE
SQL_TRACE
Check this in your parameter file.
As per the Oracle error documentation:
Error: ORA-32004
Cause: One or more obsolete and/or deprecated parameters were specified in the SPFILE or the PFILE on the server side.
Action: See the alert log for a list of parameters that are obsolete or deprecated, and remove them from the SPFILE or the server-side PFILE.
Regards,
Sabdar Syed. -
Query rewrite doesn't work for an aggregate query but works for a join query
Dear experts,
Please help me find what's wrong. We are on:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
I have two MATERIALIZED VIEW:
A) -- Only join
CREATE MATERIALIZED VIEW "SCOTT"."TST_MV"
ENABLE QUERY REWRITE AS
SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
"T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
"T57410"."DISTANCE" "DISTANCE",
"T57410"."IS_LEAF" "IS_LEAF",
"T57460"."DEPARTMENTID" "DEPARTMENTID",
"T57460"."NAME" "NAME","T57460"."PARENT"
"PARENT","T57460"."SHORTNAME" "SHORTNAME",
"T57460"."SKIMOID" "SKIMOID"
FROM "BI_OIV_HIER" "T57410",
"BI_DEPARTMENTS" "T57460"
WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
B) -- Join with aggregation
CREATE MATERIALIZED VIEW "SCOTT"."TST_MV2"
("C41", "C42", "C43",
"C44", "C45", "C46",
"C47", "C48", "C49",
"C50", "C51", "C52",
"C53", "C54", "C55",
"C56", "C57", "C58",
"C59", "C60", "C61",
"INCIDENTTYPE")
ENABLE QUERY REWRITE
AS SELECT COUNT(T56454.TOTAL) AS c41,
T56840.CATEGORYID AS c42,
T56840.PARENT AS c43,
T56908.DOCSTATEID AS c44,
T56908.PARENT AS c45,
T56947.EXPIREDID AS c46,
T56947.PARENT AS c47,
T56986.ISSUESTATEID AS c48,
T56986.PARENT AS c49,
T57025.LOCATIONID AS c50,
T57025.PARENT AS c51,
T57064.NEWID AS c52,
T57064.PARENT AS c53,
T57103.PARENT AS c54,
T57103.RESOLUTIONID AS c55,
T57142.PARENT AS c56,
T57142.RESPONSIBLEID AS c57,
T57181.PARENT AS c58,
T57181.SOURCEID AS c59,
T57460.DEPARTMENTID AS c60,
T57460.PARENT AS c61,
T56454.INCIDENTTYPE
FROM BI_OIV_HIER T57410,
BI_DEPARTMENTS T57460,
BI_SOURCE_HIER T57176,
SOURCE T57181,
BI_RESPONSIBLE_HIER T57137,
RESPONSIBLE T57142,
BI_RESOLUTIONS_HIER T57098,
RESOLUTIONS T57103,
BI_NEW_HIER T57059,
NEW T57064,
BI_LOCATIONS_HIER T57020,
LOCATIONS T57025,
BI_ISSUESTATES_HIER T56981,
ISSUESTATES T56986,
BI_EXPIRED_HIER T56942,
EXPIRED T56947,
BI_DOCSTATES_HIER T56903,
DOCSTATES T56908,
BI_CATEGORY_HIER T56835,
CATEGORY T56840,
INCIDENTS T56454
WHERE ( T56454.RESOLUTION = T57098.MEMBER_KEY
AND T56454.CATEGORY = T56835.MEMBER_KEY
AND T56454.DOCSTATE = T56903.MEMBER_KEY
AND T56454.EXPIRED = T56942.MEMBER_KEY
AND T56454.ISSUESTATE = T56981.MEMBER_KEY
AND T56454.LOCATION = T57020.MEMBER_KEY
AND T56454.NEW = T57059.MEMBER_KEY
AND T56454.RESPONSIBLE = T57137.MEMBER_KEY
AND T56454.SOURCE = T57176.MEMBER_KEY
AND T56454.DEPARTMENTID = T57410.MEMBER_KEY
AND T56835.ANCESTOR_KEY = T56840.CATEGORYID
AND T56903.ANCESTOR_KEY = T56908.DOCSTATEID
AND T56942.ANCESTOR_KEY = T56947.EXPIREDID
AND T56981.ANCESTOR_KEY = T56986.ISSUESTATEID
AND T57020.ANCESTOR_KEY = T57025.LOCATIONID
AND T57059.ANCESTOR_KEY = T57064.NEWID
AND T57098.ANCESTOR_KEY = T57103.RESOLUTIONID
AND T57137.ANCESTOR_KEY = T57142.RESPONSIBLEID
AND T57176.ANCESTOR_KEY = T57181.SOURCEID
AND T57410.ANCESTOR_KEY = T57460.DEPARTMENTID )
GROUP BY T56840.CATEGORYID,
T56840.PARENT,
T56908.DOCSTATEID,
T56908.PARENT,
T56947.EXPIREDID,
T56947.PARENT,
T56986.ISSUESTATEID,
T56986.PARENT,
T57025.LOCATIONID,
T57025.PARENT,
T57064.NEWID,
T57064.PARENT,
T57103.PARENT,
T57103.RESOLUTIONID,
T57142.PARENT,
T57142.RESPONSIBLEID,
T57181.PARENT,
T57181.SOURCEID,
T57460.DEPARTMENTID,
T57460.PARENT,
T56454.INCIDENTTYPE;
So, the optimizer uses query rewrite for
select * from TST_MV
and does not use query rewrite for
select * from TST_MV2
within one session.
select * from TST_MV should be read as the underlying select for TST_MV:
SELECT "T57410"."MEMBER_KEY" "MEMBER_KEY",
"T57410"."ANCESTOR_KEY" "ANCESTOR_KEY",
"T57410"."DISTANCE" "DISTANCE",
"T57410"."IS_LEAF" "IS_LEAF",
"T57460"."DEPARTMENTID" "DEPARTMENTID",
"T57460"."NAME" "NAME","T57460"."PARENT"
"PARENT","T57460"."SHORTNAME" "SHORTNAME",
"T57460"."SKIMOID" "SKIMOID"
FROM "BI_OIV_HIER" "T57410",
"BI_DEPARTMENTS" "T57460"
WHERE "T57410"."ANCESTOR_KEY"="T57460"."DEPARTMENTID";
So, select * from TST_MV2 should similarly be read as the underlying select for TST_MV2.
DBMS_STATS.GATHER_TABLE_STATS has been run for each table and MV.
Please help to investigate the issue.
Why isn't TST_MV2 used for query rewrite?
Kind regards.
Hi Carlos,
It looks like you have more than one question in your posting. Would I be right in saying that you have an issue with how long Discoverer takes compared with SQL, and a second issue with regard to MVs not being used? I will add some comments on both. If one of these is not an issue, please say so.
Issue 1:
Have you compared the explain plan from Discoverer with SQL? You may need to use a tool like TOAD to see it.
Also, is Discoverer doing anything complicated with the data after it comes back? By complicated I mean do you have a large number of Page Items and / or Group Sorted items? SQL wouldn't have this overhead you see.
Because SQL would create a table, have you tried creating a table in Discoverer and seeing how long it takes?
Finally, what version of the database are you using?
Issue 2:
Your initial statement was that query rewrite works with several MVs but not with others, yet in the body of the report you only show explain plans that do use the MV. Could you therefore go into some more detail regarding this situation?
Best wishes
Michael -
Hi there,
I have the query below, which has a count field that I want to be a count of unique records.
SELECT [SGMT_REV_BY_SITE].GLSeq, Count([SGMT_REV_BY_SITE].SITE_ADDR_ID) AS NUM_SITES, [SGMT_REV_BY_SITE].SGMNT1, [SGMT_REV_BY_SITE].SGMNT2, [SGMT_REV_BY_SITE].LOB$, [SGMT_REV_BY_SITE].PERIOD, [SGMT_REV_BY_SITE].RECUR, [SGMT_REV_BY_SITE].PROMOTED, Sum([SGMT_REV_BY_SITE].REV_GRS)
AS REV_GRS_Sum, Sum([SGMT_REV_BY_SITE].REV_NET) AS REV_NET_Sum, IIf(NUM_SITES >0, REV_GRS_Sum/NUM_SITES, REV_GRS_Sum) AS ARPSg_Sum, IIf(NUM_SITES >0,
REV_NET_Sum/NUM_SITES, REV_NET_Sum) AS ARPSn_Sum
FROM SGMT_REV_BY_SITE
WHERE ([SGMT_REV_BY_SITE].SGMNT1="Small"
Or [SGMT_REV_BY_SITE].SGMNT1="Medium")
And [SGMT_REV_BY_SITE].PERIOD>#12/31/2013#
And [SGMT_REV_BY_SITE].RECUR=Yes
And [SGMT_REV_BY_SITE].PROMOTED=Yes
And [SGMT_REV_BY_SITE].REV_NET<>0
GROUP BY [SGMT_REV_BY_SITE].GLSeq, [SGMT_REV_BY_SITE].LOB$, [SGMT_REV_BY_SITE].SGMNT1, [SGMT_REV_BY_SITE].SGMNT2, [SGMT_REV_BY_SITE].PERIOD, [SGMT_REV_BY_SITE].RECUR, [SGMT_REV_BY_SITE].PROMOTED;
Is there an easy way of accomplishing the task.
Thanks in advance for help.
ARSagit,
Unfortunately, Access doesn't support a count(distinct fieldname) statement...
Workaround is:
SELECT [NS].glseq,
Count([NS].site_addr_id) AS NUM_SITES,
[NS].sgmnt1,
[NS].sgmnt2,
[NS].lob$,
[NS].period,
[NS].recur,
[NS].promoted
FROM (
SELECT DISTINCT
[sgmt_rev_by_site].glseq,
[sgmt_rev_by_site].site_addr_id,
[sgmt_rev_by_site].sgmnt1,
[sgmt_rev_by_site].sgmnt2,
[sgmt_rev_by_site].lob$,
[sgmt_rev_by_site].period,
[sgmt_rev_by_site].recur,
[sgmt_rev_by_site].promoted
FROM
sgmt_rev_by_site
WHERE [sgmt_rev_by_site].sgmnt1 in ("small","medium")
AND [sgmt_rev_by_site].period >#12/31/2013#
AND [sgmt_rev_by_site].recur = yes
AND [sgmt_rev_by_site].promoted = yes
AND [sgmt_rev_by_site].rev_net <>0) AS NS
GROUP BY [NS].glseq,
[NS].lob$,
[NS].sgmnt1,
[NS].sgmnt2,
[NS].period,
[NS].recur,
[NS].promoted;
With this query you can join with the main query and show num_sites.
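The pattern behind this workaround - Access lacks COUNT(DISTINCT), so you count the rows of a SELECT DISTINCT subquery instead - in miniature (Python, illustrative data standing in for GLSeq / SITE_ADDR_ID):

```python
# (group, site) pairs with a duplicate, like the rows of SGMT_REV_BY_SITE.
rows = [("g1", "s1"), ("g1", "s1"), ("g1", "s2"), ("g2", "s1")]

distinct_rows = set(rows)        # the inner SELECT DISTINCT
counts = {}
for g, _ in distinct_rows:       # the outer GROUP BY + Count(...)
    counts[g] = counts.get(g, 0) + 1

print(counts == {"g1": 2, "g2": 1})  # True
```

A plain Count over the raw rows would give 3 for g1; deduplicating first yields the distinct-site count of 2.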
Michał -
Very high parse times for query rewrite using cube materialized views
We recently upgraded to version 11.2.0.2 (both AWM and Oracle database server). We are using cube materialized views with query rewrite enabled. Some observations of changes that took place when we rebuilt all the dimensions and cubes in this version:
1. Queries against the base tables take about 35 seconds to parse. Then they execute in a tenth of a second. Even simple queries that just get a sum of the amount from the fact table (which is joined to all the dimensions) takes that long to parse. Once parsed, the queries fly.
2. I noticed that the materialized views used to use grouping sets in the group by clause in version 11.2.0.1, but now they use group by rollup, rollup, rollup...
If we disable query rewrite on the MV or for my session, parse times drop to less than a second. Ideas?
There does appear to be a slowdown in parse times between 11.1.0.7 and 11.2. We are still investigating this, but in the meantime here is a way to force the code in 11.2 to generate a GROUPING SETS clause instead of the new ROLLUP syntax.
The trick is to create a dummy hierarchy containing only the leaf level. This is necessary for all dimensions that currently have a single hierarchy. As a simple example I created a dimension, PROD, with three levels, A, B, and C, in a single hierarchy. I then created a one dimensional cube, PC. Here is the SELECT statement for the MV in 11.2. Note the ROLLUP clause in the GROUP BY.
SELECT
GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID") SYS_GID,
(CASE GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID")
WHEN 3
THEN TO_CHAR(('A_' || T3."CLASS_ID") )
WHEN 1
THEN TO_CHAR(('B_' || T3."FAMILY_ID") )
ELSE TO_CHAR(('C_' || T3."ITEM_ID") ) END) "PROD",
T3."CLASS_ID" "D1_PROD_A_ID",
T3."FAMILY_ID" "D1_PROD_B_ID",
T3."ITEM_ID" "D1_PROD_C_ID",
SUM(T2."UNIT_PRICE") "PRICE",
COUNT(T2."UNIT_PRICE") "COUNT_PRICE",
COUNT(*) "SYS_COUNT"
FROM
GLOBAL."PRICE_AND_COST_FACT" T2,
GLOBAL."PRODUCT_DIM" T3
WHERE
(T3."ITEM_ID" = T2."ITEM_ID")
GROUP BY
(T3."CLASS_ID") ,
ROLLUP ((T3."FAMILY_ID") , (T3."ITEM_ID") )
Next I modified the dimension to add a new hierarchy, DUMMY, containing just the leaf level, C. Once I had mapped the new level and re-enabled MVs, I got the following formulation.
SELECT
GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID") SYS_GID,
(CASE GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID")
WHEN 3
THEN ('A_' || T3."CLASS_ID")
WHEN 1
THEN ('B_' || T3."FAMILY_ID")
WHEN 0
THEN ('C_' || T3."ITEM_ID")
ELSE NULL END) "PROD",
T3."CLASS_ID" "D1_PROD_A_ID",
T3."FAMILY_ID" "D1_PROD_B_ID",
T3."ITEM_ID" "D1_PROD_C_ID",
SUM(T2."UNIT_PRICE") "PRICE",
COUNT(T2."UNIT_PRICE") "COUNT_PRICE",
COUNT(*) "SYS_COUNT"
FROM
GLOBAL."PRICE_AND_COST_FACT" T2,
GLOBAL."PRODUCT_DIM" T3
WHERE
(T3."ITEM_ID" = T2."ITEM_ID")
GROUP BY
GROUPING SETS ((T3."CLASS_ID") , (T3."FAMILY_ID", T3."CLASS_ID") , (T3."ITEM_ID", T3."FAMILY_ID", T3."CLASS_ID") )
This puts things back the way they were in 11.1.0.7, when the GROUPING SETS clause was used in all cases. Note that the two queries are logically equivalent.
Performance problem with more than one COUNT(DISTINCT ...) in a query
Hi,
(I hope this is the good forum).
In the following query, I have two COUNT(DISTINCT) aggregates on two different fields of the same table. Execution time is okay (2 s) with one or the other COUNT(DISTINCT ...) in the SELECT clause, but is not tolerable (12 s) with both together in the query! I have a similar case with 3 counts: 4 s each, 36 s together!
I've looked at the execution plan, and it seems that with two count distincts, SQL Server sorts the table twice before joining the results.
I do not have much experience with SQL server optimization, and I don't know what to improve and how. The SQL is generated by Business Objects, I have few possibilities to tune it. The most direct way would be to execute 2 different queries, but I'd like
to avoid it.
Any advice?
SELECT
DIM_MOIS.DATE_DEBUT_MOIS,
DIM_MOIS.NUM_ANNEE_MOIS,
DIM_DEMANDE_SCD.CAT_DEMANDE,
DIM_APPLICATION.LIB_APPLICATION,
DIM_DEMANDE_SCD.CAT_DEMANDE ,
count(distinct FAITS_DEMANDE.NB_DEMANDE_FLUX),
count(distinct FAITS_DEMANDE.NB_DEMANDE_RESOL_NIV1)
FROM
ALIM_SID.DIM_MOIS INNER JOIN ALIM_SID.DIM_JOUR ON (DIM_JOUR.SEQ_MOIS=DIM_MOIS.SEQ_MOIS)
INNER JOIN ALIM_SID.FAITS_DEMANDE ON (FAITS_DEMANDE.SEQ_JOUR=DIM_JOUR.SEQ_JOUR)
INNER JOIN ALIM_SID.DIM_APPLICATION ON (FAITS_DEMANDE.SEQ_APPLICATION=DIM_APPLICATION.SEQ_APPLICATION)
INNER JOIN ALIM_SID.DIM_DEMANDE_SCD ON (FAITS_DEMANDE.SEQ_DEMANDE_SCD=DIM_DEMANDE_SCD.SEQ_DEMANDE_SCD)
WHERE
( DIM_MOIS.NUM_ANNEE_MOIS > 201301 )
GROUP BY
DIM_MOIS.DATE_DEBUT_MOIS,
DIM_MOIS.NUM_ANNEE_MOIS,
DIM_DEMANDE_SCD.CAT_DEMANDE,
DIM_APPLICATION.LIB_APPLICATION
Here is the script, nothing original. Hope this helps.
-- Fact table :
-- foreign keys begin with FK_,
-- measures to be counted (COUNT DISTINCT) begin with NB_
CREATE TABLE [ALIM_SID].[FAITS_DEMANDE](
[SEQ_JOUR] [int] NOT NULL,
[SEQ_DEMANDE] [int] NOT NULL,
[SEQ_DEMANDE_SCD] [int] NOT NULL,
[SEQ_APPLICATION] [int] NOT NULL,
[SEQ_INTERVENANT] [int] NOT NULL,
[SEQ_SERVICE_RESPONSABLE] [int] NOT NULL,
[NB_DEMANDE_FLUX] [int] NULL,
[NB_DEMANDE_STOCK] [int] NULL,
[NB_DEMANDE_RESOLUE] [int] NULL,
[NB_DEMANDE_LIVREE] [int] NULL,
[NB_DEMANDE_MEP] [int] NULL,
[NB_DEMANDE_RESOL_NIV1] [int] NULL,
CONSTRAINT [PK_FAITS_DEMANDE] PRIMARY KEY CLUSTERED
(
[SEQ_JOUR] ASC,
[SEQ_DEMANDE] ASC,
[SEQ_DEMANDE_SCD] ASC,
[SEQ_APPLICATION] ASC,
[SEQ_INTERVENANT] ASC,
[SEQ_SERVICE_RESPONSABLE] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
CONSTRAINT [AK_AK_FAITS_DEMANDE_FAITS_DE] UNIQUE NONCLUSTERED
(
[SEQ_JOUR] ASC,
[SEQ_DEMANDE] ASC,
[SEQ_DEMANDE_SCD] ASC,
[SEQ_APPLICATION] ASC,
[SEQ_INTERVENANT] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] WITH CHECK ADD CONSTRAINT [FK_FAITS_DEMANDE_DIM_APPLICATION] FOREIGN KEY([SEQ_APPLICATION])
REFERENCES [ALIM_SID].[DIM_APPLICATION] ([SEQ_APPLICATION])
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_APPLICATION]
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] WITH CHECK ADD CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE] FOREIGN KEY([SEQ_DEMANDE])
REFERENCES [ALIM_SID].[DIM_DEMANDE] ([SEQ_DEMANDE])
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE]
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] WITH CHECK ADD CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE_SCD] FOREIGN KEY([SEQ_DEMANDE_SCD])
REFERENCES [ALIM_SID].[DIM_DEMANDE_SCD] ([SEQ_DEMANDE_SCD])
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE_SCD]
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] WITH CHECK ADD CONSTRAINT [FK_FAITS_DEMANDE_DIM_INTERVENANT] FOREIGN KEY([SEQ_INTERVENANT])
REFERENCES [ALIM_SID].[DIM_INTERVENANT] ([SEQ_INTERVENANT])
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_INTERVENANT]
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] WITH CHECK ADD CONSTRAINT [FK_FAITS_DEMANDE_DIM_JOUR] FOREIGN KEY([SEQ_JOUR])
REFERENCES [ALIM_SID].[DIM_JOUR] ([SEQ_JOUR])
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_JOUR]
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] WITH CHECK ADD CONSTRAINT [FK_FAITS_DEMANDE_DIM_SERVICE_RESPONSABLE] FOREIGN KEY([SEQ_SERVICE_RESPONSABLE])
REFERENCES [ALIM_SID].[DIM_SERVICE] ([SEQ_SERVICE])
GO
ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_SERVICE_RESPONSABLE]
GO
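As an aside, distinct counts at several grouping levels over a fact table like this can be computed in a single pass with GROUPING SETS. The query below is only a sketch against the schema above (the alias NB_DEMANDE_DISTINCTES is mine, not part of the original model):

```sql
-- Sketch only: distinct demand counts per day and application,
-- per day, and overall, in one pass over the fact table above.
SELECT
    SEQ_JOUR,
    SEQ_APPLICATION,
    COUNT(DISTINCT SEQ_DEMANDE) AS NB_DEMANDE_DISTINCTES
FROM ALIM_SID.FAITS_DEMANDE
GROUP BY GROUPING SETS (
    (SEQ_JOUR, SEQ_APPLICATION),
    (SEQ_JOUR),
    ()
);
```

Rows where SEQ_APPLICATION (or both keys) is NULL carry the subtotal and grand-total distinct counts; note that these cannot be derived by summing the detail rows, which is exactly why COUNT(DISTINCT) totals are awkward to precompute.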
-- not shown : extended properties
-- One of the dimension tables (they all have a primary key named SEQ_<table name>)
CREATE TABLE [ALIM_SID].[DIM_JOUR](
[SEQ_JOUR] [int] IDENTITY(1,1) NOT NULL,
[SEQ_ANNEE] [int] NOT NULL,
[SEQ_MOIS] [int] NOT NULL,
[DATE_JOUR] [date] NULL,
[CODE_ANNEE] [varchar](25) NULL,
[CODE_MOIS] [varchar](25) NULL,
[CODE_SEMAINE_ISO] [varchar](25) NULL,
[CODE_JOUR_ANNEE] [varchar](25) NULL,
[CODE_ANNEE_JOUR] [varchar](25) NULL,
[LIB_JOUR] [varchar](25) NULL,
[LIB_JOUR_COURT] [varchar](25) NULL,
[JOUR_OUVRE] [tinyint] NULL,
[JOUR_CHOME] [tinyint] NULL,
CONSTRAINT [PK_DIM_JOUR] PRIMARY KEY CLUSTERED
(
[SEQ_JOUR] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [ALIM_SID].[DIM_JOUR] WITH CHECK ADD CONSTRAINT [FK_DIM_JOUR_DIM_ANNEE] FOREIGN KEY([SEQ_ANNEE])
REFERENCES [ALIM_SID].[DIM_ANNEE] ([SEQ_ANNEE])
GO
ALTER TABLE [ALIM_SID].[DIM_JOUR] CHECK CONSTRAINT [FK_DIM_JOUR_DIM_ANNEE]
GO
ALTER TABLE [ALIM_SID].[DIM_JOUR] WITH CHECK ADD CONSTRAINT [FK_DIM_JOUR_DIM_MOIS] FOREIGN KEY([SEQ_MOIS])
REFERENCES [ALIM_SID].[DIM_MOIS] ([SEQ_MOIS])
GO
ALTER TABLE [ALIM_SID].[DIM_JOUR] CHECK CONSTRAINT [FK_DIM_JOUR_DIM_MOIS]
GO
-
Are cube-organized materialized views with Year-to-Date calculated measures eligible for Query Rewrite?
Hi,
I would appreciate it if someone could help me with a question regarding cube-organized MVs (OLAP).
Does a cube-organized materialized view with calculated measures based on time series (year to date, inception to date), e.g.
SUM(FCT_POSITION.BASE_REALIZED_PNL) OVER (HIERARCHY DIM_CALENDAR.CALENDAR BETWEEN UNBOUNDED PRECEDING AND CURRENT MEMBER WITHIN ANCESTOR AT DIMENSION LEVEL DIM_CALENDAR."YEAR")
qualify for query rewrite, or are such measures considered too advanced for query rewrite purposes?
I was hoping to find an example with a YTD window function on physical fact/dimension tables, with the optimizer rewriting it to the cube-organized MV, but without much success.
Thanks in advance.
I don't think this is possible.
(My own reasoning)
Part of the reason query rewrite works for base measures only (not for OLAP calculated measures such as YTD) is that, while the data is staged in OLAP, its lineage is still understandable via the OLAP cube mappings. That dependency/source identification is lost once we build calculated measures in OLAP, and I think it is almost impossible for the optimizer to understand the finer points of a calculation defined in OLAP terms (OLAP DML or an OLAP expression) and also match it against the equivalent calculation written as a relational SQL expression. The difficulty is that both the OLAP YTD and the relational YTD defined via SUM() OVER (PARTITION BY ... ORDER BY ...) admit many non-standard variations of the same calculation. For example, you can choose whether or not to use the IGNORE NULLS option within the SQL analytic function, while the OLAP definition may use NASKIP or NASKIP2.
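To illustrate the relational side only (not something the optimizer will rewrite to a cube), a relational YTD running total typically looks like the sketch below. FCT_POSITION and BASE_REALIZED_PNL come from the question above; CALENDAR_KEY, CALENDAR_DATE and CALENDAR_YEAR on DIM_CALENDAR are assumed column names for illustration:

```sql
-- Relational YTD: running sum of realized PnL within each calendar year.
-- DIM_CALENDAR column names are assumed for this sketch.
SELECT
    c.CALENDAR_YEAR,
    c.CALENDAR_DATE,
    SUM(f.BASE_REALIZED_PNL) OVER (
        PARTITION BY c.CALENDAR_YEAR
        ORDER BY c.CALENDAR_DATE
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS YTD_REALIZED_PNL
FROM FCT_POSITION f
JOIN DIM_CALENDAR c
  ON c.CALENDAR_KEY = f.CALENDAR_KEY;
```

Even small choices here (RANGE vs ROWS framing, tie handling on the ORDER BY key) change the result, which is part of why matching such an expression to an OLAP-side definition is so hard.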
I tried to search for query rewrite solutions for inventory/stock calculations (aggregation along time = last value along time) and to see whether an OLAP cube with its aggregation option set to "Last non-NA hierarchical value" works as an alternative to the relational calculation. My experience has been that it is not possible. You can do it relationally or you can do it via OLAP, but your application needs to be aware of each and issue the appropriate back-end SQL/call. In such cases, you cannot make OLAP (AWs/cubes/dimensions) magically appear behind the scenes to fulfill the query execution while appearing to work relationally.
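For the inventory/stock case, the relational formulation usually carries the last observed value forward with LAST_VALUE ... IGNORE NULLS. The sketch below uses assumed names (FCT_STOCK, PRODUCT_KEY, DAY_KEY, STOCK_QTY) purely for illustration:

```sql
-- Relational "last value along time" for a stock measure.
-- FCT_STOCK and its columns are assumed names for this sketch.
SELECT
    PRODUCT_KEY,
    DAY_KEY,
    LAST_VALUE(STOCK_QTY IGNORE NULLS) OVER (
        PARTITION BY PRODUCT_KEY
        ORDER BY DAY_KEY
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS STOCK_QTY_CARRIED_FORWARD
FROM FCT_STOCK;
```

This is the relational counterpart of the cube's "Last non-NA hierarchical value" aggregation, but as noted above, the optimizer will not map one onto the other for you.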
HTH
Shankar