Different aggregation exceptions in a query
Dear all,
I would like to know if there is a way, within the same query, to apply different aggregations to the same key figure (in the cube) depending on the value of a characteristic.
My problem concerns materials: for some of them I want to sum the quantities, for some I want the last value, and for others an average. But all need to be displayed in the same report.
Thanks,
Matthieu
Hi,
If I understood your requirement correctly, it is like this:
you have materials valued from, say, M1, M2, ..., M30, with the corresponding quantity stored in the cube.
Now:
for M1 to M10, the total of the quantities is 200 --- represents total
for M11 to M20, the quantity of M20 is 50 --- represents last
for M21 to M30, the average of the quantities is 45 --- represents average
In the output you want to display the values as mentioned above, right?
This can be achieved by creating a structure: you can write a formula in each element based on the three requirements above, and the data will be displayed in one column.
The only catch is that you will have to group the materials statically (no dynamism will be possible).
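Outside BEx, the grouping logic described above can be sketched in plain Python (the material-to-rule mapping and quantities are invented for illustration; they merely reproduce the 200 / 50 / 45 figures above):

```python
# Hypothetical mapping of materials to aggregation rules, mirroring the
# three structure elements above: sum (M1-M10), last (M11-M20), avg (M21-M30).
rows = [
    ("M1", 120), ("M2", 80),    # summed     -> 200
    ("M11", 30), ("M20", 50),   # last value -> 50
    ("M21", 40), ("M30", 50),   # averaged   -> 45
]

def rule_for(material):
    n = int(material[1:])
    return "sum" if n <= 10 else ("last" if n <= 20 else "avg")

def aggregate(rows):
    buckets = {"sum": [], "last": [], "avg": []}
    for material, qty in rows:
        buckets[rule_for(material)].append(qty)
    return {
        "sum": sum(buckets["sum"]),
        "last": buckets["last"][-1] if buckets["last"] else None,
        "avg": sum(buckets["avg"]) / len(buckets["avg"]) if buckets["avg"] else None,
    }
```

In a BEx structure, each of the three selections/formulas plays the role of one bucket; the static grouping of materials corresponds to the hard-coded `rule_for` mapping.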
Regards,
Akshay
Similar Messages
-
Different aggregation operators for the measures in a compressed cube
I am using OWB 10gR2 to create a cube and its dimensions (deployed into a 10gR2 database). Since the cube has 11 dimensions I set all dimensions to sparse and the cube to compressed. The cube has 4 measures: two of them have SUM as the aggregation operator for the TIME dimension, the other two should have AVERAGE (or FIRST). I have SUM for all other dimensions.
After loading data into the cube for the first time I realized that the aggregation for the TIME dimension was not always (although sometimes) correct. It was really strange because either the aggregated values were correct (for SUM and for AVERAGE) or seemed to be "near" the correct result (like an average of 145.279 and 145.281 coming out as 145.282 instead of 145.280, or 122+44+16 coming out as 180 instead of 182). For all other dimensions the aggregation was OK.
Now I have the following questions:
1. Is it possible to have different aggregations for different measures in the same COMPRESSED cube?
2. Is it possible to have the AVERAGE or FIRST aggregation operator for measures in a COMPRESSED cube?
For a 10gR1 database the answer would be NO, but for a 10gR2 database I do not know. I could not find the answer, neither in the Oracle documentation nor anywhere else. What I found in an Oracle presentation is that in 10gR2 the compressed cube enhancements support all aggregation methods except weighted methods (first, last, minimum, maximum and so on). It is from September 2005, so maybe something has changed since then.
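For what it's worth, "near" aggregates like the ones reported are consistent with plain double-precision arithmetic; a minimal Python illustration of how binary floating point perturbs sums (not a reproduction of the OLAP engine's internals):

```python
# 0.1 and 0.2 have no exact binary representation, so their sum is already
# off in the last bits; averages and rollups inherit the same kind of error.
total = 0.1 + 0.2
assert total != 0.3                 # raw doubles miss the exact value
assert abs(total - 0.3) < 1e-15     # ... but only by a tiny amount
assert round(total, 10) == 0.3      # rounding at display time hides it
```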
Regarding your question about the results I think it is caused by the fact that calculation are made on doubles and then there is a compression, so maybe precsion is lost a little bit :(. I really am curious whether it is because of numeric (precision loss) issues. -
Different aggregation for different Dimensions
Hello,
Is it possible to have different aggregations on different dimensions?
I have following situation:
I have a measure per client and day.
I'm interested in the maximum per month from the daily sums over clients.
In the measure properties I can only choose between Maximum and Sum in general but not per Dimensions.
To clarify what I mean, here is some sample data.
* * Client A * Client B *
* 2014-11-28 * 7 * 8 * SUM() = 15
* 2014-11-29 * 6 * 8 * SUM() = 14
* 2014-11-30 * 6 * 10 * SUM() = 16 <-- monthly max
* 2014-12-01 * 7 * 8 * SUM() = 15
* 2014-12-02 * 5 * 12 * SUM() = 17 <-- monthly max
* 2014-12-03 * 6 * 9 * SUM() = 15
This data is stored in my fact table with reference to date and client dimensions.
This example data would have to be reported as:
Report on measure
* * Measure *
* 2014-11 * 16 *
* 2014-12 * 17 *
* Report on measure per client
(max per client and month)
* * Client A * Client B *
* 2014-11 * 7 * 10 *
* 2014-12 * 7 * 12 *
Can this be achieved with SSAS? Didn't find any property for that on the measure.
Best Regards,
Thomas
Hi Thomas,
According to your description, you want to calculate different aggregation for different dimensions, right?
Based on your scenario, I tested it on the AdventureWorks cube; the query below is for your reference.
with member [Customer].[Country].[USA & Canada] as
Aggregate( { [Customer].[Country].&[United States],
[Customer].[Country].&[Canada] } )
member [Measures].[MaxAmount] as
max([Date].[Calendar].currentmember.children, [Measures].[Internet Sales Amount])
select {[Customer].[Country].&[United States],[Customer].[Country].&[Canada],[Customer].[Country].[USA & Canada]} on 0,
[Date].[Calendar].[Month].members on 1
from [Adventure Works]
where [Measures].[MaxAmount]
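As a sanity check on the intended two-step aggregation (SUM over clients per day, then MAX over days per month), here is the same logic applied to Thomas's sample data in plain Python (values copied from the post; this is only a cross-check, not SSAS code):

```python
from collections import defaultdict

# (date, client, value) rows from the sample data above.
facts = [
    ("2014-11-28", "A", 7), ("2014-11-28", "B", 8),
    ("2014-11-29", "A", 6), ("2014-11-29", "B", 8),
    ("2014-11-30", "A", 6), ("2014-11-30", "B", 10),
    ("2014-12-01", "A", 7), ("2014-12-01", "B", 8),
    ("2014-12-02", "A", 5), ("2014-12-02", "B", 12),
    ("2014-12-03", "A", 6), ("2014-12-03", "B", 9),
]

# Step 1: SUM over clients per day.
daily = defaultdict(int)
for date, _client, value in facts:
    daily[date] += value

# Step 2: MAX over days per month (month = first 7 chars of the date).
monthly_max = {}
for date, total in daily.items():
    month = date[:7]
    monthly_max[month] = max(monthly_max.get(month, 0), total)
```

This yields 16 for 2014-11 and 17 for 2014-12 (the December daily sums are 15, 17, 15).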
Here is similar thread with yours, please see:
https://social.technet.microsoft.com/Forums/en-US/1bd493ef-f957-4fd5-916b-ee60639106c3/calculated-member-different-aggregations-on-different-dimensions?forum=sqlanalysisservices
Regards,
Charlie Liao
TechNet Community Support -
Can't SSAS engines make use of more than one aggregation to answer a query?!
I have a very simple cube (just for testing and training). This cube contains two dimensions: [Dim Soccer Player] contains one attribute hierarchy [player name]; the other dimension,
[Dim Match Acts], also contains one attribute [Acts], which has values like fouls, goals, saves, tackles, etc. And of course a fact that contains one measure, a simple count ... that simple ... so this cube can
answer questions like how many goals were scored by "Messi", for example ... a very simple, trivial cube.
I'm testing aggregations and their effect, so first I designed one aggregation (Aggregation 0) at the granularity level of [Player name]; then
I ran a query to get the count of ALL the [Acts] done by each [Player name]. I checked the SQL Profiler and found that the aggregation was used.
Then I cleared the cache and ran another query, this time to get just the number of fouls committed by each [Player name]. I checked the Profiler, but Aggregation 0 was NOT used.
I went back to the aggregation design tab in BIDS and added another new aggregation (Aggregation 1) at the level of [Acts], so now I have two aggregations: one at the granularity level of
[Player name] and the second at the level of [Acts]. I cleared the cache again and reran the last query. NEITHER of the aggregations was used!
In the third test I deleted Aggregation 1 and added [Acts] to Aggregation 0, so Aggregation 0 is now on both [Player name] AND [Acts]. I cleared the cache and reran the last query. Aggregation
0 appeared again.
I just want to make sure (and if possible know why) the SSAS engine can't make use of and combine more than one aggregation to serve a query (point number 2), and that to design an aggregation
that will serve a query which contains attributes from different dimensions, I have to add ALL the attributes in that query to that one aggregation, like point 3 ... is this true?!
I think you are on the right track. You need to include all the attributes used in one query in the same aggregation (like #3) for it to be used. Example #2 works as I would expect: queries above the grain of the agg (a query by player name against an agg by player/act) can use the agg, while queries below the grain of the agg (example #2) cannot.
http://artisconsulting.com/Blogs/GregGalloway -
Different selection in a single query according to an ID
Hi
I'm looking for a way to perform different selections in a single query according to a specific value:
Here is the first selection:
select g.*,gf.*,gs.*
FROM graphs g
LEFT JOIN graph_frames gf on g.graph_id = gf.graph_id
LEFT JOIN graph_sets gs on gf.frame_id = gs.frame_id
WHERE g.graph_id = :ID
Here is the second selection:
SELECT gg.graph_id, gg.graph_name
FROM generic_graphs gg
INNER JOIN generic_graph_frames ggf on gg.graph_id = ggf.graph_id
INNER JOIN generic_graph_sets ggs on ggf.frame_id = ggs.frame_id
WHERE gg.graph_id = :ID
Now, the ID cannot be in both tables and I want to do this in a single query; UNION cannot be applied since the tables are different.
Any ideas?
Example of consolidating the columns...
SQL> ed
Wrote file afiedt.buf
1 with t as (select &id as id from dual)
2 select e.empno, e.ename, e.job, e.mgr, d.deptno, d.dname, d.loc
3 from (select * from emp cross join t where empno = t.id) e
4 full outer join
5 (select * from dept cross join t where deptno = t.id) d
6* on (1=1)
SQL> /
Enter value for id: 7521
old 1: with t as (select &id as id from dual)
new 1: with t as (select 7521 as id from dual)
EMPNO ENAME JOB MGR DEPTNO DNAME LOC
7521 WARD SALESMAN 7698
SQL> /
Enter value for id: 10
old 1: with t as (select &id as id from dual)
new 1: with t as (select 10 as id from dual)
EMPNO ENAME JOB MGR DEPTNO DNAME LOC
10 ACCOUNTING NEW YORK
SQL>
Though, this would be considered poor design because you are trying to query two disparate things, so they should be treated differently, i.e. in my example, I should already know whether I'm querying an employee or a department beforehand. -
Is there any documentation which throws light on how data aggregation happens in data warehouse grooming? Which algorithm exactly does it follow for the different aggregation types (raw, hourly, daily)?
How exactly does it pick a specific data value during hourly and daily aggregations? As in, how is the value chosen? Does it average out, or simply pick the value at the start of the hour/day or the end of the hour/day?
I'll try one more time. :)
Views in the operations console are derived from data in the operational database. This is always raw data, and typically does not go back more than 7 days.
Reports get data from the data warehouse. Unless you create a custom report that uses raw data, you will never see raw data in a report - Microsoft and probably all 3rd party vendors do not develop reports that fetch raw data.
Reports use aggregated data - hourly and daily. The data is aggregated by min, max, and avg sample for that particular aggregation. If it's hourly data, then you will see the min, max, and avg for that entire hour. Same goes for daily - you will see the
min, max, and avg data sample for that entire day.
And to try clarifying even more, the values you see plotted on the report are avg samples. If you drill into the performance detail report, then you can see the min, max, and avg samples, as well as standard deviation (which is calculated based on these
three values).
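In other words, each hourly (or daily) row keeps the min, max, and avg of the raw samples in that window. A minimal sketch of that grooming step (sample values invented for illustration):

```python
from statistics import mean

# Raw performance samples collected within one hour: (minute, value).
samples = [(0, 10.0), (15, 14.0), (30, 6.0), (45, 10.0)]

def aggregate_window(samples):
    """Collapse the raw samples of one hour/day into a single aggregated row."""
    values = [v for _, v in samples]
    return {"min": min(values), "max": max(values),
            "avg": mean(values), "count": len(values)}

hourly_row = aggregate_window(samples)
```

The same function applied to a day's worth of samples produces the daily aggregation row; reports then typically plot the `avg` series and expose `min`/`max` on drill-down.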
Jonathan Almquist | SCOMskills, LLC (http://scomskills.com) -
Can shared members in a Custom dimension have different aggregation weights?
Hi guys,
I have created a Flow dimension to track the cash flow movements. Under TotalFlows, I have the different movements (OpBalance, CloBalance, Variation, Gain, Loss), all with aggregation weight of 1. But I have to create an additional structure (sibling of TotalFlows), called TotalFlows2, with shared members (OpBalance, CloBalance...) but with aggregation weight of zero.
Can I use the shared members with a different aggregation weight? Or should I rename them (for ex., TF2_OpBalance)?
Please, advise.
Thanks!
Jai
Absolutely, use the shared members and set the aggregation weight to zero in your duplicate structure; this is the key benefit of custom dimensions.
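The effect of a per-parent aggregation weight can be sketched numerically (member names from the post, values invented): the same shared children roll up with weight 1 under TotalFlows and weight 0 under TotalFlows2.

```python
# Shared child members with invented values.
members = {"OpBalance": 100.0, "CloBalance": 130.0, "Variation": 30.0}

# Each parent carries its own aggregation weight per shared child:
# 1 means the child contributes to the rollup, 0 means it is excluded.
weights = {
    "TotalFlows":  {"OpBalance": 1, "CloBalance": 1, "Variation": 1},
    "TotalFlows2": {"OpBalance": 0, "CloBalance": 0, "Variation": 1},
}

def rollup(parent):
    return sum(members[m] * w for m, w in weights[parent].items())
```

So the child values are stored once, and only the weight differs per structure, which is exactly what the shared-member approach buys you.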
-
Different aggregations at different levels
When I view the data using 'Measure Data Viewer', the items in a dimension are showing in random order.
How do I load the data in a dimension in ascending order so I can view it in ascending order.
Also, is it possible to apply different aggregations at different levels in a dimension?
Thanks.
Thank you. I will put different measures with different dimensions in different cubes.
After I mapped my measures in the mapping canvas (I can see the mapping lines), I tried to maintain the measure. But I am getting an error: 'some-measure-name may not be maintained since mapping do not exist for the measure'.
I am using AWM 10.2.0.3A and the database is 10.2.0.4
Thanks -
BAPI to export features of Exceptions and Jump query from BW system to other
Is there any BAPI available to export the features of Exceptions and Jump queries from a BW system to another SAP or non-SAP system, so that one won't need to re-define the Exceptions and Jump queries? I mean directly exporting the features of Exceptions and Jump queries from the BW system to the other system.
Thanks,
Rohan
Thanks for the quick response.
I am working on Hyperion; they are using Essbase and an integration connector to connect to BW directly and take data from the BW system. They have features to define Exceptions (traffic lights) and Jump queries on the Hyperion side. But now they want to import the features of Exceptions (traffic lights) and Jump queries directly from the BW system (using a BAPI) instead of defining Exceptions and Jump queries on the Hyperion side.
Please help me on this.
Thanks,
Rohan -
Query rewrites with Nested materialized views with different aggregations
Platform used : Oracle 11g.
Here is a simple fact table (with measures m1, m2) and dimensions (a) Location, (b) Calendar and (c) Product. The business problem is that the aggregation operators for measures m1 and m2 differ along the Location dimension and the Calendar dimension. The intention is to preaggregate the measures for a product along the Calendar dimension and the Location dimension and store them as materialized views.
The direct option is to define a materialized view with inline queries (because of the different aggregation operators, it is not possible to write the query without an inline query). http://download-uk.oracle.com/docs/cd/B28359_01/server.111/b28313/qradv.htm#BABEAJBF documents the limitation that this works only for 'text match' and 'equivalent queries', and that is too limiting.
So decided to have nested materialized view, with first view having just joins(my_dim_mvw_joins), the second view having aggregations along Calendar dimension (my_dim_mvw_calendar) and third view having aggregations along the Location dimension(my_dim_mvw_location). Obviously I do not want the query I fire to know about materialized views and I fire it against the fact table. I see that for the fired query (Which needs aggregations along both Calendar and Location), is rewritten with just second materialized view but not the third. (Had set QUERY_REWRITE_INTEGRITY as TRUSTED) .
Wanted to know whether there are limitations on query rewrite with nested materialized views? Thanks
(Have given a simple testable example below. Pls ignore the values given in 'CALENDAR_IDs', 'PRODUCT_IDs' etc as they are the same for all the queries)
-- Calendar hierarchy table
CREATE TABLE CALENDAR_HIERARCHY_TREE
( "CALENDAR_ID" NUMBER(5,0) NOT NULL ENABLE,
"HIERARCHY1_ID" NUMBER(5,0),
"HIERARCHY2_ID" NUMBER(5,0),
"HIERARCHY3_ID" NUMBER(5,0),
"HIERARCHY4_ID" NUMBER(5,0),
CONSTRAINT "CALENDAR_HIERARCHY_TREE_PK" PRIMARY KEY ("CALENDAR_ID")
);
-- Location hierarchy table
CREATE TABLE LOCATION_HIERARCHY_TREE
( "LOCATION_ID" NUMBER(3,0) NOT NULL ENABLE,
"HIERARCHY1_ID" NUMBER(3,0),
"HIERARCHY2_ID" NUMBER(3,0),
"HIERARCHY3_ID" NUMBER(3,0),
"HIERARCHY4_ID" NUMBER(3,0),
CONSTRAINT "LOCATION_HIERARCHY_TREE_PK" PRIMARY KEY ("LOCATION_ID")
);
-- Product hierarchy table
CREATE TABLE PRODUCT_HIERARCHY_TREE
( "PRODUCT_ID" NUMBER(3,0) NOT NULL ENABLE,
"HIERARCHY1_ID" NUMBER(3,0),
"HIERARCHY2_ID" NUMBER(3,0),
"HIERARCHY3_ID" NUMBER(3,0),
"HIERARCHY4_ID" NUMBER(3,0),
"HIERARCHY5_ID" NUMBER(3,0),
"HIERARCHY6_ID" NUMBER(3,0),
CONSTRAINT "PRODUCT_HIERARCHY_TREE_PK" PRIMARY KEY ("PRODUCT_ID")
);
-- Fact table
CREATE TABLE RETAILER_SALES_TBL
( "PRODUCT_ID" NUMBER,
"PRODUCT_KEY" VARCHAR2(50 BYTE),
"PLAN_ID" NUMBER,
"PLAN_PERIOD_ID" NUMBER,
"PERIOD_ID" NUMBER(5,0),
"M1" NUMBER,
"M2" NUMBER,
"M3" NUMBER,
"M4" NUMBER,
"M5" NUMBER,
"M6" NUMBER,
"M7" NUMBER,
"M8" NUMBER,
"LOCATION_ID" NUMBER(3,0),
"M9" NUMBER,
CONSTRAINT "RETAILER_SALES_TBL_LOCATI_FK1" FOREIGN KEY ("LOCATION_ID")
REFERENCES LOCATION_HIERARCHY_TREE ("LOCATION_ID") ENABLE,
CONSTRAINT "RETAILER_SALES_TBL_PRODUC_FK1" FOREIGN KEY ("PRODUCT_ID")
REFERENCES PRODUCT_HIERARCHY_TREE ("PRODUCT_ID") ENABLE,
CONSTRAINT "RETAILER_SALES_TBL_CALEND_FK1" FOREIGN KEY ("PERIOD_ID")
REFERENCES CALENDAR_HIERARCHY_TREE ("CALENDAR_ID") ENABLE
);
-- Location dimension definition to promote query rewrite
create DIMENSION LOCATION_DIM
LEVEL CHAIN IS LOCATION_HIERARCHY_TREE.HIERARCHY1_ID
LEVEL CONSUMER_SEGMENT IS LOCATION_HIERARCHY_TREE.HIERARCHY3_ID
LEVEL STORE IS LOCATION_HIERARCHY_TREE.LOCATION_ID
LEVEL TRADING_AREA IS LOCATION_HIERARCHY_TREE.HIERARCHY2_ID
HIERARCHY PROD_ROLLUP (
STORE CHILD OF
CONSUMER_SEGMENT CHILD OF
TRADING_AREA CHILD OF
CHAIN
);
-- Calendar dimension definition
create DIMENSION CALENDAR_DIM
LEVEL MONTH IS CALENDAR_HIERARCHY_TREE.HIERARCHY3_ID
LEVEL QUARTER IS CALENDAR_HIERARCHY_TREE.HIERARCHY2_ID
LEVEL WEEK IS CALENDAR_HIERARCHY_TREE.CALENDAR_ID
LEVEL YEAR IS CALENDAR_HIERARCHY_TREE.HIERARCHY1_ID
HIERARCHY CALENDAR_ROLLUP (
WEEK CHILD OF
MONTH CHILD OF
QUARTER CHILD OF
YEAR
);
-- Materialized view with just joins needed for other views
CREATE MATERIALIZED VIEW my_dim_mvw_joins build immediate refresh complete enable query rewrite as
select product_id, lht.HIERARCHY1_ID, lht.HIERARCHY2_ID, lht.HIERARCHY3_ID, lht.location_id, cht.HIERARCHY1_ID year,
cht.HIERARCHY2_ID quarter, cht.HIERARCHY3_ID month, cht.calendar_id week, m1, m3, m7, m9
from retailer_sales_tbl RS, calendar_hierarchy_tree cht, location_hierarchy_tree lht
WHERE RS.period_id = cht.CALENDAR_ID
and RS.location_id = lht.location_id
and cht.CALENDAR_ID in (10,236,237,238,239,608,609,610,611,612,613,614,615,616,617,618,619,1426,1427,1428,1429,1430,1431,1432,1433,1434,1435,1436,1437,1438,1439,1440,1441,1442,1443,1444,1445,1446,1447,1448,1449,1450,1451,1452,1453,1454,1455,1456,1457,1458,1459,1460,1461,1462,1463,1464,1465,1466,1467,1468,1469,1470,1471,1472,1473,1474,1475,1476,1477)
AND product_id IN (5, 6, 7, 8, 11, 12, 13, 14, 17, 18, 19, 20)
AND lht.location_id IN (2, 3, 11, 12, 13, 14, 15, 4, 16, 17, 18, 19, 20);
-- Materialized view which aggregate along calendar dimension
CREATE MATERIALIZED VIEW my_dim_mvw_calendar build immediate refresh complete enable query rewrite as
select product_id, HIERARCHY1_ID , HIERARCHY2_ID , HIERARCHY3_ID ,location_id, year, quarter, month, week,
sum(m1) m1_total, sum(m3) m3_total, sum(m7) m7_total, sum(m9) m9_total,
GROUPING_ID(product_id, location_id, year, quarter, month, week) dim_mvw_gid
from my_dim_mvw_joins
GROUP BY product_id, HIERARCHY1_ID , HIERARCHY2_ID , HIERARCHY3_ID , location_id,
rollup (year, quarter, month, week);
-- Materialized view which aggregate along Location dimension
CREATE MATERIALIZED VIEW my_dim_mvw_location build immediate refresh complete enable query rewrite as
select product_id, year, quarter, month, week, HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id,
sum(m1_total) m1_total_1, sum(m3_total) m3_total_1, sum(m7_total) m7_total_1, sum(m9_total) m9_total_1,
GROUPING_ID(product_id, HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id, year, quarter, month, week) dim_mvw_gid
from my_dim_mvw_calendar
GROUP BY product_id, year, quarter, month, week,
rollup (HIERARCHY1_ID, HIERARCHY2_ID, HIERARCHY3_ID, location_id);
-- SQL Query Fired (for simplicity have used SUM as aggregation operator for both, but they will be different)
select product_id, year, HIERARCHY1_ID, HIERARCHY2_ID,
sum(m1_total) m1_total_1, sum(m3_total) m3_total_1, sum(m7_total) m7_total_1, sum(m9_total) m9_total_1
from (
select product_id, HIERARCHY1_ID , HIERARCHY2_ID , year,
sum(m1) m1_total, sum(m3) m3_total, sum(m7) m7_total, sum(m9) m9_total
from (
select product_id, lht.HIERARCHY1_ID , lht.HIERARCHY2_ID , lht.HIERARCHY3_ID ,lht.location_id, cht.HIERARCHY1_ID year, cht.HIERARCHY2_ID quarter, cht.HIERARCHY3_ID month, cht.calendar_id week,m1,m3,m7,m9
from
retailer_sales_tbl RS, calendar_hierarchy_tree cht, location_hierarchy_tree lht
WHERE RS.period_id = cht.CALENDAR_ID
and RS.location_id = lht.location_id
and cht.CALENDAR_ID in (10,236,237,238,239,608,609,610,611,612,613,614,615,616,617,618,619,1426,1427,1428,1429,1430,1431,1432,1433,1434,1435,1436,1437,1438,1439,1440,1441,1442,1443,1444,1445,1446,1447,1448,1449,1450,1451,1452,1453,1454,1455,1456,1457,1458,1459,1460,1461,1462,1463,1464,1465,1466,1467,1468,1469,1470,1471,1472,1473,1474,1475,1476,1477)
AND product_id IN (5, 6, 7, 8, 11, 12, 13, 14, 17, 18, 19, 20)
AND lht.location_id IN (2, 3, 11, 12, 13, 14, 15, 4, 16, 17, 18, 19, 20)
)
GROUP BY product_id, HIERARCHY1_ID , HIERARCHY2_ID , HIERARCHY3_ID , location_id, year
) sales_time
GROUP BY product_id, year, HIERARCHY1_ID, HIERARCHY2_ID;
This query rewrites only with my_dim_mvw_calendar (as seen in the query plan and EXPLAIN_MVIEW). But we would like it to use my_dim_mvw_location, as that has aggregations for both dimensions.
blackhole001 wrote:
Hi all,
I'm trying to make my programmers' lives easier by creating a database view for them to query the data, so they don't have to worry about joining tables.
This sounds like a pretty horrible idea. I say this because you will eventually end up with programmers that know nothing about your data model and how to properly interact with it.
Additionally, what you will get is a developer that takes one of your views and sees that of the 20 columns in it, he needs 4. If all those 4 columns come from a simple 2-table join, but the view has 8 tables, you're wasting a tonne of resources by using the view (and heaven forbid they have to join that view to another view to get 4 of the 20 columns from that other view as well).
Ideally you'd write stored routines that satisfy exactly what is required (if you are the database resource and these other programmers are java, .net, etc... based) and the front end developers would call those routines customized for an exact purpose.
Creating views is not bad, but it's by no means a proper solution to having developers not learn or understand SQL and/or the data model. -
Exception aggregation on non-cumulative KF - aggregation issue in the query
Hi Gurus,
Can anyone tell me a solution for the below scenario. I am using BW 3.5 front end.
I have a non-cumulative KF coming from my Stock cube and a Pricing KF coming from my
Pricing cube. (Both cubes are in a MultiProvider and my query is on top of it.)
I want to multiply both KFs to get a WSL Value CKF, but my query is not at the material level;
it is at the plant level.
So it is behaving like this, for example (remember my Qty is a non-cumulative KF):
QTY PRC
P1 M1 10 50
P1 M2 0 25
P1 M3 5 20
My WSL val should be 600 but it is giving me 15 * 95 which is way too high.
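The gap between 600 and 15 × 95 is the classic multiply-before vs multiply-after aggregation problem, easy to see with the rows above in plain Python:

```python
# (plant, material, qty, prc) - the rows from the example above.
rows = [("P1", "M1", 10, 50), ("P1", "M2", 0, 25), ("P1", "M3", 5, 20)]

# Multiply per material, THEN aggregate: the wanted WSL value.
wsl_before_aggregation = sum(qty * prc for _, _, qty, prc in rows)   # 600

# Aggregate to plant level FIRST, then multiply: what the query does.
total_qty = sum(qty for _, _, qty, _ in rows)   # 15
total_prc = sum(prc for _, _, _, prc in rows)   # 95
wsl_after_aggregation = total_qty * total_prc   # 1425
```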
I have tried all the options of storing QTY and PRC in two separate CKFs, setting the aggregation
to 'before aggregation', and then multiplying them, but it didn't work.
I also tried to use exception aggregation, but in the BW 3.5 front end we don't have the 'Total' option that exists in the BI 7.0 front end.
So any other ideas guys. Any responses would be appreciated.
Thanks
Jay.
I don't think you are able to solve this issue at the query level.
This type of calculation should be done before aggregation, and this feature doesn't exist in BI 7.0 any longer. No kind of exception aggregation will help here.
It should be done either through a virtual KF (see below) or using the stock snapshot approach.
The key figure QTY*PRC should be a virtual key figure. In this case you just need one cube (stock quantity) and pick up PRC at query run time.
QTY PRC
P1 M1 10 50
P1 M2 0 25
P1 M3 5 20 -
Can we see the value of a CKF in an exception cell in Query Designer - please reply
Hello all,
I am defining a query in Query Designer in which I am using exception cells; I have two structures in this query. At the intersection of these two structures, on one particular line, I am trying to define the properties of that cell. I selected 'New Selection' and then added the calculated key figure that I had created. So shouldn't that cell show me the value of the calculated key figure? (That is what I thought: that if I put my CKF in that cell, it would show me the value for that CKF.) I am seeing different results.
Is there any way you can actually show the value of calculated key figure in that particular cell.
Thanks in advance,
Raj
Hi Raj,
You can add calculated key figures, RKFs, and even single key figures. The cells are independent of each other, so that should not be an issue.
Whatever the key figure is in the cell it should show that value.
Just check again whether you have put any key figures into the selections of the structures.
There should be no key figures in the selections.
Hope it helps
Thanks -
Application Module retrieves different data than direct DB query
Hi,
I am using JDeveloper 1.1.1.6
I have a big headache trying to figure out why my application module is retrieving different data than if I execute the query of my ViewObject directly against the database. To prove that I am not crazy, I have created a non-updateable view object based on this SQL query:
select * from hr_lookups;
The application module is connecting to the database using JDBC URL Connection localhost:1529/DB and user name apps. When I run the application module I can see rows and everything as expected.
If I connect to the same database using the same username, when I execute select * from hr_lookups; I get no rows!!!!
I don't know what else to do and I really will appreciate your help in this one.
Kind Regards
Hi Timo,
Thank you for your reply. Yes, I can see the exact same query. In fact all the rows are the same except one column. One attribute which is coming from a stored procedure. By this I mean the view object has the following structure;
select att1, att2, ... , mypackage.myprocedure(param1,param2) from tables where clauses;
So att1, att2, etc. are the same whether I run the appModule or copy and paste the query into a SQL Worksheet and run it. The only difference is the result of mypackage.myprocedure(param1,param2). So I have created a new view object from the SQL query
select mypackage.myprocedure(param1,param2) from dual. From the application module it returns one value, and directly from the SQL Worksheet it returns a different value...
I'm sorry if I am not being clear, but I am really desperate. Might it be permissions or something like that?
Regards -
Different LOV behavior between SQL query data model and data template
I have noticed different behavior when using parameters linked to a list of values (LOV) of type menu with the multiple-selection option enabled, between a SQL Query data model and a data template. Here's the example, because that first sentence was probably really confusing.
SQL Query:
select
plmc.MonthCode, plmc.ModalityDim, plmc.ModalityName,plmc.RegionDim
from
DataOut.dbo.PatientLabMonthlyCross plmc
where
plmc.MonthCode = 200202
and plmc.RegionDim = 1209
and 1 =
case
when coalesce(:modalityDim,null) is null
then 1
else
case
when plmc.ModalityDim in (:modalityDim)
then 1
else 0
end
end
Putting BI Publisher into debug mode, defining a data model of type SQL Query, defining a parameter called :modalityDim linked to a LOV that allows multiple selections, and selecting a couple of values from the LOV the output of the prepared statement is:
[081607_122647956][][STATEMENT] Sql Query : select
plmc.MonthCode,
plmc.ModalityDim,
plmc.ModalityName,
plmc.RegionDim
from
DataOut.dbo.PatientLabMonthlyCross plmc
where
plmc.MonthCode = 200202
and plmc.RegionDim = 1209
and 1 =
case
when coalesce(?,?,null) is null
then 1
else
case
when plmc.ModalityDim in (?,?)
then 1
else 0
end
end
[081607_122647956][][STATEMENT] 1:6
[081607_122647956][][STATEMENT] 2:7
[081607_122647956][][STATEMENT] 3:6
[081607_122647956][][STATEMENT] 4:7
[081607_122654713][][EVENT] Data Generation Completed...
[081607_122654713][][EVENT] Total Data Generation Time 7.0 seconds
Note how the bind variable :modalityDim was changed into two parameters in the prepared statement.
When I use this same SQL Query in a data template the output is:
[081607_012113018][][STATEMENT] Sql Query : select
plmc.MonthCode,
plmc.ModalityDim,
plmc.ModalityName,
plmc.RegionDim
from
DataOut.dbo.PatientLabMonthlyCross plmc
where
plmc.MonthCode = 200202
and plmc.RegionDim = 1209
and 1 =
case
when coalesce(?,null) is null
then 1
else
case
when plmc.ModalityDim in (?)
then 1
else 0
end
end
[081607_012113018][][STATEMENT] 1:'6','7'
[081607_012113018][][STATEMENT] 2:'6','7'
[081607_012113574][][EXCEPTION] java.sql.SQLException: Syntax error converting the nvarchar value ''6','7'' to a column of data type int.
Note the exception because it is trying to convert the multiple parameter values.
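What the debug logs suggest is that the SQL Query data model rewrites a multi-select bind into one placeholder per value before preparing the statement, while the data template binds the whole list as a single string literal. A hypothetical sketch of that expansion step (the function and its behavior are my assumption for illustration, not BI Publisher internals):

```python
def expand_in_list(sql, bind_name, values):
    """Hypothetically expand one named bind inside IN (...) into '?' per value."""
    placeholders = ",".join("?" for _ in values)
    return sql.replace(bind_name, placeholders), list(values)

sql = "select * from t where c in (:modalityDim)"
expanded_sql, binds = expand_in_list(sql, ":modalityDim", [6, 7])
# expanded_sql: "select * from t where c in (?,?)", binds: [6, 7]
```

Without this expansion, the driver binds the string `'6','7'` as one value, which is exactly the conversion error shown in the data template log.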
Am I doing something completely wrong here? I really need to use a data template because I will need to link a couple of queries together from different database vendors.
-mark
This is for 10.1.3.4 - because in 11g every SQL query is automatically part of a data model.
In 10g SQL query is for simple unrelated SQL queries.
If you need to use advance features such as:
a) multiple SQL queries that are joined in master-detail relationships
b) before/after report triggers
Then you will need to use the data template, which is an XML description
of the queries, links, and PL/SQL calls.
hope that helps,
Klaus -
Different Aggregation rule while aggregating
Hi Folks
In OBIEE 10.1.3.4.1 and BI Apps 7.9.6, using Answers, I developed a report which has the Organization Division dimension and Active Headcount as the fact. When the report is run, the default (server-determined) aggregation rule gives wrong results, i.e. the grand total of Active Headcount comes to 9603, whereas when I explicitly set "Sum" as the aggregation rule I get the summation of the headcounts in the Active Headcount column, which results in 25000. I checked the physical queries and couldn't get a clear understanding of them. There is an aggregation rule on Active Headcount in the RPD which is like this:
:- LAST(Core."Fact - HR - Operation (Workforce)"."Active Headcount") with time dimension
:- SUM(Core."Fact - HR - Operation (Workforce)"."Active Headcount") with any other dimension
and there is a case statement in expression builder like this:
CASE WHEN "Oracle Data Warehouse"."Catalog"."dbo"."Dim_W_EMPLOYMENT_D"."W_EMPLOYMENT_STAT_CODE" = 'A' THEN "Oracle Data Warehouse"."Catalog"."dbo"."Fact_W_WRKFC_EVT_MONTH_F_Snapshot"."HEADCOUNT" ELSE 0 END
BTW, I first created the same report without the time dimension, and when I combined it with the time dimension I got the same results as before.
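The two RPD rules can be contrasted with a toy data set (numbers invented, chosen only so the LAST-along-time total comes out at 9603 like the report): LAST along the time dimension per org and then SUM across orgs, versus a plain SUM over every snapshot row.

```python
# (year, org, headcount) snapshot rows - invented illustration data.
rows = [(2008, "Org1", 4000), (2009, "Org1", 5000),
        (2008, "Org2", 4200), (2009, "Org2", 4603)]

# Server-determined rule: LAST along time per org, then SUM across orgs.
latest = {}
for year, org, hc in sorted(rows):   # sorted by year, so later years win
    latest[org] = hc
grand_total_last = sum(latest.values())          # 5000 + 4603 = 9603

# Forced "Sum" rule: plain SUM over every snapshot row, double-counting years.
grand_total_sum = sum(hc for _, _, hc in rows)   # 17803
```

The grand totals differ because forcing SUM adds every yearly snapshot together, while the LAST rule keeps only the most recent snapshot per org before summing.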
here are the different queries from log:
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>
Without aggregation:
WITH
SAWITH0 AS (select T334016.ORG_HIER13_NAME as c2,
T334016.ORG_HIER13_NUM as c3,
sum(case when T264890.W_EMPLOYMENT_STAT_CODE = 'A' then T535616.HEADCOUNT else 0 end ) as c4,
T277242.PER_NAME_YEAR as c5
from
W_INT_ORG_DH T334016 /* Dim_W_INT_ORG_DH_Employee_Org */ ,
W_EMPLOYMENT_D T264890 /* Dim_W_EMPLOYMENT_D */ ,
W_YEAR_D T277242 /* Dim_W_YEAR_D */ ,
W_WRKFC_EVT_MONTH_F T535616 /* Fact_W_WRKFC_EVT_MONTH_F_Snapshot */
where ( T264890.ROW_WID = T535616.EMPLOYMENT_WID and T277242.ROW_WID = T535616.EVENT_YEAR_WID and T334016.ORG_WID = T535616.HR_ORG_WID and T535616.SNAPSHOT_IND = 1 and T535616.DELETE_FLG <> 'Y' and T277242.CAL_YEAR_START_DT >= TO_DATE('2004-01-01 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') and (T535616.SNAPSHOT_MONTH_END_IND in (1) or T535616.EFFECTIVE_END_DATE >= TO_DATE('2009-10-12' , 'YYYY-MM-DD')) and (T535616.LAST_MONTH_IN_YEAR_IND in (1) or T535616.EFFECTIVE_END_DATE >= TO_DATE('2009-10-12' , 'YYYY-MM-DD')) and (T334016.ROW_WID in (0) or T334016.HR_ORG_FLG in ('Y')) and (T334016.ROW_WID in (0) or T334016.W_HIERARCHY_CLASS in ('HR-ORG')) and (T334016.ROW_WID in (0) or T334016.CURRENT_VER_HIER_FLG in ('Y')) and T535616.EFFECTIVE_START_DATE <= TO_DATE('2009-10-12' , 'YYYY-MM-DD') )
group by T277242.PER_NAME_YEAR, T334016.ORG_HIER13_NUM, T334016.ORG_HIER13_NAME)
select distinct SAWITH0.c2 as c1,
LAST_VALUE(SAWITH0.c4 IGNORE NULLS) OVER (PARTITION BY SAWITH0.c3 ORDER BY SAWITH0.c3 NULLS FIRST, SAWITH0.c5 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c2,
SAWITH0.c3 as c3
from
SAWITH0
order by c1
With aggregation:
-------------------- Sending query to database named Oracle Data Warehouse (id: <<8194579>>):
WITH
SAWITH0 AS (select T334016.ORG_HIER13_NAME as c2,
T334016.ORG_HIER13_NUM as c3,
sum(case when T264890.W_EMPLOYMENT_STAT_CODE = 'A' then T535616.HEADCOUNT else 0 end ) as c4,
T277242.PER_NAME_YEAR as c5
from
W_INT_ORG_DH T334016 /* Dim_W_INT_ORG_DH_Employee_Org */ ,
W_EMPLOYMENT_D T264890 /* Dim_W_EMPLOYMENT_D */ ,
W_YEAR_D T277242 /* Dim_W_YEAR_D */ ,
W_WRKFC_EVT_MONTH_F T535616 /* Fact_W_WRKFC_EVT_MONTH_F_Snapshot */
where ( T264890.ROW_WID = T535616.EMPLOYMENT_WID and T277242.ROW_WID = T535616.EVENT_YEAR_WID and T334016.ORG_WID = T535616.HR_ORG_WID and T535616.SNAPSHOT_IND = 1 and T535616.DELETE_FLG <> 'Y' and T277242.CAL_YEAR_START_DT >= TO_DATE('2004-01-01 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') and (T535616.SNAPSHOT_MONTH_END_IND in (1) or T535616.EFFECTIVE_END_DATE >= TO_DATE('2009-10-12' , 'YYYY-MM-DD')) and (T535616.LAST_MONTH_IN_YEAR_IND in (1) or T535616.EFFECTIVE_END_DATE >= TO_DATE('2009-10-12' , 'YYYY-MM-DD')) and (T334016.ROW_WID in (0) or T334016.HR_ORG_FLG in ('Y')) and (T334016.ROW_WID in (0) or T334016.W_HIERARCHY_CLASS in ('HR-ORG')) and (T334016.ROW_WID in (0) or T334016.CURRENT_VER_HIER_FLG in ('Y')) and T535616.EFFECTIVE_START_DATE <= TO_DATE('2009-10-12' , 'YYYY-MM-DD') )
group by T277242.PER_NAME_YEAR, T334016.ORG_HIER13_NUM, T334016.ORG_HIER13_NAME),
SAWITH1 AS (select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3
from
(select LAST_VALUE(SAWITH0.c4 IGNORE NULLS) OVER (PARTITION BY SAWITH0.c3 ORDER BY SAWITH0.c3 NULLS FIRST, SAWITH0.c5 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c1,
SAWITH0.c2 as c2,
SAWITH0.c3 as c3,
ROW_NUMBER() OVER (PARTITION BY SAWITH0.c3 ORDER BY SAWITH0.c3 ASC) as c4
from
SAWITH0
) D1
where ( D1.c4 = 1 ) ),
SAWITH2 AS (select sum(case when T264890.W_EMPLOYMENT_STAT_CODE = 'A' then T535616.HEADCOUNT else 0 end ) as c2,
T277242.PER_NAME_YEAR as c3
from
W_EMPLOYMENT_D T264890 /* Dim_W_EMPLOYMENT_D */ ,
W_YEAR_D T277242 /* Dim_W_YEAR_D */ ,
W_WRKFC_EVT_MONTH_F T535616 /* Fact_W_WRKFC_EVT_MONTH_F_Snapshot */
where ( T264890.ROW_WID = T535616.EMPLOYMENT_WID and T277242.ROW_WID = T535616.EVENT_YEAR_WID and T535616.SNAPSHOT_IND = 1 and T535616.DELETE_FLG <> 'Y' and T277242.CAL_YEAR_START_DT >= TO_DATE('2004-01-01 00:00:00' , 'YYYY-MM-DD HH24:MI:SS') and (T535616.SNAPSHOT_MONTH_END_IND in (1) or T535616.EFFECTIVE_END_DATE >= TO_DATE('2009-10-12' , 'YYYY-MM-DD')) and (T535616.LAST_MONTH_IN_YEAR_IND in (1) or T535616.EFFECTIVE_END_DATE >= TO_DATE('2009-10-12' , 'YYYY-MM-DD')) and T535616.EFFECTIVE_START_DATE <= TO_DATE('2009-10-12' , 'YYYY-MM-DD') )
group by T277242.PER_NAME_YEAR),
SAWITH3 AS (select distinct LAST_VALUE(SAWITH2.c2 IGNORE NULLS) OVER ( ORDER BY SAWITH2.c3 NULLS FIRST ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as c1
from
SAWITH2)
select SAWITH1.c2 as c1,
SAWITH1.c1 as c2,
SAWITH1.c3 as c4,
SAWITH3.c1 as c5
from
SAWITH1,
SAWITH3
Thank you in advance,
kumr

Passing parameters from one report to another: if you go to Column Properties you will find two main types of drills. One is the default drill, which comes from the repository. The other is a navigation drill, where you can specify a target report. When you click on that column in your report, it navigates to the target report, passes the clicked value as a parameter, and filters the target report with it. Which version of OBI EE are you on?
Thanks,
Venkat
http://oraclebizint.wordpress.com