Different aggregation level comparison
Hi all, I'd like to compare each single value of my result set to the average value for that result set. Can you please help me?
I'd like to get something like:
Vendor | Order No | Order Amount | Deviation from average order
1      | 10       | 5            | -1
1      | 20       | 7            | +1
       | Total:   | 6            | 0
Hi Paolo,
You can try creating a formula using SUMGT, which returns the overall result against each row, and then a second formula that gives the difference between Order Amount and the SUMGT column.
Hope this helps...
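To illustrate the calculation itself outside BEx (SUMGT is a BEx operator; the data below is just the hypothetical example from the question), the deviation of each row from the overall average can be sketched in Java:

```java
import java.util.Arrays;

public class DeviationFromAverage {
    public static void main(String[] args) {
        // Hypothetical order amounts from the example table above.
        double[] orderAmounts = {5.0, 7.0};

        // Overall result repeated against each row (what SUMGT-style
        // formulas provide in BEx): here, the average across all rows.
        double average = Arrays.stream(orderAmounts).average().orElse(0.0);

        // Deviation of each row from that average: -1.0 and +1.0.
        for (double amount : orderAmounts) {
            System.out.printf("amount=%.1f deviation=%+.1f%n",
                    amount, amount - average);
        }
    }
}
```

The same idea carries over to the BEx formula: one column holding the overall average, and a second formula subtracting it from Order Amount.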
Similar Messages
-
Different aggregations at different levels
When I view the data using 'Measure Data Viewer', the items in a dimension are showing in random order.
How do I load the data in a dimension in ascending order so I can view it in ascending order?
Also, is it possible to apply different aggregations at different levels in a dimension?
Thanks.
Thank you. I will put different measures with different dimensions in different cubes.
After I mapped my measures in the mapping canvas (I can see the mapping lines), I tried to maintain the measure. But I am getting an error: 'some-measure-name may not be maintained since mapping do not exist for the measure'.
I am using AWM 10.2.0.3A and the database is 10.2.0.4
Thanks -
Aggregating data loaded into different hierarchy levels
I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'
Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
RELATION time.r PRECOMPUTE (all_but_q4)
How do I do this for more than one dimension?
Below is my case study:
DEFINE T_TIME DIMENSION TEXT
T_TIME
200401
200402
200403
200404
200405
200406
200407
200408
200409
200410
200411
2004
200412
200501
200502
200503
200504
200505
200506
200507
200508
200509
200510
200511
2005
200512
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
2004 NA
200412 2004
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
2005 NA
200512 2005
DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
EQ -
aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 4,00 ---> here it's right!! but...
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00 ---> here it must be 30,00, not 10,00
200512 NA
DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
T_TIME PRUEBA2_IMPORTE_STORED
200401 NA
200402 NA
200403 NA
200404 NA
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 NA
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00
200512 NA
DEFINE OBJ262568349 AGGMAP
AGGMAP
RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
AGGINDEX NO
CACHE NONE
END
DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
T_TIME_AGGRHIER_VSET1 = (H_TIME)
DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
T_TIME_AGGRDIM_VSET1 = (2005)
Regards,
Mel.
Mel,
There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue differs depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years but at "month" in the current year; it does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= the aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing that it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, which are then added as children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value, and returns it for a result of 10; the aggregate command would recalculate based on January and February for a result of 20.
To solve your usage case I would suggest a hierarchy that looks more like this:
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
200412 2004
2004_SELF 2004
2004 NA
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
200512 2005
2005_SELF 2005
2005 NA
Resulting in the following cube:
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
200412 NA
2004_SELF NA
2004 4,00
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
200512 NA
2005_SELF 10,00
2005 30,00
3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
= this can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
= often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation. -
Dear Experts
I have 2 restricted key figures, both set with aggregation level Total with reference to a characteristic; each refers to a different characteristic. KF 1 is a formula containing KF 2.
So the referred characteristics are Char 1 for KF 1 and Char 2 for KF 2.
In the rows, if I drill down further by characteristics other than Char 1 and Char 2, how will the KF 1 and KF 2 values be affected?
Am I right to say the drill-down will not affect the aggregation behavior?
What determines which characteristics are referred to when setting the aggregation level in the KF?
Thanks in advance.
regards
Pascal
Hi Pascal,
Could you paste your report requirement so that we can visualize it better and guide you on the same?
Regards,
AL -
Customer Exit in query on aggregation level
Hi,
I try to have variables filled with a customer exit.
The coding of the customer exit is correct; this has been tested in queries on MultiProviders.
Unfortunately, it is not working when these variables are used on aggregation levels.
What I would like to achieve:
We have some planning queries on aggregation levels. Different users can plan on the same query (and aggregation level), but not for the same set of data. Therefore the query should be restricted to the authorized values. Unfortunately we can not switch to the new authorization concept (analysis authorizations) yet, but we already need this functionality very soon.
The customer exits are the only possible option. Unfortunately it seems that the customer exits are not being executed when the variables are used in queries on aggregation levels.
The variables are not ready for input and should be filled in I_STEP = 2
Is this normal? If so, is there a work around?
Thanks in advance for quick replies!
Kind regards,
Bart
Hi,
You can debug your query by putting the break-point in your exit code and execute the query in RSRT. This way you will be able to find if your customer exit is actually being called or not. If it is being called then there can be some logical problem with your code due to which the variable values are not getting populated.
Regards,
Deepti -
Aggregation level includes filter objects...
Hi,
We are using a BO universe on top of a BEX query to create WebI reports.
We now run into the following problem when filtering on a certain characteristic in the BO query.
The result set of rows is limited to the correct values, but not aggregated up to the requested aggregation level. The problem exists whether the key figure's aggregation is defined as database-delegated or sum.
In the query results we see that the number of rows being pulled back is too high; the results displayed are correct (because WebI aggregates them). In the example below we selected three stores, but did not include store as a result object.
*** Query Name:Query 1 ***
** Query Properties:
Universe:Retail Sales and Stock
Last Refresh Date:9/2/10 2:29 PM
Last Execution Duration: 33
Number of rows: 4,947
Retrieve Duplicate Row: ON
** Query Definition:
Result Objects: Style Code, NSLS @ PCS
Filters ( Only RMS Billing Documents Infoprovider
AND Given Legacy Division Set
AND Given Currency Conversion Type
AND Store Key In List { 1001; 1002; 1003 }
AND NSLS @ PCS Not Equal 0
We cannot attach a document, but the actual number of rows should have been 2,856,
the number of different styles in these 3 stores.
When we include the store object in the results, we can see the number of rows stays the same (proof that it did indeed return rows at store level and did not aggregate up to the style code)...
*** Query Name:Query 1 ***
** Query Properties:
Universe:Retail Sales and Stock
Last Refresh Date:9/2/10 2:09 PM
Last Execution Duration: 13
Number of rows: 4,947
Retrieve Duplicate Row: ON
** Query Definition:
Result Objects: Style Code, Store Key, NSLS @ PCS
Filters ( Only RMS Billing Documents Infoprovider
AND Given Legacy Division Set
AND Given Currency Conversion Type
AND Store Key In List { 1001; 1002; 1003 }
AND NSLS @ PCS Not Equal 0
Is this behaviour fixed in any fix pack? Or did no one address this problem?
The problem becomes a real problem when the user wants to select everything up to and including a certain date.
The rows are then pulled back per style/store/day combination and easily reach 100,000 rows.
Thanks for any insight you can give on this,
Marianne
Hi Rik,
Thanks for at least trying
I didn't try this before, so I just did, despite the fact that the aggregation level only affects the way data aggregates within WebI. (I'm that desperate.)
No difference, other than the fact that I now get #MULTIVALUE in the cells with multiple underlying rows.
Anyone else?
Marianne -
Activate an Aggregation Level in Production
Hi,
Can anyone please assist me in activating an aggregation level in Production?
Some of our aggregation levels are inactive and we can't transport them at the moment as we are busy with an upgrade in development.
Your help would really be appreciated, as this is quite high priority at the moment.
Thank you!
Regards,
Tanya
Hi Shafi,
We did do the investigation. The MultiProvider on which the Aggregation Level was built, was still active in the morning. We ran two queries from the MultiProvider, and the second one (for some reason) de-activated the one InfoCube in the MultiProvider.
We did version comparisons to see what might have changed in the InfoCube or MultiProvider, but there was nothing. We re-activated both the InfoCube and MultiProvider with programs that are available in SE38 (RSDG_CUBE_ACTIVATE and RSDG_MPRO_ACTIVATE), but could not find a program to re-activate the aggregation levels (and the function module did not work).
Thank you!
Regards,
Tanya -
Aggregation Level Vs Multiprovider
Hi Folks,
Can someone explain why we should use Aggregation Levels when we have the option of building input-ready queries on the MultiProvider itself?
Is it merely a performance issue, or is that the only way to build an input-ready query?
Also, can someone tell me how to finalize the layout after I have designed an input-ready query in BEx Designer? So far my understanding is to execute that query in:
1. BEx Analyzer, or
2. WAD, by hooking in this query.
What difference does that really make in terms of functionality between both layouts, except that the visual feel might differ, one in Excel and the other on the Web? Any major bottlenecks here?
I really appreciate any kind inputs here for my above concerns.
Points will be assigned.
Thanks in advance for every input on this.
BI-IP Guest
Ravi/Manyak,
Thanks for your responses. I appreciate that, and I understand that we can't build input-ready queries on a regular MultiProvider.
How do I decide whether to go with a MultiProvider or an aggregation level for building my input-ready queries?
Ravi says it's for when we want to do manipulations or use planning functions.
I find BPS easier to understand than IP, especially when building layouts. At least in BPS we can get a feel for what we are building. But here in IP I am getting confused about how to build rolling forecast layouts and about using aggregation levels/MultiProviders. I am saying this in general, given my situation.
Any comments for Knowledge Transfer will be highly appreciated.
Thanks,
BI-IP Guest -
Input Ready query based on aggregation level
I have an input ready query that reads data from an aggregation level, based on a multiprovider that contains a cube for planning and a cube with real data.
There are two columns of data, two of them for read-only and another for input.
I'm selecting different fiscal year/periods for each column.
The problem is that, when I select fiscal year/periods and the real-data cube doesn't contain any data for this filter, it doesn't show me a line to input data anyway.
How do I configure the query so that it understands that even though I don't have data for the selected period, I still want to be able to perform the planning?
Thanks,
Cris.
Hi Christina,
Even though there is no data in the real-time cube, the query must allow you to input values. I guess the query you have designed is not enabled for input yet.
Kindly check the following points before proceeding.
1. In the query properties, under the Planning tab, make sure Start Query in Change Mode is turned on.
2. Make sure you have used all of the characteristics of the aggregation level in the query, each restricted to a single value.
3. Make sure the columns are input-ready under the Planning tab in their properties.
Hope this Helps.
Regards.
Shafi. -
When I am loading the data from the cube to the PA, I see a significant change in the total quantity in the PA. The numbers are different if I load at an aggregated level (say, on just 3 characteristics out of 10) compared to a load at the detailed level.
Why should the totals change?
Any idea ?
Thanks
venkat
hi
When I am loading the data from my backup cube to the PA, I am loading on 3 characteristics (grouping condition) out of 10 in total. I have just loaded one KF from the cube to the same KF in the PA. Then I see my quantity in the PA is much higher than what is in the cube as totals. I am confused about why I should get more data than the cube has. I would understand if it were less, perhaps due to deletion of CVCs, etc. But how is it getting increased?
What exactly does the grouping condition mean? I understand that all records are grouped under those characteristics in totals and disaggregated later. But I am seeing that my PA totals don't match the cube totals. The cube totals are found to be less than the PA totals.
Fully confused by this.
thx
venkat -
Hello Macro Gurus,
I'm trying to work out a macro to get the values of different stock categories (e.g. CC, CK) in the Initial column. The macro successfully calculates and displays at the detailed level, but the Stock on Hand column at the aggregated level is empty. I have tried SUM_CALC(), INTEGER_CALC and AGG_LEVEL(), and somehow I'm not able to populate the key figure at the aggregated level.
Any help much appreciated.
Regards
Venkat
Hello Uma Mahesh,
First of all, many thanks for your offer of help and your response. Following are the two bits of code.
The macro attributes are set at all planning objects.
The code as follows
Stock on Hand Projected =
ACT_LOCATION_PRODUCTS ('GRID = 1')
PHYSICAL_STOCK( ACT_PRODUCT ;ACT_LOCATION ; ACT_VERSION ; 'CC' ; 'CK').
The other bit of code
STOCK_CALC(0 ; 0 ;
INITIAL_STOCK( ACT_VERSION ; ACT_PRODUCT ; ACT_LOCATION ; 'AGGR')
Both work at the detailed level; when I say detailed level, I mean after drill-down.
Hope this helps.
Look forward to hearing from you.
Best Regards
Venkat -
Data Visible At Aggregated Level but not at Leaf Node Level in ASO
Hi,
I am facing an issue in Essbase Version 7. I have a BSO-ASO partition. I have 4 dimensions: Customer, Accounts, Product and Time. When I try to view data across Customer, Time and Accounts, the data is visible at the leaf-node level and the aggregated level. But when I include Customer in my analysis, the data is visible at an aggregated level for the customer but not at the leaf-node level. What could be the cause of this? I am not getting any errors during my data load in ASO, nor when I run the aggregation in ASO.
Any inputs on this issue are highly appreciated.
Without having complete information, I'll guess you are trying to look at the data in the BSO cube. I would look at the partition definition. One of two things is most likely happening:
1. You only have the partition defined to look at the top level of customers
2. The member names at lower levels of customers are not consistent between the two cubes, and you don't map member names.
You can prove that it is a partition-definition problem by doing the same retrieves from your ASO cube. If you get back data, you know it is a partition-definition problem. If you don't get back the proper data, you have different problems, ones that would not seem logical unless you had odd formulas on your ASO cube. -
Error : Reading from Aggregation Level not permitted
Hello Gurus,
Could somebody please give some help or advice regarding this?
I have a multiprovider on a regular cube and an aggregation level, for some reason the multicube gives me the following error message when I try to display data using listcube.
Reading from Aggregation Level is not permitted
Message no. RSPLS801
Also the Query on the multicube does not display data for any of the KF's in the Agg Level but when I create a query on the Agg level itself it is fine.
Any suggestions?
Thanks.
Swaroop.
Edited by: Swaroop Chandra on Dec 10, 2009 7:29 PM
Hi,
transaction LISTCUBE does not support all InfoProviders; e.g. aggregation levels are not supported. LISTCUBE is a 'low level' tool to read data from the BW persistence layer, e.g. InfoCubes. Since aggregation levels always read transaction data via the so-called planning buffer, and the planning buffer technically is a special OLAP query, LISTCUBE does not support aggregation levels.
Regards,
Gregor -
Report from system on different stock levels-min, max stock levels
Hi gurus,
Is there any standard SAP report to see the different stock levels (minimum stock, maximum stock)?
Hi,
MMBE will not give the stock level min and max that are maintained in the material master MRP view. I think you need to build a Z report for the same, or perhaps do it through a query.
Regards
Sangeta -
How to save Jobs with different priority level in a Queue?
Hi, Friends,
I have a set of Job (see below) objects.
I would like to make a queue: if the jobs have the same priority level, first in, first out. This is easy to do with an ArrayList. However, if they have different priority levels, I would like the jobs with the highest level to come out first.
How can I implement this idea in Java?
Regards,
Youbin
public class Job {
    private short _priorityLevel = 0;

    public void setPriorityLevel(short priorityLevel) {
        this._priorityLevel = priorityLevel;
    }

    public short getPriorityLevel() {
        return _priorityLevel;
    }
}
Hi,
Here is my test code; it works:
public class Job implements Comparable<Job> {
    private int _priorityLevel = 0;
    private String _jobDescription = null;

    public Job() {
    }

    public void setPriorityLevel(int priorityLevel) {
        this._priorityLevel = priorityLevel;
    }

    public int getPriorityLevel() {
        return this._priorityLevel;
    }

    public void setJobDescription(String jobDescription) {
        this._jobDescription = jobDescription;
    }

    public String getJobDescription() {
        return this._jobDescription;
    }

    public int compareTo(Job other) {
        // Ascending by priority level; Integer.compare avoids overflow.
        return Integer.compare(this._priorityLevel, other._priorityLevel);
    }
}
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;

public class test {
    public static void main(String[] args) {
        Job job1 = new Job();
        job1.setJobDescription("Job1");
        job1.setPriorityLevel(2);
        Job job2 = new Job();
        job2.setJobDescription("Job2");
        job2.setPriorityLevel(2);
        Job job3 = new Job();
        job3.setJobDescription("Job3");
        job3.setPriorityLevel(2);
        Job job4 = new Job();
        job4.setJobDescription("Job4");
        job4.setPriorityLevel(1);
        Job job5 = new Job();
        job5.setJobDescription("Job5");
        job5.setPriorityLevel(1);
        Job job6 = new Job();
        job6.setJobDescription("Job6");
        job6.setPriorityLevel(1);

        LinkedList<Job> linkedList = new LinkedList<Job>();
        linkedList.addLast(job1);
        linkedList.addLast(job2);
        linkedList.addLast(job4);
        printJobs(linkedList);
        Collections.sort(linkedList);
        printJobs(linkedList);

        linkedList.addLast(job3);
        printJobs(linkedList);
        Collections.sort(linkedList);
        printJobs(linkedList);

        linkedList.addLast(job5);
        linkedList.addLast(job6);
        printJobs(linkedList);
        Collections.sort(linkedList);
        printJobs(linkedList);
    }

    // Print each job's description followed by a separator line.
    private static void printJobs(LinkedList<Job> jobs) {
        Iterator<Job> ite = jobs.iterator();
        while (ite.hasNext()) {
            System.out.println(ite.next().getJobDescription());
        }
        System.out.println("---------");
    }
}
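A side note on the design (a sketch, not part of the original answer): sorting a LinkedList after every insertion works, but the JDK also provides java.util.PriorityQueue, which keeps the highest-priority element ready to poll without re-sorting. A minimal self-contained sketch, using a simplified stand-in for the Job class above:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class JobQueueDemo {
    // Minimal stand-in for the Job class above.
    static class Job {
        final String description;
        final int priorityLevel;

        Job(String description, int priorityLevel) {
            this.description = description;
            this.priorityLevel = priorityLevel;
        }
    }

    public static void main(String[] args) {
        // Highest priority level polls first. Note: PriorityQueue does not
        // guarantee FIFO order among equal-priority elements; add an
        // insertion sequence number to the comparator if strict FIFO matters.
        PriorityQueue<Job> queue = new PriorityQueue<>(
                Comparator.comparingInt((Job j) -> j.priorityLevel).reversed());

        queue.add(new Job("low", 1));
        queue.add(new Job("high", 2));

        System.out.println(queue.poll().description);  // prints "high"
        System.out.println(queue.poll().description);  // prints "low"
    }
}
```

This trades the O(n log n) re-sort on every change for O(log n) insertion and removal, at the cost of losing guaranteed FIFO order among ties.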