Aggregation Questions
Hi there
Got a couple of questions about OWB 10g R2 aggregations.
#1 When I create a cube with aggregations, I cannot for the life of me determine how the aggregations are actually implemented.
Are they implemented as separate tables? Materialised views?
So far, when I browse the schema, I can't see any extra database objects created for the purpose of providing aggregates.
#2 I have seen this problem posted by a number of people, but have not yet seen any answer on how to overcome it.
When I create a cube, for some measures I would like to "SUM", for others I would like to "AVERAGE" and for columns such as degenerate dimensions (i.e. Transaction_ID) I would like to have no aggregation at all.
Can anyone tell me how to achieve this using the OWB Cube object???
Hi
Answer to the second question:
In Design Center, double-click the cube you want to examine in the Project Explorer; the Data Object Editor is then launched. To change the aggregation function of a measure, select the Aggregation tab in the lower-right corner. Then, in the Measures panel, select the measure whose aggregation function you want to change. You can now change the aggregation for that measure in the "Aggregation for measure xxx" panel.
Regards
Peter
Similar Messages
-
I have an instance where I need 2 sender file adapters to send XI 2 different files (layouts are different). XI then needs to combine and map the messages from these two files into a single IDoc for each combined message. I know in a multi-mapping I can do it either with a BPM or without. When "aggregating" messages I know I can do it with a BPM. My question is does anyone know of a blog that exists where someone has accomplished this without the need of a BPM (and if there are any good blogs which cover how to aggregate with a BPM that would be helpful as well).
Thanks!
Shaun,
As you can see here, [Multi-Mappings|http://help.sap.com/saphelp_nw70/helpdata/en/21/6faf35c2d74295a3cb97f6f3ccf43c/content.htm], you cannot do the Message-Merge (n:1) mapping without a BPM.
Using the BPM, you can refer to the pattern
BpmPatternCollectMultiIf - [Collecting and Bundling Messages - Multiple Interfaces|http://help.sap.com/saphelp_nw70/helpdata/en/0e/56373f7853494fe10000000a114084/content.htm]
I suggest that you schedule your sender adapters at around the same time, so that the BPM doesn't have to wait too long to process them. If you cannot ensure that, then you can use [Event-Driven Message Processing|http://help.sap.com/saphelp_nw70/helpdata/en/7a/00143f011f4b2ee10000000a114084/content.htm] to wait for both files to be picked up before they are sent to the BPM.
praveen -
SSAS aggregation question.
I have a SSAS cube dimension with hierarchies - Country, State, City, Street.
I have two measures (Price and Quantity) that need to aggregate independently up to City level only. Product (multiplication) of these two measures at City level should then be aggregated for higher level hierarchies (Country, State).
How do I do this in MDX?
Thanks,
J.
Hi,
Here is my question with an example:
Dimension Hierarchy - Country, State, City, Street.
Country State City Street Qty
US CA City1 Street1 10
US CA City1 Street2 5
US FL City3 Street3 8
US FL City4 Street4 4
Country State Plan Price
US CA A $100
US CA C $70
US FL B $50
Calculated Measure at Country level = 100*10 + 100*5 + 70*10 + 70*5 + 50*8 + 50*4 = 3150
This is different from SUM(Price)*SUM(Qty), which would be 220*27 = 5940.
I want 3150 to be my answer at Country level.
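In other words, prices and quantities must each be aggregated up to the common level first, and only then multiplied, state by state (the example data carries prices only at state level, so that is the level used here). A plain-Java sketch of that roll-up; the class name and data layout are mine, matching the example numbers:

```java
import java.util.Arrays;
import java.util.Map;

public class StateLevelProduct {

    // Country-level total = for each state, (sum of prices) * (sum of quantities).
    // Multiplying only AFTER aggregating to the common level reproduces 3150,
    // whereas multiplying the two grand totals gives the wrong 220 * 27 answer.
    public static int countryTotal(Map<String, int[]> qtyByState,
                                   Map<String, int[]> priceByState) {
        int total = 0;
        for (String state : qtyByState.keySet()) {
            int qtySum = Arrays.stream(qtyByState.get(state)).sum();
            int priceSum = Arrays.stream(priceByState.getOrDefault(state, new int[0])).sum();
            total += priceSum * qtySum;
        }
        return total;
    }

    public static void main(String[] args) {
        // Quantities (street level) and prices (state level) from the example
        Map<String, int[]> qty = Map.of("CA", new int[]{10, 5}, "FL", new int[]{8, 4});
        Map<String, int[]> price = Map.of("CA", new int[]{100, 70}, "FL", new int[]{50});
        System.out.println(countryTotal(qty, price)); // 170*15 + 50*12 = 3150
    }
}
```

In MDX terms this corresponds to defining the product as a calculated measure at the intermediate level (for instance with a SCOPE assignment) and letting only that result aggregate upward, rather than multiplying two already-aggregated totals.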
I can JOIN the two tables using the Country and State fields to get a Cartesian product, and that could work. The issue is that the fact table becomes too big. -
http://bdn.borland.com/article/0,1410,31863,00.html
Please have a look at the class diagram. The relationship between Order and OrderDetail is aggregation, but how come the one between OrderDetail and Item is not? They also contain a collection there. Why?
Not sure why they don't show any collections as class members. My guess is that this is an example, not a detailed specification. If you did it out in all its glory, you'd include more than you see on this diagram.
I agree that there's no reason for OrderDetail not to use aggregation/composition with Item. My guess is that this is a teaching example, not a rigorous specification.
I'd take this link as an example, a teaching exercise and nothing more. It's good that you're questioning what you see, because it implies that you have enough insight to think about the problem for yourself. But if this is your first tour through UML, don't worry about it. Soak up those main ideas and start using them for yourself.
-
I have been looking through posts and tutorials but have not found the answer to a few questions about Aggregator.
Background
I have 7 projects each with quiz materials inside them I would like to aggregate.
If someone starts an aggregated project and leaves it before it finishes, can they resume or do they need to start over?
If each project has a quiz that needs to be passed before they can move on to the next project and reports back to the quiz results analyzer tool, will I have any problems with this?
Can someone move ahead in the TOC or can you prevent someone from clicking on TOC items until they have been viewed like you can in a normal project?
Would it be better to daisy chain the projects instead?
Thanks in advance
David
That's what Adobe's Multi-SCORM Packaging tool does. It came by default with Captivate 4, but thereafter only if you bought the entire E-Learning Suite 2.0 or 2.5.
With a Multi-SCO package you do not try to create a single TOC for all modules as you do with the Aggregator. The LMS is supposed to create the overall TOC to get to each module. Within the module, once it is launched, you can have the normal Captivate TOC. -
BEX: Aggregation question
Hi to all,
I think I need some help of a BEX expert.
I have an issue with a query. Following scenario:
In my cube I post the following values:
Org-unit Employee Position P E
00000001 00000001 00000001 0 1
00000001 00000002 00000001 0 1
00000001 00000000 00000001 2 0
00000001 00000003 00000002 0 1
00000001 00000004 00000002 0 1
00000001 00000000 00000002 1 0
00000001 00000005 00000003 0 1
00000001 00000000 00000003 2 0
P = Capacity of the Position
E = Capacity of the Employee
Now to the issue.
In the query I need to calculate the difference of E - P as well as to create new keyfigures for storing a negative result and storing a positive result. At the end my query should look like this:
Org-unit Position P E diff. neg. pos.
00000001 00000001 2 2 00000 0000 0000
00000001 00000002 1 2 -0001 -001 0000
00000001 00000003 2 1 00001 0000 0001
and if I remove the position, similarly:
Org-unit P E diff. neg. pos.
00000001 5 5 00000 -001 0001
The difference should not be displayed, but the rest should. Does anybody have an idea for solving this issue? I am playing around with constant selection on Employee and/or Position, but the result is never as I expect it to be.
If you still have some problems understanding my issue feel free to ask.
Looking forward to your valuable and helpful hints.
regards
Siggi
Hi all,
What I did so far was create a calculated key figure E - P, and additionally the two key figures negative and positive; the results are as expected as long as the position and/or the employee is actually displayed in the query. But if I take both out, leaving any other characteristic, the result is not what I want to display. I want to get
this:
Org-unit P E diff. neg. pos.
00000001 5 5 00000 -001 0001
but I get this:
Org-unit P E diff. neg. pos.
00000001 5 5 00000 0000 0000
There is an issue with the aggregation level. The keyfigures diff., neg. and pos. should always show the results depending on the position/employee combination.
Hope it is clearer now.
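The intended logic (compute the difference per position/employee combination first, then aggregate the signed parts separately) can be sketched outside BEX. Note one assumption: the post's result table implies diff = P - E even though the text says E - P, and this sketch follows the table; the class name and data are mine:

```java
public class SignedCapacityDiff {

    // Each row is {P, E} for one position. The difference is computed at
    // position level FIRST, and only then are the signed parts summed --
    // summing P and E first and differencing afterwards loses neg./pos.
    public static int[] totals(int[][] positions) {
        int diff = 0, neg = 0, pos = 0;
        for (int[] pe : positions) {
            int d = pe[0] - pe[1];        // per-position difference (table implies P - E)
            diff += d;
            if (d < 0) neg += d; else pos += d;
        }
        return new int[]{diff, neg, pos}; // {total diff, sum of negatives, sum of positives}
    }

    public static void main(String[] args) {
        // (P, E) per position from the post: (2,2), (1,2), (2,1)
        int[] t = totals(new int[][]{{2, 2}, {1, 2}, {2, 1}});
        System.out.println(t[0] + " " + t[1] + " " + t[2]); // 0 -1 1
    }
}
```

This is exactly the "aggregation after calculation" behavior that BEX exception aggregation (with Position as the reference characteristic) is meant to provide, which is why a plain calculated key figure collapses to zero once Position is removed from the drilldown.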
Siggi -
3.6 Group Aggregation Question
I am trying to achieve a SQL GROUP BY sort of behavior with Coherence 3.6. I have achieved some success by using the following code.
public InvocableMap.EntryAggregator getAggregationCriteria() {
    BigDecimalSum agg1 = new BigDecimalSum("getTradeDateMVLocal");
    BigDecimalSum agg2 = new BigDecimalSum("getTradeDateCashLocal");
    BigDecimalSum agg3 = new BigDecimalSum("getCostLocal");
    BigDecimalSum agg4 = new BigDecimalSum("getInterestUnrealizedLocal");
    CompositeAggregator compAgg =
        CompositeAggregator.createInstance(new InvocableMap.EntryAggregator[]
            {agg1, agg2, agg3, agg4});
    ChainedExtractor cr1 = new ChainedExtractor("getKey.getAccountName");
    ChainedExtractor cr2 = new ChainedExtractor("getKey.getCurrency");
    ValueExtractor[] extractors = new ValueExtractor[2];
    extractors[0] = cr1;
    extractors[1] = cr2;
    MultiExtractor multiEx = new MultiExtractor(extractors);
    GroupAggregator gpa = GroupAggregator.createInstance(multiEx, compAgg);
    return gpa;
}
Once the GroupAggregator is constructed, I pass it to the NamedCache.aggregate method using the following wrapper method.
public LiteMap aggregate(NamedCache cache, Filter filter, InvocableMap.EntryAggregator aggregationCriteria) {
    LiteMap map = (LiteMap) cache.aggregate(filter, aggregationCriteria);
    return map;
}
The issue is that in a multi-node environment not all the data is aggregated.
For example, if I have a single node and I run my aggregation code just in that node, I get the expected number of grouped items. In a multi-node scenario it ends up with fewer items. Now, the columns that I am grouping by are part of my composite key for the cache. The implementation of my key class is as follows.
package com.sac.dream.model;

import com.sac.dream.core.model.GridEntityKey;
import com.sac.dream.util.externalization.ObjectReader;
import com.sac.dream.util.externalization.ObjectWriter;
import com.tangosol.net.cache.KeyAssociation;
import javax.persistence.Embeddable;
import javax.persistence.Transient;
import java.io.IOException;

/*
 * Created by IntelliJ IDEA.
 * User: ahmads
 * Date: Jul 28, 2010
 * Time: 1:54:45 PM
 */
@Embeddable
public class GenevaValuationKey extends GridEntityKey implements KeyAssociation {

    private static final long serialVersionUID = 1L;
    private String accountName;
    private String currency;
    private Long uid;

    public GenevaValuationKey(Long uid) {
        this.uid = uid;
    }

    public GenevaValuationKey() {
    }

    @Transient
    public Object getAssociatedKey() {
        int hash = 1;
        hash = hash * 31 + getAccountName().hashCode();
        hash = hash * 31 + getCurrency().hashCode();
        return hash;
    }

    public void setAssociatedKey(Object value) {
    }

    public Long getUid() {
        return uid;
    }

    public void setUid(Long uid) {
        this.uid = uid;
    }

    @Override
    public String toString() {
        return "GenevaValuationKey::uid:" + this.uid;
    }

    @Override
    public boolean equals(Object o) {
        //if (this == o) return true;
        //if (o == null || getClass() != o.getClass()) return false;
        GenevaValuationKey that = (GenevaValuationKey) o;
        // note: Long values must be compared with equals(), not ==; distinct
        // boxed instances holding the same uid would otherwise never match
        return this.getAccountName().equals(that.getAccountName())
            && this.getCurrency().equals(that.getCurrency())
            && this.uid.equals(that.uid);
    }

    @Override
    public int hashCode() {
        int hash = 1;
        hash = hash * 31 + getAccountName().hashCode();
        hash = hash * 31 + getCurrency().hashCode();
        hash = hash * 31 + uid.hashCode();
        return hash;
    }

    @Override
    public int compareTo(GridEntityKey o) {
        return this.uid.compareTo(((GenevaValuationKey) o).getUid());
    }

    @Override
    public final void readObject(ObjectReader reader) throws IOException {
        try {
            this.setAccountName(reader.readString());
            this.setCurrency(reader.readString());
            this.uid = reader.readLong();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public final void writeObject(ObjectWriter writer) throws IOException {
        try {
            writer.writeString(this.getAccountName());
            writer.writeString(this.getCurrency());
            writer.writeLong(this.uid);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public String getAccountName() {
        return accountName;
    }

    public void setAccountName(String accountName) {
        this.accountName = accountName;
    }

    public String getCurrency() {
        return currency;
    }

    public void setCurrency(String currency) {
        this.currency = currency;
    }
}
I implemented the KeyAssociation assuming that I need to make sure that, for a certain group, all the rows within that group exist on the same node. There might be something wrong with that implementation.
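As a sanity check, the grouped totals the aggregator should return can be reproduced locally in plain Java (no Coherence; names and data are illustrative) and compared against what each cluster configuration produces:

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LocalGroupAggregation {

    // One row of trade data; (account, currency) plays the role of the grouping key.
    public record Row(String account, String currency, BigDecimal mvLocal) {}

    // Groups rows by (account, currency) and sums mvLocal, mimicking what a
    // GroupAggregator(MultiExtractor, BigDecimalSum) should produce over the
    // full data set regardless of how many nodes hold the partitions.
    public static Map<List<String>, BigDecimal> aggregate(List<Row> rows) {
        Map<List<String>, BigDecimal> out = new HashMap<>();
        for (Row r : rows) {
            out.merge(List.of(r.account(), r.currency()), r.mvLocal(), BigDecimal::add);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("acct1", "USD", new BigDecimal("10")),
            new Row("acct1", "USD", new BigDecimal("5")),
            new Row("acct2", "EUR", new BigDecimal("7")));
        System.out.println(aggregate(rows).size() + " groups"); // 2 groups
    }
}
```

If the single-node run matches this reference but the multi-node run has fewer groups, the loss is happening in partition placement or key serialization, not in the aggregation logic itself.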
thanks
rehevkor5 wrote:
Yeah apparently you're not supposed to call readRemainder or writeRemainder from within the PortableObject methods, too bad the documentation does not mention this.
Here is a better idea of what a subclass's PortableObject methods should look like:
@Override
public void readExternal(PofReader in) throws IOException {
    super.readExternal(in.createNestedPofReader(0));
    myObj = (MyType) in.readObject(1);
}
@Override
public void writeExternal(PofWriter out) throws IOException {
    super.writeExternal(out.createNestedPofWriter(0));
    out.writeObject(1, myObj);
}
Since you cannot read or write the remainder, the way that you support PortableObjects that need to evolve is by implementing the Evolvable interface. Coherence will detect that your object is an instanceof Evolvable, and will handle reading/writing the remainder/futureData and dataVersion for you.
Yep. Otherwise, if you had handled the remainder in a PortableObject, you would not be able to sensibly override the method which handled the remainder.
Best regards,
Robert -
Currency exchange and aggregation question
Hi,
My requirement is to calculate the sum falling into intervals like
1) <2 million
2) 5 - 10 million
3) 10-20 million and so on
This part is easy. The question is that the intervals can be requested in any currency by the user. Since there are thousands of records, and each record has its amount in its own currency, what is the best way to do it so that BW performance is not affected?
any help would be highly appreciated.
thks,
wills
Hi,
You can do currency conversion in the update rules during the data load, as well as during the execution of the report.
Currency conversion doesn't have much effect on the execution time of the report.
You can have many records, but it will not affect the performance of the query in a big way.
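The approach in the reply (convert each record into the requested currency, then sum per interval) can be sketched as follows; the rates, interval edges, and class name are illustrative assumptions, not BW specifics:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class CurrencyIntervals {

    // Converts each record's amount into the requested target currency using
    // the supplied rates, then sums the converted amounts per interval.
    // upperEdges are ascending bounds; the last bucket is open-ended.
    public static double[] sumByInterval(List<Object[]> records,        // {currencyCode, amount}
                                         Map<String, Double> rateToTarget,
                                         double[] upperEdges) {
        double[] sums = new double[upperEdges.length + 1];
        for (Object[] rec : records) {
            double converted = (double) rec[1] * rateToTarget.get((String) rec[0]);
            int i = 0;
            while (i < upperEdges.length && converted >= upperEdges[i]) i++;
            sums[i] += converted;
        }
        return sums;
    }

    public static void main(String[] args) {
        List<Object[]> recs = List.of(
            new Object[]{"EUR", 1_000_000d},
            new Object[]{"USD", 6_000_000d},
            new Object[]{"EUR", 12_000_000d});
        Map<String, Double> rates = Map.of("EUR", 1.1, "USD", 1.0); // made-up rates to USD
        // Buckets: < 2 million, 2-10 million, >= 10 million (in the target currency)
        double[] sums = sumByInterval(recs, rates, new double[]{2e6, 10e6});
        System.out.println(Arrays.toString(sums));
    }
}
```

Because the interval boundaries only exist in the requested currency, conversion has to happen per record before bucketing, which is why doing the conversion once in the update rules (into one or a few reference currencies) is the cheaper option at query time.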
Thanks -
Non-Level0 aggregation question
Hi There,
I have an Account dimension, for example: Income - Rental Income - GL accounts (level 0), where Rental Income is the sum of all level-0 GL accounts. Data storage is "Store Data". Now I move one GL account out of Rental Income, but the sum of Rental Income is still the same, which is incorrect. My question is: what is the way to fix this?
I tried a couple of things, for example restructuring, and clearing the data then re-importing it into Essbase, but the issue is still there. I know that if I change Rental Income to Dynamic Calc the issue will be fixed, but I am wondering if there are other ways to do this without changing the data storage?
Thanks
Srinivas Bobbala wrote:
Donz,
Whenever you are doing outline changes, do as below:
1) Take the Lev0 data export.
2) Clear the cube.
3) Do the modifications in the outline.
4) Reload the Lev0 data.
5) Do the rollup with CALC ALL. Then these kinds of issues will not arise.
Note: If there are any deletions at Lev0, it is better to delete after loading the Lev0 data and mapping the data to other or new members as required.
Hi Srinivas,
As per your assumptions, whenever we need to change the outline we need to clear all the data, but to my knowledge for this issue there is no need to clear the cube; we can directly modify the outline, and executing CALC ALL is enough...
Regards,
Prabhas.. -
Hi,
I have a measure named DESCRIPTION; it separates data into Y or N: if the number > 0 we get Y, and if the number <= 0 we get N.
eg:
NUMBER DESCRIPTION
11 Y
21 Y
0 N
7 Y
Then we get 3 Y and 1 N after using an aggregation rule with SUM.
But if we get 4 Y, it does not show 0 N.
eg:
NUMBER DESCRIPTION
11 Y
21 Y
5 Y
7 Y
How can I make it to show 4 Y and 0 N?
Please kindly help me, thanks.
This is OBIEE behaviour: since you have no data which is <= 0, you are not getting any N count or '0 N'. There has to be a specific row value which satisfies the condition to get an N or Y count.
BTW, how are you designing your measure, in the RPD or in Answers? And what is the procedure?
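Outside of OBIEE, the fix amounts to seeding both outcome buckets with zero before counting, so that an empty bucket still appears in the result. A minimal plain-Java sketch (class and method names are mine, just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class YnCounts {

    // Counts Y (number > 0) and N (number <= 0). Both keys are pre-seeded
    // with 0 so "N=0" still shows up when no row satisfies the N condition.
    public static Map<String, Integer> count(int[] numbers) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("Y", 0);
        counts.put("N", 0);
        for (int n : numbers) {
            counts.merge(n > 0 ? "Y" : "N", 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // The post's second data set: all values positive
        System.out.println(count(new int[]{11, 21, 5, 7})); // {Y=4, N=0}
    }
}
```

The OBIEE-side equivalent is to make both outcomes exist as rows (for example via an outer join against a two-row Y/N dimension, or a pair of filtered count measures) rather than deriving the categories only from the data that happens to be present.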
Hi!
I am working on a special aggregator that, in order to do its job, needs to look up some data in the local partition of the cache it is aggregating over (I know that the particular data it needs is available in the same partition as the entries it receives, since I use a custom KeyPartitioningStrategy that assigns them there; no remote calls should be needed).
When my aggregator executes, it triggers a com.tangosol.util.AssertionException claiming that "poll() is a blocking call and cannot be called on the Service thread". Is my key partitioning strategy not working as expected, or is it simply not allowed to make any "potentially blocking" calls from an aggregator?
As a side note, assuming that these kinds of calls are allowed, I would have loved a way for an aggregator to find the cache it is operating on programmatically, since that would simplify using the same aggregator class in different caches (with the same types of data in them, making the same aggregator useful).
Best Regards
Magnus
Hi Magnus,
If you don't specify the "thread-count" element explicitly, all your aggregations are executed on the main service thread and are not allowed to make blocking calls into the same service (an obvious deadlock potential). Some operations are dangerous even on a worker thread and may result in a warning that looks like:
Application code running on "DistributedCacheWorker:1" service thread(s) should not call "ensureCache" as this may result in deadlock. The most common case is a CacheFactory call from a custom CacheStore implementation.
I would suggest taking a discussion regarding your specific implementation offline; I will email you directly.
Regards,
Gene -
Small data aggregation question
Starting with this data:
select * from things
NAME TYPE THING
dave item can
mike item box
mike consumer elec television
mike consumer elec radio
mike automobile volvo
ryan automobile saab
ryan automobile chevrolet
mike automobile volvo
mike automobile volvo
mike automobile volvo
mike consumer elec radio
mike consumer elec radio
mike consumer elec radio
13 rows selected
I have successfully constructed this query, which almost gets me where I want to be:
select name,
ltrim(max(sys_connect_by_path(thing,','))
keep(dense_rank last order by curr),',') as things
from (select
name,
thing,
row_number() over (partition by name order by thing) as curr,
row_number() over (partition by name order by thing) -1 as prev
from things)
group by name
connect by prev = prior curr
and name = prior name
start with curr = 1;
NAME THINGS
dave can
mike box,radio,radio,radio,radio,television,volvo,volvo,volvo,volvo
ryan chevrolet,saab
3 rows selected
What I want (hope for, rather) is this:
NAME THINGS
dave can
mike box,radio(4),television,volvo(4)
ryan chevrolet,saab
Can anyone give me some clues to help me get what I want?
Thanks!
Just aggregate your data:
with things as(
 select 'dave' name,'item' type,'can' thing from dual union all
 select 'mike','item','box' from dual union all
 select 'mike','consumer elec','television' from dual union all
 select 'mike','consumer elec','radio' from dual union all
 select 'mike','automobile','volvo' from dual union all
 select 'ryan','automobile','saab' from dual union all
 select 'ryan','automobile','chevrolet' from dual union all
 select 'mike','automobile','volvo' from dual union all
 select 'mike','automobile','volvo' from dual union all
 select 'mike','automobile','volvo' from dual union all
 select 'mike','consumer elec','radio' from dual union all
 select 'mike','consumer elec','radio' from dual union all
 select 'mike','consumer elec','radio' from dual)
-- Test data
select name,
       ltrim(max(sys_connect_by_path(thing||decode(cnt,1,null,'('||cnt||')'),','))
         keep(dense_rank last order by curr),',') as things
  from (select name,
               thing,
               count(*) cnt,
               row_number() over (partition by name order by thing) as curr,
               row_number() over (partition by name order by thing) -1 as prev
          from things group by name,type,thing)
 group by name
connect by prev = prior curr
       and name = prior name
 start with curr = 1;
NAME THINGS
dave can
mike box,radio(4),television,volvo(4)
ryan chevrolet,saab
Best regards
Maxim -
Dimension fact aggregation question
Hi,
I am new to Oracle OLAP and I noticed something in this tool: it doesn't seem to aggregate the right way.
For example, if there is a customer A in the Customer dimension and customer A has 5 records in the fact table with amounts 10, 20, 30, 40, 50, then when I build the cube, the cube doesn't aggregate the amounts for customer A. It picks one of the amounts. I have to actually aggregate in the view and pass that to the cube.
Is this how it is in Oracle OLAP, or am I missing something? Help me with this basic fact/dimension design.
Thanks in advance.
Oracle OLAP assumes that you're loading in source data at the same dimensionality as what the cube is designed at. In your case below, that isn't true. Your cube is dimensioned only by customer, which means each customer should have one and only one record.
Two ways to resolve this - either add another dimension to the cube that lets you break out the 5 records (maybe a time dimension?), or create a view on the fact table that summarizes the data to the lowest level of your cube.
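The second option (summarizing the fact data to the cube's grain before loading) can be sketched as follows; the class name and data are illustrative, using the 10-50 amounts from the question:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FactToCubeGrain {

    // Collapses fact rows to one row per customer -- the grain the cube is
    // dimensioned at -- mirroring the "view on the fact table" option above.
    public static Map<String, Integer> summarize(List<Map.Entry<String, Integer>> factRows) {
        Map<String, Integer> byCustomer = new HashMap<>();
        for (Map.Entry<String, Integer> row : factRows) {
            byCustomer.merge(row.getKey(), row.getValue(), Integer::sum);
        }
        return byCustomer;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> rows = List.of(
            Map.entry("A", 10), Map.entry("A", 20), Map.entry("A", 30),
            Map.entry("A", 40), Map.entry("A", 50));
        System.out.println(summarize(rows)); // {A=150}
    }
}
```

In SQL the same thing is a `SELECT customer, SUM(amount) ... GROUP BY customer` view over the fact table, which then feeds the cube one unambiguous value per customer instead of five candidate rows.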
Hope this helps,
Scott -
Compression - Aggregation Question
Hi,
We have a basic cube whose data is not yet compressed, and no aggregates exist yet. We will be building the aggregates in the future, but we do not have the aggregate details yet.
When we create an aggregate, the aggregate uses the REQID as a pointer. So, is it possible to compress the cube first and build aggregates on it later? Or can we compress a request only after an aggregate is created?
Example:
Let's say, as of today, I have the following requests in the cube:
5
4
3
2
1
No aggregates exist as of now. I decide to compress request IDs 5, 4, 3, 2, 1. New requests 7 and 6 are posted to the cube. Now, can I create an aggregate and still be able to get the compressed data into the aggregate?
please clarify.
Thanks
John
Roberto,
Thanks for the reply.
One more clarification: once I compress and then create aggregates for the first time, the subsequent requests MUST BE rolled up first before compression. They cannot be rolled up if they are compressed. Am I correct?
Another clarification.
I have a query built on multiprovider MP1, for example, and this multiprovider accesses basic cubes BC1 and BC2. Now, can I create aggregates on the basic cubes based on the query defined on the multiprovider? When I tried to generate proposals for aggregates on BC1, the system did not generate any, although I have executed the query a few times.
So, is it possible to generate aggregates if the query was defined on a multiprovider? Is there any way around this?
Thanks
John -
More than one fact tables...
Hi.
I have tried OLAP until now with only one fact table.
But now I have more than one. To start, I added one more.
I am always using SOLVED LEVEL...LOWEST LEVEL.
I always receive the following error when creating the cube with these measures:
"exact fetch returns more than requested number of rows"
What shall I look for when dealing with more than one fact table?
Thanks.
ODDS
:: ... and still have a very poor performance ...
1.
Well ... I saw the global star schema and we have two fact tables there!!!
Do I have to build different cubes for each fact table always?
2.
I have built cubes, created a java client and a jsp client.
Performance is much better in JSP using the AppServer(sure!).
The power of the JSP client is more limited i presume.
I wonder if I can do things such setCellEditing for a crosstab in both.
3.
Some aggregation questions:
Every time I create a cube using CWM2, and also an AW using the AWM wizards with that cube, I get one aggregation plan by default that processes everything online.
After that I create and deploy my own aggregation plan.
My question is: what if I don't want to aggregate anything?! I want to see, for instance, in BI Beans only the lowest-level values, with everything at the top levels empty.
I must be missing something, because I still have everything aggregated!!!
Thanks.
ODDS