Non-Level0 aggregation question
Hi There,
I have an Account dimension, for example: Income - Rental Income - GL accounts (level 0), where Rental Income is the sum of all its level-0 GL accounts. The data storage is "Store Data". Now I moved one GL account out of Rental Income, but the sum of Rental Income is still the same, which is incorrect. My question is: what is the way to fix this?
I tried a couple of things, for example restructuring, and clearing the data and then re-importing it into Essbase, but the issue is still there. I know that if I change Rental Income to Dynamic Calc the issue will be fixed, but I am wondering if there are other ways to do this without changing the data storage?
Thanks
Srinivas Bobbala wrote:
Donz,
Whenever you are making outline changes, do as below.
1) Take the Lev0 data export.
2) Clear the cube.
3) Make the modifications in the outline.
4) Reload the Lev0 data.
5) Do the rollup with CALC ALL. Then this kind of issue will not arise.
Note: If there are any deletions at Lev0, it is better to delete after loading the Lev0 data and mapping the data to other or new members as required.
Hi Srinivas,
As per your suggestion, whenever we need to change the outline we would have to clear all the data, but to my knowledge this issue does not require clearing the cube. We can modify the outline directly, and executing CALC ALL is enough.
Regards,
Prabhas..
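For what it's worth, the behavior both replies describe, a stored ("Store Data") parent keeping its old total until a rollup is run, can be illustrated with a toy model. This is plain Java, not Essbase code, and all names in it are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a "Store Data" parent: the parent's value is stored,
// so it only changes when a rollup (CALC ALL) is explicitly run.
// An outline change alone leaves the stored total stale.
public class StoredParentDemo {
    private final Map<String, List<String>> children = new HashMap<>();
    private final Map<String, Integer> values = new HashMap<>();

    void setChild(String parent, String child, int value) {
        children.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
        values.put(child, value);
    }

    void removeChild(String parent, String child) {
        children.get(parent).remove(child); // outline change only: no recalc
    }

    void recalc(String parent) { // the CALC ALL / rollup step
        int sum = 0;
        for (String c : children.get(parent)) sum += values.get(c);
        values.put(parent, sum);
    }

    int value(String member) { return values.get(member); }

    public static void main(String[] args) {
        StoredParentDemo cube = new StoredParentDemo();
        cube.setChild("RentalIncome", "GL1", 100);
        cube.setChild("RentalIncome", "GL2", 50);
        cube.recalc("RentalIncome");
        System.out.println(cube.value("RentalIncome")); // 150
        cube.removeChild("RentalIncome", "GL2");
        System.out.println(cube.value("RentalIncome")); // still 150: stored and stale
        cube.recalc("RentalIncome");
        System.out.println(cube.value("RentalIncome")); // 100 after the rollup
    }
}
```

This is also why making the parent Dynamic Calc "fixes" it: a dynamic parent has no stored value to go stale.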
Similar Messages
-
Agg tables with non-Sum aggregation type
Situation: We have a fact called Ending Cash Balance that is non-additive across time. Each month, it shows the cash on hand at the end of the month. So our aggregation method is "Last" for the Time dimension (i.e. the value for Q1 = March, value for year = Dec), and "Sum" for all Other dimensions.
Using the Aggregation Wizard, we've built an agg table that contains the correct values for Ending Cash Balance (abbrev. ECB), rolled up along all levels properly.
However, our Answers query will NOT use that agg table when querying ECB. In fact, our logical table contains ECB (non-additive along Time) and Capital Spending (additive along time, so the agg method is Sum). When we query Capital Spending by month, the query hits the agg table. When we query ECB by month, the query refuses to hit the agg table. The only difference between the two fact columns is the aggregation method along the Time dimension.
The agg table is perfectly good, but the query will not use it for the fact that has a non-Sum aggregation method. Any ideas?
Mark, OBIEE repositories from version 10.1.3.x allow a "Data is Dense" flag to be set in the aggregation tab of the measure (fact) in question. Please check whether this allows the last-along-time calculation to be answered from the aggregate table instead of the base-level fact table. Read up on this option in the help as well.
With this option turned on, I expect measure to get aggregated as follows:
Base level Fact: Day level
Aggregate Table: Month level (say)
Query at Month level => should fetch results from Agg table (sum along other dimensions as reqd)
Query at Quarter level => should fetch results from Agg table (sum along all other dimensions except time as reqd) and choose the last month of Quarter for showing value at Quarter level.
Also experiment with setting the aggregation method in Answers (not the rpd) to "Server (default)" if using a pivot. Sometimes the results are correct from the db to the rpd (mid tier), but the front-end aggregation spoils the report.
HTH
Shankar -
Non additive aggregation - custom defined aggregation possible in BW?
I have the following problem:
there is a key figure that is non-additive relative to one characteristic; e.g., we need the third minimum as the aggregation for a time characteristic (there are 250 values for that characteristic).
Is there a way to create user defined (exception) aggregation (like Var or mean value) by ABAP Coding?
Message was edited by: Michael Walesch
Does your database support analytic functions? Last and first are analytic functions. If your database does not support them, BI has to prepare selects with subqueries, and this could slow down the response time.
Begoña -
Exceptional aggregation on non-cumulative KF - Aggregation issue in the Query
Hi Gurus,
Can anyone tell me a solution for the below scenario. I am using BW 3.5 front end.
I have a non-cumulative KF coming from my Stock cube and a Pricing KF coming from my Pricing cube (both cubes are in a MultiProvider and my query is on top of it).
I want to multiply both KFs to get a WSL Value CKF, but my query is not at the material level, it is at the plant level.
So it is behaving like this, for example (remember my Qty is a non-cumulative KF):
Plant Material QTY PRC
P1 M1 10 50
P1 M2 0 25
P1 M3 5 20
My WSL value should be 600, but it is giving me 15 * 95 = 1425, which is way too high.
I have tried storing the QTY and PRC in two separate CKFs, setting the aggregation to "before aggregation", and then multiplying them, but it didn't work.
I also tried to use exception aggregation, but we don't have the "Total" option here that we have in the BI 7.0 front end.
So, any other ideas, guys? Any responses would be appreciated.
Thanks
Jay.
I don't think you can solve this issue at the query level.
This type of calculation should be done before aggregation, and that feature no longer exists in BI 7.0. Any kind of exception aggregation won't help here.
It should be done either through a virtual KF (see below) or using the stock snapshot approach.
The key figure QTY*PRC should be a virtual key figure. In that case you just need one cube (stock quantity) and pick up PRC at query run time.
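The multiply-before-aggregation point can be sketched in plain Java (illustrative only, not BW code; the numbers are taken from the example above):

```java
// Shows why QTY*PRC must be computed per record (before aggregation)
// rather than on totals that have already been aggregated to plant level.
public class BeforeAggregationDemo {
    static long valueBeforeAggregation(long[] qty, long[] prc) {
        long total = 0;
        for (int i = 0; i < qty.length; i++) {
            total += qty[i] * prc[i]; // multiply per material, then sum
        }
        return total;
    }

    static long valueAfterAggregation(long[] qty, long[] prc) {
        long qtySum = 0, prcSum = 0;
        for (long q : qty) qtySum += q;
        for (long p : prc) prcSum += p;
        return qtySum * prcSum; // what the query does at plant level
    }

    public static void main(String[] args) {
        long[] qty = {10, 0, 5};   // materials M1, M2, M3
        long[] prc = {50, 25, 20};
        System.out.println(valueBeforeAggregation(qty, prc)); // 600
        System.out.println(valueAfterAggregation(qty, prc));  // 15 * 95 = 1425
    }
}
```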
Plant Material QTY PRC
P1 M1 10 50
P1 M2 0 25
P1 M3 5 20 -
I have an instance where I need two sender file adapters to send XI two different files (the layouts are different). XI then needs to combine and map the messages from these two files into a single IDoc for each combined message. I know that in a multi-mapping I can do it either with a BPM or without, and when "aggregating" messages I know I can do it with a BPM. My question is: does anyone know of a blog where someone has accomplished this without the need for a BPM (and any good blogs covering how to aggregate with a BPM would be helpful as well)?
Thanks!
Shaun,
As you can see here, [Multi-Mappings|http://help.sap.com/saphelp_nw70/helpdata/en/21/6faf35c2d74295a3cb97f6f3ccf43c/content.htm], you cannot do the Message-Merge (n:1) mapping without a BPM.
Using the BPM, you can refer to the pattern
BpmPatternCollectMultiIf - [Collecting and Bundling Messages - Multiple Interfaces|http://help.sap.com/saphelp_nw70/helpdata/en/0e/56373f7853494fe10000000a114084/content.htm]
I suggest that you schedule your sender adapters at around the same time, so that the BPM doesn't have to wait too long to process them. If you cannot ensure that, then you can use [Event-Driven Message Processing|http://help.sap.com/saphelp_nw70/helpdata/en/7a/00143f011f4b2ee10000000a114084/content.htm] to wait for both files to be picked up before they are sent to the BPM.
praveen -
Hi there
Got a couple of questions about OWB 10g R2 aggregations.
#1 When I create a cube with aggregations, I cannot for the life of me determine how the aggregations are actually implemented.
Are they implemented by separate tables? materialised views?
So far, when I browse the schema, I can't see any extra database objects created for the purpose of providing aggregates.
#2 I have seen this problem posted by a number of people, but have not yet seen any answer on how to overcome it.
When I create a cube, for some measures I would like to "SUM", for others I would like to "AVERAGE" and for columns such as degenerate dimensions (i.e. Transaction_ID) I would like to have no aggregation at all.
Can anyone tell me how to achieve this using the OWB Cube object?
Hi
Answer to the second question:
In Design Center, double-click in the Project Explorer on the cube you want to examine. The Data Object Editor is then launched. To change the aggregation function of certain measures, select the Aggregation tab in the lower right corner. Then, in the Measures panel, select the measure whose aggregation function you want to change. You can now change the aggregation for that measure in the panel "Aggregation for measure xxx".
Regards
Peter -
SSAS aggregation question.
I have a SSAS cube dimension with hierarchies - Country, State, City, Street.
I have two measures (Price and Quantity) that need to aggregate independently up to the City level only. The product (multiplication) of these two measures at the City level should then be aggregated for the higher-level hierarchies (Country, State).
How do I do this in MDX?
Thanks,
J.
Hi,
Here is my question with example:
Dimension Hierarchy - Country, State, City, Street.
Country State City Street Qty
US CA City1 Street1 10
US CA City1 Street2 5
US FL City3 Street3 8
US FL City4 Street4 4
Country State Plan Price
US CA A $100
US CA C $70
US FL B $50
Calculated Measure at Country level = 100*10 + 100*5 + 70*10 + 70*5 + 50*8 + 50*4 = 3150
This is different from SUM(Price)*SUM(Qty), which would be 220*27 = 5940.
I want 3150 to be my answer at Country level.
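The intended roll-up, pairing every street-level quantity with every plan price that shares the same Country/State, can be sketched in plain Java (illustrative only, not MDX; method and variable names are invented):

```java
import java.util.HashMap;
import java.util.Map;

// Multiplies at the shared (Country, State) grain before summing:
// every quantity row pairs with every plan price in the same state,
// which is what the Cartesian JOIN approach produces.
public class LeafGrainProduct {
    // qty: state -> street-level quantities; price: state -> plan prices
    static long countryValue(Map<String, int[]> qty, Map<String, int[]> price) {
        long total = 0;
        for (String state : qty.keySet()) {
            for (int q : qty.get(state)) {
                for (int p : price.get(state)) {
                    total += (long) q * p; // multiply per pair, then sum
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, int[]> qty = new HashMap<>();
        qty.put("CA", new int[]{10, 5}); // Street1, Street2
        qty.put("FL", new int[]{8, 4});  // Street3, Street4
        Map<String, int[]> price = new HashMap<>();
        price.put("CA", new int[]{100, 70}); // plans A, C
        price.put("FL", new int[]{50});      // plan B
        System.out.println(countryValue(qty, price)); // 3150
    }
}
```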
I can JOIN the two tables using the Country and State fields to get a Cartesian product, and that could work. The issue is that the fact table becomes too big. -
http://bdn.borland.com/article/0,1410,31863,00.html
Please have a look at the class diagram. The relationship of Order and OrderDetail is aggregation, but how come the OrderDetail and Item relationship is not? They also contain a collection there. Why?
Not sure why they don't show any collections as class members. My guess is that this is an example, not a detailed specification; if you did it out in all its glory you'd include more than you see on this diagram.
I agree that there's no reason for OrderDetail not to use aggregation/composition with Item. My guess is that this is a teaching example, not a rigorous specification.
I'd take this link as an example, a teaching exercise and nothing more. It's good that you're questioning what you see, because it implies that you have enough insight to think about the problem for yourself. But if this is your first tour through UML, don't worry about it. Suck up those main ideas and start using them for yourself.
-
I have been looking through posts and tutorials but have not found the answer to a few questions about Aggregator.
Background
I have 7 projects each with quiz materials inside them I would like to aggregate.
If someone starts an aggregated project and leaves it before it finishes, can they resume or do they need to start over?
If each project has a quiz that needs to be passed before they can move on to the next project and reports back to the quiz results analyzer tool, will I have any problems with this?
Can someone move ahead in the TOC or can you prevent someone from clicking on TOC items until they have been viewed like you can in a normal project?
Would it be better to daisy chain the projects instead?
Thanks in advance
David
That's what Adobe's Multi-SCORM Packaging tool does. It came by default with Captivate 4, but thereafter only if you bought the entire E-Learning Suite 2.0 or 2.5.
With a Multi-SCO package you do not try to create a single TOC for all modules as you do with the Aggregator. The LMS is supposed to create the overall TOC to get to each module. Within the module, once it is launched, you can have the normal Captivate TOC. -
Java.nio selector non-blocking IO question
Hi,
I am designing an interactive server where multiple clients can log on and communicate with the server. I designed a protocol that the client/server use to talk to each other. My server runs a Selector to monitor a ServerSocket, accepting connections and reading continuously from clients.
Now my question is: since reads on a SocketChannel are non-blocking when using a selector, how can I be sure that my entire protocol message will be read each time the selector wakes up? For example, if a slow client sends me a 5 KB message in one write() call, can I be sure that I will be able to read the entire message in one non-blocking read() call as well? If not, then the design becomes much more complicated, as I would have to pipe each client's input into a handler thread that performs blocking I/O to read one protocol message at a time. If I do that, then I might as well not use select() at all.
I did some preliminary tests, and it seems that for my purpose (messages of size <= 50 KB), a read() call will always be able to read the entire message. But I can't find any documentation on this subject. My guess is that I cannot trust non-blocking I/O here, which would mean it does not fit my purpose.
Any help will be much appreciated.
Thanks,
Frank
You can't be sure a read() will read in all the data from a client in one call.
For example, say your message from the client to the server is of the following format. <start>message here<end>, where <start> indicates the start of a message and <end> the end of the message. In one read() call you might get "<start>message he". Your server would recognize this is partially correct but it needs the rest of the message. The server would store this and on the second read() you might get "re<end>" for the complete message.
The purpose of non-blocking I/O is so you don't have to wait around for the second part of the message; you can process other client messages while the first client finishes sending its message. This way other clients aren't waiting around while you (the server) sit and wait for client 1 to finish sending its data.
So basically there is no guarantee you will get a whole message intact. Your protocol will have to deal with partial messages: recognize them, store the partial message in a buffer, and on subsequent reads get the rest of the message.
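A minimal sketch of that buffering, in plain Java. The framing markers follow the example above; in a real server you would accumulate bytes from the channel's ByteBuffer rather than strings:

```java
import java.util.ArrayList;
import java.util.List;

// Delimiter-based framing on top of non-blocking reads: each call to
// feed() hands the accumulator whatever the last read() returned;
// complete messages are extracted, partial data is kept for next time.
public class MessageAccumulator {
    private final StringBuilder buffer = new StringBuilder();

    public List<String> feed(String chunk) {
        buffer.append(chunk);
        List<String> messages = new ArrayList<>();
        int start, end;
        while ((start = buffer.indexOf("<start>")) >= 0
                && (end = buffer.indexOf("<end>", start)) >= 0) {
            messages.add(buffer.substring(start + "<start>".length(), end));
            buffer.delete(0, end + "<end>".length()); // drop the consumed frame
        }
        return messages; // empty if only a partial message has arrived
    }

    public static void main(String[] args) {
        MessageAccumulator acc = new MessageAccumulator();
        System.out.println(acc.feed("<start>message he")); // []
        System.out.println(acc.feed("re<end>"));           // [message here]
    }
}
```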
Nick -
Using administrator and non administrator accounts - questions
I have been looking around re security for my iMac - newly updated to Snow Leopard. I am not very savvy re much of computer things. I just found a pdf entitled Mac OS X Security Configuration. It recommended having a standard nonadministrator account as well as an administrator account. When I first set up my iMac in Leopard coming from a pc, I had telephone support for my first three years and used it when I ran into some issues. During that time following directions from different support people I have ended up in Systems Preferences "Accounts" having 5 different accounts - one "Administrator", one "Login only" entitled "Guest Account", and three Standard [one entitiled with my name and the other two "TEST1" and "TEST2"]. When I am in the "accounts" window in Systems Preferences, my "Administrator" account is selected, but I cannot select any of the others.
I am thinking from what I read in the article that I should probably delete the three "Standard" accounts so I am left with the "Administrator" and "Guest" accounts, and then when my computer turns on, it will use my "Guest" account. Would you agree? Right now, when I want to get back in after my computer has gone to sleep, I have to enter my password. Would this not be required if I am in the "Guest" account?
Two questions:
(1) I don't know how to delete those accounts - if, in fact, I should.
(2) How and when will I use the two accounts that are left when the computer turns on?
1. You can delete the Test1 and Test2 accounts if you log into your Administrator account. Once in your Admin account, open System Preferences > Users & Groups and you will see and be able to delete the Test1 & Test2 accounts.
2. Leave your Guest account for, well, guest users. Do not use it in the normal course of events. When you log out of the Guest account, all the settings, caches, etc. are wiped, as are all files and folders that you may have saved in the Guest account home folder. The Guest account is truly designed only for temporary, guest use.
3. Leave your Administrator account for use only for installing programs, doing system administration, managing accounts, etc.
4. Use your named account as your regular account. It appears to already be a standard User account. The primary limitation is that you cannot install programs from a regular User account. This actually helps protect your Mac from viruses and other malware that would need to install software in order to corrupt your system.
5. You can turn off the need to enter a password when your computer sleeps in System Preferences > Security & Privacy > General. UNcheck the option called "Require password for sleep and screen saver." -
I have placed a series of photos in a collage in Photoshop. I've never had this problem before, but none of the pix are editable. It won't let me use the eraser on them. What have I done wrong?
No. I was asking if your image was 32 bit, and it's not.
I'm pretty confident your layers are smart objects (you can tell by the little mini-document icon in the bottom right of the layer icon, in the Layers panel). You cannot erase smart objects. You either need to rasterize them and then erase them, or create a layer mask for them and paint on that to erase them.
http://help.adobe.com/en_US/Photoshop/11.0/WSCCBCA4AB-7821-4986-BC03-4D1045EF2A57a.html
If you have type layers (with the "T" icon in the layers panel) in the doc, these also cannot be erased, and you'll have to do the same to them, either rasterize then erase, or create a mask and paint black on it to "erase" the layer. -
BEX: Aggregation question
Hi to all,
I think I need some help of a BEX expert.
I have an issue with a query. Following scenario:
In my cube I post the following values:
Org-unit Employee Position P E
00000001 00000001 00000001 0 1
00000001 00000002 00000001 0 1
00000001 00000000 00000001 2 0
00000001 00000003 00000002 0 1
00000001 00000004 00000002 0 1
00000001 00000000 00000002 1 0
00000001 00000005 00000003 0 1
00000001 00000000 00000003 2 0
P = Capacity of the Position
E = Capacity of the Employee
Now to the issue.
In the query I need to calculate the difference of E - P as well as to create new keyfigures for storing a negative result and storing a positive result. At the end my query should look like this:
Org-unit Position P E diff. neg. pos.
00000001 00000001 2 2 00000 0000 0000
00000001 00000002 1 2 -0001 -001 0000
00000001 00000003 2 1 00001 0000 0001
and if I remove the position similar
Org-unit P E diff. neg. pos.
00000001 5 5 00000 -001 0001
The difference should not be displayed, but the rest should. Does anybody have an idea for solving this issue? I am playing around with constant selection on Employee and/or Position, but the result is never as I expect it to be.
If you still have some problems understanding my issue feel free to ask.
Looking forward to your valuable and helpful hints.
regards
Siggi
Hi all,
well, what I did so far was create a calculated key figure E - P, plus the two key figures negative and positive, and the results are as expected as long as the Position and/or the Employee is displayed in the query. But if I take both out, the result with any other characteristic is not what I want to display. I want to get
this:
Org-unit P E diff. neg. pos.
00000001 5 5 00000 -001 0001
but I get this:
Org-unit P E diff. neg. pos.
00000001 5 5 00000 0000 0000
There is an issue with the aggregation level. The keyfigures diff., neg. and pos. should always show the results depending on the position/employee combination.
Hope it is clearer now.
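The aggregation Siggi is after can be sketched in plain Java (illustrative only, not BEx logic; it computes E - P at the Position grain and then totals the negative and positive parts separately, which is what an exception aggregation by Position would need to do):

```java
// diff is computed per position first; only then are the negative and
// positive parts summed, so they survive when Position is removed
// from the drilldown.
public class PosNegByPosition {
    static int[] aggregate(int[] p, int[] e) {
        int neg = 0, pos = 0, diff = 0;
        for (int i = 0; i < p.length; i++) {
            int d = e[i] - p[i]; // per-position difference E - P
            diff += d;
            if (d < 0) neg += d; else pos += d;
        }
        return new int[]{diff, neg, pos};
    }

    public static void main(String[] args) {
        int[] p = {2, 1, 2}; // position capacities for positions 1..3
        int[] e = {2, 2, 1}; // employee capacities for positions 1..3
        int[] r = aggregate(p, e);
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 0 -1 1
    }
}
```

Summing first and then splitting into neg/pos gives 0 and 0, which is exactly the wrong result shown above.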
Siggi -
3.6 Group Aggregation Question
I am trying to achieve SQL GROUP BY-like behavior with Coherence 3.6. I have achieved some success by using the following code.
public InvocableMap.EntryAggregator getAggregationCriteria() {
    BigDecimalSum agg1 = new BigDecimalSum("getTradeDateMVLocal");
    BigDecimalSum agg2 = new BigDecimalSum("getTradeDateCashLocal");
    BigDecimalSum agg3 = new BigDecimalSum("getCostLocal");
    BigDecimalSum agg4 = new BigDecimalSum("getInterestUnrealizedLocal");
    CompositeAggregator compAgg =
        CompositeAggregator.createInstance(new InvocableMap.EntryAggregator[]
            {agg1, agg2, agg3, agg4});
    ChainedExtractor cr1 = new ChainedExtractor("getKey.getAccountName");
    ChainedExtractor cr2 = new ChainedExtractor("getKey.getCurrency");
    ValueExtractor[] extractors = new ValueExtractor[2];
    extractors[0] = cr1;
    extractors[1] = cr2;
    MultiExtractor multiEx = new MultiExtractor(extractors);
    GroupAggregator gpa = GroupAggregator.createInstance(multiEx, compAgg);
    return gpa;
}
Once the GroupAggregator is constructed, I pass it to the NamedCache.aggregate method using the following wrapper method.
public LiteMap aggregate(NamedCache cache, Filter filter, InvocableMap.EntryAggregator aggregationCriteria) {
    LiteMap map = (LiteMap) cache.aggregate(filter, aggregationCriteria);
    return map;
}
The issue is that in a multi-node environment not all the data is aggregated.
For example, if I have a single node and I run my aggregation code just on that node, I get the expected number of grouped items. In a multi-node scenario it ends up with fewer items. Now, the columns that I am grouping by are part of my composite key for the cache. The implementation of my key class is as follows.
package com.sac.dream.model;

import com.sac.dream.core.model.GridEntityKey;
import com.sac.dream.util.externalization.ObjectReader;
import com.sac.dream.util.externalization.ObjectWriter;
import com.tangosol.net.cache.KeyAssociation;
import javax.persistence.Embeddable;
import javax.persistence.Transient;
import java.io.IOException;

/**
 * Created by IntelliJ IDEA.
 * User: ahmads
 * Date: Jul 28, 2010
 * Time: 1:54:45 PM
 */
@Embeddable
public class GenevaValuationKey extends GridEntityKey implements KeyAssociation {

    private static final long serialVersionUID = 1L;

    private String accountName;
    private String currency;
    private Long uid;

    public GenevaValuationKey(Long uid) {
        this.uid = uid;
    }

    public GenevaValuationKey() {
    }

    @Transient
    public Object getAssociatedKey() {
        int hash = 1;
        hash = hash * 31 + getAccountName().hashCode();
        hash = hash * 31 + getCurrency().hashCode();
        return hash;
    }

    public void setAssociatedKey(Object value) {
    }

    public Long getUid() {
        return uid;
    }

    public void setUid(Long uid) {
        this.uid = uid;
    }

    @Override
    public String toString() {
        return "GenevaValuationKey::uid:" + this.uid;
    }

    @Override
    public boolean equals(Object o) {
        //if (this == o) return true;
        //if (o == null || getClass() != o.getClass()) return false;
        GenevaValuationKey that = (GenevaValuationKey) o;
        // note: uid is a Long, so it must be compared with equals(), not ==
        return this.getAccountName().equals(that.getAccountName())
                && this.getCurrency().equals(that.getCurrency())
                && this.uid.equals(that.uid);
    }

    @Override
    public int hashCode() {
        int hash = 1;
        hash = hash * 31 + getAccountName().hashCode();
        hash = hash * 31 + getCurrency().hashCode();
        hash = hash * 31 + uid.hashCode();
        return hash;
    }

    @Override
    public int compareTo(GridEntityKey o) {
        return this.uid.compareTo(((GenevaValuationKey) o).getUid());
    }

    @Override
    public final void readObject(ObjectReader reader) throws IOException {
        try {
            this.setAccountName(reader.readString());
            this.setCurrency(reader.readString());
            this.uid = reader.readLong();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public final void writeObject(ObjectWriter writer) throws IOException {
        try {
            writer.writeString(this.getAccountName());
            writer.writeString(this.getCurrency());
            writer.writeLong(this.uid);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public String getAccountName() {
        return accountName;
    }

    public void setAccountName(String accountName) {
        this.accountName = accountName;
    }

    public String getCurrency() {
        return currency;
    }

    public void setCurrency(String currency) {
        this.currency = currency;
    }
}
I implemented the KeyAssociation assuming that I need to make sure that, for a certain group, all the rows within that group exist on the same node. There might be something wrong with that implementation.
Thanks
rehevkor5 wrote:
Yeah, apparently you're not supposed to call readRemainder or writeRemainder from within the PortableObject methods; too bad the documentation does not mention this.
Here is a better idea of what a subclass's PortableObject methods should look like:
@Override
public void readExternal(PofReader in) throws IOException {
    super.readExternal(in.createNestedPofReader(0));
    myObj = (MyType) in.readObject(1);
}
@Override
public void writeExternal(PofWriter out) throws IOException {
    super.writeExternal(out.createNestedPofWriter(0));
    out.writeObject(1, myObj);
}
Since you cannot read or write the remainder, the way you support PortableObjects that need to evolve is by implementing the Evolvable interface. Coherence will detect that your object is an instance of Evolvable and will handle reading/writing the remainder/futureData and the dataVersion for you.
Yep. Otherwise, if you had handled the remainder in a PortableObject, you would not be able to sensibly override the method that handled the remainder.
Best regards,
Robert -
Important non classpath related question
Simple enough question.
Danni or Kylie?
I'd like to apologise for this comment yesterday:
"if you don't need it tomorrow, then you probably don't have very much use for it today, this is NOT a fallacy."
I was feeling discombobulated with the program I was working on and had just deleted and eliminated the bug-ridden one rather than the near-completed one - HATE IT WHEN THAT HAPPENS.
PS: Kylie is an embarrassment and a disgrace to the entire female race
(and her bum is 10x better than mine ever was or ever will be)