Aggregate inheritance

Is there any documentation or tutorial on doing inheritance with Aggregates? I have seen at http://download.oracle.com/docs/cd/B32110_01/web.1013/b28218/descun.htm that it states: "When configuring inheritance for a relational aggregate descriptor, all the descriptors in the inheritance tree must be aggregates. The descriptors for aggregate and non-aggregate classes cannot exist in the same inheritance tree." So I am guessing it can be done, but I do not see how to do it anywhere.

Thank you so much. For those who may need this in the future, here is the full solution. In the inherited aggregates, map the fields as they would normally be mapped. So, in this case, inside the TopLink Workbench, I would map extension as direct-to-field.
Then, inside the class that holds the aggregate, you need to add an "After Load" method.
public void afterLoadPerson(ClassDescriptor descriptor) {
    AggregateObjectMapping mapping = (AggregateObjectMapping) descriptor.getMappingForAttributeName("phoneNum");
    mapping.addFieldNameTranslation("XE.PERSON.EXTENSION", "extension->DIRECT");
}
This way, the direct-to-field mapping of the extension attribute in NumberAndExtension is mapped to the database field it should map to for the Person class.
Then, if some other class and table need to use the Number aggregate with inheritance, the same method would be followed, except instead of "XE.PERSON.EXTENSION", the other table and field name would be placed there, as in the sketch below.
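For illustration, here is a minimal sketch of what that second amendment might look like, assuming a hypothetical Company class stored in table XE.COMPANY; the class, table and column names are invented, and the package names assume TopLink 10.1.3:

// Hypothetical second owner of the NumberAndExtension aggregate. Company and
// XE.COMPANY.EXTENSION are made-up names for illustration only; the pattern is
// the same as afterLoadPerson above. Package names assume TopLink 10.1.3.
import oracle.toplink.descriptors.ClassDescriptor;
import oracle.toplink.mappings.AggregateObjectMapping;

public class CompanyAmendment {
    public static void afterLoadCompany(ClassDescriptor descriptor) {
        AggregateObjectMapping mapping =
            (AggregateObjectMapping) descriptor.getMappingForAttributeName("phoneNum");
        // Point the aggregate's "extension" direct-to-field mapping at Company's own column.
        mapping.addFieldNameTranslation("XE.COMPANY.EXTENSION", "extension->DIRECT");
    }
}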

Similar Messages

  • Optional 1:1 aggregate relationship with inheritance

    I have a class which has a 1:1 relationship via aggregation to another class. The 'aggregated' class also has inheritance.
    Unfortunately this is actually a 1:0,1 relationship. In the zero scenario the class indicator field is null for the aggregate object. I get the following error:
    Exception Description: Missing class indicator field from database row [DatabaseRow(
    Is there a way to map this relationship?
    thanks,
    craig

    Hi,
    Welcome to the forum!
    I don't know what's causing that problem, but here's a work-around:
    WITH got_rnk AS
    (
        SELECT  id
        ,       val
        ,       DENSE_RANK () OVER ( PARTITION BY  id
                                     ORDER BY      edate  DESC
                                   ) AS rnk
        FROM    xmlagg_bug
    )
    SELECT    XMLELEMENT ( "root"
                         , XMLAGG ( XMLELEMENT ( "sroot"
                                               , MAX (val)
                                               )
                                  )
                         ).getClobVal ()  AS res
    FROM      got_rnk
    WHERE     rnk = 1
    GROUP BY  id
    ;
    You shouldn't be using WM_CONCAT anyway. If you need a function like that, create your own STRAGG function instead. You can copy STRAGG from
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2196162600402
    STRAGG seems to do exactly the same thing as WM_CONCAT. I say it "seems" to, because WM_CONCAT is undocumented, so we don't know exactly what it does.

  • Inheritance / Aggregates / Class / Bean question

    Hi all, I'm wondering if anybody has tried/succeeded in the following:
    We have a User bean and a User class which we would like to map to one DB table with an aggregate. The user must be available un-aggregated in the Mapping Workbench for use by other classes which do not require the bean wrapper, so we created a TopLinkUser extending from User, for which we turn on aggregation. The bean then contains a ValueHolderInterface attribute, dataObject, to which we map TopLinkUser [using aggregation as said above].
    Potential issues:
    1) The plain User needs a PK mapping in the DB table, which then gets inherited by ToplinkUser, so the bean ends up with 2 writable PK mappings. The bean mapping must be writable to allow for sequencing.
    2) The User has a many to many Roles mapping which gets inherited by ToplinkUser and shows up on the list of fields that must be mapped in the aggregate usage inside the bean. Is it valid to point this back to the PK?
    Thanks in advance for any thoughts, tips, ideas, or someone who already knows it isn't possible.
    - Jesse

    1 - I'm not sure I follow the issue -- if you have inheritance set up on the descriptor, the mappings should not be duplicated in the subclass. You can override the superclass mapping by right-clicking on the descriptor and setting the visibility via "Map inherited attributes".
    2 - It shouldn't have to be mapped, if there is a little "up arrow" on the mapping, that means it's inherited and mapped in the superclass.
    At runtime TopLink checks a descriptor for a mapping, and if it can't find it, if there is inheritance, it goes to the "super descriptor".
    - Don
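    To make the last point above concrete, here is a minimal sketch of that lookup. It is not TopLink's actual runtime code, just an illustration: it asks a descriptor for a mapping and, failing that, walks up to the parent descriptor. The package names and the getInheritancePolicy().getParentDescriptor() call assume the TopLink 10.1.3 descriptor API.
    // Illustrative sketch only: search a descriptor, then its "super descriptor",
    // for a mapping by attribute name. Assumes the TopLink 10.1.3 descriptor API.
    import oracle.toplink.descriptors.ClassDescriptor;
    import oracle.toplink.mappings.DatabaseMapping;

    public class MappingLookup {
        public static DatabaseMapping findMapping(ClassDescriptor descriptor, String attributeName) {
            ClassDescriptor current = descriptor;
            while (current != null) {
                DatabaseMapping mapping = current.getMappingForAttributeName(attributeName);
                if (mapping != null) {
                    return mapping;   // defined (or overridden) on this descriptor
                }
                // Not found here: move up to the parent descriptor if inheritance is configured.
                current = current.hasInheritance()
                        ? current.getInheritancePolicy().getParentDescriptor()
                        : null;
            }
            return null;              // attribute is not mapped anywhere in the tree
        }
    }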

  • Indexes on cubes or aggregates on InfoObjects

    Hello,
    Please tell me if it is possible to put indexes on cubes; are they added automatically or is this something I put on them?
    I do not understand indexes: are they like aggregates?
    Need to find info that explains this.
    Thanks for the help.
    Newbie

    Indexes are quite different from aggregates.
    An aggregate is a slice of a cube that speeds up data retrieval when a query is executed on the cube. Basically it is a kind of snapshot of key figures and characteristics that is read for the initial query result.
    An index, in turn, reduces query response time. When an object is activated, the system automatically creates primary indexes; optionally, you can create additional indexes called secondary indexes. Before loading data it is advisable to delete the indexes and recreate them after the load.
    Indexes act like pointers for quickly getting at the data. "Delete" drops the indexes and "create" rebuilds them.
    We delete them before loading because otherwise the load has to look up and update the existing indexes for every record, which hurts load performance; deleting and recreating takes less time than updating the existing ones.
    One more thing to take care of: if you have more than 50 million records this is not good practice; instead, delete and recreate the indexes over the weekend when there are no users.

  • How can I see the data in the aggregates

    How can I see the data available in the aggregates?
    Jay

    Hi Jay,
    it's quite simple:
    go to the manage aggregates screen and copy the technical name of the aggregate. The E fact table of the aggregate is /BIC/Exxx, where xxx is the aggregate's technical name; for the F fact table use /BIC/Fxxx. Go to SE16, enter the table name, and there is your data.
    R

  • Aggregates, VLAN's, Jumbo-Frames and cluster interconnect opinions

    Hi All,
    I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
    What I have in mind as follows:
    2 x X4170 servers with 8 x NIC's in each.
    On each X4170 I was going to configure two aggregates with three NICs in each aggregate, as follows:
    igb0 device in aggr1
    igb1 device in aggr1
    igb2 device in aggr1
    igb3 stand-alone device for iSCSI network
    e1000g0 device in aggr2
    e1000g1 device in aggr2
    e1000g2 device in aggr2
    e1000g3 stand-alone device for iSCSI network
    Now, on top of these aggregates, I was planning on creating VLAN interfaces which will allow me to connect to our two "public" network segments and for the cluster heartbeat network.
    I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a failure if a NIC goes offline within the aggregate, but I could monitor that in a different manner.
    At this point, my questions are:
    [1] Are VLANs, on top of aggregates, supported within Solaris Cluster? I've not seen anything in the documentation to say that they are, or are not for that matter. I see that VLANs are supported, including support for cluster interconnects over VLANs.
    Now with the standalone interface I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all NIC ports, whereas I can enable it for a single NIC port in the e1000g.conf kernel driver. My questions are as follows:
    [2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and some that say it doesn't cause a problem.
    [3] If the underlying NICs, igb0-2 (aggr1) for example, have a 9k MTU enabled, I can force the MTU size (1500) for "normal" networks on the VLAN interfaces pointing to my "public" network and cluster interconnect VLAN. Does anyone have experience of this causing any issues?
    Thanks in advance for all comments/suggestions.

    For 1) the question is really "Do I need to enable jumbo frames if I don't want to use them (on either the public or the private network)?" - the answer is no.
    For 2) each cluster needs to have its own separate set of VLANs.
    Greets
    Thorsten

  • Aggregates on Non-cumulative InfoCubes, stock key figures, stock, stocks,

    Hi Gurus,
    Please let me know if anybody has created aggregates on non-cumulative cubes or key figures (i.e. 0IC_C03 Inventory Management).
    I am facing a performance problem when executing queries on 0IC_C03 (runtime dump).
    I have tried a lot to create aggregates using the proposal from the query and other options, but the queries are not using those aggregates.
    Can somebody tell me about any sample aggregates they are using on 0IC_C03?
    Or any tool to get better query performance on this cube?
    One more clarification requested: what is the marker movement ("move the marker pointer") for stock calculation? I have compressed only the two initial data load requests; should I compress all requests in the cube regularly?
    If so, is there any option to compress requests automatically after a successful load into the data target?
    We are using all three DataSources, 2LIS_03_BX, BF & UM, for this.
    Regards,
    Navin

    Hi,
    Compression definitely has more effect on query execution time for inventory (non-cumulative) cubes than for other, cumulative cubes.
    So do compression regularly, once you feel that deletion of a request is no longer needed.
    If the queries do not need the calendar-day characteristic and only need the month characteristic, use a snapshot InfoCube (the procedure is given in the How-To paper) and divert the month-wise (and higher time granularity, like quarter and year) queries to that cube.
    The percentage improvement in query execution time from aggregates is smaller for non-cumulative cubes than for normal (cumulative) cubes, but there is still an improvement in using aggregates.
    With rgds,
    Anil Kumar Sharma .P

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

    Hi BW Guru's,
    I have an unresolved issue and our team is still working on it.
    I have already posted several questions on this but am still not clear on how to reduce the time taken by the rollup of aggregates process.
    I have requested an OSS note and am searching myself, but still could not find one.
    Finally, I executed one of the cubes in RSRV with the database check
    "Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it again, but I still get warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
    I don't know how to move further on this. Can anyone tell me how to tackle this problem and increase the performance of the rollup of aggregates (PCA InfoCubes)?
    I create indexes and statistics regularly to improve performance; it works for a couple of days and then the performance of the rollup of aggregates gradually comes down again.
    Thanks and Regards,
    Venkat

    hi,
    Check in a SQL client the SQL created by BI and the query that you run directly against your physical layer.
    The difference between these two should be 2-3 seconds, otherwise you have a problem (those seconds are for scripts needed by BI).
    If you use "like" in your SQL then forget indexes.
    For more information about indexes check Google or your DBA.
    Last, I mentioned that a materialized view is not perfect, but it helps a lot, so why not try to split it into smaller ones, e.g.:
    logical dimensions
    year-half-day
    company-department
    fact
    quantity
    Instead of making one, make 3:
    year - department - quantity
    half - department - quantity
    day - department - quantity
    and add them as data sources and assign them the appropriate logical level at the business layer in the Administration Tool.
    Do you use the partitioning functionality?
    I hope I helped.
    http://greekoraclebi.blogspot.com/

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
                            A. activating an aggregate?
                            B. switching off/on an aggregate?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query OR is there anything else we need to include?
    5. Does database statistics in the 'MANAGE' tab of a cube only show statistics or does it do anything to improve the load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If the aggregate status is red and the aggregate is being filled, it is an initial fill of the aggregate; filling means loading the data from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding of an aggregate is to reload the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching off/on an aggregate?
    Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; this is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading data into the aggregate
    4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query OR is there anything else we need to include?
    Run the query in RSRT, do an SQL view of the query, check the characteristics that are used in the query, and then include those in your aggregate.
    5. Does database statistics in the 'MANAGE' tab of a cube only show statistics or does it do anything to improve the load/query performance on the cube?
    Updated statistics improve the execution plans on the database. Making sure that statistics are up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing statistics will improve query performance.

  • Back end activities for Activation & Deactivation of Aggregates

    Hi ,
    Could anybody help me understand the back-end activities performed at the time of activation and deactivation of aggregates?
    Is filling an aggregate the same as rollup?
    What is the difference between deactivation and deletion of an aggregate?
    Thanks.
    Santanu

    Hi Bose,
    Activation:
    In order to use an aggregate in the first place, it must be defined, activated and filled. When you activate it, the required tables are created in the database from the aggregate definition. Technically speaking, an aggregate is actually a separate BasicCube with its own fact table and dimension tables. Dimension tables that agree with the InfoCube are used together. Upon creation, every aggregate is given a six-digit number that starts with the figure 1. The table names that make up the logical object that is the aggregate are then derived in a similar manner as the table names of an InfoCube. For example, if the aggregate has the technical name 100001, the fact tables are called /BIC/E100001 and /BIC/F100001. Its dimensions, which are not the same as those in the InfoCube, have the table names /BIC/D100001P, /BIC/D100001T and so on.
    Rollup:
    New data packets / requests that are loaded into the InfoCube cannot be used at first for reporting if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called “roll-up”. In other words, data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or aggregates, until an aggregate roll-up takes place. During this process you can continue to report using the data that existed prior to the recent data load. The new data is only displayed by queries that are executed after a successful roll-up.
    Go for the below link for more information.
    http://sapbibw2010.blogspot.in/2010/10/aggregates.html
    Naresh

  • Aggregate build suggestions

    Hello Experts,
    We are trying to do performance tuning on a BW3.5 setup ... Currently, we are focusing on queries and looking at the feasibility of building aggregates for performance improvement. We find that most queries see data at the very granular level as follows:
    Rows
    Profit centre group -> Profit Centre-> Product hierarchy level 1
    Columns
    Actual, Planned, Difference.
    Now the proposal generated for the queries does not give profit centre group but only profit centre in the first dimension. Please note that profit centre group is a navigational attribute of profit centre (there is a message saying that profit centre cannot be in the aggregate because of the presence of profit centre group).
    Now my question is , in order for these queries to hit the aggregate, should I introduce both Profit centre group and Profit centre in the aggregate? In doing so, will I risk creating a bigger aggregate?
    I am not sure if I am making sense, but please feel free to ask questions and I will explain more if needed. My question, to be precise, is whether all the characteristics required to view the data in a query need to be present in the aggregate.
    Many thanks in advance for all your inputs..
    Regards,
    Solomon

    Hi Solomon,
    If your query has to hit the aggregate, then all the characteristics in the selection, filter, default values, rows and columns, those used in RKFs, and those used in exception aggregation should be present in the aggregate.
    You can execute the query in RSRT -> Execute + Debug -> with the "Display aggregate found" option.
    This will tell you exactly what are the characteristics those should be present so that your default view hits the aggregate.
    Needless to say, if you are planning to drilldown the report with any characteristic from the free characteristics, even that should be present in the aggregate.
    Now coming to your confusion about profit centre and profit centre group: since profit centre group is already a navigational attribute of profit centre, you need not (cannot) place it in the aggregate when profit centre is already present.
    However, if the query is executed with this nav attribute, it will certainly hit the aggregate ( Can be checked in RSRT ).
    Thanks,
    Krishnan

  • Key Figure Aggregate in Bex Query

    Hi Gurus
    I am using BI7.0; but 3.5x BEx tools
    I am loading 6 fields from a flat file.  I am loading data for tickets.  I have created an InfoObject that counts the number of tickets.  No problem.  I also have key figures to which I am assigning the same values for all characteristics: 10, 30 per ticket.
    The key figures are (Sum) with a Summation aggregation type.
    In my query, the 10, 30 values aggregate up based on the number of tickets (characteristic) that are available.
    Question:  How do I stop my key figures from aggregating up (summing) the values of 10, 30 based on the characteristic?  I want only 10, 30 to be present regardless of the number of tickets (a constant value applied to the key figure).
    Should I change my aggregation type?  If so, to what?  I see a number of options, such as Last Number, No aggregation, etc
    Thank you

    I found a solution to my requirement.

  • Key Figure Aggregate in Bex Query based on Characteristic

    Hi Gurus
    I am using BI7.0; but 3.5x BEx tools
    I am loading 6 fields from a flat file.  I am loading data for tickets.  I have created an InfoObject that counts the number of tickets.  No problem.  I also have key figures to which I am assigning the same values for all characteristics: 10, 30 per ticket.
    The key figures are (Sum) with a Summation aggregation type.
    In my query, the 10, 30 values aggregate up based on the number of tickets (characteristic) that are available.
    Question:  How do I stop my key figures from aggregating up (summing) the values of 10, 30 based on the characteristic?  I want only 10, 30 to be present regardless of the number of tickets (a constant value applied to the key figure).
    Should I change my aggregation type?  If so, to what?  I see a number of options, such as Last Number, No aggregation, etc. Or can I override this in my query properties?
    Thank you

    Hi Client
    I would like to know how to avoid aggregation of a key figure in Bex 3.5?
    Thanks
    GS

  • Key figure fixing in aggregate level partially locking

    Hi Guys,
    When fixing a cell in the planning book, we get the message "One or more cells could not be completely fixed".
    1. If a material has only one CVC in the MPOS, its quantity can be fixed correctly without any issues.
    2. If a material has more than one CVC combination and we try to fix the quantity of one CVC combination, it fixes only partially and we get the above message.
    3. It also does not allow us to fix the quantity at aggregate level.
    We are on SCM 7.0.
    Is there a precondition that fixing requires the material to have only one CVC combination?
    Why does it not allow fixing one CVC combination at detail level for a material that has multiple CVC combinations?
    Is aggregate-level key figure fixing not allowed?
    Please clarify.
    Thanks
    Saravanan V

    Hi,
    It is not mandatory to assign a standard key figure to be able to fix. However, the custom InfoObject you created must be of type APO KF and not BW KF.
    That said, Let us try and address your first problem.
    You can fix at an aggregate level. However, there are a few points to remember.
    Let us consider a couple of scenarios.
    1) Your selection ID shows a number of products. You select all the products in one go, load the data, and try to fix at this level. This is not possible.
    2) In your selection ID you have selected a product division. You load data for a single product division and try to fix at this level. This is the true aggregate level and fixing should be possible here.
    Hope this helps.
    Thanks
    Mani Suresh

  • Effect of Full Attribute Extract on Aggregate Attribute Change Run

    Because we extract data that does not trigger the change pointer on 0CUST_SALES we do a FULL extract of this data daily instead of a DELTA.  We recently created an aggregate which contains navigational attributes of 0CUST_SALES (namely Sales Office & Sales Group).
    My question is: does the system look to see whether any information has changed between the new and old values, or does it assume, since there is a new version for every entry, that 100% of the values have changed?
    The reason I'm asking is BW appears to be rebuilding this aggregate from scratch each day instead of changing the aggregate for the few entries that actually changed.

    I did a FULL extract for 0CUST_SALES in our QA system.  It transferred 28617 records.  When I look into the monitor > processing > data package 1 > update I see the following:
    3 data records in table /BI0/XCUST_SALES marked for deletion
    3 data records inserted in table /BI0/XCUST_SALES
    3 data records in table /BI0/PCUST_SALES marked for deletion
    3 data records inserted in table /BI0/PCUST_SALES
    Different database operations were executed
    When I run the attribute change I see the following messages:
    The Change Run is executing                                                                               
    Attribute 0CUST_SALES__ZZROUTE for basic characteristic 0CUST_SALES: 2 changes   
    Attribute 0CUST_SALES__0BILLTOPRTY for basic characteristic 0CUST_SALES: 2 changes
    Attribute 0CUST_SALES__0PAYER for basic characteristic 0CUST_SALES: 2 changes    
    Attribute 0CUST_SALES__ZZACGRP for basic characteristic 0CUST_SALES: 2 changes   
    0001 ZSD_C02    100016     Aggregates Will Be Adjusted by:Reconstruction
    0002 ZSD_C02    100019     Aggregates Will Be Adjusted by:Reconstruction
    After seeing this I revisited the documentation for the threshold value in RSCUSTV8.  Ours is BLANK!  So according to the documentation, when it is blank the aggregate is generally reconstructed.  Not what we want!
    I will change the setting to the suggested 20% to start and we'll work from there.
    Thanks for your help Oliver.
