Structure of aggregates

what is the structure of aggregates

Hi Jaya
Please find the links below:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1e553368-0601-0010-49ab-c429607f3eb3
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/e55aaca6-0301-0010-928e-af44060bda32
Hope it helps.
Thanks
SRS

Similar Messages

  • Fixing at aggregate level & unfixing at detailed level.

    Hi,
    Our requirement is as given below:
    Our demand planning process is a top-down process. Headquarters enters the demand at Territory, Channel and Product Group level. These values are fixed and are disaggregated to the further planning levels based on historical sales. The further planning levels are Sales Officer, Location and Product. A Territory Manager can change the demand within his targets across Sales Officers (i.e. he should be able to change the targets of Sales Officers without exceeding or reducing the Territory targets).
    Can anyone suggest how to fix the key figures at aggregate level without fixing them at detailed level? We tried the FIX_CALC macro, but could not resolve the issue.
    Thanks in advance.
    SP

    You need to create aggregates in your planning object structure: right-click on the POS and create the following aggregate:
    (1) all Territory characteristics
    You will then be able to fix at the top level and still be able to change values at the detailed level.
    Read the following, and let me know if you have any questions.
    Aggregate  
    Definition
    The aggregates in APO are not the same as those in the Business Information Warehouse (BW), but they have the same purpose: to speed up data access and therefore increase performance.  SAP recommends that you use aggregates in Demand Planning.
    These aggregates are sometimes referred to as persistent aggregates, in particular when talking about detail and aggregate levels in interactive planning.
    Use
    An APO aggregate contains a subset of the characteristics in the master planning object structure. The creation and use of aggregates is optional. The data is always saved on the lowest level of detail. If aggregates exist, the system saves the planning data on the defined aggregate levels as well as on the lowest level of detail. The data is saved twice, but consistently; that is, the sum of the details equals the aggregate value.
    SAP recommends that the aggregates you build in APO for planning data also exist in BW for actual data.
    If, in a planning book, you want to save fixed values at an aggregate level that is not the detail level of the master planning object structure, you must create aggregates for this level. Otherwise the data is only saved at detail level.
    You cannot form aggregates for navigational attributes.
    See also:
    Fixing Key Figure Values
    Aggregates are discussed in detail in OSS note 503363. SAP recommends that you read this note thoroughly before configuring your master planning object structures and aggregate hierarchy.
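    The fixing behaviour described above can be sketched as top-down disaggregation: the aggregate total stays fixed, and a change to one detail is absorbed by the other details in proportion to their current values. A minimal Python sketch (the function name and the proportional rule are illustrative assumptions, not APO's actual algorithm):

```python
def change_detail(details, index, new_value):
    """Change one detail value while keeping the fixed aggregate total.

    The remaining details absorb the difference in proportion to their
    current values, mimicking top-down disaggregation under a fixed total.
    """
    total = sum(details)
    delta = new_value - details[index]
    others_sum = total - details[index]
    if delta > others_sum:
        raise ValueError("change would exceed the fixed aggregate total")
    result = []
    for i, v in enumerate(details):
        if i == index:
            result.append(float(new_value))
        else:
            result.append(v - delta * v / others_sum)
    # The sum of the details still equals the fixed aggregate value.
    assert abs(sum(result) - total) < 1e-9
    return result
```

    For example, raising the first of three details from 100 to 150 lowers the other two proportionally, so a fixed territory total of 600 is unchanged.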

  • Please help me out with some fundamentals in BW

    Hello,
    Please guide me regarding the below mentioned questions.
    1. What is the key date in Query Designer?
    2. When do we perform an attribute change run? For example, is it that once the master data is loaded, we perform the attribute change run and then load the transactional data?
    3. What is the disadvantage of using aggregates?
    4. What is the full repair option?
    Please help me out with these questions.

    Hi,
    Repair full request:
    If you flag a request in full update mode as a repair request, it can be updated into all data targets, even if they already contain data from initialization runs or deltas for this DataSource / source system combination with overlapping selections.
    Consequently, a repair request can be loaded at any time without the system checking each ODS object for overlapping data or request sequencing. This is possible because you can also delete data selectively from an ODS object without such checks.
    Posting such requests can lead to duplicate data records in the data target.
    Hierarchy/attribute change run after loading master data:
    When hierarchies or attributes of characteristics change, the aggregate affected by the change can be adjusted manually or calculated automatically in process chains.
    Aggregates:
    Aggregates are materialized, pre-aggregated views on InfoCube fact table data. They are independent structures where summary data is stored within separate transparent InfoCubes. The purpose of aggregates is purely to accelerate the response time of queries by reducing the amount of data that must be read in the database for a given query navigation step. In the best case, the records presented in the report will exactly match the records that were read from the database.
    Aggregates can only be defined on basic InfoCubes for dimension characteristics, navigational attributes (time-dependent and time-independent) and on hierarchy levels (for time-dependent and time-independent hierarchy structures). Aggregates may not be created on ODS objects, MultiProviders or Remote Cubes.
    Queries may be automatically split up into several subqueries, e.g for individual restricted key figures (restricted key figures sales 2001 and sales 2002). Each subquery can use one aggregate; hence, one query can involve several aggregates.
    If an aggregate has less than 15 components, BW 3.x puts each component automatically into a separate dimension that will be marked as “line item” (except package and unit dimension); these aggregates are called flat aggregates. Hence, dimension tables are omitted and SID tables referenced directly.  Flat aggregates can be rolled up on the DB server (i.e., without loading data into the application server). This accelerates the roll up (hence the upload) process.
    Disadvantage : The more aggregates exist, the more time-consuming is the roll-up process and thus the data loading process; the change run is also affected.
    Hope this info helps.
    Thanks, Ramoji.
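    The idea of an aggregate as a materialized, pre-aggregated view can be sketched in a few lines: group the fact rows by a subset of the characteristics and sum the key figure, so queries on that subset read far fewer records. A toy Python model (characteristic and field names are illustrative):

```python
from collections import defaultdict

def build_aggregate(fact_rows, characteristics):
    """Materialize a pre-aggregated view: group fact rows by a subset of
    characteristics and sum the key figure, as a BW aggregate does."""
    agg = defaultdict(float)
    for row in fact_rows:
        key = tuple(row[c] for c in characteristics)
        agg[key] += row["amount"]
    return dict(agg)

fact = [
    {"material": "M1", "plant": "P1", "month": "2024-01", "amount": 10.0},
    {"material": "M1", "plant": "P2", "month": "2024-01", "amount": 20.0},
    {"material": "M2", "plant": "P1", "month": "2024-01", "amount": 5.0},
    {"material": "M1", "plant": "P1", "month": "2024-02", "amount": 7.0},
]

# Aggregating on material alone collapses 4 fact rows into 2 aggregate rows,
# which is the entire point: fewer records to read per navigation step.
by_material = build_aggregate(fact, ["material"])
```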

  • Urgent!!! Roll Ups?

    Hi Gurus,
    I have two questions:
    1. If the aggregates are active in the BI test server, how do I fill them without activating them again? Or do I need to transport the filled and active aggregates again from BI Dev, doing the same there with a new request?
    2. When does the sigma sign appear in front of a cube: when the aggregates are merely active, or when they are active and filled?
    Thanks,
    SDP

    Hi,
    By transport, we can move the structure of aggregates to the target system. The transport does not fill the data (and the data will differ between Dev, Quality, etc.), so you have to Activate and Fill in the target system as well.
    So if your transport source is the BI training system and you are moving the aggregate transport request to BI Development, please Activate and Fill the aggregate in BI Development once the request has been imported.
    You have options like:
    Activate and Fill: activates the aggregate structure and fills it with data.
    Switch off: deletes the data only; the aggregate structure remains.
    Delete: deletes both the aggregate structure and the data.
    In your case, I think your aggregate structure is saved but not yet activated and filled. Please go ahead and Activate and Fill; this will not create any issues. You can find it in the menu options.
    Hope this helps.
    Vinod

  • Usb interface not on aggregate structure list

    When I pull up my aggregate device box, the structure list shows only the built-in components. What do I do to get my USB interface on there?

    The problem lies with Tascam and their inability to write stable drivers for OS X, and isn't anything to do with Logic Pro.
    As you know, you are not the only person to have had this issue. The reality is that while the 1641 might work fine with Windows, when it comes to OS X Macs it's an entirely different story, depending on the version of OS X you are running.
    The one tip I have read that seemed to fix it for most people (though not all) is as follows:
    (Thanks to Paulypaulpaul for this)
    -uninstall your tascam driver 2.10
    -restart
    -install driver 1.03
    -restart
    -plug in your tascam
    -go to where you download your drivers (http://tascam.com/product/us-1641/downloads/), scroll down to the bottom where it says firmware, download firmware 1.02 (3rd from top under firmware) and install it. Your control panel should now read 1.03 for your drivers and 1.02 for your firmware.
    -install drivers 2.10
    -restart
    Follow each of the steps exactly.. including all restarts when indicated..
    ...and good luck...

  • Aggregate and structure

    I just want to know the difference between an aggregate and a structure. I know that aggregates are used for query performance and structures for reporting, but do both give the same view?

    Adding to Paolo's answer:
    Aggregates are used for query performance in the sense that if your query has the same selections as the aggregate, query performance increases dramatically. A structure is used in reporting in cases where you need a complex selection of characteristics and key figures.
    for structures
    http://help.sap.com/saphelp_nw04/helpdata/en/4d/e2bebb41da1d42917100471b364efa/content.htm
    for aggregates
    http://help.sap.com/saphelp_nw04/helpdata/en/7d/eb683cc5e8ca68e10000000a114084/frameset.htm

  • Error when structuring the index of aggregate 100165 for InfoCube 0TCT_C01

    Hi,
    I was getting the following error:
    Error when structuring the index of aggregate 100165 for InfoCube 0TCT_C01
    I used this link and followed all of its steps:
    1.Delete indexes.
    2.Load the InfoCube.
    3.Create Indexes.
    4.Roll-up of aggregates
    but I am still getting the same error (the roll-up of the aggregate is failing).
    If I repeat the step after the failure, it executes successfully.
    I need to monitor the process chain, and at the end of its run it fails (daily).
    Can anyone help me with this issue, i.e. why the chain is failing at the roll-up step?

    Go to transaction RSDDV with your cube and identify the technical ID of your aggregate (a number, 100045 for instance).
    In transaction SE11, go to table RSDDAGGRDIR and filter the selection with AGGRCUBE = 100045.
    Copy the AGGRUID (something like 3QL29Z7ZLO3BQZDSSLRU0MGOI).
    Then go to transaction SE37, function module RSDDK_AGGREGATES_FILL, and hit the single-test button:
    I_T_AGGREGATE: enter your AGGRUID
    I_T_INFOCUBE: enter the technical ID of your cube
    Execute. This will fill your aggregate.
    Hope this helps.
    Regards,
    Mahesh

  • Questions regarding aggregates on cubes

    Can someone please answer the following questions.
    1. How do I check whether someone is rebuilding aggregates on a cube?
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
                            A. activating an aggregate?
                            B. switching off/on an aggregate?
                            C. rebuilding an aggregate?
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and the free characteristics of that query, or is there anything else we need to include?
    5. Does the database statistics option in the 'MANAGE' tab of a cube only show statistics, or does it do anything to improve the load/query performance on the cube?
    Regards,
    Srinivas.

    1. How do I check whether someone is rebuilding aggregates on a cube?
    If the aggregate status is red and the aggregate is being filled, it is an initial fill, i.e. the data is being loaded from the cube into the aggregate in full.
    2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
    Rebuilding of an aggregate is to reload the data into the aggregate from the cube once again.
    3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
    A. activating an aggregate?
    This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
    B. switching off/on an aggregate?
    Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; it is done based on the requests that have not yet been rolled up into the aggregate.
    C. rebuilding an aggregate?
    Reloading the data into the aggregate from the cube.
    4. When a user complains that a query is running slow, do we build an aggregate based on the characteristics in the rows and the free characteristics of that query, or is there anything else we need to include?
    Run the query in RSRT, do an SQL view of the query, check the characteristics that are used in the query, and include the same in your aggregate.
    5. Does the database statistics option in the 'MANAGE' tab of a cube only show statistics, or does it do anything to improve the load/query performance on the cube?
    Updated statistics improve the execution plans on the database. Keeping statistics up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing statistics will improve query performance.
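    The roll-up versus rebuild distinction above can be sketched as incremental versus full reload: roll-up applies only the requests that have not yet been rolled up, while a rebuild refills the aggregate from the cube. A toy Python model (class and method names are illustrative, not SAP APIs):

```python
class ToyAggregate:
    """Toy model of aggregate roll-up vs. rebuild."""

    def __init__(self):
        self.total = 0.0
        self.rolled_up = set()  # request IDs already applied

    def rollup(self, requests):
        """Incremental: apply only requests not yet rolled up."""
        for req_id, amount in requests.items():
            if req_id not in self.rolled_up:
                self.total += amount
                self.rolled_up.add(req_id)

    def rebuild(self, requests):
        """Full refill: drop everything and reload from the cube."""
        self.total = sum(requests.values())
        self.rolled_up = set(requests)
```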

  • How to create suitable aggregates for queries on multiprovider ?

    Hi all,
    My goal is to reduce the DB time of a query and improve performance.
    I have queries on a MultiCube with 5 cubes under the MultiProvider, and I am having a performance issue with one of the cubes: it has a high selection/transfer ratio, and the same cube accounts for 94% of the DB time. All the BW and DB indexes and statistics are green, so I chose the path of aggregates. When I tried the aggregate proposal, it asked for a query time and date range; I gave the last 3 days and a query time of 150 seconds. It suggests a huge number of aggregates (around 150), and the number is not reduced much when I try the optimize functionality.
    The cube in question has nearly 9 million records and 4 years of data.
    1. Generally, how many aggregates do we need to create on a cube?
    2. How do I use 'propose from last navigation'? It is not creating any aggregates.
    3. Is there a way for the system to propose fewer aggregates?
    4. If nothing works, I want to cut the volume of the aggregates based on years or quarters. How do I do that? I created an aggregate with the time characteristic 0CALQUARTER and dragged in 0CALDAY and 0CALMONTH, then activated and filled it, but the query does not hit it when I make a monthly selection. I tried bringing in all the other dimensions except the line-item dimensions, with no success: the query does not hit the manual aggregates in RSRT. The selection on 0CALQUARTER is *.
    5. Should I change it to a fixed value and bring in the line items too, and create it that way?
    6. I wanted to try the 'propose aggregate from query' option, but my query is on a MultiProvider and I am not able to copy it to the cube. Please help me find suitable aggregates for a query on a MultiProvider.
    7. Should I create new indexes on the cube using the characteristics in the WHERE condition of the SELECT statement? In that case the SELECT statement changes with each drill-down; how do I handle that?
    8. How will I make sure the aggregates improve the runtime of all queries?
    9. Please suggest other approaches, if any, with procedures.
    This is an urgent problem; please help. Thanks in advance. Points will be assigned for inputs.

    1. Generally, how many aggregates do we need to create on a cube?
    It depends on your specific needs; you may need none or several.
    2. How do I use 'propose from last navigation'? It is not creating any aggregates.
    Can you elaborate?
    3. Is there a way for the system to propose fewer aggregates?
    In the menus of the aggregate maintenance screen there is an option for the system to propose aggregates for one specific query; I am not sure whether it works with MultiCubes.
    4. If nothing works, I want to cut the volume of the aggregates based on years or quarters. How do I do that?
    You should delete 0CALDAY from the aggregates in order to accumulate data at any larger time unit. Another time-related option is to look into partitioning the cube.
    5. Should I change it to a fixed value and bring in the line items too?
    Can you elaborate?
    6. I wanted to try the 'propose aggregate from query' option, but my query is on a MultiProvider and I am not able to copy it to the cube.
    Answered above; perhaps you can create a query only on the data of that cube that appears in the MultiCube query, in order to propose an aggregate on that cube.
    7. Should I create new indexes on the cube using the characteristics in the WHERE condition of the SELECT statement? The SELECT statement changes with each drill-down; how do I handle that?
    It is not advisable to create new indexes on multidimensional structures. Try to avoid selections on navigational attributes; if necessary, add the navigational attribute as a dimension attribute, and put filters in the filter section in BEx.
    8. How will I make sure the aggregates improve the runtime of all queries?
    Try transaction ST03.
    9. Please suggest other approaches, if any, with procedures.
    Some other approaches are covered in the answers above.
    Good luck
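    One reason a manually built aggregate is not hit (question 4 above) is that the OLAP processor can only use an aggregate that contains every characteristic the query navigation needs. A simplified containment check in Python (the function name is illustrative; the real check also considers fixed values and hierarchy levels):

```python
def aggregate_usable(query_chars, aggregate_chars):
    """An aggregate can answer a query only if every characteristic the
    query needs (rows, free characteristics, filters) is contained in
    the aggregate's characteristics."""
    return set(query_chars) <= set(aggregate_chars)
```

    For example, a monthly selection can hit an aggregate that includes 0CALMONTH, but not one built on 0CALQUARTER alone.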

  • How to delete aggregated data in a cube without deleting the Aggregates?

    Hi Experts,
    How can I delete aggregated data in a cube without deleting the aggregates?
    Regards
    Alok Kashyap

    Hi,
    You can deactivate the aggregate: the data is deleted, but the structure remains. The system deletes all the data and database tables of the aggregate, but not its definition; when you need the aggregate again later, you have to fill it from scratch.
    If instead you switch off the aggregate, it is not identified by the OLAP processor, and reports fetch their data directly from the cube. Switching off does not delete any data: the aggregate is temporarily unavailable for reporting, as if it were not built on the InfoCube, but data can still be rolled up into it. You can temporarily switch off an aggregate to check whether you really need it, since a switched-off aggregate is not used when a query is executed.
    Hope this helps.
    Thanks,
    JituK
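    The distinction can be condensed into a toy state model: switching off keeps the data but hides the aggregate from the OLAP processor, while deactivating drops the data and keeps only the definition. A Python sketch (names are illustrative, not SAP APIs):

```python
class AggregateState:
    """Toy model of an aggregate's life cycle."""

    def __init__(self, definition):
        self.definition = definition  # survives everything except Delete
        self.data = None
        self.active = False

    def activate_and_fill(self, cube_values):
        self.data = sum(cube_values)
        self.active = True

    def switch_off(self):
        self.active = False  # data survives; queries simply bypass it

    def deactivate(self):
        self.data = None     # data and tables are deleted
        self.active = False  # definition remains for a later refill
```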

  • How to select data from an aggregate in a customer exit for a query?

    Hi,
    I have written a virtual key figure customer exit for a query. Earlier the selection was from the cube, where there was a severe performance issue, so I created an aggregate, activated it, and loaded the data.
    Now, when I select that data, I find that the key table is different in development and production. How do I resolve this?
    My code is attached below. The table in development is KEY_100027 and in production it is KEY_100004. The code is activated and running on the BW development server.
    * Lines starting with * are the production variant (tables 100004);
    * the active lines use the development aggregate tables (100027).
    SELECT
    *   F~KEY_1000041 AS K____035
        F~KEY_1000271 AS K____035
        F~QUANT_B AS K____051
        F~VALUE_LC AS K____052
    INTO (xdoc_date, xval1, xqty1)
    UP TO 1 ROWS
    FROM
    */BIC/E100004 AS F JOIN
    /BIC/E100027 AS F JOIN
    /BIC/DZMM_CGRNU AS DU
    ON F~KEY_ZMM_CGRNU = DU~DIMID
    JOIN /BI0/SUNIT AS S1
    ON DU~SID_0BASE_UOM = S1~SID
    JOIN /BI0/SCURRENCY AS S2
    ON DU~SID_0LOC_CURRCY = S2~SID
    JOIN /BI0/SMATERIAL AS S3
    *ON F~KEY_1000042 = S3~SID
    ON F~KEY_1000272 = S3~SID
    JOIN /BI0/SMOVETYPE AS S4
    *ON F~KEY_1000043 = S4~SID
    ON F~KEY_1000273 = S4~SID
    JOIN /BI0/SPLANT AS S5
    *ON F~KEY_1000044 = S5~SID
    ON F~KEY_1000274 = S5~SID
    JOIN /BIC/D100004P AS DP
    *ON F~KEY_100004P = DP~DIMID
    ON F~KEY_100027P = DP~DIMID
    WHERE
    *( ( ( ( F~KEY_1000041 BETWEEN 20051230 AND 20060630 ) ) AND ( (
    ( ( ( ( F~KEY_1000271 BETWEEN 20051230 AND 20060630 ) ) AND ( (
             S3~MATERIAL = <l_0material> ) ) AND ( (
             S4~MOVETYPE BETWEEN '101' AND '102' OR
             S4~MOVETYPE BETWEEN '921' AND '922' OR
             S4~MOVETYPE BETWEEN '105' AND '106' OR
             S4~MOVETYPE BETWEEN '701' AND '701' OR
             S4~MOVETYPE BETWEEN '632' AND '632' ) ) AND ( (
             S5~PLANT = <l_0plant> ) ) AND ( (
             DP~SID_0RECORDTP = 0 ) ) ) )
        ORDER BY F~KEY_1000271 DESCENDING.
          IF sy-subrc NE 0.
            EXIT.
          ENDIF.
        ENDSELECT.
    How do I transport the code and make it work?
    What is the reason that the two key fields are different?
    I had transported the aggregate from development to production, then activated it and filled the data.
    What is the way out? Please help.
    Regards,
    Annie.

    Hi Sonu,
    The main task is to move the contents of one internal table to another with a condition.
    First, sort the first internal table and delete the duplicate entries, like this:
    SORT it_tab BY material ASCENDING date_modified DESCENDING.
    DELETE ADJACENT DUPLICATES FROM it_tab COMPARING material.
    (COMPARING material keeps only the most recent row per material; without it, only rows identical on the full line would be removed.)
    Then move the contents of that internal table to another one. Define a second internal table with the same structure as the first, and then:
    it_itab1 = it_itab.
    If you are using a separate header line and body, do it like this:
    it_itab1[] = it_itab[].
    This will fix the issue.
    Please let me know if you need any further explanation.
    Regards,
    Kittu
    Edited by: Kittu on Apr 24, 2009 12:21 PM
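    The SORT / DELETE ADJACENT DUPLICATES idea can be sketched in Python: order by material ascending and modification date descending, then keep only the first row per material (field names are illustrative):

```python
def latest_per_material(rows):
    """Keep the most recently modified row per material, mirroring
    SORT ... BY material ASCENDING date_modified DESCENDING followed by
    DELETE ADJACENT DUPLICATES ... COMPARING material."""
    # Two stable sorts: secondary key first, primary key last.
    ordered = sorted(rows, key=lambda r: r["date_modified"], reverse=True)
    ordered.sort(key=lambda r: r["material"])
    result = []
    for row in ordered:
        # Adjacent-duplicate deletion: keep the first row of each material.
        if not result or result[-1]["material"] != row["material"]:
            result.append(row)
    return result
```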

  • Trying to create a Histogram type/object for aggregate functions

    Hi,
    I am trying to create an aggregate function that will return a histogram
    type.
    It doesn't have to be an object that is returned; I don't mind returning a string, but I would like to keep the associative array (or something else indexed by VARCHAR2) as a static variable between iterations.
    I started out with the SecondMax example in
    http://www.csis.gvsu.edu/GeneralInfo/Oracle/appdev.920/a96595/dci11agg.htm#1004821
    But it seems that even a simpler aggregate function, like the strCat below (which works), has problems, because I get multiple permutations for every combination. The natural way to solve this would be to create an associative array as a static variable as part of the Histogram type (see the code below). However, Oracle refuses to accept associative arrays in this context (PLS-00355: use of pl/sql table not allowed in this context).
    If there is no easy way to do the histogram quickly, can we at least get something like strCat to work in a specific order with a "partition by ... order by" clause? It seems that even with "PARALLEL_ENABLE" commented out, strCat still calls merge for function calls like:
    select hr,qtr, count(tzrwy) rwys,
    noam.strCat(cnt) rwycnt,
    noam.strCat(tzrwy) config,
    sum(cnt) cnt, min(minscore) minscore, max(maxscore) maxscore from
    ordrwys group by hr,qtr
    Not only does this create duplicate entries in the query result, like "A,B,C" and "A,C,B", but the order in rwycnt and config is not always the same, so a user cannot match the results based on their order.
    The difference between my functions and functions like SUM, or the secondMax demonstrated in the documentation, is that secondMax does not care about the order in which it gets its arguments and does not need to maintain an ordered set to return correct results. A good example of a built-in Oracle function that does care about all its arguments, and probably has to maintain a data structure similar to the one I want, is the PERCENTILE_DISC function. If you can find the code for that function (or something like it) and forward a reference to me, that in itself would be very helpful.
    Thanks,
    K.Dingle
    CREATE OR REPLACE type Histogram as object
    -- TYPE Hist10 IS TABLE OF pls_integer INDEX BY varchar2(10),
    -- retval hist10;
    -- retval number,
    retval noam.const.hist10,
    static function ODCIAggregateInitialize (sctx IN OUT Histogram)
    return number,
    member function ODCIAggregateIterate (self IN OUT Histogram,
    value IN varchar2) return number,
    member function ODCIAggregateTerminate (self IN Histogram,
    returnValue OUT varchar2,
    flags IN number) return number,
    member function ODCIAggregateMerge (self IN OUT Histogram,
    ctx2 IN Histogram) return number
    );
    CREATE OR REPLACE type body Histogram is
    static function ODCIAggregateInitialize(sctx IN OUT Histogram) return
    number is
    begin
    sctx := const.Hist10();
    return ODCIConst.Success;
    end;
    member function ODCIAggregateIterate(self IN OUT Histogram, value IN
    varchar2)
    return number is
    begin
    if self.retval.EXISTS(value)
    then self.retval(value):=self.retval(value)+1;
    else self.retval(value):=1;
    end if;
    return ODCIConst.Success;
    end;
    member function ODCIAggregateTerminate(self IN Histogram,
    returnValue OUT varchar2,
    flags IN number)
    return number is
    begin
    returnValue := self.retval;
    return ODCIConst.Success;
    end;
    member function ODCIAggregateMerge(self IN OUT Histogram,
    ctx2 IN Histogram) return number is
    i varchar2(10);
    begin
    i := ctx2.retval.FIRST; -- get subscript of first element
    WHILE i IS NOT NULL LOOP
    if self.retval.EXISTS(i)
    then self.retval(i) := self.retval(i) + ctx2.retval(i);
    else self.retval(i) := ctx2.retval(i);
    end if;
    i := ctx2.retval.NEXT(i); -- get subscript of next element
    END LOOP;
    return ODCIConst.Success;
    end;
    end;
    CREATE OR REPLACE type stringCat as object
    retval varchar2(16383), -- concatenation of all values seen so far
    static function ODCIAggregateInitialize (sctx IN OUT stringCat)
    return number,
    member function ODCIAggregateIterate (self IN OUT stringCat,
    value IN varchar2) return number,
    member function ODCIAggregateTerminate (self IN stringCat,
    returnValue OUT varchar2,
    flags IN number) return number,
    member function ODCIAggregateMerge (self IN OUT stringCat,
    ctx2 IN stringCat) return number
    );
    CREATE OR REPLACE type body stringCat is
    static function ODCIAggregateInitialize(sctx IN OUT stringCat) return
    number is
    begin
    sctx := stringCat('');
    return ODCIConst.Success;
    end;
    member function ODCIAggregateIterate(self IN OUT stringCat, value IN
    varchar2)
    return number is
    begin
    if self.retval is null
    then self.retval:=value;
    else self.retval:=self.retval || ',' || value;
    end if;
    return ODCIConst.Success;
    end;
    member function ODCIAggregateTerminate(self IN stringCat,
    returnValue OUT varchar2,
    flags IN number)
    return number is
    begin
    returnValue := self.retval;
    return ODCIConst.Success;
    end;
    member function ODCIAggregateMerge(self IN OUT stringCat,
    ctx2 IN stringCat) return number is
    begin
    self.retval := self.retval || ctx2.retval;
    return ODCIConst.Success;
    end;
    end;
    CREATE OR REPLACE FUNCTION StrCat (input varchar2) RETURN varchar2
    -- PARALLEL_ENABLE
    AGGREGATE USING StringCat;
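    For comparison, what the Histogram aggregate above is trying to do maps naturally onto a dictionary in a language that allows one as mutable instance state. A Python sketch of the iterate/merge/terminate contract (a model of the ODCI interface, not Oracle code):

```python
from collections import Counter

class HistogramAgg:
    """Dictionary-backed histogram aggregate: iterate counts each value,
    merge combines partial results, terminate renders a string."""

    def __init__(self):
        self.counts = Counter()

    def iterate(self, value):
        self.counts[value] += 1

    def merge(self, other):
        self.counts.update(other.counts)

    def terminate(self):
        # Sorted output is deterministic regardless of input or merge
        # order, which is exactly the property the strCat aggregate lacks.
        return ",".join(f"{k}:{v}" for k, v in sorted(self.counts.items()))
```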

    GraphicsConfiguration is an abstract class. You would need to subclass it. From the line of code you posted, it seems like you are going about things the wrong way. What are you trying to accomplish? Shouldn't this question be posted in the Swing or AWT forum?

  • Need help in transforming a customer structure to an analytical structure

    I have the following tables. I am interested in studying the manager attributes that predict the sales of product A. In addition, I want to study whether the client attributes have an impact on the sales of product A. Can someone give me an example of how to combine these three tables so that I can do my analysis?
    Sales Managers -> (1-n) Clients -> (1-n) Products
    Sales Managers Attributes
    Manager ID
    Region
    State
    Years of experience
    Age
    Clients Attributes
    Client Type (Technology / Finance, etc)
    Client Size (Large / Medium / Small)
    Product Attributes
    Products (A / B / C)
    Revenue
    Sale Date
    1. If I transform the data into the following structure, will this structure allow me to study the relationship between the attributes of sales managers/clients and the likelihood of selling product A?
    My concern is that each sales manager is repeated multiple times. For example, a sales manager may have clients in both Technology and Finance and clients of all sizes (Large/Medium/Small); that is six rows of data for each unique manager.
    Manager ID, Region, State, Years of Experience, Age, Client Type, Client Size, # Clients, Revenue, Sale Year, # Products Sold, Product A Sold (yes/no)
    123, East, MA, 5, 25, Technology, Large, 2, $10000, 2007, 1000, Yes
    123, East, MA, 5, 25, Technology, Medium, 5, $6000, 2007, 500, Yes
    123, East, MA, 5, 25, Technology, Small, 10, $4000, 2007, 200, No
    123, East, MA, 5, 25, Finance, Large, 1, $5000, 2007, 500, Yes
    123, East, MA, 5, 25, Finance, Medium, 1, $5000, 2007, 500, Yes
    123, East, MA, 5, 25, Finance, Small, 1, $5000, 2007, 500, No
    124, West, CA, 10, 35, Technology, Large, 2, $10000, 2007, 1000, No
    124, West, CA, 10, 35, Technology, Medium, 5, $6000, 2007, 500, Yes
    2. What algorithm should I use to study the input variables to predict the likelihood of selling product A?

    Hi,
    You could use Association Rules to come up with the probability of a sale given the sale of a specific item. ODM will handle transactional input of sales information for this purpose.
    As for data preparation, you should take a look at the Data Miner Workflow's Aggregation Node. This node allows you to generate "nested tables" of the types DM_NESTED_NUMERICAL and DM_NESTED_CATEGORICAL. Using this approach, you can create structures that capture a 1-to-many relationship (as in your case of sales managers to clients); however, you cannot use it to capture 1-to-many-to-many (as in your sales managers to clients to products). See the simple script below showing how nested data is created using the SH.SALES sample data.
    For demos on how to do either AR or aggregation, you can download SQL Developer 3.0 and use it against a database of 11.2.0.1 or higher. There are OBE (Oracle By Example, available on OTN) tutorials and Cue Card tutorials (shipped in SQL Developer).
    Thanks, Mark
    WITH
    /* START OF SQL FOR NODE: SALES */
    "N$10071" as (select /*+ inline */ "SALES"."PROMO_ID",
    "SALES"."TIME_ID",
    "SALES"."CHANNEL_ID",
    "SALES"."CUST_ID",
    "SALES"."PROD_ID",
    "SALES"."QUANTITY_SOLD",
    "SALES"."AMOUNT_SOLD"
    from "SH"."SALES" ),
    /* END OF SQL FOR NODE: SALES */
    /* START OF SQL FOR NODE: Aggregate */
    "N$10072" as (SELECT t0."CUST_ID",
    t0."QUANTITY_SOLD_COUNT",
    t1."AMOUNT_SOLD_COUNT",
    t2."QUANTITY_SOLD_SUM",
    t3."AMOUNT_SOLD_SUM",
    t4."AMOUNT_SOLD_SUM_1"
    FROM
    (SELECT "CUST_ID", COUNT("QUANTITY_SOLD") "QUANTITY_SOLD_COUNT"
         FROM "N$10071"
         GROUP BY "CUST_ID") t0,
    (SELECT "CUST_ID", COUNT("AMOUNT_SOLD") "AMOUNT_SOLD_COUNT"
         FROM "N$10071"
         GROUP BY "CUST_ID") t1,
    (SELECT "CUST_ID", SUM("QUANTITY_SOLD") "QUANTITY_SOLD_SUM"
         FROM "N$10071"
         GROUP BY "CUST_ID") t2,
    (SELECT "CUST_ID", SUM("AMOUNT_SOLD") "AMOUNT_SOLD_SUM"
         FROM "N$10071"
         GROUP BY "CUST_ID") t3,
    (SELECT "CUST_ID",
         CAST(COLLECT(DM_Nested_Numerical(ID5, "AMOUNT_SOLD_SUM_1")) AS DM_Nested_Numericals ) "AMOUNT_SOLD_SUM_1"
         FROM
              (SELECT "CUST_ID",
              ("PROD_ID") ID5,
              SUM("AMOUNT_SOLD") "AMOUNT_SOLD_SUM_1"
              FROM "N$10071"
              GROUP BY "CUST_ID", (("PROD_ID")) )
         GROUP BY "CUST_ID") t4
    WHERE t0."CUST_ID" = t1."CUST_ID"(+)
    AND t0."CUST_ID" = t2."CUST_ID"(+)
    AND t0."CUST_ID" = t3."CUST_ID"(+)
    AND t0."CUST_ID" = t4."CUST_ID"(+)
    /* END OF SQL FOR NODE: Aggregate */
    select * from "N$10072"
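    The shape of the generated SQL above can be hard to see through the subqueries; the following Python sketch (hypothetical data, used purely as an illustration) performs the same aggregation: flat per-customer totals (the t0..t3 subqueries) plus a nested per-product breakdown (the COLLECT(DM_Nested_Numerical(...)) branch, t4):

```python
from collections import defaultdict

# Hypothetical rows mirroring SH.SALES: (CUST_ID, PROD_ID, QUANTITY_SOLD, AMOUNT_SOLD).
sales = [
    (1, "P1", 2, 20.0),
    (1, "P2", 1, 15.0),
    (1, "P1", 3, 30.0),
    (2, "P2", 4, 60.0),
]

# Flat aggregates per customer (one row per customer) ...
flat = defaultdict(lambda: {"count": 0, "qty_sum": 0, "amt_sum": 0.0})
# ... and the nested per-product sums (one (PROD_ID, sum) pair per product, per customer).
nested = defaultdict(lambda: defaultdict(float))

for cust, prod, qty, amt in sales:
    flat[cust]["count"] += 1
    flat[cust]["qty_sum"] += qty
    flat[cust]["amt_sum"] += amt
    nested[cust][prod] += amt

print(dict(flat[1]))    # {'count': 3, 'qty_sum': 6, 'amt_sum': 65.0}
print(dict(nested[1]))  # {'P1': 50.0, 'P2': 15.0}
```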

  • Different way to aggregate Essbase: Cannot aggregate to parent level

    Hi,
    I used to use the following statement in BR to aggregate dimensions:
    FIX("Segment Allocation Base")
    FIX(@IDESCENDANTS("YearTotal"))
    @IDESCENDANTS("Total Company"); /* Total Company is a member of the sparse dimension Market Segments */
    ENDFIX
    ENDFIX
    The tree structure is like this:
    Market Segments (Never Share)
    |__ No Market Segments (Never Share)
    |__ Total Company (Never Share)
        |__ Total Market Segments (Dynamic Calc)
            |__ SC (Store) -> Level 0
            |__ SE (Store) -> Level 0
    1. However, I found that the parent level cannot be aggregated. After I enter a value for Total Company (I think I am creating a block for Total Company), the above BR works.
    2. If I change the BR to the following statement, it works. I don't need to create the block first.
    FIX("Segment Allocation Base")
    FIX(@IDESCENDANTS("YearTotal"))
    AGG("Market Segments");
    ENDFIX
    ENDFIX
    Does any guru know the difference between the above 2 ways to aggregate? What's wrong with the first one?
    Thank you in advance.
    Casper

    Unless you wrote the first statement wrong, the reason the first one doesn't calculate is that it does absolutely nothing.
    @IDESCENDANTS is a member set declaration, not a calculation.
    Definition of @IDESCENDANTS: returns the specified member and either (1) all descendants of the specified member or (2) all descendants down to a specified generation or level. You can use this member set function as a parameter of another function, where that parameter is a list of members.
    AGG is a calculation.
    So unless I'm reading it wrong, you declare the descendants of Total Company and then do absolutely nothing with them. Therefore it has nothing to do with dense/sparse or with blocks existing or not.
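    What AGG actually does (and @IDESCENDANTS alone does not) can be sketched in Python against the outline from the question; this is an illustrative model of the roll-up only, not Essbase internals, and the stored values are hypothetical:

```python
# Hypothetical sketch of AGG("Market Segments"): roll level-0 values up the
# outline, parent = sum of children. @IDESCENDANTS, by contrast, only *lists*
# members -- listing alone computes nothing.
tree = {
    "Market Segments": ["No Market Segments", "Total Company"],
    "Total Company": ["Total Market Segments"],
    "Total Market Segments": ["SC", "SE"],
}
values = {"SC": 100.0, "SE": 40.0, "No Market Segments": 0.0}

def agg(member):
    if member not in tree:                 # level 0: use the stored value
        return values.get(member, 0.0)
    values[member] = sum(agg(c) for c in tree[member])
    return values[member]

agg("Market Segments")
print(values["Total Company"])  # 140.0
```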

  • Problem in displaying sub-totals using a structure displayed as hierarchy

    In a report, I'd like to display a characteristic and a structure in rows, all displayed as a two-level hierarchy.
    The results on the structure rows don't aggregate onto the characteristic rows, so I get no result on the first level of the hierarchy.

    Hi,
       Write it as:
       loop at <internal_table> ....
          V_TBL2 = V_TBL2 + SBL1.
          V_PTBL2 = V_PTBL2 + PSBL1.
          if....
          ELSEIF ( PSCH EQ 'N' OR PSCH EQ 'O' OR PSCH EQ 'P' ).
           AT END OF SG.
              TBL2  = V_TBL2 .
              PTBL2 = V_PTBL2 .
              clear : V_TBL2 , V_PTBL2.
           ENDAT.
       ENDIF.
      ENDLOOP.
    Regards,
    Srini.
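    The control-break logic in the snippet above (accumulate per row, then emit and clear the totals AT END OF SG) corresponds to this Python sketch with hypothetical data:

```python
from itertools import groupby

# Hypothetical rows: (sg, sbl1, psbl1) -- mirrors accumulating V_TBL2/V_PTBL2
# row by row and emitting the group totals AT END OF SG.
rows = [
    ("SG1", 10, 1),
    ("SG1", 20, 2),
    ("SG2", 5, 3),
]

totals = {}
for sg, group in groupby(rows, key=lambda r: r[0]):  # rows must be sorted by sg
    group = list(group)
    totals[sg] = (sum(r[1] for r in group), sum(r[2] for r in group))

print(totals)  # {'SG1': (30, 3), 'SG2': (5, 3)}
```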
