Largest Aggregates

Hi BW Experts,
I need your help with this:
"Large aggregates need high runtime for maintenance like change runs and rollup of new data."
Can anyone advise how to split these large aggregates?

Hi Lakshmi,
You have to keep an eye on your aggregates all the time: whether they are being used by queries or not, how much time the rollup and change run take, and how effective they are in improving query performance. If you include too many characteristics in an aggregate, the number of records will be almost equal to that of your cube, which is a bad design. Aim for aggregates whose records are well summarized relative to the cube; the summarization ratio gives you a good view of how effective an aggregate is. Look into all these points when designing your aggregates; a quick way to check the summarization ratio is sketched below.
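As a rough way to check that summarization on an Oracle-based system, here is a minimal SQL sketch; both fact-table names are hypothetical stand-ins for your cube and its aggregate:
-- Ratio of cube records to aggregate records (names hypothetical).
-- A ratio near 1 means the aggregate barely summarizes; a common rule of thumb is 10 or more.
SELECT (SELECT COUNT(*) FROM "/BIC/FSALES") /
       (SELECT COUNT(*) FROM "/BIC/F100001") AS summarization_ratio
FROM dual;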
Sriram

Similar Messages

  • OBIEE bypasses smaller aggregate table and queries largest aggregate table

    Hello,
    Currently we are experiencing something strange regarding queries that are generated.
    Scenario:
    We have 1 detail table and 3 aggregate tables in the RPD. For this scenario I will only refer to 2 of the Aggregates.
    Aggregate 1 (1 million rows):
    Contains data - Division, Sales Rep, Month, Sales
Aggregate 2 (13 million rows):
    Contains data - Division, Product, Month, Sales
    Both tables are set at the appropriate dimension levels in the Business Model. Row counts have been updated in the physical layer in the RPD.
When we create an Answers query that contains Division, Month and Sales, one would think that OBIEE would query the smaller and faster of the two tables. However, OBIEE queries the table with 13 million records, completely bypassing the smaller table. If we make the larger aggregate inactive, then OBIEE queries the smaller table. We can't figure out why OBIEE goes straight to the larger table.
    Has anyone experienced something such as this? Any help would be greatly appreciated.

Have you tried changing the sort order of the logical table sources in your logical table?
(see http://gerardnico.com/wiki/_media/temp/obiee_logical_table_sources_sort.jpg)
Set Aggregate 1 first.
    Cheers
    Nico

  • What is "Source ID" in Netflow V9 Packet Header

    Hi,
My question is regarding the "Source ID" field that appears in the NetFlow v9 packet header. The following Cisco link (http://www.cisco.com/en/US/technologies/tk648/tk362/technologies_white_paper09186a00800a3db9.pdf) gives the Source ID definition as:
    "The Source ID field is a 32-bit value that is used to guarantee uniqueness for all flows exported from a particular device. (The Source ID field is the equivalent of the engine type and engine ID fields found in the NetFlow Version 5 and Version 8 headers). The format of this field is vendor specific. In the Cisco implementation, the first two bytes are reserved for future expansion, and will always be zero. Byte 3 provides uniqueness with respect to the routing engine on the exporting device. Byte 4 provides uniqueness with respect to the particular line card or Versatile Interface Processor on the exporting device."
    I am using "Source ID" (combined with template id) to uniquely identify options templates exported by different routers. At our new lab setup where we have more than one routers configured to export Netflow, I observed that all the routers were exporting "Source ID" value as "0"(zero). It failed my assumption that I had formed based on definition from above Cisco doc.
I assumed:
SourceID    Template ID    Unique Key
source1     256            source1-256
source1     257            source1-257
source2     256            source2-256
source3     258            source3-258
But I observed:
SourceID    Template ID    Unique Key
0           256            0-256
0           257            0-257
0           256            0-256
0           258            0-258
Thus, the same template ID (256) from different routers (source1, source3) forms the same unique key and breaks my code.
I would like to know whether my interpretation that Source ID can be used to uniquely identify templates in this manner is correct.
Is "Source ID" a user-configurable attribute? How does it comply with the definition given in the Cisco doc above?
    Thanks,
    Deepak

    Deepak,
Consider these quotations from RFC 3954:
    Section 2: Terminology:
    Observation Point
    An Observation Point is a location in the network where IP packets
    can be observed; for example, one or a set of interfaces on a network
    device like a router. Every Observation Point is associated with an
    Observation Domain.
    Observation Domain
    The set of Observation Points that is the largest aggregatable set of
    flow information at the network device with NetFlow services enabled
    is termed an Observation Domain. For example, a router line card
    composed of several interfaces with each interface being an
    Observation Point.
    Section 7: Template Management:
    A NetFlow Collector that receives Export Packets from several
    Observation Domains from the same Exporter MUST be aware that the
    uniqueness of the Template ID is not guaranteed across Observation
    Domains.
    Section 9: The Collector Side:
    At any given time the Collector SHOULD maintain the following for all
    the current Template Records and Options Template Records: Exporter,
    Observation Domain, Template ID, Template Definition, Last Received.
    Note that the Observation Domain is identified by the Source ID field
    from the Export Packet.
So in other words, the Source ID is an identifier of the Observation Domain (and in fact, the IPFIX RFC directly calls this header field the Observation Domain ID). Template IDs are unique per Exporter and per Observation Domain, and if a single Exporter uses multiple templates in its different Observation Domains, the IDs of these templates could overlap even in a single Exporter. Observation Domain IDs (that is, Source IDs) identify only the internal structure of a single Exporter, and no provisions are made to preserve their uniqueness across multiple Exporters - for that, the source IP shall be used.
    With respect to whether there can be multiple NetFlow instances on a single router, I am getting a feeling that with decentralized, distributed platforms, multiple linecards in a single router could run their own NetFlow analysis for data that pass through them, so each one provides a separate NetFlow collection. Thus, each linecard or each feature card doing its own NetFlow analysis should be assigned its own unique Observation Domain ID.
If it is not user configurable, then the system should automatically form the value based on the router engine and line card. But what I have observed, on more than one router, is that this value is always 0 (zero).
    I believe this is strongly dependent to the hardware construction of the router. As a remotely-related example, old 2600 series routers had two WIC slots. If you inserted two WIC-2T modules into these slots, you'd expect that they would be numbered Serial0/0, Serial0/1, Serial1/0, Serial1/1. Very surprisingly, however, these routers considered both slots to be internally connected to a single bus, and the interfaces were named Serial0/0, Serial0/1, Serial0/2 and Serial0/3 - as if they all were installed in a single slot '0'. Something similar may happen to the Observation Domains and their IDs. You would believe that each single linecard constituted a separate Observation Domain. However, the reality may be different, and the whole router can act as a single Observation Domain to the outside world. It's just the way it is constructed - and programmed.
It is not clear why the Cisco doc says that one should use both "Source ID" and "Source IP Address" to properly distinguish between flows.
    I think it's a poor wording in the RFC. I think what they want to say is that if you use the duplet <Source IP, Source ID> to distinguish between flows, then you're fine both for multiple flows from the same Exporter, and for multiple flows from different Exporters.
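To make that concrete, here is a minimal SQL sketch of the collector-side state that RFC 3954 Section 9 asks for, keyed on the triple <exporter IP, Source ID, Template ID>; the table and column names are my own hypothetical choices:
-- Hypothetical collector-side template cache (per RFC 3954 Section 9).
CREATE TABLE template_cache (
    exporter_ip   VARCHAR2(45) NOT NULL,  -- source IP address of the Export Packet
    source_id     NUMBER(10)   NOT NULL,  -- Source ID / Observation Domain ID from the header
    template_id   NUMBER(5)    NOT NULL,  -- unique only per exporter and observation domain
    template_def  BLOB,                   -- the template definition itself
    last_received DATE,
    CONSTRAINT pk_template_cache
      PRIMARY KEY (exporter_ip, source_id, template_id)
);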
Moreover, isn't "Source IP Address" good enough to distinguish between flows from different sources?
    If an Exporter could truly be partitioned into multiple Observation Domains then the source IP would not be sufficient. I am just making up examples with no real-life backup here, but think of, say, a multi-chassis router with each chassis being one Observation Domain, or each linecard of a distributed switch being a standalone Observation Domain, or one router virtualized to several different contexts and virtual routers, each of them being a unique Observation Domain, reporting about the flows using the same source IP... I think you get the point.
    I would put it this way... The existence of Source ID in NetFlow v9 (and Observation Domain ID in IPFIX) allows these protocols to nicely cope with situations in which a single physical device can be partitioned into several Observation Domains and perform independent reporting on them using a single source IP. However, the fact that these protocols have this ability does not mean that each and every device, even a Cisco router/switch, must necessarily make use of it.
    Best regards,
    Peter

  • 8i personal : error when Create user defined aggregate function

    Hi,
I have a problem creating a user-defined aggregate function.
I tried to create the sample aggregate function "SecondMax" from the 9i developer guide (appended at the end of this post).
Creating the object type and the type body works, but there is an error when I create the aggregate function:
"CREATE FUNCTION SecondMax (input NUMBER) RETURN NUMBER
PARALLEL_ENABLE AGGREGATE USING SecondMaxImpl;"
I am using 8i Personal now. Is the CREATE FUNCTION syntax in 9i different from that in 8i?
    Example: Creating and Using a User-Defined Aggregate
    This example illustrates creating a simple user-defined aggregate function SecondMax() that returns the second-largest value in a set of numbers.
    Creating SecondMax()
    Implement the type SecondMaxImpl to contain the ODCIAggregate routines.
create type SecondMaxImpl as object
(
  max NUMBER, -- highest value seen so far
  secmax NUMBER, -- second highest value seen so far
  static function ODCIAggregateInitialize(sctx IN OUT SecondMaxImpl)
    return number,
  member function ODCIAggregateIterate(self IN OUT SecondMaxImpl,
    value IN number) return number,
  member function ODCIAggregateTerminate(self IN SecondMaxImpl,
    returnValue OUT number, flags IN number) return number,
  member function ODCIAggregateMerge(self IN OUT SecondMaxImpl,
    ctx2 IN SecondMaxImpl) return number
);
/
    Implement the type body for SecondMaxImpl.
create or replace type body SecondMaxImpl is
  static function ODCIAggregateInitialize(sctx IN OUT SecondMaxImpl)
    return number is
  begin
    sctx := SecondMaxImpl(0, 0);
    return ODCIConst.Success;
  end;
  member function ODCIAggregateIterate(self IN OUT SecondMaxImpl, value IN number)
    return number is
  begin
    if value > self.max then
      self.secmax := self.max;
      self.max := value;
    elsif value > self.secmax then
      self.secmax := value;
    end if;
    return ODCIConst.Success;
  end;
  member function ODCIAggregateTerminate(self IN SecondMaxImpl,
    returnValue OUT number, flags IN number) return number is
  begin
    returnValue := self.secmax;
    return ODCIConst.Success;
  end;
  member function ODCIAggregateMerge(self IN OUT SecondMaxImpl,
    ctx2 IN SecondMaxImpl) return number is
  begin
    if ctx2.max > self.max then
      if ctx2.secmax > self.secmax then
        self.secmax := ctx2.secmax;
      else
        self.secmax := self.max;
      end if;
      self.max := ctx2.max;
    elsif ctx2.max > self.secmax then
      self.secmax := ctx2.max;
    end if;
    return ODCIConst.Success;
  end;
end;
/
    Create the user-defined aggregate.
    CREATE FUNCTION SecondMax (input NUMBER) RETURN NUMBER
    PARALLEL_ENABLE AGGREGATE USING SecondMaxImpl;
    Using SecondMax()
    SELECT SecondMax(salary), department_id
    FROM employees
    GROUP BY department_id
    HAVING SecondMax(salary) > 9000;

This could be an x64/x86 problem. Try following this thread
    [GetCompanyService|GetCompanyService] and recompile your code for the platform you need.

  • Calculate PSAPTEMP size before creating aggregate?

    Hi Experts,
While I am creating an aggregate, the job ends with an error:
SQL Error: 12801
I checked SM21, and the meaning of the error is "Overflow of database objects", so PSAPTEMP overflows while the aggregate is being created.
My question is: exactly how much space is required while creating an aggregate? Our PSAPTEMP tablespace size is currently 10 GB.
How do I calculate the required size to increase the PSAPTEMP tablespace?
Thanks & Regards,
KD.

hi,
PSAPTEMP is primarily used to temporarily store the results of the aggregation; the final data is stored in a user tablespace. So if your PSAPTEMP tablespace is too small, you can simply increase its size.
I think that will solve your purpose.
There can also be a scenario where you are unable to do that; in that case you need to rearrange your aggregates so that the aggregates with more data are activated first, and then those with less data.
Generally, the PSAPTEMP tablespace should be at least twice as large as the largest index.
Please refer to SAP Notes 659946 and 600513 for more details.
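As a rough way to apply that rule of thumb on an Oracle database, here is a minimal sketch, assuming you have the privileges to query DBA_SEGMENTS:
-- Find the largest indexes; PSAPTEMP should be at least twice the size of the biggest one.
SELECT * FROM (
  SELECT segment_name,
         ROUND(bytes / 1024 / 1024 / 1024, 2) AS size_gb
  FROM   dba_segments
  WHERE  segment_type = 'INDEX'
  ORDER  BY bytes DESC
)
WHERE ROWNUM <= 5;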
    regards
    laksh.

Indexes on cubes or aggregates on InfoObjects

    Hello,
Please tell me whether it is possible to put indexes on cubes: are they added automatically, or is this something I put on them myself?
I do not understand indexes - are they like aggregates?
I need to find info that explains this.
Thanks for the help.
    Newbie

Indexes are quite different from aggregates.
An aggregate is a slice of a cube that speeds up data retrieval when a query is executed on the cube. Basically it is a kind of snapshot of key figures (KPIs) and business indicators (characteristics) that is displayed as the initial query result.
An index, in turn, is a structure that reduces query response time. When an object is activated, the system automatically creates primary indexes. Optionally, you can create additional indexes, called secondary indexes. Before loading data, it is advisable to delete the indexes and recreate them after the load.
Indexes act like pointers for quickly getting at the data: "delete" drops the indexes, and "create" rebuilds them.
We delete them before loading because otherwise the load has to look up and update the existing indexes for every record, which hurts data-load performance; deleting and recreating takes less time than updating the existing ones.
One more issue to take care of: if you have more than 50 million records, this is not good practice during the working week; instead, delete and recreate the indexes at the weekend when there are no users.
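A hedged sketch of that drop-and-rebuild pattern on Oracle; the index, table and column names below are hypothetical stand-ins, not actual BW objects:
-- Drop the secondary index before the load (names hypothetical).
DROP INDEX "/BIC/FSALES~010";

-- ... run the data load without index-maintenance overhead ...

-- Recreate it afterwards.
CREATE BITMAP INDEX "/BIC/FSALES~010"
  ON "/BIC/FSALES" ("KEY_SALES1");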

  • How can I see the data in the aggregates

How can I see the data available in the aggregates?
    Jay

Hi Jay,
It's quite simple:
Go to the manage aggregates screen and copy the technical name of the aggregate. The E fact table is /BIC/EXXX, where XXX is the aggregate's technical name, and the F fact table is /BIC/FXXX. Then go to SE16, enter the table name, and your data is there.
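If you would rather query the database directly than use SE16, the equivalent lookup (using a hypothetical aggregate technical name, 100001) would be something like:
-- E table: compressed aggregate data; F table: not-yet-compressed requests (name 100001 is hypothetical).
SELECT * FROM "/BIC/E100001";
SELECT * FROM "/BIC/F100001";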
    R

  • Aggregates, VLAN's, Jumbo-Frames and cluster interconnect opinions

    Hi All,
    I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
    What I have in mind as follows:
    2 x X4170 servers with 8 x NIC's in each.
    On each 4170 I was going to configure 2 aggregates with 3 nics in each aggregate as follows
    igb0 device in aggr1
    igb1 device in aggr1
    igb2 device in aggr1
    igb3 stand-alone device for iSCSI network
e1000g0 device in aggr2
e1000g1 device in aggr2
e1000g2 device in aggr2
e1000g3 stand-alone device for the iSCSI network
Now, on top of these aggregates, I was planning to create VLAN interfaces which will allow me to connect to our two "public" network segments and to the cluster heartbeat network.
I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a failure if a NIC goes offline within the aggregate, but I could monitor that in a different manner.
At this point, my questions are:
[1] Are VLANs, on top of aggregates, supported within Solaris Cluster? I've not seen anything in the documentation to say that they are, or are not for that matter. I do see that VLANs are supported, including support for cluster interconnects over VLANs.
Now, with the standalone interface I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all NIC ports, whereas I can enable it for a single NIC port in the e1000g.conf kernel driver. My questions are as follows:
[2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and some say that it doesn't cause a problem.
[3] If the underlying NICs, igb0-2 (aggr1) for example, have a 9k MTU enabled, I can force the MTU size (1500) for "normal" networks on the VLAN interfaces pointing to my "public" network and cluster interconnect VLAN. Does anyone have experience of this causing any issues?
    Thanks in advance for all comments/suggestions.

For 1), the question is really "Do I need to enable jumbo frames if I don't want to use them (on neither the public nor the private network)?" - the answer is no.
For 2), each cluster needs to have its own separate set of VLANs.
    Greets
    Thorsten

Aggregates on Non-cumulative InfoCubes (stock key figures)

Hi Gurus,
Please let me know if anybody has created aggregates on non-cumulative cubes or key figures (i.e. 0IC_C03, Inventory Management).
I am facing a query performance problem on 0IC_C03 at execution time (runtime dump).
I have tried a lot to create aggregates using the proposal from the query and other options, but the queries are not using those aggregates.
Can somebody tell me about any sample aggregates they are using on 0IC_C03, or any tool to get better performance when executing queries on the said cube?
One more clarification request: what is "move the marker pointer" for stock calculation? I have compressed only the two initial data-load requests; should I compress all requests in the cube regularly? If so, is there an option to compress requests automatically after a successful load into the data target?
We are using all three DataSources (2LIS_03_BX, BF & UM) for the same.
    Regards,
    Navin

Hi,
Compression definitely has more effect on query execution time for inventory cubes than for other, cumulated cubes. So do compression regularly, once you feel that deletion of requests is no longer needed.
If the query does not use the calendar-day characteristic and needs only the month characteristic, use a snapshot InfoCube (described, with the procedure, in the How-to paper) and divert the month-wise queries (and those with higher granularity on the time characteristic, like quarter and year) to this cube.
Also, the percentage improvement in query execution time from aggregates is smaller for non-cumulated cubes than for normal (cumulated) cubes, but there is still an improvement in using aggregates.
With regards,
Anil Kumar Sharma .P

  • Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube

Hi BW Gurus,
I have an unresolved issue, and our team is still working on it.
I have already posted several questions on this, but I am still not clear on how to reduce the time of the rollup-of-aggregates process.
I have requested an OSS note and am searching myself, but still could not find one.
Finally, I executed one of the cubes in RSRV with the database check
"Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the error and executed the check once again, but I still found warning messages. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
    ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
    ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated     
    ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
    ORACLE: Index /BIC/D1001072~010 has possibly degenerated
    ORACLE: Index /BIC/D1001132~010 has possibly degenerated
    ORACLE: Index /BIC/D1001212~010 has possibly degenerated
    ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
    ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
    ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
    ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
I don't know how to move further on this. Can anyone tell me how to tackle this problem to increase the performance of the rollup of aggregates (PCA InfoCubes)?
I create indexes and statistics regularly to improve performance; it works for a couple of days, and then the rollup performance gradually comes down again.
    Thanks and Regards,
    Venkat

hi,
Check in a SQL client the SQL created by BI against the query that you run directly on your physical layer. The time difference between these two should be 2-3 seconds; otherwise you have problems (these seconds are for scripts needed by BI).
If you use "like" in your SQL, then forget about indexes...
For more information about indexes, check Google or your DBA.
Last, I mentioned that a materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones. For example, with logical dimensions
year-half-day
company-department
and fact
quantity
instead of making one, make 3:
year - department - quantity
half - department - quantity
day - department - quantity
and add them as data sources and assign them the appropriate logical level in the business layer in the Administration Tool (a sketch follows below).
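A minimal sketch of that three-way split, assuming a hypothetical fact table SALES_FACT with columns CAL_YEAR, CAL_HALF, CAL_DAY, DEPARTMENT and QUANTITY:
-- Three smaller aggregates instead of one big materialized view (all names hypothetical).
CREATE MATERIALIZED VIEW agg_year_dept AS
  SELECT cal_year, department, SUM(quantity) AS quantity
  FROM   sales_fact
  GROUP  BY cal_year, department;

CREATE MATERIALIZED VIEW agg_half_dept AS
  SELECT cal_half, department, SUM(quantity) AS quantity
  FROM   sales_fact
  GROUP  BY cal_half, department;

CREATE MATERIALIZED VIEW agg_day_dept AS
  SELECT cal_day, department, SUM(quantity) AS quantity
  FROM   sales_fact
  GROUP  BY cal_day, department;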
Do you use partitioning functionality?
I hope I helped.
http://greekoraclebi.blogspot.com/

  • Questions regarding aggregates on cubes

Can someone please answer the following questions.
1. How do I check whether someone is rebuilding aggregates on a cube?
2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually/in time consumption) between:
                        A. activating an aggregate?
                        B. switching off/on an aggregate?
                        C. rebuilding an aggregate?
4. When a user complains that a query is running slow, do we build an aggregate based on the chars in the rows & free chars of that query, or is there anything else we need to include?
5. Does the database statistics option in the 'Manage' tab of a cube only show statistics, or does it do anything to improve load/query performance on the cube?
    Regards,
    Srinivas.

1. How do I check whether someone is rebuilding aggregates on a cube?
If your aggregate status is red and you are filling up the aggregate, it is an initial fill of the aggregate; filling up means loading the data from the cube into the aggregate in full.
2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
Rebuilding an aggregate means reloading the data into the aggregate from the cube once again.
3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/in time consumption) between:
A. activating an aggregate?
This means recreating the data structures for the aggregate - dropping the data and reloading it.
B. switching off/on an aggregate?
Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; this is done based on the requests that have not yet been rolled up into the aggregate.
C. rebuilding an aggregate?
Reloading data into the aggregate.
4. When a user complains that a query is running slow, do we build an aggregate based on the chars in the rows & free chars of that query, or is there anything else we need to include?
Run the query in RSRT, do an SQL view of the query, check the characteristics that are used in the query, and then include the same in your aggregate.
5. Does the database statistics option in the 'Manage' tab of a cube only show statistics, or does it do anything to improve load/query performance on the cube?
Updated statistics improve execution plans on the database. Making sure that stats are up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing stats will improve query performance.
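For the statistics refresh mentioned above, a minimal Oracle sketch; the owner and table names are hypothetical:
-- Gather optimizer statistics on a cube fact table (names hypothetical).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SAPR3',
    tabname          => '/BIC/F100001',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- TRUE: also gather statistics on the table's indexes
END;
/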

  • Need help returning largest of three numbers....

    Hello,
I'm stuck on an essential part of a program. The objective of the program is to call a static method that finds and returns the largest of three numbers. I'm supposed to test the program with the following three sets of three numbers: (2, 5, 8), (2, 10, 5), (67, 23, 17).
After lots of thinking, I decided to write a program that does exactly what the objective requires, but without calling a static method to find and return the value. I tried calling a static method, but it's really confusing and I couldn't do it right.
Below is my program. The top part is right, I think, but calling a static method and returning the value is the part I'm having trouble with.
    // The "LargeNum3" class.
    import java.awt.*;
    import hsa.Console;
    public class LargeNum3
        static Console c;           // The output console
        public static void main (String [] args)
            c = new Console ();
            int num [] = new int [3];
            int largest;
            largest = num [0];
            for (int i = 0 ; i < 3 ; i++)
                c.println ("Please enter in 3 values");
                for (int a = 0 ; a < 3 ; a++)
                    num [a] = c.readInt ();
                for (int b = 0 ; b < num.length ; b++)
                    if (num > largest)
    largest = num [b];
    c.println ();
    c.println ("The largest number is " + largest);
    c.println ();
    } // LargeNum3 class

    // The "LargeNum3" class.
    import java.awt.*;
    import hsa.Console;
    public class LargeNum3
        static Console c;           // The output console
        // Declare a new static method to return the
        // largest number in the array passed to it
        public static int findLargest(int[] nums)
            int j, largest=nums[0];
            for (j=1; j<nums.length; j++)
                if (nums[j]>largest) largest=nums[j];
            return largest;
        public static void main (String [] args)
            c = new Console ();
            int num [] = new int [3];
            // int largest;
            // largest = num [0]; It was wrong to put this here in
            // in the first place
            for (int i = 0 ; i < 3 ; i++)
                c.println ("Please enter in 3 values");
                for (int a = 0 ; a < 3 ; a++)
                    num [a] = c.readInt ();
                c.println ();
                c.println ("The largest number is " + findLargest(num));
                c.println ();
    } // LargeNum3 class

  • Back end activities for Activation & Deactivation of Aggregates

Hi,
Could anybody help me understand the back-end activities performed at the time of activation and deactivation of aggregates?
Is filling an aggregate the same as rollup?
What is the difference between deactivation and deletion of an aggregate?
Thanks,
Santanu

    Hi Bose,
    Activation:
In order to use an aggregate in the first place, it must be defined, activated and filled. When you activate it, the required tables are created in the database from the aggregate definition. Technically speaking, an aggregate is actually a separate BasicCube with its own fact table and dimension tables. Dimension tables that agree with the InfoCube are used together. Upon creation, every aggregate is given a six-digit number that starts with the figure 1. The names of the tables that make up the logical object that is the aggregate are then derived in a similar manner to the table names of an InfoCube. For example, if the aggregate has the technical name 100001, the fact tables are called /BIC/E100001 and /BIC/F100001. Its dimensions, which are not the same as those in the InfoCube, have the table names /BIC/D100001P, /BIC/D100001T and so on.
    Rollup:
New data packets/requests that are loaded into the InfoCube cannot be used for reporting at first if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called "rollup". In other words, data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or its aggregates, until an aggregate rollup takes place. During this process you can continue to report using the data that existed prior to the recent load. The new data is only displayed by queries that are executed after a successful rollup.
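A small sketch of checking those tables from the database side, assuming Oracle and the aggregate technical name 100001 used as the example above:
-- List the fact and dimension tables behind aggregate 100001.
SELECT table_name
FROM   all_tables
WHERE  table_name LIKE '/BIC/%100001%'
ORDER  BY table_name;
-- Expect /BIC/E100001, /BIC/F100001 and the /BIC/D100001* dimension tables.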
See the link below for more information.
    http://sapbibw2010.blogspot.in/2010/10/aggregates.html
    Naresh

  • Aggregate build suggestions

    Hello Experts,
We are trying to do performance tuning on a BW 3.5 setup... Currently, we are focusing on queries and looking at the feasibility of building aggregates for performance improvement. We find that most queries see data at a very granular level, as follows:
    Rows
    Profit centre group -> Profit Centre-> Product hierarchy level 1
    Columns
Actual, Planned, Difference.
Now, the proposal generated for the queries does not give profit centre group, but only profit centre, in the first dimension. Please note that profit centre group is a navigational attribute of profit centre (there is a message that says profit centre cannot be in the aggregate because of the presence of profit centre group).
Now my question is: in order for these queries to hit the aggregate, should I introduce both profit centre group and profit centre into the aggregate? In doing so, do I risk creating a bigger aggregate?
I am not sure if I am making sense, but please feel free to ask questions and I will explain more if needed. My question, to be precise, is whether all the characteristics required to view the data in a query need to be present in the aggregate.
    Many thanks in advance for all your inputs..
    Regards,
    Solomon

    Hi Solomon,
If your query is to hit the aggregate, then all the characteristics in the selection, filters, default values, rows, columns, those used in RKFs, and those used in exception aggregation should be present in the aggregate.
You can execute the query in RSRT -> Execute+Debug -> with the "Display aggregate found" option.
This will tell you exactly which characteristics should be present so that your default view hits the aggregate.
Needless to say, if you are planning to drill down the report with any characteristic from the free characteristics, even that should be present in the aggregate.
Now, coming to your confusion about profit centre and profit centre group: since profit centre group is already a navigational attribute of profit centre, you need not (cannot) place it in the aggregate when profit centre is already present.
However, if the query is executed with this navigational attribute, it will still hit the aggregate (this can be checked in RSRT).
    Thanks,
    Krishnan

  • Key Figure Aggregate in Bex Query

    Hi Gurus
I am using BI 7.0, but the 3.5x BEx tools.
I am loading 6 fields from a flat file, with data for tickets. I have created an InfoObject that counts the number of tickets. No problem. I also have key figures to which I am assigning the same values for all characteristics: 10, 30 per ticket.
The key figures are (Sum) with a Summation aggregation type.
In my query, the 10, 30 aggregate up based on the number of tickets (characteristic) that are available.
Question: How do I stop my key figures from aggregating up (summing) the values of 10, 30 based on the characteristic? I want only 10, 30 to be present regardless of the number of tickets (a constant value applied to the key figure).
Should I change my aggregation type? If so, to what? I see a number of options, such as Last Value, No Aggregation, etc.
    Thank you

    I found a solution to my requirement.
