Regarding rollup of dimension

Hi,
I have time dimension and a measure.
When I select two or more months in the filter panel, e.g. January and February, the report displays the sum of the measure for January and February, but I want to display the maximum of the two measure values.

Let me explain further to make my problem clear.
Suppose, we have a dimension (say RiskClassCode) and a measure (say face-amount).
RiskClassCode Face-Amount Month
RCC-A 100 January
RCC-B 200 February
RCC-C 300 March
Along with this data in the report, we have a filter (say Month) consisting of 3 values, viz. January, February & March. This filter is in the form of a check-box. Suppose the user clicks on month January; the output should be:
RiskClassCode Face-Amount
RCC-A 100
Suppose, the user clicks on month-February, the output should be:
RiskClassCode Face-Amount
RCC-B 200
Suppose, the user clicks on month-March, the output should be:
RiskClassCode Face-Amount
RCC-C 300
However, if the user selects the check-box options January as well as February together, the output should be:
RiskClassCode Face-Amount
RCC-B 200
Our requirement is that in case of multiple selections the maximum value should be displayed. However, the output we are getting is cumulative.
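For illustration, here is a minimal SQL sketch of the two behaviours, using hypothetical table and column names (fact_face_amount, month, face_amount); the first query corresponds to the cumulative total we currently get, the second to the required output:
-- Cumulative behaviour for the January + February selection: 100 + 200 = 300
SELECT SUM(face_amount) AS face_amount
FROM   fact_face_amount
WHERE  month IN ('January', 'February');
-- Required behaviour: keep only the row carrying the maximum measure value (RCC-B, 200)
SELECT risk_class_code, face_amount
FROM   fact_face_amount
WHERE  month IN ('January', 'February')
AND    face_amount = (SELECT MAX(face_amount)
                      FROM   fact_face_amount
                      WHERE  month IN ('January', 'February'));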

Similar Messages

  • Regarding line item dimension

    Hi all,
    What are the necessary prerequisites to consider regarding a line item dimension?
    For example, for SD cubes we are using the sales document number InfoObject as a line item dimension; why can't the other InfoObjects be used?
    Please explain clearly.
    Thanks & Regards,
    V.Vijay.

    Hi,
    Line Item and High Cardinality
    When compared to a fact table, dimensions ideally have a small cardinality. However, there is an exception to this rule. For example, there are InfoCubes in which a characteristic such as the document number is used, in which case almost every entry in the fact table is assigned to a different document. This means that the dimension (or the associated dimension table) has almost as many entries as the fact table itself. We refer to this as a degenerated dimension. In BW 2.0 this was also known as a line item dimension, in which case the characteristic responsible for the high cardinality was seen as a line item. Generally, relational and multi-dimensional database systems have problems efficiently processing such dimensions. You can use the indicators line item and high cardinality to apply the following optimizations:
    Line Item Dimensions
    Line item: This means the dimension contains precisely one characteristic, so the system does not create a dimension table. Instead, the SID table of the characteristic takes on the role of the dimension table. Removing the dimension table has the following advantages:
    - When loading transaction data, no dimension IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
    - A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler (see the sketch below). In many cases, the database optimizer can choose better execution plans.
    Nevertheless, it also has a disadvantage: A dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions.
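    To make the effect on the generated SQL concrete, here is a simplified sketch of the two join paths (table and column names are purely illustrative, not the real generated names):
    -- Normal dimension: the fact table joins the dimension table, which joins the SID table
    SELECT s.doc_number, SUM(f.amount)
    FROM   fact_table f
    JOIN   dim_document d ON d.dimid = f.key_document
    JOIN   sid_document s ON s.sid   = d.sid_document
    GROUP  BY s.doc_number;
    -- Line item dimension: the SID sits directly in the fact table, so the dimension table and one join disappear
    SELECT s.doc_number, SUM(f.amount)
    FROM   fact_table f
    JOIN   sid_document s ON s.sid = f.sid_document
    GROUP  BY s.doc_number;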
    High Cardinality
    If your dimension table size exceeds 20% of your fact table, you can call it a high cardinality dimension; for example, if your fact table contains 100 records and your customer dimension contains 25 records, that dimension has high cardinality. You can check with your client for the expected record counts for those dimensions or for the InfoObjects you define in one dimension. To know the sizes of the dimension tables and fact tables you can run the report SAP_INFOCUBE_DESIGNS in SE38; it displays all your InfoCubes' fact and dimension tables with their sizes, and any dimension that exceeds roughly 10% to 20% of the fact table is shown in red.
    It means that the dimension is expected to have a large number of instances (that is, a high cardinality). This information is used to carry out optimizations on a physical level, depending on the database platform. Different index types are used than is normally the case. A general rule is that a dimension has high cardinality when the number of dimension entries is at least 20% of the fact table entries. If you are unsure, do not select high cardinality for the dimension.
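    If you want to double-check the ratio directly on the database (outside of SAP_INFOCUBE_DESIGNS), a rough sketch with a hypothetical cube name would be:
    -- Dimension rows divided by fact rows; above roughly 0.2 the dimension is a high-cardinality candidate
    SELECT (SELECT COUNT(*) FROM "/BIC/DSALESCUBE1") /
           (SELECT COUNT(*) FROM "/BIC/FSALESCUBE")  AS dim_to_fact_ratio
    FROM   dual;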
    http://help.sap.com/saphelp_nw04/helpdata/en/b2/fbb859c64611d295dd00a0c929b3c3/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/a7/d50f395fc8cb7fe10000000a11402f/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/5c/d14d3c306f090ae10000000a11405a/frameset.htm
    Note: In SAP BW 3.0, a line item dimension must a) contain precisely one characteristic and b) this characteristic must have a high cardinality. In SAP BW 2.0 the term line item dimension was often associated only with a), hence the inclusion of this property above. Be aware that a line item dimension now has a different meaning than in SAP BW 2.0.
    SAP recommends that you use ODS objects, where possible, instead of InfoCubes for line items.
    Tarak

  • Regarding Line item dimensions

    Hi all,
    In our project, 3 line item dimensions are already maintained for one of the cubes, but we still have to create one more line item dimension for that InfoCube because the ratio of one of the dimension tables to the fact table has reached 90%. So is it preferable to have more line item dimensions? If not, please provide an alternative suggestion.
    We maintained the related InfoObjects in the dimensions.
    Thanks & Regards,
    Swarna.P

    Hi,
    If there is no other alternative to swap the characteristics between dimensions, then go for a line item dimension. It won't be a problem if you have more than 2 or 3 line item dimensions. I used 4 line item dimensions because the ratio was crossing 90%, so the line item dimension was the solution to my problem. First check the characteristics in the dimensions and re-arrange them, then check the ratio; if that doesn't work, go for a line item dimension.
    Thanks
    Reddy

  • A bug in the warehouse builder regarding mappings of dimensions

    Hi there
    I do not know if this is the right place to submit a possible bug. Anyways - here goes.
    Warehouse builder client: 10.1.0.2.0
    Warehouse builder repository: 10.1.0.1.0
    I'm creating a dimension with one level attribute defined as varchar2(30), so it has a length of 30. Later on I decide this was an error, update the property to a number, deploy, and reconcile the dimension inbound in the mapping. From then on it gives an error, since it remembers the length, and a number has a size, not a length. If I remove the dimension from the mapping and insert it again, the error still occurs.
    By the way, I am able to reproduce this.
    Anyone who has experienced this, and can blame the stupid user (me) or classify this as a bug?
    Yours
    Kim Andersen

    Hi Igor
    The "error" is actually a "warning":
    Warning: VLD-1004: Column length of ASD_ASDASD is longer than the target column length.
    where ASD_ASDASD is the dimension property, which I altered. It is annoying, though, to have warnings on the list, that aren't needed. And I also spent considerable time thinking that was the error in a mapping, whereas it was a stupid mistake made by me somewhere else :)
    If it's a registered bug, do I get to win some branded Oracle candy?! :)
    Yours
    Kim

  • Follow-up on "Fact Table VS. Dimension relative size"

    You don't have to read the background info below; it is only for reference. I have already gone through several links, so your explanation in your own words will be much appreciated.
    Can you help me understand these points, which came from the many links the experts referred me to?
    1. For better performance: "Use line item dimension where applicable, but use the 'high cardinality' option with extreme care." Any clarification on "high cardinality"? And why with "extreme care"?
    2. For better performance: "Make the index perfect and create secondary indexes based on requirements." I don't seem to follow the index issue being raised here.
    Thanks.
    --Background: References from these past postings ---
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects. The ones that have low cardinality, that is, relate to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
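    The distinct-value analysis that DB02 offers can also be sketched directly in SQL against a single dimension table (table and column names here are hypothetical):
    -- Number of rows and distinct values per characteristic column of one dimension table
    SELECT COUNT(*)                      AS dim_rows,
           COUNT(DISTINCT sid_0customer) AS distinct_customers,
           COUNT(DISTINCT sid_0material) AS distinct_materials
    FROM   "/BIC/DSALESCUBE2";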
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas
    There are some simple things by which we can maintain the performance.
    1> Make the indexes perfect and create secondary indexes based on requirements.
    2> Make statistics perfect.
    3> Based on the data size in the dimension table, use a line item dimension.
    Hope this will help you.
    Suneel

    Hi Caud..
    1] High Cardinality..
    High Cardinality means that this dimension contains a large number of characteristic values. This information is used in accordance with the individual database platform in order to optimize performance. For example, an index type other than the standard may be used. Generally a dimension is perceived to have high cardinality when the dimension is at least 20% the size of the fact tables in terms of the number of entries each contain. Avoid marking a dimension as having high cardinality if you are in any doubt.
    Setting cardinality affects the indexes created on the dimension table, and thereby you may see an improvement in performance.
    With Oracle DB, setting the Cardinality option causes a b-tree index to be built instead of a bitmap index even if it is not a line item dim.
    Setting it as a Line Item dim also causes a b-tree index to be built instead of a bitmap index, but also embeds the SID directly in the fact table, eliminating the dimension table.
    As it changes from bitmap to b-tree, you have to be careful, as SAP recommends the bitmap (see the SQL sketch below).
    2] As indexes improve performance tremendously, you have to consider them whenever performance is the main factor. It is very easy for a query to access the data in a data target if it is already indexed.
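    For reference, the difference between the two index types mentioned under 1] looks like this on an Oracle database (hypothetical table and column names; the two statements are alternatives, not meant to coexist on the same column):
    -- Default for a dimension key column of the fact table: bitmap index
    CREATE BITMAP INDEX sales_fact_cust_bx ON sales_fact (key_customer);
    -- With the high cardinality / line item setting, a b-tree index is built instead
    CREATE INDEX sales_fact_cust_ix ON sales_fact (key_customer);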
    Refer this link..
    http://help.sap.com/saphelp_nw04/helpdata/en/a7/d50f395fc8cb7fe10000000a11402f/frameset.htm
    Hope it helps-
    Regards-
    MM
    Assign points if it is useful, it is right way to say thanks.

  • Error When Loading a Degenerate Dimension

    Hi,
    I have received the link http://blogs.oracle.com/warehousebuilder/entry/owb_11gr2_degenerate_dimensions#comments
    from you to create a degenerate dimension.
    I have created a DIM_DD_LOAN with two attributes: loan_no_business with the identifier 'business', and a description. Then I deployed the table related to this dimension, called DIL_DD_LOAN_TAB, but I got this error:
    ORA-37162: OLAP error
    XOQ-02102: cannot find object "BANK_STG.DIM_DD_LOAN"
    XOQ-02106: invalid property "Dimension" with value "BANK_STG.DIM_DD_LOAN" for object "BANK_STG.CUBE_PAID_LOAN.DIM_DD_LOAN" in XML document
    XOQ-02106: invalid property "ConsistentSolve" with value "SOLVE ( SUM OVER DIM_BANKS HIERARCHIES (STANDARD), SUM OVER DIM_BRANCHES HIERARCHIES (STANDARD), SUM OVER DIM_CUSTOMERS HIERARCHIES (STANDARD), SUM OVER DIM_REQUESTS HIERARCHIES (STANDARD), SUM OVER DIM_CURRENCIES HIERARCHIES (STANDARD), SUM OVER DIM_BONDS HIERARCHIES (STANDARD), SUM OVER DIM_COLLATERALLS HIERARCHIES (STANDARD), SUM OVER DIM_CONTRACTS HIERARCHIES (STANDARD), SUM OVER DIM_OWNERSHIPS HIERARCHIES (STANDARD), SUM OVER DIM_SEGMENTS HIERARCHIES (STANDARD), SUM OVER DIM_USAGE_CODES HIERARCHIES (STANDARD), SUM OVER DIM_LOAN_DIVISIONS HIERARCHIES (STANDARD), SUM OVER DIM_LOAN_PURPOSES HIERARCHIES (STANDARD), SUM OVER DIM_NOMINALS HIERARCHIES (STANDARD), SUM OVER DIM_LOCATIONS HIERARCHIES (STANDARD), SUM OVER DIM_DATES HIERARCHIES (STANDARD), SUM OVER DIM_DD_LOAN)" for object "BANK_STG.CUBE_PAID_LOAN" in XML document
    XOQ-02005: The Dimension "BANK_STG.DIM_DD_LOAN" referenced from object "BANK_STG.CUBE_PAID_LOAN" is not found.
    XOQ-02100: cannot parse server XML string
    ORA-06512: at "SYS.DBMS_CUBE", line 433
    ORA-06512: at "SYS.DBMS_CUBE", line 465
    ORA-06512: at "SYS.DBMS_CUBE", line 523
    ORA-06512: at "SYS.DBMS_CUBE", line 486
    ORA-06512: at "SYS.DBMS_CUBE", line 475
    ORA-06512: at "BANK_STG.OWB$XMLCLOB_TAT_BANK_STG_DW", line 513
    ORA-06512: at line 3
    can you help me??
    REgards,
    Sahar

    The degenerate dimension best practice I mention in the blog will not work for cube MVs; the cube MV will expect a dimension defined in the AW for any dimension defined in the cube. So if you followed the blog post to the letter, you did not deploy the dimension, so when the cube is deployed it is looking for the dimension object (which does not exist). If you did deploy it, then you would have to maintain the dimension...and it would not be degenerate.

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can anyone explain the steps to improve the performance of dimension and fact tables?
    Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects. The ones that have low cardinality, that is, relate to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • Information related to dimension dependant condition

    hello gurus,
    Can anybody tell me what a dimension dependent condition is?
    regards
    amit agrawal

    Hi,
    Dimension dependent conditions are those that change frequently as their dimensions change. For example, for the tax on sales, the total of the tax values of the individual items might differ from the tax value calculated on the total net value of a document.
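    An illustrative example (figures invented): two items of 33.33 each taxed at 10% give 3.33 + 3.33 = 6.66 when the tax is rounded per item, whereas 10% of the document total of 66.66 rounds to 6.67, so the two totals differ by 0.01.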
    The rounding difference comparison does not work correctly when you use dimension dependent conditions.
    You can also see SAP Note 201261.
    Reward points if helpful.
    Regards..
    Yogi..

  • FAST refresh not working for Cube & Dimensions in AWM

    Hi,
    My doubt is regarding refreshing cube/dimension using FAST refresh method in AWM     
    1. My dimension (MVIEW refresh enabled in AWM) is refreshed without an error when I pass the refresh method parameter as 'F' in the DBMS_CUBE.BUILD() script, although there is no MVIEW log present on the dimension table.
    In ALL_MVIEWS.LAST_REFRESH_TYPE, a 'COMPLETE' refresh is logged.
    2. My CUBE doesn't allow me to select refresh_type=FAST when there is no MVIEW log built.
    The same CUBE (MVIEW refresh enabled, refresh_type=FAST in AWM) throws the following error even when I create MVIEW logs for all fact and dimension tables in the DB:
    java.lang.NullPointerException
    at oracle.olap.awm.dataobject.DatabaseDO.commitOLAPI(Unknown Source)
    at oracle.olap.awm.dataobject.aw.WorkspaceDO.commitOLAPI(Unknown Source)
    at oracle.olap.awm.dataobject.olapi.UModelDO.commitOLAPI(Unknown Source)
    at oracle.olap.awm.dataobject.olapi.UModelDO.update(Unknown Source)
    at oracle.olap.awm.dataobject.olapi.UCubeDO.update(Unknown Source)
    at oracle.olap.awm.dataobject.dialog.PropertyViewer.doApplyAction(Unknown Source)
    at oracle.olap.awm.dataobject.dialog.PropertyViewer$1ApplyThread.run(Unknown Source)     
    If I continue past this error, the CUBE MVIEW vanishes from the DB.
    Please help - how do I do a FAST refresh for the CUBE and dimensions?

    If your objective is to process the cube as quickly as possible, MV refresh of the cube is probably not required. As an alternative, you can do the following:
    - Map the cube to a view and use a filter to control what data is presented to the cube during a refresh.
    - Avoid dimension maintenance (adding new members, dropping members, changing parent-child relationships).
    Let's say you update your cube daily with sales data coming from 10,000 stores. You could add a LAST_UPDATED column to the source fact table, timestamp rows whenever the fact table is updated and then filter on that column in a view to present only the new or changed records to the cube (or whatever filtering scheme you might like).
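    A minimal sketch of that approach, with hypothetical table and column names (the cube is then mapped to the view instead of the base fact table):
    -- Expose only the new or changed fact rows to the cube build
    CREATE OR REPLACE VIEW sales_fact_delta_v AS
    SELECT store_id, day_id, sales_amount
    FROM   sales_fact
    WHERE  last_updated > (SELECT last_load_date FROM cube_load_control);
    Here cube_load_control is an assumed control table holding the timestamp of the last successful cube build.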
    Dimensions are always entirely processed (compiled) regardless of what data is presented to them, so there isn't any advantage to timestamping the records in the dimension table and filtering on them. What is important to understand is that any change to a hierarchy (adding members, deleting members, changing parentage) will trigger re-aggregation of the cube. If you can batch those changes periodically you can limit how much of the cube is processed during a refresh.
    Continuing with the example of the daily update of the sales cube, we can examine two scenarios. In both cases, the cube is partitioned by month and a fact view filters for only the new or updated fact records (let's say there are new records every day).
    Scenario 1
    New records are added to the sales fact table and new stores are added to the store dimension table each day. The store dimension will be updated with new stores, new records will be loaded from the fact table and all partitions will be processed (loaded and solved/aggregated).
    Scenario 2
    New records are added to the fact table, but new stores are loaded into the store dimension only once a week (e.g., Saturday). The fact view filters for only new or changed records and stores that currently exist in the store dimension. For Sunday through Friday, new or changed records will be loaded from the fact table and only those partitions in the cube that have new or updated data will be solved / aggregated. (If there are no changes to hierarchies and no records are loaded into a partition, that partition is not solved / aggregated.) On Saturday, new stores are added to the store dimension table and the store dimension and the cube are updated. Because the store dimension has changed, all partitions of the cube will be processed.
    With scenario 1, data for new stores is available each day but the entire cube might be solved each day (if there are new stores). In scenario 2, new stores are not available until Saturday, but the processing of the cube will be limited to only those partitions where there is new fact data.

  • HDD died, need a new one - Satellite A35-S209

    I have an A35-S209 Satellite (2.80GHz, 2048MB of RAM) and my HD just died and took with it 4 months of my life. There is nothing like listening to the wondrous Billie Holiday one second and the next listening to the grinding suicide of an HD full of work that I had only selectively backed up.
    I am currently in Lisbon, Portugal and I need to know ASAP (as in yesterday) where I can find a new HD for my Satellite and have it and Windows installed with a partition. It is Saturday night right now and obviously there is no place I can call, but I need to have this taken care of as of tomorrow. PLEASE HELP!
    Jp

    Hi JP,
    You can obtain a replacement hard drive from any Computer retailer/repairer. Just make sure you get one which matches the characteristics of your existing drive with regard to physical dimensions.
    You can also obtain one from many internet retailers although you will then have to wait for delivery.
    regards,

  • Current page (not Document) width and height with JSX

    Hi,
    I am trying to create a script that saves every page of the document and adds its width and height to the name.
    So far I managed to get the dimensions of the document using:
    app.activeDocument.pages.documentPreferences.pageWidth;
    app.activeDocument.pages.documentPreferences.pageHeight;
    That's ok when all the pages in the document have the same dimensions (Document Preferences).
    However, when I use custom page sizes, that does not work.
    I tried the code below, but it does not work either.
    for (var i = 0; i < nb; i++) {
         app.activeDocument.pages[i].documentPreferences.pageWidth;
         app.activeDocument.pages[i].documentPreferences.pageHeight;
    }
    I have done some research and I don't seem to find any information regarding custom page dimensions.
    Can you please help with this?
    Thanks in advance,
    Isko

    Main();
    function Main() {
        var doc = app.activeDocument,
        pages = doc.pages,
        page, width, height;
        for (var i = 0; i < pages.length; i++) {
            page = pages[i];
            width = page.bounds[3] - page.bounds[1];
            height = page.bounds[2] - page.bounds[0];
            $.writeln("Page: " + page.name + " - " + "width = " + width + ", height = " + height);
        }
    }

  • How to deploy a RAC with less than 5 disks in OVM 3.0.3

    Hello,
    I've already successfully deployed two RACs based on the corresponding template.
    Now I want to deploy another RAC but with only 3 shared disks (instead of five).
    Of course I've read Appendix C of the template description, but it's not clear to me how to change params.ini when I'm not able to start the VM.
    When I'm booting the VM on the console I get the message that there are not enough disks, and I'm not able to get a console login.
    Thanks & regards
    Axel D.

    Hi,
    You MUST have 5 disks configured for each VM-RAC (D1, D2, D3, D4, D5) - the disk order matters!
    After the first step (network configuration) finishes, log in and edit params.ini, then install RAC on the first 3 disks (D1, D2, D3).
    After finishing, edit the VM and remove the last 2 disks (D4, D5).
    For Production
    D1, D2, D3 = LUN's on shared storage
    D4, D5 = Virtual Disks in Repository
    For Test
    D1, D2, D3 = Virtual Disks in Repository
    D4, D5 = Virtual Disks in Repository
    D1, D2, D3 must meet the requirements regarding the estimated DB size.
    D4, D5 = 1GB; they can be deleted after the RAC installation.
    Regards

  • Logic of queries in OBIEE

    I understand the logic of OBIEE queries as follows.
    There is a fact table with measures and there are dimension tables connected to that fact table.
    In a query we want to have some aggregations of measures over several dimension attributes.
    If we have another fact table connected to the same dimensions (as the first fact table) we can also include measures from it in the query.
    But in real life I doubt that database architectures are always that simple.
    Actually we can have a fact table with connected dimensions, but there can be another fact table connected to only one or two of those dimensions.
    A classic example: fact table "sales"; dimension tables "customers", "salesrooms", "dates", "products". Another fact table, "insurances", can be connected to "customers". The table "insurances" contains facts about the insurances customers have (they can purchase different insurances of different types, so one customer can have many insurances while another has none; they purchase insurances on some date, so the table is updated often - so it cannot be regarded as a dimension).
    And then we want to run a query like (also classic example):
    Customer Name | Product Name | Shop Name | Sum(Sales, $)
    This query is correct and runs perfectly.
    But also we want to include insurance facts in the query:
    Customer Name| Count(Insurance ID) | Product Name | Shop Name | Sum(Sales, $)
    Count(Insurance ID) can be considered as a new virtual attribute of the "Customers" dimension (each customer has several insurances, maybe zero).
    But this query will not run, because the OBI server wants to consider insurances connected not only to customers, but also to the other dimensions in the query (products, salesrooms).
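    For what it's worth, the SQL I would like the BI server to end up generating is something like the sketch below (hypothetical table and column names); the insurance fact is aggregated at the customer grain before it is combined with the sales star:
    SELECT c.customer_name,
           i.insurance_cnt,
           p.product_name,
           s.shop_name,
           SUM(f.sales_amount) AS sales_amount
    FROM   sales f
    JOIN   customers  c ON c.customer_id = f.customer_id
    JOIN   products   p ON p.product_id  = f.product_id
    JOIN   salesrooms s ON s.shop_id     = f.shop_id
    LEFT JOIN (SELECT customer_id, COUNT(insurance_id) AS insurance_cnt
               FROM   insurances
               GROUP  BY customer_id) i
           ON i.customer_id = c.customer_id
    GROUP  BY c.customer_name, i.insurance_cnt, p.product_name, s.shop_name;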
    How to build a repository to allow such queries?
    I don't believe that all repositories have only very simple star schemas and that all queries are like the one in the classic example. And I don't believe no one has encountered a problem like this.
    Thanks in advance!

    Thank you very much, it helped, but another problem appeared.
    In queries, all customers (and their count of insurances) are shown regardless of the filters on other dimensions.
    For example, I can set the filter to Product Name = Product1 AND Shop Name = Shop1. Then the result will be like this:
    Customer Name| Count(Insurance ID) | Product Name | Shop Name | Sum(Sales, $)
    Cust1 | 3 | Product1 | Shop1 | 1234
    Cust2 | 1 | Product1 | Shop1 | 4321
    Cust3 | 2 | <empty> | <empty> | <empty>
    Cust4 | 1 | <empty> | <empty> | <empty>
    That is, Cust3 and Cust4 didn't buy Product1 in Shop1, but they are in the query results.
    How to tackle this problem?
    Thanks in advance!

  • Cumulative kf with load aggr

    hello BW Experts,
    What is a cumulative key figure with 'load' aggregation behavior regarding the time dimension?
    scenario:
    FI-SL line-item extractor; 0BALANCE is cumulative with 'load' aggregation behavior regarding the time dimension.
    Suggestions appreciated.
    Thanks,
    BWer

    Hi,
    What I understood from the message is that you are mapping the field Currency to the Withdrawal key figure.
    Check the column order in the DataSource and the flat file and assign the proper mapping, please.
    Let's say you have Deposit, Withdrawal, Balance, Currency as the fields in your DataSource; then have the same fields in your flat file as well, so that they are in the same order as in the DataSource.
    Please check this and try again.

  • Partitioning rules

    Should partitioning be performed at the Cube and RDBMS level? Is this an AND or an OR issue where you can do both or only either one?
    What are the things to look out for when partitioning?
    What are some of the rules?

    First, the partitioning concept is related to performance in BI.
    Normally it affects both data loading performance and query performance.
    Please find some points below related to cube partitioning.
    F-Fact tables and partitioning :
    Similar to PSA tables, the partitioning of F-fact tables is done automatically. The key difference is the partitioning key. SAP BI will create a new partition for every new load job which inserts data into the F-fact table of an InfoCube. The so-called request ID is included in the InfoCube in the form of the package dimension. Therefore the key in the F-fact table which is used to join to the package dimension is also the partitioning key. In the example under b, 18 load jobs inserted test rows into the PSA table. These rows were loaded into the InfoCube via a Data Transfer Process (DTP). Figure 11 shows 4 load requests in the InfoCube and figure 12 shows that there are 4 corresponding table partitions in the F-fact table. But how did we get 4 requests in the F-fact table while there are 18 requests in the PSA table? Here are two things to consider in order to understand how a DTP inserts data into an F-fact table:
    1. When a DTP in delta mode is started immediately after a request was loaded into the PSA table, it will create a new partition in the F-fact table to store the data.
    This is what happened with the first three requests (IDs 413, 415 and 417), which can be seen in figure 11. It also shows that, depending on the load pattern, there might be partitions of very different size within the same fact table.
    2. When <n> requests are loaded into the PSA table before the DTP starts, the DTP will combine all of them into one request in the F-fact table. This is what happened with the other 15 requests in the PSA table. Each of them had 110K rows. Figure 11 shows the total number of rows under "Transferred" for request ID 433. In addition, the DTP will aggregate the data from the PSA requests by its key columns. In the sample, the 15 PSA requests which were loaded into the InfoCube via one single DTP had 110K rows each and were identical regarding the customer dimension as well as the time dimension (only two dimensions in the cube). Aggregation in this case means that all rows with the same dimension values will be combined and the "key figures" will be calculated (depending on the aggregation type - typically "sum"). That's why we see 1,650,000 rows transferred but only 110,000 rows inserted into the InfoCube.
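    Conceptually, the aggregation the DTP performs when it combines the 15 PSA requests corresponds to something like this (simplified, hypothetical table and column names):
    -- Rows with identical dimension values are collapsed and the key figures are summed
    SELECT key_customerdim, key_timedim, SUM(amount) AS amount
    FROM   psa_request_data
    GROUP  BY key_customerdim, key_timedim;
    -- 15 x 110,000 = 1,650,000 input rows collapse to 110,000 distinct key combinations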
    E-Fact tables and partitioning :
    For E-fact tables a customer has the freedom to define the partitioning strategy based on the time dimension. Either month or fiscal year can be used to specify the partitions. The option can be found under "Extras" -> "DB Performance" when editing an InfoCube in the Administrator Workbench (transaction RSA1). This can only be done as long as the InfoCube is empty and no data has been loaded. NetWeaver 7.x offers a re-partitioning tool which allows you to change the partitioning of InfoCubes which already contain data.
    Table partitioning and delete performance:
    The main benefit of range table partitioning in SAP BI is the maintenance of huge tables. Especially when it's necessary to delete requests, the difference might be a factor of 10, 100, 1000 or even more depending on the amount of data. With SQL Server 2000 you found the corresponding delete statements in an ST05 trace when getting rid of a request. Now - with table partitioning activated - an ST05 trace will include the corresponding ALTER TABLE / SWITCH commands to delete a partition (see figures 23 and 24). What took minutes or hours before will now be done within seconds. To get rid of a partition it's necessary to "switch" it out. This is a pure metadata operation and will convert the partition into a normal table which can be dropped. Afterwards a MERGE command is required to adapt the partition function. In the current implementation this might take some time, which is still much less than deleting all the rows. There is a workaround available to avoid moving data during the merge by switching out the next partition too and switching it back in afterwards, but this is not feasible for SAP. The deletion of a request in an F-fact table will automatically delete the corresponding requests in all aggregates which were built on top of this InfoCube. But this works only as long as the "compression flag" (described under item 4) is turned off. Otherwise all aggregates have to be recalculated or completely rebuilt, which has a massive performance impact on the whole system. Therefore it's recommended to turn the flag off in case requests are deleted on a regular basis.
    Regards
    Ram.
