Optimize infocube

Hi all,
Due to a performance issue I want to optimize InfoCube 0IC_C03.
Can anyone suggest, if aggregates are to be created, on which InfoObjects they should be built?
Or are there any other ways to optimize the InfoCube?
Note: Data loading is fine. My issue is that when a query fetches data from InfoCube 0IC_C03, it takes a huge amount of time to display the output. So how can I optimize the query performance?
Thanks,
Kumkum
Edited by: kumkum basu on Jun 17, 2010 7:11 AM

Hi Kumkum:
   See the paper below for recommendations on optimizing performance on an InfoCube with non-cumulative key figures (0IC_C03 - Material Movements).
"Performance Tuning For SAP BW"
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10fb4502-111c-2a10-3080-df0e41d44bb3?quicklink=index&overridelayout=true
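As a rough, hedged rule of thumb (my own Python sketch, not taken from the paper above): an aggregate on 0IC_C03 tends to pay off when a query reads far more records from the cube than it finally transfers to the front end. Both figures can be read from the query statistics in transaction RSRT; the threshold of 10 is a commonly quoted guideline, not a fixed SAP value.

def aggregate_recommended(records_selected, records_transferred, threshold=10):
    # If the cube reads many more records than the query finally returns,
    # a pre-summarized aggregate at the query's granularity should help.
    if records_transferred == 0:
        return False
    return records_selected / records_transferred >= threshold

# Hypothetical figures read from the RSRT query statistics:
print(aggregate_recommended(records_selected=2500000, records_transferred=4000))   # True

If that ratio is high, the characteristics your slow queries actually filter and drill down on are the natural candidates for the aggregate definition.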
Regards,
Francisco Milán.

Similar Messages

  • Infocube line item dimension and performance optimization

    Hi,
    I remodelled an InfoCube; one dimension contains only one characteristic and is now set as a line item dimension.
    Previously the dimension had one characteristic, but it wasn't set as a line item dimension.
    When I check SAP_INFOCUBE_DESIGNS from SE38 it looks OK:
    /SAP/CUBE   /SAP/CUBE3   rows:        8663  ratio:          3  %
    After setting it as a line item dimension, the row count is now negative and the line is shown in red, which suggests there is a problem with the dimension:
    /SAP/CUBE   /SAP/CUBE3   rows:          1-   ratio:          0  %
    Is this a performance problem, since it is showing red?
    thanks

    Hi,
    No, it's not a performance issue.
    A dimension is a candidate for a line item dimension when it grows very large relative to the fact table (the usual rule of thumb is beyond roughly 20% of the fact table size).
    When a dimension is set as a line item dimension, the corresponding SID is placed directly in the fact table instead of a DIM ID, and no dimension table is created.
    That is why the report shows a row count while the dimension is a normal dimension, but no rows and a zero ratio once it is a line item dimension - there is no dimension table left to count.
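    To see why the ratio matters, here is a minimal Python sketch of the check that SAP_INFOCUBE_DESIGNS effectively reports per dimension (the row counts and the 20% threshold are illustrative assumptions, not system values):

    def dimension_ratio_pct(dim_rows, fact_rows):
        # Dimension table rows as a percentage of fact table rows,
        # which is essentially the "ratio" column of SAP_INFOCUBE_DESIGNS.
        return 100.0 * dim_rows / fact_rows if fact_rows else 0.0

    def line_item_candidate(dim_rows, fact_rows, limit_pct=20.0):
        # Rule of thumb: a dimension approaching the fact table size
        # (here: above ~20%) is a candidate for the line item setting.
        return dimension_ratio_pct(dim_rows, fact_rows) > limit_pct

    print(line_item_candidate(dim_rows=8663, fact_rows=290000))       # False: small dimension, keep it normal
    print(line_item_candidate(dim_rows=7500000, fact_rows=9000000))   # True: nearly as big as the fact table

    For a line item dimension the check has nothing to count, because the dimension table no longer exists.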
    Hope this is clear for you.
    Regards
    Ramsunder

  • Bad reporting performance after compressing infocubes

    Hi,
    as I learned, we should compress the requests in our InfoCubes. And since we're using Oracle 9.2.0.7 as the database, we can use partitioning on the E fact table to further increase reporting performance. So far the theory...
    After getting complaints about worse reporting performance we tested this theory. I created four InfoCubes (same data model):
    A - no compression
    B - compression, but no partitioning
    C - compression, one partition for each year
    D - compression, one partition for each month
    After loading 135 requests and compressing the cubes, we get this amount of data:
    15.6 million records in each cube
    Cube A: 135 partitions (one per request)
    Cube B:   1 partition
    Cube C:   8 partitions
    Cube D:  62 partitions
    Now I copied one query onto each cube and tested the performance with it (transaction RSRT, without aggregates and cache, comparing the database times QTIMEDB and DMTDBBASIC). In the query I always selected one month, some hierarchy nodes and one branch.
    With this selection on each cube, I expected cube D to be fastest, since only one (small) partition holds the relevant data. But reality shows a different picture:
    Cube A is fastest with an avg. time of 8.15, followed by cube B (8.75, +8%), cube C (10.14, +24%) and finally cube D (26.75, +228%).
    Does anyone have an idea what's going wrong? Are there some DB parameters to "activate" the partitioning for the optimizer? Or do we have to do some other customizing?
    Thanks for your replies,
    Knut

    Hi Björn,
    thanks for your hints.
    1. After compressing the cubes I refreshed the statistics in the InfoCube administration.
    2. Cube C is partitioned using 0CALMONTH, cube D is partitioned using 0FISCPER.
    3. Here we are: all queries are filtered using 0FISCPER. Therefore I could increase the performance on cube C, but still not on D. I will change the query on cube C and do a retest at the end of this week.
    4. The loaded data spans 10 months. The records are nearly equally distributed over these 10 months.
    5. Partitioning was done for the period 01.2005 - 14.2009 (01.2005 - 12.2009 on cube C). So I have 5 years - the 8 partitions on cube C are the result of a slight miscalculation on my side: 5 years + 1 partition before + 1 partition after => I set the max. no. of partitions to 7, not thinking of BI, which always adds one partition for the data after the requested period... So each partition on cube C does not contain one full year but roughly 8 months (see the small calculation sketch after this list).
    6. Since I tested the cubes one after another without much time in between, the system load should be nearly the same (on top of that: it was a Friday afternoon...). Our BI is clustered with several other SAP installations on a big Unix server, so I cannot see the overall system load. But I did several runs with each query and the mentioned times are average times over all runs - and the average shows the same picture as the single runs (cube A is always fastest, cube D always the worst).
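    Regarding point 5, here is a small Python sketch of the partition arithmetic as I understand it - one partition per value in the range, capped by the configured maximum, plus the extra partition BI adds for data outside the requested period. This only mirrors my reading of the behaviour, it is not an official formula:

    def e_table_partitions(values_in_range, max_partitions=None):
        # One partition per value in the partitioning range, but never more than
        # the configured maximum; BI then adds one more partition on top for data
        # outside the requested period.
        data_partitions = values_in_range if max_partitions is None else min(values_in_range, max_partitions)
        return data_partitions + 1

    # Cube C: 60 months (5 years of 0CALMONTH) with "max. no. of partitions" = 7
    print(e_table_partitions(values_in_range=60, max_partitions=7))   # 8 partitions, each covering roughly 8-9 months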
    Any further ideas?
    Greets,
    Knut

  • What is the use of Indexes in ODS&Infocube??

    What is the use of Indexes in ODS&Infocube??

    An index can improve read performance when data is searched for values of fields contained in the index.
    Whether the index is actually used depends on the database optimizer, which decides after taking the following into consideration:
    - Size of the table
    - Fields in the index compared to the fields in the statement
    - The quality of the index (its clustering factor)
    One drawback of indexes is that they decrease write performance, as they have to be maintained during every write to the table.
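    As a loose illustration only (a toy cost model in Python, not the actual database optimizer), the index-versus-full-scan decision roughly weighs the selectivity of the statement against the clustering factor of the index:

    def use_index(table_rows, selectivity, clustering_factor):
        # selectivity:       fraction of rows the WHERE clause is expected to return (0..1)
        # clustering_factor: 1.0 = index order matches table order (cheap row fetches),
        #                    larger values mean more scattered block accesses
        full_scan_cost = table_rows                                  # read everything once
        index_cost = table_rows * selectivity * clustering_factor   # visit only matching rows
        return index_cost < full_scan_cost

    print(use_index(table_rows=1000000, selectivity=0.001, clustering_factor=5))   # True: index wins
    print(use_index(table_rows=1000000, selectivity=0.4, clustering_factor=4))     # False: full scan wins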
    Hope it helps you,
    Gilad

  • Problem creating a join in a MultiCube between an ODS and an InfoCube

    Hi there,
    I have a MultiCube that I am trying to Build a Report from. The MultiCube links together an ODS and an InfoCube.
    The difficulty I am running into is that the ODS contains Document Change Data and Header Level Data and the InfoCube holds data for example GL Account at Item Level.
    The join has not been properly configured as the report is skewed - the Identification in the MultiCube is correct - but the Join options are very limited.
    Has anyone come across a solution to a similar problem - i.e. where you have to link together two sets of related data at different Levels of Granularity - preferably without using an InfoSet (for reporting reasons)?
    Thanks, Adrian

    When I create a row in tableC I use the existing value of tableA's idA in refIdTableA (so the value exists in the master table), while I want the other attributes of tableC to be editable.
    The associations that link tableA --> tableC and tableB --> tableC are composition associations with "Optimize for Database Cascade Delete" and "Lock Top-Level Container" set.
    I don't know the right way to realize this... declaratively? Programmatically? If programmatically, how can I implement it: using tableAEOImpl.java, tableAVOImpl.java or myAppModuleImpl.java?
    I have tried the declarative way: using createWithParams on tableC and passing the value of refIdTableA as a parameter (I have this value because I am in a master/detail), but when I click on it, I get this error:
    Impossibile to find or invalidate the property entity: entity TableCEO, row Key oracle.jbo.Key[-158 ]. oracle.jbo.InvalidOwnerException
    I think I get this error because I have two composition relationships on TableC. Is there a solution? I need both composition relations.
    Thank you
    Edited by: Andrea9 on 24-mar-2010 07:35

  • InfoCube Design

    Dear gurus, I am familiar with creating standard InfoCubes in BI 7. I have 2 dimensions in this cube and 3 InfoObjects that I have inserted into these dimensions. This is a flat file load. Now how will I know which InfoObject will act as the primary key / foreign key in the fact table?

    Hi,
    Look in the dimension table: the key field will be the DIM ID.
    In the fact table, the key is the combination of DIM IDs.
    The dimension table contains the DIM ID and the SIDs; through these SIDs the dimension table is connected to the SID tables of the master data. This is the extended star schema concept.
    So you don't need to think about the key field. Since you have only three InfoObjects, use line item dimensions. A line item dimension contains precisely one characteristic; the system does not create a dimension table and instead the SID table of the characteristic takes on the role of the dimension table. In other words, create three dimensions and include one InfoObject in each dimension.
    Removing the dimension table has the following advantages:
    a) When loading transaction data, no IDs are generated for the entries in the dimension table. This number range operation can compromise performance precisely in the case where a degenerated dimension is involved.
    b) A table having a very large cardinality is removed from the star schema. As a result, the SQL-based queries are simpler. In many cases, the database optimizer can choose better execution plans.
    Nevertheless, it also has a disadvantage: a dimension marked as a line item cannot subsequently include additional characteristics. This is only possible with normal dimensions. So remember this point, but it will improve performance.
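    A tiny illustration of the two access paths described above, with plain Python dictionaries standing in for the generated tables (the table and column names only mirror the schema, they are not the real generated names):

    # Normal dimension: fact row -> dimension table -> SID table -> master data value
    fact_row  = {"DIMID_1": 7, "AMOUNT": 100.0}
    dim_table = {7: {"SID_MATERIAL": 42}}     # the generated /BIC/D... dimension table
    sid_table = {42: "MAT-0815"}              # the SID table of the characteristic
    material  = sid_table[dim_table[fact_row["DIMID_1"]]["SID_MATERIAL"]]

    # Line item dimension: the SID sits directly in the fact table, so one join disappears
    fact_row_line_item = {"SID_MATERIAL": 42, "AMOUNT": 100.0}
    material_line_item = sid_table[fact_row_line_item["SID_MATERIAL"]]

    print(material, material_line_item)       # both resolve to "MAT-0815"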
    Check these:
    http://sapbwneelam.blogspot.com/2007/10/extended-star-schema.html
    http://syedtayyabali.blogspot.com/2007/11/sap-bwbi-extended-star-schema.html
    Regards,
    Debjani.....

  • Best Practice of Maximum number of InfoCube Supported in a MultiCube

    Hi Experts,
    I would kindly like to ask: what is the maximum number of InfoCubes that should be added to a MultiCube without hampering query performance? Can you kindly provide a link if possible? A MultiCube is a union of all its InfoCubes, right?
    Many Thanks and Hope to Hear from you Soon guys
    Best Regards,
    Chris

    While this Note does mention 10 as a maximum, it really depends on what you are trying to do.
    If system resources are an issue, you can specify an RSADMIN entry that limits the number of parallel database queries that get spawned by a query on a MultiProvider. I think the system default might be 20, which is a lot unless you have lots and lots of CPUs.
    There is also a table where you can enter the logical partitioning criteria of the multiprovider, which can also be used to restrict the number of DB queries that get spawned, e.g.
    - You create 10 cubes, one for each business area in your organization. By default, this would spawn 10 DB queries even when you only wanted data for two specific business areas. By setting the partitioning criterion to 0BUS_AREA, the system is smart enough to only query the two underlying cubes that contain the business areas you want. In this environment, where you only query one or a few business areas, you could have many cubes in your MultiProvider. See Note 911939 - Optimization hint for logical MultiProvider partitioning.
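    A small sketch of what that logical partitioning achieves (Python; the cube names and the mapping of cubes to business areas are made-up assumptions for illustration):

    # Which basic cube holds which business area (the partitioning criterion 0BUS_AREA)
    cube_partitioning = {
        "ZCUBE_BA01": {"1000"},
        "ZCUBE_BA02": {"2000"},
        "ZCUBE_BA03": {"3000"},
    }

    def cubes_to_query(filtered_bus_areas=None):
        # Only spawn sub-queries for cubes whose business areas overlap the query filter;
        # with no filter on the criterion, every cube underneath the MultiProvider is hit.
        if filtered_bus_areas is None:
            return sorted(cube_partitioning)
        return sorted(cube for cube, areas in cube_partitioning.items() if areas & filtered_bus_areas)

    print(cubes_to_query({"1000", "3000"}))   # ['ZCUBE_BA01', 'ZCUBE_BA03'] - only these two cubes are queried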

  • What is the use of parallelization in loading an InfoCube with the help of a DTP

    What is the use of parallelization in loading an InfoCube with the help of a DTP?

  • IC Compression optimization

    Hi all,
    Can some experts give me tips to optimize the InfoCube compression process?
    Does it help to drop the indexes first and then rebuild them?
    thx in advance

    As far as I know, it is recommended first to delete the indexes, then to compress only the initial request (use F4 for that), and then compress all the other requests.
    Hope it helps...
    David

  • Infocube Modelling- what stops you from defining many dimensions in an info

    Hi friends
    I am new to BI and would like to know the answer to the following question.
    When an InfoCube is modeled, are there any advantages to maintaining a smaller number of dimensions? In short, why can't we just club multiple InfoObjects into one dimension, or maybe have one dimension per InfoObject? What considerations should one keep in mind when deciding on the dimension/InfoObject grouping? Is there any performance advantage? The SID for an InfoObject is created anyway, whether it is grouped with other InfoObjects in one dimension or sits in a separate dimension. Can the experts please explain the flow and any advantages that are not obvious to me?
    Thanks
    Kevin Quadros

    Hey,
    See, sure, you can always use all 13 free dimensions. But keep in mind that every dimension means an additional join. So how can you optimize this?
    -> If you distribute your InfoObjects (IOs) across the 13 dimensions, you may end up with dimensions that have just a few entries. Why not take some of these IOs and put them in one dimension, since that dimension won't be large compared to the fact table anyway?
    -> If you have some IOs with a 1:1 relationship (like a master data relationship), the dimension won't grow if you put them together. Why not put them together and save a join?
    -> If you are sure certain IOs will normally be used together in reporting, and they are not in an n:m relationship, why not put them together?
    Maybe there are some additional considerations...
    One additional hint: if you have an aggregate with fewer than 13 IOs, the system will automatically build it from line item dimensions.
    In the end, it is not good to save a dimension and end up with a much worse one. When in doubt, take an additional dimension. Don't model n:m relationships in one dimension, make sure your dimension stays below 10% of your fact table, and never think you have to put 2 IOs together just because they both belong to the Sales area or the Controlling area and so on.
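    One way to make this concrete: estimate the worst-case dimension size as the product of the candidate characteristics' cardinalities and compare it with the fact table. This is only a rough Python sketch; the 10% limit is the rule of thumb from above and all cardinalities are made up:

    from math import prod

    def grouping_ok(cardinalities, fact_rows, limit_pct=10.0):
        # Worst case for characteristics in an n:m relationship: the dimension can hold
        # the product of their cardinalities. Keep that well below ~10% of the fact table.
        worst_case_rows = prod(cardinalities)
        return 100.0 * worst_case_rows / fact_rows <= limit_pct

    # Hypothetical cardinalities against a 20-million-row fact table:
    print(grouping_ok([50, 20], fact_rows=20000000))          # True: e.g. sales office + sales group can share a dimension
    print(grouping_ok([100000, 10000], fact_rows=20000000))   # False: e.g. customer + material belong in separate dimensions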
    Best regards,
    Peter

  • RSRV for InfoCube

    Hi all,
    I have one MultiCube query which brings data from 5 InfoProviders. When I run the query it takes a lot of time to return the result. So I ran RSRV for 2 of the cubes, which gives an error like:
    "The indices of the infocube are incorrectly set".
    I just want to know how I can resolve this issue. I also need to optimize the query runtime.
    amit

    Hi,
    Maybe you can try to repair the indexes for those cubes.
    Go to the cube -> Manage -> Performance tab -> Repair Index.
    Regards,
    Siva.

  • Optimize the data load process into BPC Cubes on BW

    Hello Gurus,
    We would like to know how to optimize the data load process. Our scenario is that we have the ECC classic ledger, and we are looking for the best way to load data into the BW InfoCubes from an ECC source.
    To complement the question above: from which tables must the data be extracted and then passed to BW so that the consolidation is done? Also, are there any other modules, such as FI or EC-CS, that have to be considered for this?
    Best Regards,
    Rodrigo

    Hi Rodrigo,
    Have you looked at the BW Business Content extractors available for the classic GL? If not, I suggest you take a look. BW business content provides all the business logic you will normally need to get data out of ECC and into BW for pretty much every ECC application component in existence: [http://help.sap.com/saphelp_nw70/helpdata/en/17/cdfb637ca5436fa07f1fdc0123aaf8/frameset.htm]
    Ethan

  • Regarding the dataload from the Psa to infocube

    Hi Experts,
    I have a doubt: when I load data into the InfoCube, the data is not updated in the InfoCube, but the data is present in the PSA (about 4 million records). When I run the DTP from the PSA to the InfoCube, the data is not updated and the request stays on a yellow status. In the PSA there are 20 packets in total, each packet containing about 50,000 records. Can you suggest all the steps I have to take to load the data into an InfoCube from the PSA through a DTP? I am new to this field.
                                                           Bye.

    Hi Vinay,
    Check this link... it should solve all your worries:
    http://help.sap.com/saphelp_nw2004s/helpdata/en/42/f98e07cc483255e10000000a1553f7/frameset.htm
    Also,
    Performance Tips for Data Transfer Processes  
    Request processing, that is, the execution of a data transfer process (DTP), can be parallelized to varying degrees in the extraction and processing (transformation and update) steps. The system selects the most appropriate and efficient processing for the DTP in accordance with the settings in the DTP maintenance transaction, and creates a DTP processing mode.
    To further optimize the performance of request processing, there are a number of further measures that you can take:
    ●      By taking the appropriate measures, you can obtain a processing mode with a higher degree of parallelization.
    ●      A variety of measures can help to improve performance, in particular the settings in the DTP maintenance transaction. Some of these measures are source and data type specific.
    The following sections describe the various measures that can be taken.
    Higher Parallelization in the Request Processing Steps
    With a (standard) DTP, you can modify an existing system-defined processing by changing the settings for error handling and semantic grouping. The table below shows how you can optimize the performance of an existing DTP processing mode:
    Original state of the DTP processing mode -> processing mode with optimized performance, and the measures to obtain it:
    1. Serial extraction and processing of the source packages (P3) -> serial extraction, immediate parallel processing (P2): select the grouping fields.
    2. Serial extraction and processing of the source packages (P3) -> parallel extraction and processing (P1): only possible with the persistent staging area (PSA) as the source; deactivate error handling.
    3. Serial extraction, immediate parallel processing (P2) -> parallel extraction and processing (P1): only possible with the PSA as the source; deactivate error handling; remove the grouping fields selection.
    Further Performance-Optimizing Measures
    Setting the number of parallel processes for a DTP during request processing.
    To optimize the performance of data transfer processes with parallel processing, you can set the number of permitted background processes for process type Set Data Transfer Process globally in BI Background Management.
    To further optimize performance for a given data transfer process, you can override the global setting:
    In the DTP maintenance transaction, choose Goto -> Batch Manager Setting. Under Number of Processes, specify how many background processes should be used to process the DTP. Once you have made this setting, remember to save.
    Setting the Size of Data Packets
    In the standard setting in the data transfer process, the size of a data packet is set to 50,000 data records, on the assumption that a data record has a width of 1,000 bytes. To improve performance, you can increase the size of the data packet for smaller data records.
    Enter this value under Packet Size on the Extraction tab in the DTP maintenance transaction.
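    A hedged Python sketch of that rule of thumb (the 50,000 x 1,000 bytes default "budget" is taken from the text above; the function name and the rounding are my own):

    def suggested_packet_size(record_width_bytes, default_records=50000, default_width_bytes=1000):
        # Keep the packet's memory footprint roughly constant: narrower records
        # allow more records per packet than the default of 50,000.
        budget_bytes = default_records * default_width_bytes
        return max(default_records, budget_bytes // record_width_bytes)

    print(suggested_packet_size(record_width_bytes=250))   # 200000 records per packet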
    Avoid too large DTP requests with a large number of source requests: Retrieve the data one request at a time
    A DTP request can be very large, since it bundles together all transfer-relevant requests from the source. To improve performance, you can stipulate that a DTP request always reads just one request at a time from the source.
    To make this setting, select Get All New Data in Source by Request on the Extraction tab in the DTP maintenance transaction. Once processing is completed, the DTP request checks for further new requests in the source. If it finds any, it automatically creates an additional DTP request.
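    Conceptually, the setting turns one big transfer into a loop, sketched here in Python (the request names are hypothetical):

    def dtp_requests_by_source_request(source_requests, already_transferred):
        # "Get All New Data in Source by Request": each DTP request reads exactly one
        # new source request, then checks whether further new requests exist.
        transferred = set(already_transferred)
        created = []
        for request in source_requests:
            if request not in transferred:
                created.append(request)        # one DTP request per new source request
                transferred.add(request)
        return created

    print(dtp_requests_by_source_request(["REQU_1", "REQU_2", "REQU_3"], already_transferred=["REQU_1"]))
    # -> ['REQU_2', 'REQU_3']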
    With DataSources as the source: Avoid too small data packets when using the DTP filter
    If you extract from a DataSource without error handling, and a large amount of data is excluded by the filter, this can cause the data packets loaded by the process to be very small. To improve performance, you can modify this behaviour by activating error handling and defining a grouping key.
    Select an error handling option on the Updating tab in the DTP maintenance function. Then define a suitable grouping key on the Extraction tab under Semantic Groups. This ensures that all data records belonging to a grouping key in a packet are extracted and processed.
    With DataStore objects as the source: before the first delta or during full extraction, read from the table of active data
    The change log grows in proportion to the table of active data, since it stores before- and after-images. To optimize performance during full extraction or with the first delta from the DataStore object, you can read the data from the table of active data instead of from the change log.
    To make this setting, select Active Table (with Archive) or Active Table (without Archive) on the Extraction tab under Extraction from... or Delta Extraction from... in the DTP maintenance function.
    With InfoCubes as the source: Use extraction from aggregates
    With InfoCube extraction, the data is read in the standard setting from the fact table (F table) and the table of compressed data (E table). To improve performance here, you can use aggregates for the extraction.
    Select Use Aggregates on the Extraction tab in the DTP maintenance transaction. The system then compares the outgoing quantity from the transformation with the aggregates. If all InfoObjects from the outgoing quantity are used in aggregates, the data is read from the aggregates during extraction instead of from the InfoCube tables.
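    The check the system performs can be pictured as a simple set comparison (Python sketch; the InfoObject names are just examples):

    def can_read_from_aggregate(required_infoobjects, aggregate_infoobjects):
        # If every InfoObject needed by the transformation's outgoing quantity is part
        # of an aggregate, extraction can read the smaller aggregate instead of the
        # cube's F and E fact tables.
        return set(required_infoobjects) <= set(aggregate_infoobjects)

    print(can_read_from_aggregate({"0MATERIAL", "0CALMONTH"}, {"0MATERIAL", "0CALMONTH", "0PLANT"}))   # True
    print(can_read_from_aggregate({"0MATERIAL", "0CUSTOMER"}, {"0MATERIAL", "0CALMONTH"}))             # False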
    Note for using InfoProviders as the source
    If not all key fields for the source InfoProvider in the transformation have target fields assigned to them, the key figures for the source will be aggregated over the unselected key fields in the source during extraction. You can prevent this automatic aggregation by implementing a start routine or an intermediate InfoSource. Note though that this affects the performance of the data transfer process.
    Hope this helps you.
    VVenkat..

  • How can I optimize just the video on a project timeline?

    Hi everyone,
    I've been working on a 1-hour documentary using original, non-optimized media. Now that I'm approaching the final steps of the edit, I would like to optimize all the video in the timeline, but NOT all the footage I have in the events.
    I did NOT optimize my media on import: all my events and the project are made up of unconverted video, just imported. I did that because I didn't have enough storage to transcode the whole 40 hours of footage to ProRes.
    The folders in Final Cut Events that are now full of media are the "original media" ones.
    Now I'll add some titles, subtitles and color correction, and I want things to be a little faster. Then I'll step into the "export" zone, and I know it is much better to export from optimized media than from original media; that's why I want a 'ProRes optimized media' project timeline.
    Thank you in advance to anyone with advices!

    Thank you Tom,
    at least I know there is no need to keep on wondering "WHY?"...
    This inability to transcode footage on the timeline seems to me a big downside of this new version... I just keep thinking of all the options for managing media in FCP 7...
    Thank you again,
    I always read your tips: very useful!

  • Error while activating Infocube

    I get the following error while activating a newly created InfoCube:
    No active nametab exists for /BIC/DDPCW14DFD1     
    Termination due to inconsistencies     
    Table /BIC/DDPCW14DFD1 (Statements could not be generated)     
    Enhancement category for table missing     
    The cube is being made as an exact copy of an existing, activated and correctly working cube.
    Can anyone shed some light on this?
    ======================================
    MORE detailed error:
    RED- Error
    GREEN-Fine
    YELLOW-Warning
    RED     Error/warning in dict. activator, detailed log  --> Detail     
    GREEN     Activate table /BIC/DDPCW14DFD1     
    YELLO     Enhancement category for table missing     
    YELLO     Enhancement category for include or subtype missing     
    GREEN     A table called /BIC/DDPCW14DFD1 exists in the database     
    RED     No active nametab exists for /BIC/DDPCW14DFD1     
    RED     Termination due to inconsistencies     
    RED     Table /BIC/DDPCW14DFD1 (Statements could not be generated)     
    GREEN     Error number in DD_DECIDE (9)     
    YELLO     Flag: 'Incorrect enhancement category' could not be updated     
    GREEN     Table /BIC/DDPCW14DFD1 was not activated     
    GREEN     Activate table /BIC/DDPCW14DFD2     
    YELLO     Enhancement category for table missing     
    YELLO     Enhancement category for include or subtype missing     
    GREEN     Table /BIC/DDPCW14DFD2 must be created in the database     
    =============
    Thanks,
    Mohnish

    Can I know which environment you are working in? BW 3.5 or BI 7?
