Index creation for cubes

Hi all,
I created a cube in the BW system. When I choose InfoProvider > Cube > Manage > Performance, there are two index options, "Check Indexes" and "Check Aggregate Indexes", and I would like to know the difference between them.
Below that we have the "Database Statistics" section. When I click the "Create Statistics (Batch)" button, a new screen is shown where you can select two options, "Refresh DB statistics after each data load" and "Also refresh statistics after delta upload", and I would like to know the meaning of both options.
Kindly Help,
Thanks
Laura Gómez

Hi,
InfoCube indexes
InfoCubes (fact and dimension tables) are fully indexed by the system and usually do not require additional indexes.
The indexing works as follows: the fact table (which consists of dimension IDs and key figures) is indexed on its dimension ID columns (with bitmap indexes by default on Oracle). B-tree indexes for InfoCube dimensions ("high cardinality") should be set only in special cases: the dimension should be very large in comparison to the fact table, and the maintenance of the bitmap indexes (deletion and rebuild) should have a significant impact. Be aware that a B-tree index cannot be used for the star join and therefore reduces query performance. On the Performance tab of the InfoCube you can drop and recreate the system-defined indexes to improve performance.
http://help.sap.com/saphelp_nw04/helpdata/en/83/0ac9be421f874b8533f3b315230b32/frameset.htm
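For readers who are not familiar with the two index types contrasted above, here is a purely illustrative Oracle SQL sketch of the difference. The table and column names are made up, and in BW you never create these indexes by hand; the system manages them via the Performance tab.

-- Default style for a fact-table dimension column: a bitmap index
CREATE BITMAP INDEX fact_custdim_bmx ON sales_fact (key_customer_dim);

-- "High cardinality" setting: a regular B-tree index on the same kind of column
CREATE INDEX fact_orderdim_idx ON sales_fact (key_order_dim);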

Similar Messages

  • Spatial index creation for a table with more than one geometry column?

    I have a table with more than one geometry column.
    I have added a record to USER_SDO_GEOM_METADATA for every geometry column in the table.
    When I try to create spatial indexes on the geometry columns of the table, I get this error message:
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13203: failed to read USER_SDO_GEOM_METADATA table
    ORA-13203: failed to read USER_SDO_GEOM_METADATA table
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 8
    ORA-06512: at line 1
    What is the solution?

    It turned out there were errors in my USER_SDO_GEOM_METADATA entries.
    After correcting them, the problem no longer exists!
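    Since ORA-13203 usually means the metadata row for the indexed column could not be found or does not match the table and column names exactly, here is a minimal sketch of the full setup for a hypothetical table T with two geometry columns; all names, dimension bounds and the SRID are assumptions:

    -- Each geometry column needs its own USER_SDO_GEOM_METADATA row before indexing.
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
    VALUES ('T', 'GEOM_A',
            sdo_dim_array(sdo_dim_element('X', -180, 180, 0.005),
                          sdo_dim_element('Y',  -90,  90, 0.005)),
            4326);
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
    VALUES ('T', 'GEOM_B',
            sdo_dim_array(sdo_dim_element('X', -180, 180, 0.005),
                          sdo_dim_element('Y',  -90,  90, 0.005)),
            4326);
    COMMIT;

    -- One spatial index per geometry column.
    CREATE INDEX t_geom_a_sidx ON t (geom_a) INDEXTYPE IS mdsys.spatial_index;
    CREATE INDEX t_geom_b_sidx ON t (geom_b) INDEXTYPE IS mdsys.spatial_index;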

  • What would be the optimal index creation for this horribly written sql in a third party app that I cannot change?

    SQL:
           SELECT * FROM ORDERS WHERE ORDERNO = {STR} AND ITEMNO = {##} AND STEP = {##}

    That's almost certainly the wrong question.  You don't want the index(es) that will optimize that single query.  You want the index(es) that will optimize your system as a whole.
    The absolute best choice for that query would be a clustered index on ORDERNO, ITEMNO, STEP.  But each table can only have one clustered index so you would have to drop any clustered index you currently have and then create the new one.  But creating
    or dropping a clustered index can be time consuming on a large table.
    So in the absence of any other information, I would recommend trying a nonclustered index on ORDERNO, ITEMNO, STEP.  That will be faster to do and faster to drop if you decide it's not helping.
    Tom
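    A sketch of that suggestion in T-SQL follows; the index name is made up, so adjust schema and naming to your environment.

    -- Nonclustered composite index matching the three equality predicates of the query.
    CREATE NONCLUSTERED INDEX IX_Orders_OrderNo_ItemNo_Step
        ON dbo.ORDERS (ORDERNO, ITEMNO, STEP);

    -- Cheap to remove again if it turns out not to help:
    -- DROP INDEX IX_Orders_OrderNo_ItemNo_Step ON dbo.ORDERS;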

  • INDEX Creation for Query Optimization

    Hi Experts
    I have the below query:
    SELECT A~VBELN A~POSNR A~MATNR A~KWMENG A~KBMENG
           A~ERDAT A~ERZET A~PSTYV D~AUART E~ETTYP E~EDATU
      INTO TABLE INT_RES
      FROM VBAK AS D
      INNER JOIN VBAP AS A ON D~VBELN EQ A~VBELN
      INNER JOIN VBEP AS E ON E~VBELN EQ A~VBELN
                          AND E~POSNR EQ A~POSNR
      WHERE D~VBELN IN S_VBELN
        AND D~AUART IN S_AUART
        AND D~VKORG IN S_VKORG
        AND D~VBTYP EQ 'C'
        AND ( ( A~MATNR LIKE C_PREFIX_SP AND ZZ_MSPOSNR NE 0 AND KBMENG EQ 0 ) OR
              ( MATNR LIKE C_PREFIX_FP AND KWMENG NE A~KBMENG ) )
        AND A~ABGRU EQ SPACE
        AND A~MTVFP IN R_MTVFP
        AND A~PRCTR IN R_PRCT
        AND E~ETENR EQ '1'.
    I have tried many ways of rewriting the query, yet the performance could not be improved. Now I want to create an index and check whether that helps.
    Could you please suggest on which table, on which fields, and in which order I should create an effective index?
    Please Help

    Hi,
    Check which selection criteria are actually populated when you run the query.
    One issue might also be the use of the LIKE keyword in the WHERE clause:
    ( ( A~MATNR LIKE C_PREFIX_SP AND ZZ_MSPOSNR NE 0 AND KBMENG EQ 0 ) OR
    ( MATNR LIKE C_PREFIX_FP AND KWMENG NE A~KBMENG ) ) AND
    Instead of this, I would suggest you select all the materials from the material master using the LIKE operator and pass them to this query in a range:
    select matnr from mara into table it_mara
      where matnr like c_prefix_sp.
    select matnr from mara appending table it_mara
      where matnr like c_prefix_fp.
    " Loop at it_mara and populate a range table, which will then be used in the select query.
    That might help improve the performance.
    regards,
    Advait

  • Regarding a deadlock during index creation

    Hi,
    We have a requirement where we load data and create the indexes for a few cubes simultaneously, but this has resulted in a deadlock situation. Can you please explain this problem in detail?
    Thanks

    Hi Mahesh,
    This kind of problem arises when index creation for your data targets runs in parallel and those data targets share some common data, such as master data. Since both jobs refer to the same objects, a deadlock situation can occur.
    thanks,
    vinay
    Edited by: vinaykumarhs on Mar 20, 2011 12:26 AM

  • Delete cube request - before or after index creation?

    Hi Folks,
    a quick question: I plan to delete, for one cube, requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder whether this should be done after the new load but before the index is created, or after the index is created.
    I guess it is after the index is created, otherwise it would take longer to find the requests that should be deleted. The index will be slightly degenerated due to the deletion, but that should be only marginal.
    Am I right or wrong?
    Thanks for all replies in advance,
    Axel

    hi,
    a quick question: I plan to delete, for one cube, requests that are older than 3 days (the cube only hosts short-term reporting). Now I wonder whether this should be done after the new load but before the index is created, or after the index is created.
    The deletion should be done before index creation: once the indexes are created, the index entries remain even after the corresponding data is deleted, which unnecessarily increases the index size.
    regards,
    Arvind.

  • Where can I download AWM for cube creation?

    Hi experts
    Where can I download AWM (Analytic Workspace Manager) for cube creation?
    Thanks in advance
    Regards
    Frnds

    See the "Downloads" portlet on this page: http://www.oracle.com/technology/products/bi/olap/index.html

  • I am looking for a way to automate index creation using Adobe Reader Pro without having to use the screen user interface, as the indexing has to be run by a batch process.

    I am looking for a way to automate index creation using Adobe Reader Pro without having to use the screen user interface, as the indexing has to be run by a batch process.

    [discussion moved to Creating, Editing & Exporting PDFs forum.]

  • Issue with index creation for an InfoCube

    Hi,
    I have an issue with index creation for an InfoCube in SAP BI. When I create the indexes using a process chain, the index step for the cube fails. When I check the indexes for this cube manually, the following message is shown:
    The recalculation can take some time (depending on data volume)
    The traffic light status is not up to date during this time
    I also tried to repair the indexes using the standard program SAP_INFOCUBE_INDEXES_REPAIR in SE38, but that leads to a dump as well.
    Dear experts, please advise on the above issue.
    Regards,
    Prasad.

    Hi,
    Please check the Performance tab in the cube's Manage screen and try a repair index from there.
    This generates a job, so check the job in SM37 and see whether it finishes. If it fails, check the job log, which will give you the exact error.
    These indexes are F fact table indexes, so if nothing else works, try activating the cube with the program RSDG_CUBE_ACTIVATE and see if that resolves the issue.
    Let us know the results.

  • Index creation for 0FIGL_O02 taking too long

    Hi,
    we load approximately 0.5 million records daily to the ODS 0FIGL_O02 and recreate the index using the function module RSSM_PROCESS_ODS_CREA_INDEXES.
    The index creation job used to run for 3-4 hours six months ago, but now it runs for 6 hours. Is there a way to decrease the job runtime?
    The number of records in the active table of the ODS is 424 million.

    hi,
    this DSO is based on the DataSource 0FI_GL_4, which is delta enabled.
    Do you mean to say that you are receiving 0.5 million records daily?
    If yes, then there is not much you can do, as the program will try to create the index incrementally and will have to work through the current index of 240 million records. One thing you can do, as a regular monthly activity, is to completely delete the index and recreate it (this may take a long time, but it would correct some of the corrupt indexes).
    The SAP note below might help you:
    Note 1413496 - DB2-z/OS: BW: Building indexes on active DSO data
    If you are not using this DSO for reporting or lookups, please do not create secondary indexes.
    regards,
    Arvind.
    Edited by: Arvind Tekra on Aug 25, 2011 5:18 PM
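    Purely to illustrate what the periodic full rebuild amounts to at the database level, here is a one-line Oracle sketch with a made-up index name; in BW you would trigger this through the DSO's index maintenance rather than with direct DDL.

    -- Rebuild a degenerated secondary index from scratch in one statement,
    -- instead of letting it degrade further load after load.
    ALTER INDEX my_dso_secondary_idx REBUILD ONLINE;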

  • Function module(s) for Cube Collapsing/Compression

    Hi Experts,
    can anybody tell me if there is an SAP function module available for collapsing/compressing the requests of a cube?
    The background is that we want to automate the collapsing of cubes with an ABAP report instead of using a process chain or RSA1.
    Any suggestions? Any pitfalls for such an implementation? Any experiences?
    We're on SAP BI 7.00 19, Oracle 10.2.0.4
    Best regards,
    yk

    Hi Srinivas,
    thanks for the quick answer, I will check the FM you mentioned.
    We are thinking of an exception list of cubes which should NOT be condensed; all OTHER cubes should be condensed.
    With a process chain we would have to maintain that OTHER list manually. In an ABAP report we can exclude the exceptions and condense the rest.
    Developers tend to "forget" to add the compression step. So over time more and more cubes store their data only in the F table and nothing in the E table (producing more workload in the form of query runtime, DB maintenance such as index creation runtime, statistics runs, and so on) and, last but not least, occupy disk space, which is expensive if you have a mirrored high-performance disk system.
    Best regards,
    yk

  • BIA Index creation error due to invalid characters

    Hi All,
    While creating a BIA index on an InfoCube, the index creation process fails with the following messages:
    Index for table '/BIC/SSPSEGTXT' is being processed
    A character set conversion is not possible.
    Parallel indexing process terminated (Task: '4')
    It turns out this field is a text field and has characters such as * and + in the description, which is causing this problem.
    How do I proceed with the index creation?
    The options I have are:
    a) Get rid of this field: impossible, since there are a few queries that require this information.
    b) Run the database scan tool RSI18N_Search to eliminate foreign characters, but this does not work for me.
    Can anyone suggest some other option or a resolution to this problem? If anyone has prior experience of working with the program RSI18N_Search, please let me know.
    Regards
    VK

    Venkat, Andres,
    You can try several things here:
    1) This is pretty ugly, but in case you need a quick way around it: create another cube without that characteristic, load all the data from the first cube into it, and start sending the daily deltas to both cubes. Of course, put the second cube into BWA.
    2) Delete the master data! When you try deleting the master data (the particular record or records you identified), you will mostly find that it is used in transactional data (in an ODS or cube), so it will not let you delete those records unless you delete them from the cubes first and then try again. Or you might be lucky, the value is not stored in any cube, and it will be deleted right away!
    The right way is the second option. However, when you delete data from cubes and ODS objects, it will lock the cube, invalidate any aggregates you have, invalidate BWA indexes, etc., and you will need to recreate them. So do it outside business hours or in your maintenance window.
    Cheers
    Tansu

  • Long running index creation

    Hi,
    We have an InfoCube with more than 8.5 million records. When we execute a delta load from its underlying ODS, we delete and recreate the index. But the index creation is taking a very long time (more than 8 hours!), which is unacceptable.
    How can we improve index creation on this InfoCube? It is a very simple one (no aggregates, no partitioning, just compression). I have been trying to find similar issues on the forums, and some SAP notes, but I did not find anything useful.
    Thanks and best regards,
    David.

    Hi Pizzaman,
    Thanks for your reply. Our DB is Oracle 9.2.0.2 and our BW is 3.1 SP14.
    Indeed, our cube only has 3 requests: an init of about 8.5 million records and 2 more requests with 0 entries.
    Our first approach has been not to remove the indexes daily but to rebuild them on a weekly basis because, as you say, the cube won't be updated with a lot of new data. We would like to improve that scenario, though.
    Right now we are trying to balance the InfoCube dimensions (there were two huge dimensions that have been reduced in size). Do you believe that huge dimensions can have an impact on cube index creation? Or is index creation only related to the fact table (which won't change)?
    Thanks,
    David.

  • Indexes on cubes or aggregates on InfoObjects

    Hello,
    Please tell me if it is possible to put indexes on cubes; are they added automatically, or is this something I have to create myself?
    I do not understand indexes; are they like aggregates?
    I need to find information that explains this.
    Thanks for the help.
    Newbie

    Indexes are quite different from aggregates.
    An aggregate is a slice of a cube that speeds up data retrieval when a query is executed on the cube. Basically it is a kind of snapshot of key figures and business indicators (characteristics) that is used for the initial query result.
    An index is a database structure that reduces query response time. When an object is activated, the system automatically creates the primary indexes. Optionally, you can create additional indexes, called secondary indexes. Before loading data, it is advisable to delete the indexes and recreate them after the load.
    Indexes act like pointers for getting at the data quickly. "Delete" drops the indexes and "create" rebuilds them.
    We delete the indexes before loading because during the load the database has to maintain every existing index for each inserted record, which hurts data load performance; dropping them and recreating them afterwards takes less time than updating the existing ones.
    One more point to take care of: if you have more than 50 million records, dropping and recreating with every load is not a good practice; instead, delete and recreate the indexes at the weekend when there are no users.
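    At the database level, the drop-before-load pattern described above looks roughly like the following generic SQL sketch; all table and index names are made up, and in BW you trigger these steps from the Performance tab or a process chain rather than by hand.

    -- Drop the secondary index before the mass load...
    DROP INDEX fact_custdim_idx;

    -- ...run the load (placeholder for the BW data load)...
    INSERT INTO sales_fact SELECT * FROM sales_fact_staging;

    -- ...then rebuild the index once instead of maintaining it row by row.
    CREATE INDEX fact_custdim_idx ON sales_fact (key_customer_dim);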

  • Help needed in index creation and its impact on insertion of records

    Hi All,
    I have a situation like this, and the process involves 4 tables.
    Among the 4 tables, 2 tables have around 30 columns each, and the other 2 has 15 columns each.
    This process contains validation and insert procedure.
    I have already created some 8 index for one table, and an average of around 3 index on other tables.
    Now the situation is like, i have a select statement in validation procedure, which involves select statement based on all the 4 tables.
    When i try to run that select statement, it takes around 30 secs, and when checked for plan, it takes around 21k memory.
    Now, i am in a situation to create new index for all the table for the purpose of this select statement.
    The no.of times this select statement executes, is like, for around 1000 times of insert into table, 200 times this select statement gets executed, and the record count of these tables would be around one crore.
    Will the no.of index created in a table impacts insert statement performace, or can we create as many index as we want in a table ? Which is the best practise ?
    Please guide me in this !!!
    Regards,
    Shivakumar A

    Hi,
    index creation will most definitely impact your DML performance because when inserting into the table you'll have to update index entries as well. Typically it's a small overhead that is acceptable in most situations, but the only way to say definitively whether or not it is acceptable to you is by testing. Set up some tests, measure performance of some typical selects, updates and inserts with and without an index, and you will have some data to base your decision on.
    Best regards,
    Nikolay
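    A minimal Oracle sketch of the kind of measurement Nikolay suggests; table, column and index names here are hypothetical.

    -- Baseline: check the plan of the problematic SELECT without the new index.
    EXPLAIN PLAN FOR
      SELECT * FROM orders_big WHERE order_no = :1 AND item_no = :2;
    SELECT * FROM TABLE(dbms_xplan.display);

    -- Add the candidate index for the validation SELECT...
    CREATE INDEX orders_big_no_item_idx ON orders_big (order_no, item_no);

    -- ...re-check the plan, then time a representative batch of inserts
    -- with the index in place to see how much DML overhead it adds.
    INSERT INTO orders_big (order_no, item_no, qty)
      SELECT order_no, item_no, qty FROM orders_staging;

    -- If the DML cost outweighs the query benefit, drop the index again.
    DROP INDEX orders_big_no_item_idx;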
