InfoCube index?

Hi,
I need to load data into some InfoCubes during working hours. Therefore, the InfoCube indexes should be deleted as usual.
The data load will take approx. 4-5 hours. Since the InfoCube indexes are deleted during that time, do the BIA indexes already cover the performance need when users run their reports? Or are the InfoCube indexes still needed for query performance?
Regards,
Erdem

Hi,
If the InfoCube is available through BIA, you can delete the secondary indexes from the InfoCube without any problems, because the InfoCube tables are then not needed for reporting purposes. Apart from that, it is not always necessary or wise to delete the secondary indexes before a data load; this depends on factors like the database system, parallel loading and cube compression.
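If you do drop them around a load, the same steps the process chain variants perform can also be scripted. Here is a minimal sketch, assuming a placeholder cube name ZSALES and the standard function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_REPAIR; please verify the names and interfaces in SE37 on your release.

    REPORT z_index_around_load_sketch.

    " Drop the cube's secondary indexes before the load ...
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
      EXPORTING
        i_infocube = 'ZSALES'.   " placeholder cube name

    " ... run the data load here (InfoPackage / DTP / process chain) ...

    " ... and rebuild the missing indexes after the load has finished.
    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
      EXPORTING
        i_infocube = 'ZSALES'.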
Stefan

Similar Messages

  • Delete InfoCube Index Question

    Hi,
     Before my data loads I have always dropped the InfoCube indexes. I recently partitioned our largest InfoCube, and now dropping the indexes takes forever, so I can't really drop them during our daytime loads: deleting and rebuilding the indexes on this InfoCube would take too long.
    Does anyone know whether it is normal for a partitioned InfoCube to take this long to drop and rebuild indexes?
    Thanks for any ideas or thoughts!

    Hey Kenneth,
    Since you stated that this InfoCube is large and you recently partitioned it, make sure you compress your requests. Also consider compressing with zero elimination as much as possible - this will reduce the data volume to be indexed.
    How big is the DB server this is running on? Are the DBAs looking at the DB settings that could impact index creation time, e.g. Oracle init settings, temp space, index tablespaces, etc.?
    You might also need to delete unused indexes that still exist on your cube.
    Also, these index rebuilds should run with 'no logging' specified, which is the default, but it is worth checking and confirming.
    Other than this, OSS note 323090 might be worth checking for you.
    Hope this helps!
    Thanks,
    Sheen

  • InfoCube Indexing

    Hi All,
    Could you please tell me which table gets updated when DB indexes are created for an InfoCube? I would like to find out when the DB indexes were last created.
    Regards,
    Jo

    Hi Jo,
    In transaction SE14 you can see the index details, including when they were created.
    The DDIC index tables also record when the indexes on the cube's fact table were last changed, so you can check there as well.
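    If you prefer to check directly in the system, here is a minimal ABAP sketch. It assumes the cube's fact table is /BIC/FZSALES (a placeholder name) and reads the DDIC index-header table DD12L, whose AS4USER/AS4DATE/AS4TIME fields record the last change; please verify the table and field names in SE11 on your release.

        REPORT z_index_dates_sketch.

        DATA: lt_idx TYPE STANDARD TABLE OF dd12l,
              ls_idx TYPE dd12l.

        " Read the index headers of the cube's fact table from the DDIC.
        SELECT * FROM dd12l
          INTO TABLE lt_idx
          WHERE sqltab = '/BIC/FZSALES'.   " placeholder fact-table name

        LOOP AT lt_idx INTO ls_idx.
          WRITE: / ls_idx-indexname,
                   ls_idx-as4user,
                   ls_idx-as4date,
                   ls_idx-as4time.
        ENDLOOP.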
    Thanks & Regards,
    srinu.Rapolu.

  • Need info about Infocube Index

    Hi All,
    Can anyone please explain indexes to me?
    What happens in a cube when indexes are created or deleted?
    How do I create/delete indexes?
    What are indexes used for?
    Regards,
    Jackie.
    Edited by: Jackie on Sep 12, 2011 6:58 AM

    Hi Jackie,
    Indexes are mainly used to improve the performance of data retrieval, which in turn improves reporting performance. Indexes can be created easily: right-click the InfoCube -> Manage -> Performance tab -> Create indexes. You can check this in your own system by fetching data from the cube, or running a report, with and without indexes; you will see the difference immediately (see the sketch below).
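    To quantify that with/without comparison, here is a rough sketch, assuming a placeholder fact table /BIC/FZSALES. A realistic test would use a selective WHERE clause that the secondary indexes actually cover, since a plain count mostly scans the whole table anyway.

        REPORT z_index_timing_sketch.

        DATA: lv_tab TYPE tabname VALUE '/BIC/FZSALES',   " placeholder
              lv_cnt TYPE i,
              lv_t0  TYPE i,
              lv_t1  TYPE i.

        " Run this once with indexes in place and once after dropping
        " them (Manage -> Performance tab) and compare the runtimes.
        GET RUN TIME FIELD lv_t0.
        SELECT COUNT( * ) INTO lv_cnt FROM (lv_tab).
        GET RUN TIME FIELD lv_t1.

        WRITE: / 'Rows read:', lv_cnt,
               / 'Elapsed (microseconds):', lv_t1 - lv_t0.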
    Alternatively, you can also use aggregates and the BW Accelerator (BWA) for better query performance. Check out these basic links for info on indexes and BWA:
    http://help.sap.com/saphelp_bw30b/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    http://help.sap.com/saphelp_nw73/helpdata/en/4c/2c87e2477f51e6e10000000a42189b/content.htm
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b04a79b4-bbea-2b10-da86-bf0fe933fe34?quicklink=index&overridelayout=true
    Thanks

  • DBIF_RSQL_SQL_ERROR while loading data from PSA to InfoCube!

    Hi Friends,
    I know this topic has been posted before, and I have gone through almost all the related threads, but I could not find a solution for this issue. Here are the details:
    I am in production, loading data to one of the cubes through the PSA. The data arrives fine in the PSA (around 1 million records), but when I update the data to the InfoCube through a DTP I get the following error:
    Dump: ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    when i click on the error it says the following:
    DBIF_RSQL_SQL_ERROR
    CX_SY_OPEN_SQL_DB
    An exception occurred that is explained in detail below.
    The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB'
    According to the previous threads on this issue, it is a tablespace problem in the database. But I am sure it is not a tablespace issue, as my other loads with far more records are successful. I also checked DB02 and see no problem with the tablespaces.
    I tried selective loading through the DTP and the data load was successful. But this is not the way we want to load the data; we don't want to put any selections in the InfoPackage.
    I checked ST22 and found the following dumps:
    CONNE_IMPORT_WRONG_COMP_TYPE    CX_SY_IMPORT_MISMATCH_ERROR
    OBJECTS_OBJREF_NOT_ASSIGNED
    CX_SY_REF_IS_INITIAL
    DBIF_RSQL_SQL_ERROR
    CX_SY_OPEN_SQL_DB
    DBIF_RSQL_SQL_ERROR
    CX_SY_OPEN_SQL_DB
    I tried deleting the InfoCube indexes before the load from the PSA, but it did not work for me.
    I would highly appreciate any help, and points will be awarded.
    Thanks,
    Manmit

    Thanks Srinivas. I cannot reduce the package size, because I am loading data from the PSA to the InfoCube through a DTP. The DTP is created on a generated export datasource, i.e. 8XYZABC, which does not allow me to change the package size; it is decided at runtime. I can see the following under the Extraction tab of the DTP:
    "The package size corresponds to package size in source.
    It is determined dynamically at runtime."
    Any idea how to solve this problem?

  • BIA - Non-Cumulative InfoCubes

    Does anyone have experience with BIA and non-cumulative InfoCubes? Is the validity table (L table) of the InfoCube indexed, making it more feasible to add further validity-determining characteristics to the cube?

    Hi,
    The BI Accelerator (BIA) allows you to improve the performance of BI queries when data is read from InfoCubes; however, this is only possible with InfoCubes that have cumulative key figures...
    The note provided by Marc is really nice. Anyway, check these:
    http://help.sap.com/saphelp_nw04s/helpdata/en/a8/48c0417951d117e10000000a155106/frameset.htm
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/11c4b71d-0a01-0010-5ca0-aadc2415b137#top
    Regards,
    Debjani.....

  • Repartitioning Problem - Indexes Rebuild

    Hi, I ran a repartitioning test in QA prior to any production activities, and I am ending up with an index problem on my InfoCube. The repartitioning monitor shows everything green as the final result, but deeper in the logs there is a yellow warning that not all of the indexes could be rebuilt - message RSDU_REPART450.
    I have tried everything to rebuild the indexes on the InfoCube, but the index status now always shows as red. I am currently trying a compression, and then I'll try to rebuild the indexes again. After the compression the E fact table is populated, but the F fact table still has records! SAP_INFOCUBE_DESIGNS shows that both the F and E tables have 2+ million records! The compression logs say everything went fine, and I verified that the Performance tab shows all requests as compressed! What gives...
    /BIC/EZ_P_C003      rows:  2,812,700    ratio:         50  %
    /BIC/FZ_P_C003      rows:  2,856,833    ratio:         50  %
    Any ideas or advice what I should do next?

    Found note 1283322 - Repartitioning corrupts the indexes.
    Not sure if this fixes it; the note is only valid for SPS 16 and above, and we are only at SPS 15.
    If you read the note, it says that your InfoCube indexes get corrupted if your InfoCube name is longer than 7 characters. Who tested that functionality???
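    One thing worth noting: as far as I know, SAP_INFOCUBE_DESIGNS reports row counts from the database statistics, which can be stale; a live count against the E and F fact tables is a quick cross-check. A minimal sketch using the table names from the post above:

        REPORT z_fact_rowcount_sketch.

        DATA: lv_tab TYPE tabname,
              lv_cnt TYPE i.

        " Live row counts, independent of the (possibly outdated)
        " DB statistics that SAP_INFOCUBE_DESIGNS displays.
        lv_tab = '/BIC/EZ_P_C003'.
        SELECT COUNT( * ) INTO lv_cnt FROM (lv_tab).
        WRITE: / lv_tab, lv_cnt.

        lv_tab = '/BIC/FZ_P_C003'.
        SELECT COUNT( * ) INTO lv_cnt FROM (lv_tab).
        WRITE: / lv_tab, lv_cnt.

    If the live count of the F table is near zero while SAP_INFOCUBE_DESIGNS still shows millions, refreshing the DB statistics should bring the report back in line.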
    Thanks,

  • Index creation for cubes

    Hi all,
    I created a cube in the BW system. When I choose InfoProvider -> cube -> Manage -> Performance, there are two index options, "check indexes" and "check aggregate indexes"; I would like to know the difference between them.
    Below that is the database statistics section. When I click the "create statistics (batch)" button, a new screen is shown where you can select two options, "refresh DB statistics after each data load" and "also refresh statistics after delta upload"; I would like to know the meaning of both options.
    Kindly Help,
    Thanks
    Laura Gómez

    Hi,
    InfoCube indexes
    InfoCubes (fact and dimension tables) are fully indexed and usually do not require additional indexes.
    The indexing is as follows: the dimension columns of the fact table (which consists of dimension IDs and key figures) are indexed, with bitmap indexes by default on Oracle. B-tree indexes for InfoCube dimensions ("high cardinality") should be set only in special cases: the dimension should be very large in comparison to the fact table, and the maintenance time for the bitmap indexes (deletion and rebuild) should have a significant impact. Be aware that a B-tree index cannot be used for the star join and hence reduces query performance. In the Performance tab of the InfoCube, you can drop and recreate the system-defined indexes to improve performance.
    http://help.sap.com/saphelp_nw04/helpdata/en/83/0ac9be421f874b8533f3b315230b32/frameset.htm

  • BI Loading to Cube Manually without creating Indexes

    BW 3.5
    I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS, after loading to the staging and transformation layer.
    The data load into the InfoCube is scheduled in the process chain as:
    Delete Index -> Delete Contents of the Cube -> Load Data to the Cube -> Create Index.
    The above process chain load to the cube normally takes 5-6 hrs.
    My only concern is that at times the process chain fails at the staging or transformation layer, and I then have to rectify that manually.
    After rectifying the error, I have to load the data to the cube.
    At that point I am left with only a couple of hours, say 2-3 hrs, to complete the cube load before business hours.
    So, for the case where I am short of time to load data to the cube via the process chain, kindly let me know:
    Can I manually delete the contents of the cube and load the data without deleting the existing indexes beforehand and recreating them afterwards? Index creation normally takes a long time, which I want to avoid when I am short of time.
    Can I do this occasionally, and what are the impacts?
    When the load to the InfoCube then runs via the process chain on the other, normal working days, is it going to fail or will it go through?
    Also, does deleting the contents of the cube delete the indexes?
    Thanks
    Note: as far as I understand, indexes are created to improve performance at both the data load and the query level.
    your input will be appreciated.

    Hi Pawan,
    Please find my views inline below, marked with >>.
    > I have a process chain scheduled overnight which loads data to the InfoCubes from the ODS. The data load into the InfoCube is scheduled as: Delete Index -> Delete Contents of the Cube -> Load Data to the Cube -> Create Index.
    >> I assume you are deleting the entire contents of the cube. If this is the normal pattern of loads to this cube, and there are no other loads to it, you may consider the InfoCube setting "Delete InfoCube indexes before each data load and then refresh". You will find this setting in the Performance tab, in the create index (batch) option. Read the F1 help of the checkbox; it provides more information.
    > Can I manually delete the contents of the cube and load the data, without deleting the existing indexes beforehand and recreating them afterwards?
    >> Yes, you can.
    > Can I do this occasionally, and what are the impacts?
    >> Impacts: lower query performance and lower loading performance, as you mentioned.
    > When the load then runs via the process chain on the other, normal working days, is it going to fail or will it go through?
    >> I don't quite understand the question, but I assume you mean: if you do a manual load, will there be a failure the next day? There wouldn't be.
    > Also, does deleting the contents of the cube delete the indexes?
    >> Yes, it does.
    >> Pawan - you can skip creating indexes, but you will have slower query performance. However, if there are no further loads to this cube, you could also create your indexes during business hours. I think building indexes demands a lock on the cube, and since you are not loading anything else you should be able to manage it. Lastly, is there no way you can remodel this cube and flow - do you really need full data loads?
    > Note: as far as I understand, indexes are created to improve performance at both the data load and the query level.
    >> True.
    Hope it helps,
    Regards,
    Sunmit.

  • Bad loading performance CO-PA Extractor (PSA) to InfoCube

    Hello Experts,
    I activated a CO-PA extractor for one controlling area, with all VV* key figures (approx. 200 KFs) and around 30 InfoObjects.
    I have loaded fiscal year 2010 from the source system into BW. Looking at the PSA, the package size of each package is around 5,000 data records (approx. 14,000,000 data records in total).
    When I load the data into the InfoCube via a transformation (1:1 mapping) and DTP, the load takes more than 14 hours, because the DTP loads the data package by package (around 2,800 packages) into the InfoCube.
    I already increased the number of batch processes and deleted the InfoCube indexes, but the load still takes more than 12 hours.
    I guess the solution would be to increase the number of data records per package, but when loading data from the DataSource into the InfoCube you cannot choose the package size.
    Can you help me, please?
    Thank you
    Stefan

    Hi,
    refer to the links below.
    For Cost Center Accounting, the most important datasources are 0CO_OM_CCA_9 (actual costs - line items) [http://help.sap.com/saphelp_nw04s/helpdata/en/8f/d93c00e39d5b4fb6eaa3bf16e6fb09/frameset.htm]
    and 0CO_OM_CCA_1 (plan and actual costs - totals per fiscal period) [http://help.sap.com/saphelp_nw04s/helpdata/en/ae/8876724e671341a91ff3ba711bacd0/frameset.htm]
    Thanks,

  • Process Chains Problem

    Hi,
    I'm facing a problem with the triggering of process chains.
    When I create a new process chain and schedule it for immediate run using 'Activate & Schedule' (F8), the chain fails to start.
    In SM37 I can see the following status for
    BI_PROCESS_TRIGGER in the job log:
      Job started
      Step 001 started (program RSPROCESS, variant XXXXX,
      user ID ALEREMOTE)
    The RSPROCESS variant is correct with WAIT = 0 & TYPE = TRIGGER
    The status of BI_PROCESS_TRIGGER remains ACTIVE & it seems to continue to infinity. The process never seems to continue beyond this point.
    This problem happens only to the newly built chains, existing chains in the system are working fine.
    A similar problem exists with Meta chains.
    I have an existing chain to which I've attached a new chain as a 'Local Process Chain'. The main chain is scheduled for immediate run, and the local chain as 'Start Using Meta Chain or API'.
    When I run the main chain (F8) it executes fine but as it enters the local chain it seems to hang. (The other branches of the main chain run to completion).
    SM37 shows the following status for the local chain:
    BI_PROCESS_CHAIN as Active 
    BI_PROCESS_TRIGGER as Active  (Job log shows 'Job Started' and no further info)
    The Job log for BI_PROCESS_CHAIN shows the following:
    Job started
    Step 001 started (program RSPROCESS, variant &0000000007829, user ID ALEREMOTE)
    Communication buffer deleted from previous run 3YD14KALESRSTNM5MHPQ9Z5OW (status X)
    Version 'M' of chain SP_VENDOR_EVAL_INIT was saved without checking
    Version 'A' of chain SP_VENDOR_EVAL_INIT was saved without checking
    The Chain Was Removed From Scheduling
    Program RSPROCESS successfully scheduled as job BI_PROCESS_DROPINDEX with ID 07460801
    Program RSPROCESS successfully scheduled as job BI_PROCESS_INDEX with ID 07460801
    Program RSPROCESS successfully scheduled as job BI_PROCESS_LOADING with ID 07460801
    Program RSPROCESS successfully scheduled as job BI_PROCESS_LOADING with ID 07460901
    Program RSPROCESS successfully scheduled as job BI_PROCESS_ODSACTIVAT with ID 07460901
    Chain  <local chain> Was Activated And Scheduled
    (The  <local chain> has variants for deleting & recreating InfoCube Indexes, loading data to Cube & ODS, and activating ODS )
    It seems the Trigger process of the local chain never completes and so the local chain never runs.
    Any help to solve this issue will be greatly appreciated. I closed and reopened the transaction, and even logged out of SAP Logon and logged in again, but these steps don't seem to help much. The only difference: in my second case above, SM37 initially showed only BI_PROCESS_CHAIN as active (without the job BI_PROCESS_TRIGGER), and after my logout and login the BI_PROCESS_TRIGGER seems to have started. But nothing more than that so far!
    Thanks in advance,
    Melwyn

    Hi All,
    The problem is resolved by deleting the lock in SM12.
    The following lock appears (and stays!) when the chain is triggered:
    ALEREMOTE <time> E RSPCLOGCHA <chain name> #########################
    For a normal process chain, the lock is released when you leave the RSPC transaction, and the chain then executes.
    In the case of meta chains (as explained earlier), this lock appears when execution of the local chain begins, BUT for the local chain the lock remains even if you leave RSPC.
    In both scenarios, deleting this lock kicks off the normal execution of the process chain.
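    For what it's worth, the same lock entries can also be listed programmatically. A rough sketch using the standard function module ENQUEUE_READ; I am quoting its interface from memory, so please verify it in SE37 before relying on it.

        REPORT z_rspc_locks_sketch.

        DATA: lt_enq TYPE STANDARD TABLE OF seqg3,
              ls_enq TYPE seqg3.

        " List the enqueue entries on RSPCLOGCHA - the same locks
        " that SM12 displays.
        CALL FUNCTION 'ENQUEUE_READ'
          EXPORTING
            gclient = sy-mandt
            gname   = 'RSPCLOGCHA'
            guname  = space          " all users, not just the current one
          TABLES
            enq     = lt_enq.

        LOOP AT lt_enq INTO ls_enq.
          WRITE: / ls_enq-guname, ls_enq-gname, ls_enq-garg.
        ENDLOOP.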
    Thanks everyone for your inputs.
    Cheers,
    Melwyn

  • Caller 70 is missing time out (short dump)

    When I was trying to load transaction data into the cube, the load failed.
    The error message was:
    short dump: caller 70 is missing
    I checked ST22; it shows a time out.
    When I check on the R/3 side, the extraction was over and the job had finished.
    The data comes up to the PSA but is not updated into the data target, and the load fails after some time.
    Can anyone give me a suggestion?
    thanks in advance
    Ram

    Hi Ram.
    We were getting the same message. SAP recently published note 631668, which suggests dropping the InfoCube indexes before the load and rebuilding them afterwards.
    I set our system up to do that last night. We didn't get the error, but a single set of loads is not enough for me to say that it works. Still, it might be worth trying in your case.
    Regards,
    Adam

  • PSA Performance

    Hi All,
    We have one particular load that pulls from our APO system to the PSA and finally pushes to a cube. The pull from APO and load to PSA take the expected time, but the push from PSA to the cube takes up to 16 hrs. When I view the process in sm66 I see it is stuck on a Sequential Read of table /BIC/B0000250000. I'm assuming this is a partition in the PSA.
    Anyone have an idea why it would take so long to read from the PSA for just this load? We have other loads that do the same thing from the same system that run in a normal amount of time. For some reason it looks like the partition in the PSA gets screwed up for this load. I'm assuming that it is missing an index or the stats are not up to date. Can anyone let me know how to check those for a partition? And why would the indexes and stats not get updated for just this load when others work fine?
    Thanks in advance.

    hi Jason,
    try checking whether the database statistics are current for that PSA table; in transaction DB20 you can update them with 'Create statistics' (the blank page icon).
    if the above doesn't help, consult with Basis and check the setting rsdb/max_in_blocking_factor (default value = 1000; try 100?)
    also make sure the InfoCube indexes are deleted before data loading and re-created after loading.
    oss note 567747 - 'Composite note BW 3.x performance: Extraction & loading' may be useful.
    hope this helps.

  • BIA Dummy Cube to load master data

    Hi everyone,
    We've initiated a project to implement BIA and the racks will arrive in the next few weeks. Something I've heard discussed, but not found documented, is that some companies built a "dummy cube" consisting of all the master data involved in the cubes to be loaded to BIA. Apparently, this is to avoid the potential for locking the master data indexes as multiple cubes are indexed in parallel. Having the master data indexed in advance of indexing several cubes in parallel is apparently much faster, too.
    See "Competing Processes During Indexing"
    [Activating and Filling SAP NetWeaver BI Accelerator Indexes|http://help.sap.com/saphelp_nw2004s/helpdata/en/43/5391420f87a970e10000000a155106/content.htm]
    My questions are: Is this master data "dummy cube" approach documented somewhere? Is this only for the initial build, or is this used for ongoing index rebuilds such that new master data objects are consistently added to the dummy cube? Is this the right approach to avoid master data index locking job delays/restarts, or is there a better/standard approach to index all master data prior to indexing the cubes?
    Thanks for any insight!
    Doug Maltby

    Hi Doug - I'm not aware of this approach documented anywhere. Personally, I'm not sure a "dummy" cube buys you much. The reason I say that is because this "dummy" cube would only be used upon initial indexing. The amount of time to construct this cube, process chain(s), etc. would be close to the equivalent time to do the indexing. The amount of time it takes to do the initial build of the indexes depends on data volumes. From what I've seen in the field this could vary on average from 4-8 hours.
    Locking is a possibility, however, I don't believe this is very prevalent. One of the most important pieces to scheduling the initial builds is timing. You don't want to be loading data to cubes or executing change runs when this takes place. In the event locking does occur, that index build can simply be restarted. Because a lock takes place, it does not mean all of your indexes will fail. The lock may cause a single index build to fail. Reviewing the logs in SM37 or the status of the infocube index in RSDDV will also show the current status. Simply restart any that have failed.
    Hope this helps.
    Josh

  • Disable BWA for Updates

    Hello BWA Experts:
    We are planning to do updates to our BWA appliance in a couple of weeks.  Last time we did this, I remember that we had issues with our users getting error messages about BWA being unavailable when they ran queries against things that were typically indexed in BWA (because the appliance was down for maintenance).
    I know that you can exclude using BWA for a given query, or for an entire InfoCube ... but is there a "Best Practice" way to take BWA globally out-of-service to do maintenance on it?  (FYI -- We have several InfoCubes indexed in BWA, as well as several "Query as InfoProvider" query snapshots indexed in the Explorer part of our BWA appliance).
    Also, I recall last time we did an update that the vendor said that the BWA appliance would be left in a "usable state" at the end of each business day, so that each morning we could do our regular loads to BWA.  This sounded great, because it would save us having to dump and reload everything in BWA after the updates were completed.  In order to take advantage of this approach, will I need to put BWA back "In Service" at the end of each day, and take BWA back "Out of Service" before the vendor begins doing updates each morning?
    Thank you!
    Laurie Reid

    Hi Laurie,
    Well, I have created an SAP note that mentions several ways you can turn off BWA:
    2016832 - How to disable BWA for queries
    Apart from that, I don't see any other way to disable BWA as you wish. Maybe first disabling it for all queries and InfoCubes, and also disabling the connection from BW to BWA, could help. However, that's up to your business requirements.
    Thanks,
    Diego.
