BIA-Cubes

Hello experts,
I have added a new navigational attribute to an InfoObject. This InfoObject is used in two cubes, and I understand that both cubes are in BIA. I have switched the navigational attribute checkbox ON in both cubes to enable it for reporting, but the new navigational attribute is not yet used in any report today.
Do I need to do anything to the cubes from a BIA (Accelerator) perspective, or anything in BIA itself, because of this navigational attribute change I made in the cubes?
Thanks.
Hayward.

Hello,
kindly refer to the note below for more information regarding your question:
[926609|https://service.sap.com/sap/support/notes/926609] - BIA index and metadata changes
"An InfoCube has a BIA index. A navigation attribute is activated
for a characteristic of the InfoCube and the InfoCube is
activated again. A message appears indicating that the BIA index
was set to the "inactive" status and the BIA index must be
adjusted. This adjustment can occur either in the corresponding
RSRV repair or with the RSDDTREX_INDEX_ADJUST program. During the
adjustment, the active metadata of the InfoCube is compared with
the metadata of the BIA index and the definition of the
individual tables is compared with the index definitions."
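If you prefer to run the adjustment as a background job rather than via RSRV, here is a minimal ABAP sketch. The selection screen of RSDDTREX_INDEX_ADJUST is release-dependent, so the commented parameter name below is only a placeholder; check the program in SE38 first.

" Sketch: trigger the BIA index adjustment for the changed InfoCube in the background.
SUBMIT rsddtrex_index_adjust
  " WITH p_cube = 'MYCUBE'   <- hypothetical parameter name, verify in SE38
  AND RETURN.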
I hope this can help you.
Best Regards,
Ricardo

Similar Messages

  • Effect of Cube Compression on BIA indexes

    What effect does cube compression have on a BIA index?
    Also, does SAP recommend rebuilding indexes on a periodic basis, and can we automate the index delete and rebuild processes for a specific cube using the standard process chain variants or programs?
    Thank you

    Compression: DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
    In the standard case, you could even completely forgo these processes. But please note the following aspects:
    Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
    DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
    Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
    a) data mart (for mass data extraction, BIA is not used)
    b) real-time InfoProvider (with most-recent queries)
    Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
    Hope it Helps
    Chetan
    @CP..

  • ABAP program to switch On/Off a cube from BIA

    We have some repost jobs on a transactional cube that fail because of BIA (insufficient memory).
    These jobs run fine when BIA is turned off for this cube, and once they have finished successfully we turn BIA back on. But turning it on/off is a manual process.
    As far as I know, there is no ABAP program available to do this.
    Has anyone else come across the need to automate turning BIA on/off for a cube?
    Any workarounds?
    I know I can create a new user ID to run these jobs and then switch off BIA for this user ID.
    But this is not an option given the security policies in our organization.

    Hi Saurabh,
    it should not be necessary to switch off BIA for planning functions unless you read very high data volumes.
    Did you turn on package-wise read (see SAP Note [1157582|http://service.sap.com/sap/support/notes/1157582])? Are you on a recent revision (see SAP Note [1308131|http://service.sap.com/sap/support/notes/1308131])?
    Regards,
    Marc
    SAP NetWeaver RIG

  • How the data is fetched from the cube for reporting - with and without BIA

    hi all,
    I need to understand the scenario below (as to how the data is fetched from the cube for reporting):
    I have a query on a MultiProvider connected to cubes, say A and B. A has a BIA index, B does not. There are no aggregates created on either cube.
    CASE 1: I have taken the RSRT stats with BIA on; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
    Cube A | (blank) | 0.624305 | 8,087,502 | 2,011
    Cube B | E | 42.002653 | 1,669,126 | 6
    Cube B | F | 98.696442 | 2,426,006 | 6
    CASE 2: I have taken the RSRT stats with the BIA index disabled; in the aggregation layer it says:
    Basic InfoProvider | Table type | Viewed at | Records, Selected | Records, Transported
    Cube B | E | 46.620825 | 1,669,126 | 6
    Cube B | F | 106.148337 | 2,426,030 | 6
    Cube A | E | 61.939073 | 3,794,113 | 3,499
    Cube A | F | 90.721171 | 4,293,420 | 5,584
    Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same and the result output matches. There is no change in the number of records selected for cube A in either case; it is 8,087,502 in both.
    Can someone please clarify this difference?

    Hi,
    yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274); if it is active, more of the work for the query structure elements is already done on the BWA server, which could explain why fewer records are transported when BIA is used.
    To get more details about the selection, you can activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
    Regards,
    Jens

  • Error when creating BIA INDEX FOR CUBE

    Hi
    I am trying to create a BIA index for a cube and I am getting the error
    "An error occurred. Choose "Continue" to start again from the beginning" in the second step. Could anybody explain what this error means and how to correct it?
    Also, when I press the BIA Monitor tab I get the following message:
    "An error occurred. Choose "Continue" to start again from the beginning"
    "BIA Monitor Is Called for First Time
    The RFC destination for the BI accelerator is not yet specified in the system. Without the relevant entry in RSADMINA, the BIA monitor cannot be executed. Do you want to enter the RFC destination now?"
    Thanks in Advance
    Sarath

    Hi,
    Is there any way I can check whether a BI Accelerator is installed for our BI server? I contacted the Basis team, but they do not have any idea about this. However, I heard from my ex-colleague that it is installed, and reports from one cube run very fast compared to a recently created cube.
    Thanks
    Sarath
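    One quick way to get an indication (besides asking Basis to check SM59 for a TREX/BIA RFC destination or opening the BIA monitor) is to look at the RSADMINA entry mentioned in the message above. A minimal sketch, assuming you only want to inspect the record; the exact field holding the BIA RFC destination varies by release, so check the structure in SE11:

    " Sketch: read the BW administration record that should contain the BIA RFC destination.
    " If the corresponding field is empty, no accelerator connection has been configured yet.
    DATA ls_admin TYPE rsadmina.

    SELECT SINGLE * FROM rsadmina INTO ls_admin.
    IF sy-subrc = 0.
      " Inspect ls_admin in the debugger (or WRITE the relevant fields); the field that
      " holds the BIA/TREX RFC destination is release-dependent - check SE11.
    ENDIF.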

  • BIA Dummy Cube to load master data

    Hi everyone,
    We've initiated a project to implement BIA and the racks will arrive in the next few weeks. Something I've heard discussed, but not found documented, is that some companies built a "dummy cube" consisting of all the master data involved in the cubes to be loaded to BIA. Apparently, this is to avoid the potential for locking the master data indexes as multiple cubes are indexed in parallel. Having the master data indexed in advance of indexing several cubes in parallel is apparently much faster, too.
    See "Competing Processes During Indexing"
    [Activating and Filling SAP NetWeaver BI Accelerator Indexes|http://help.sap.com/saphelp_nw2004s/helpdata/en/43/5391420f87a970e10000000a155106/content.htm]
    My questions are: Is this master data "dummy cube" approach documented somewhere? Is this only for the initial build, or is this used for ongoing index rebuilds such that new master data objects are consistently added to the dummy cube? Is this the right approach to avoid master data index locking job delays/restarts, or is there a better/standard approach to index all master data prior to indexing the cubes?
    Thanks for any insight!
    Doug Maltby

    Hi Doug - I'm not aware of this approach documented anywhere. Personally, I'm not sure a "dummy" cube buys you much. The reason I say that is because this "dummy" cube would only be used upon initial indexing. The amount of time to construct this cube, process chain(s), etc. would be close to the equivalent time to do the indexing. The amount of time it takes to do the initial build of the indexes depends on data volumes. From what I've seen in the field this could vary on average from 4-8 hours.
    Locking is a possibility; however, I don't believe it is very prevalent. One of the most important aspects of scheduling the initial builds is timing: you don't want to be loading data to cubes or executing change runs while the indexing takes place. In the event locking does occur, that index build can simply be restarted. Just because a lock occurs does not mean all of your indexes will fail; the lock may cause a single index build to fail. Reviewing the logs in SM37 or the status of the InfoCube index in RSDDV will also show the current status. Simply restart any builds that have failed.
    Hope this helps.
    Josh

  • Cubes in process variant Aggr/BIA ROLLUP processed serially or in parallel

    Hello,
    Are cubes added to an Aggregate/BIA ROLLUP process variant processed serially or in parallel?
    We want to avoid interference due to shared characteristics.
    Thanks

    Hi Axel,
    Please have a look at the wiki content here:
    http://wiki.sdn.sap.com/wiki/display/BI/GlobalParametersforBIAIndexing
    -Vikram

  • Transport error failure with return code 12 for BIA indexed Cube

    Hello,
    I was trying to transport a few cubes from the Dev to the QA system. However, the transport failed repeatedly with return code 12. I noticed that the versions of the cubes in the target system had BIA indexes built on them. So, I deleted those indexes and re-transported the cubes. To my surprise, the transport went through fine without the BIA indexes. This now opens up a new avenue for discussion: dropping and recreating BIA indexes for cubes that need to be transported.
    Any thoughts on this aspect? Has anyone faced similar problems? I want to know your experiences before we take this issue to SAP.
    Thanks,
    Rishi

    Rishi/Vitaly/Marc,
    How do you transport cubes with BIA indexes?
    Do you drop/recreate the BIA index before the transport?
    In my case, the transport kicked off an adjustment job exactly as described in [Note 1012008|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1012008].
    The indexes look fine once this job completes successfully. The transport does not fail.
    Is this approach fine?
    I see that most customers drop/recreate indexes before transporting cubes.
    Can I run into data consistency issues with this approach?
    Input required.
    Thanks,
    Saurabh

  • Slow DTP extraction running on top of a cube with BIA

    Hi Experts,
    I have a DTP that runs on top of a cube without aggregates, but I have a BIA index built for the cube. It is taking two and a half hours to extract 2 million records, which is quite slow. For the first hour of the extraction it does not do anything. I have checked and confirmed that it is not hitting BIA. The "Use Aggregates" option does not really help as there are no aggregates (please don't suggest building aggregates). I tried changing the "Settings for Batch Manager" option from 3 to 1, but it did not help. Is there any way I can force the extraction to use BIA?
    Thanks for looking
    Shrirama

    Hi Tibolo,
    Thanks for the reply. There is no semantic grouping. Delta cannot be used, as the system I am sending the information to is not capable of handling delta.
    Thanks
    Shrirama

  • Can we add a cube to BIA if the requests are deleted daily

    Hello gurus,
    We are facing the question of which cubes should be candidates for BIA.
    We have a few InfoCubes where the data for the current period is loaded daily and the previous request is deleted. This is how these cubes handle the delta. Can we add this type of cube to BIA without having to re-index the whole cube every day?
    Most forums and experts say we need to re-index the cube once we delete data out of it selectively. Is this procedure considered selective deletion, or does it work like a regular delta? And if we roll up BIA after the load, does the BIA automatically load the delta indexes?
    Please let me know your views on this.
    Thanks in advance.
    Raju Kosuru

    Hi,
    Yes, you can do the rollup and it will take care of filling the indexes.
    We have a similar scenario, where we use "Delete Overlapping Request" for budget and target data for a few cubes. We have indexed the cubes and use rollup to fill the BWA indexes in our daily process chains. So I would say there is no harm in this process.
    But over time, depending on the data volume, you will need to delete unused BWA index data as part of your maintenance activities.
    And since, as you mentioned, you only have a few cubes, it would be best to automate the process of deleting, creating and filling the BWA indexes of these cubes using a process chain scheduled to run at a weekly (or other required) frequency depending on the data volume. This will save a lot of manual effort.
    For steps on the same, please see my response on the below thread:
    Delete and recreate BWA index of a cube in process chain
    -Vikram

  • Comparing load times w/ and w/o BIA

    We are looking at the pros/cons of a BIA implementation. Does anyone have data showing a comparison of load times, loads with compression, and BIA index times?

    I haven't seen numbers comparing load times. Loads to your cubes and compression continue whether you have BIA or not. Rollup time would be eliminated, as you would no longer need aggregates. No aggregates should also reduce change run time, perhaps a lot or only a little, depending on whether you have large aggregates with navigational attributes in them. All of that is offset to some degree by the time to update the BIA.
    Make sure you understand all the licensing costs, not just SAP's but also the hardware vendor's per-blade licensing costs. I talked to someone just the other day who was not expecting per-blade licensing; the list price of the license per blade was $75,000.

  • Fact Table index vs BIA Index

    BIA gurus..
    Prior to our BIA implementation we had the drop and rebuild index process variants in our process chains.
    Now after the BIA implementation we have the BIA index roll-up process variant included in the process chain.
    Is it still required to have the drop and rebuild index process variants during data loads?
    Do the infocube fact table indexes ever get hit after the BIA implementation ?
    Thanks,
    Ajay Pathak.

    I think you still need the delete/create index variants, as they not only help query performance but also speed up the loads to your cubes.
    Documentation in the Performance tab:
    "Indices can be deleted before the load process and after the loading is finished be recreated. This accelerates the data loading. However, simultaneous read processes to a cube are negatively influenced: they slow down dramatically. Therefore, this method should only be used if no read processes take place during the data loading."
    More details at:
    [http://help.sap.com/saphelp_nw70/helpdata/EN/80/1a6473e07211d2acb80000e829fbfe/frameset.htm]
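    If you would rather script the drop/rebuild around your loads (for example in a small custom ABAP step in the chain), a minimal sketch could call the standard function modules commonly used for this; the parameter names below are assumptions, so verify the interfaces in SE37 first:

    " Sketch: drop the secondary DB indexes of an InfoCube before a load and rebuild
    " them afterwards. Parameter names are assumptions - verify the FMs in SE37.
    DATA lv_cube TYPE rsinfocube VALUE 'MYCUBE'.   " hypothetical InfoCube name

    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
      EXPORTING
        i_infocube = lv_cube.

    " ... run the data load here ...

    CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_REPAIR'
      EXPORTING
        i_infocube = lv_cube.

    As the quoted documentation notes, this only makes sense if no read processes hit the cube during the load.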

  • How to check if BIA has an effect on a BPS function

    Dear Experts,
    As far as I know, BIA can also improve the DB read time (data selection) when performing a BPS function. We are currently using a BIA test installation to see whether it will help us. We have some BPS functions with long data selection times.
    I have now created BIA indexes on all involved real-time cubes. Just by executing the BPS functions concerned, I can see no effect. I now want to check whether the BPS functions access the BIA at all. For queries this is quite easy to check with RSRT.
    So my question is:
    Is there any tool available to check whether BIA is used by a BPS function or not? Some kind of RSRT for BPS.
    Thanks & Regards,
    Ulrich Meier

    Hi Jens,
    Thanks. I tried this. The strange result is that I get no data in that trace when running a BPS function. I do get data when running the trace during execution of a query that runs on the same InfoCubes as the planning function.
    In the meantime I found table RSDDSTATBIAUSE, which is updated each time an InfoProvider with a BIA index is accessed. Here my InfoProviders get a +1 in field BIA_USED when I access them with the BPS function.
    Furthermore, I came across OSS Note 990000 and installed the latest version of report ZBPPOBPS70 (which I already knew from BW 3.5). Here I found that the field 'BW: Used Aggregate' is not populated, although the coding of ZBPPOBPS70 indicates that it should show either the used aggregate or the BIA index. The data is read from table RSDDSTATDM, field AGGREGATE. There I found entries in AGGREGATE for 'normal' query calls, but none for BPS calls.
    With all that in mind I'm more and more confused...
    Regards,
    Ulrich
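    For anyone who wants to script the check Ulrich describes, here is a minimal sketch that snapshots the usage counters before and after running the planning function. The table and field names are taken from this thread; verify the exact structure of RSDDSTATBIAUSE in SE11 on your release.

    " Sketch: snapshot the BIA usage statistics, run the BPS function, then read them
    " again. An unchanged BIA_USED counter for the involved InfoProviders suggests the
    " planning function did not read from the BIA index.
    DATA lt_before TYPE STANDARD TABLE OF rsddstatbiause.
    DATA lt_after  TYPE STANDARD TABLE OF rsddstatbiause.

    SELECT * FROM rsddstatbiause INTO TABLE lt_before.
    " ... execute the BPS planning function here ...
    SELECT * FROM rsddstatbiause INTO TABLE lt_after.
    " Compare the BIA_USED values per InfoProvider in lt_before and lt_after.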

  • Report on Virtual cube is not working.

    Hi,
    I created a report on a virtual cube. The virtual cube is based on a function module that picks data from a MultiProvider; in the FM the MultiProvider name is passed as the import parameter.
    The MultiProvider is built on an SPO, and a BIA index has been created on the SPO. If I deactivate the BIA on the SPO, the report built on the virtual cube works in the portal; if I activate the BIA, the report does not work in the portal.
    The report works fine in RSRT and in the Analyzer whether BIA is active or inactive.
    Regards
    GK

    Hi
    The MultiProvider you created must comprise InfoCubes, DSOs and InfoObjects. Now, is the error you are getting for
    - a specific characteristic, or
    - any characteristic that is entered fourth in the series of filter selections, or
    - the fourth value of a specific characteristic?
    Is the characteristic and the value you use in the filter present in all the underlying objects included in the MultiProvider? You can check this in each of the objects in the MultiProvider independently. This will give you an idea of whether the error reported by the system is genuine or not.
    Cheers
    Umesh

  • BIA and process chain

    Hi Gurus
    Would you please advise the steps I have to follow on the BW side to make BIA start working?
    Can I use a process chain to include the BIA functionality? Also, as far as I know, we have to create one index per cube for BIA. Do we need to fill the indexes manually, or will they be filled automatically when we load the cube?
    Do we need to delete the BIA index and recreate it when we load data into the same cube again?
    Do we need to make any settings in BEx Analyzer for a query to execute through BIA?
    I appreciate your quick reply. Thanks for the help.
    Thanks

    1) If you drop the InfoCube, the BIA index should be dropped automatically.
    This means you have to rebuild the InfoCube's BIA index.
    The process type "Initial Activation and Filling of BIA Indexes" can be used as of SP13.
    2) Program RSDDTREX_AGGREGATES_FILL:
    When you execute this step, the system starts a process in the background that reads the data in the tables of the InfoCube star schema from the database and writes them to the corresponding indexes on the BI accelerator server. If the index of a master data table (S/X/Y tables) has already been created and filled by another BI accelerator index, only those records that have been subsequently added have to be indexed (read mode/fill mode "D" during indexing).
    If the aggregate was filled successfully, the status in the "Object Status" column on the Index Info tab page switches to GREEN
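    If you want to trigger the fill outside of a process chain, for example for a one-off test, a minimal sketch could schedule the fill program as a background job. The selection parameters of RSDDTREX_AGGREGATES_FILL are release-dependent, so run it once via SE38 first to see its selection screen and create a variant; the variant name below is only a placeholder.

    " Sketch: schedule the BIA fill program as a background job.
    DATA lv_jobname  TYPE btcjob VALUE 'BIA_INDEX_FILL'.
    DATA lv_jobcount TYPE btcjobcnt.

    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.

    SUBMIT rsddtrex_aggregates_fill
      USING SELECTION-SET 'MYVARIANT'     " hypothetical variant restricting the run to your cube
      VIA JOB lv_jobname NUMBER lv_jobcount
      AND RETURN.

    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'.                  " start immediately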
    https://forums.sdn.sap.com/click.jspa?searchID=5022975&messageID=3855004
    Hope it Helps
    Chetan
    @CP..
