BIA index issues.
Hi All,
I have a BIA index issue: the query result is wrong when I execute the query with the BIA index in transaction RSRT.
However, the master data of the InfoObject is consistent; I checked this in transaction RSRV.
Could you please give me any ideas on how to correct this?
Thanks,
Vikram.
Hi,
Most likely you need to run an attribute change run. The attribute change run updates the BIA index according to the data stored in an InfoObject. You can trigger it either from a process chain or manually via Administrator Workbench -> Tools -> Hierarchy/Attribute Changes.
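If you want to schedule the change run from a custom report rather than a process chain, a minimal ABAP sketch could look like the following (RSDDS_AGGREGATES_MAINTAIN is the standard change-run program in most BW 7.x systems, but verify the program name and its selection screen in SE38 for your release):

```abap
* Sketch: trigger the hierarchy/attribute change run from a custom report.
* Verify the program name and its selection screen in SE38 first.
REPORT z_trigger_change_run.

* Runs the standard change run for the changed master data;
* restrict the scope via the program's selection screen if needed.
SUBMIT rsdds_aggregates_maintain AND RETURN.
```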
Regards,
Adam
Similar Messages
-
How to delete BIA indexes - manually via program?
Dear all
We are currently facing an issue where our BIA indexes are growing at a tremendous rate, due to our use of Full Load and Delete Overlapping Requests activities. Checks show that our fact indexes are 6999% above the fact tables and growing by the day.
As per the SAP Best Practice suggestion, we should be deleting and rebuilding the BIA indexes on a regular basis (we plan to do this every time the loading completes). However, the SAP Best Practice only mentions the program to initially activate and fill the BIA indexes (RSDDTREX_BIA_ACTIVATE_FILL), and fails to mention a program to delete the BIA indexes automatically.
Since we are scheduling this activity, is there a program for us to delete the BIA indexes? We will subsequently rebuild them using the RSDDTREX_BIA_ACTIVATE_FILL program.
We don't mind a function module either, as we will create a wrapper program around it.
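A wrapper program along these lines might serve as a starting point (the deletion step is an assumption on my part; check the SAP notes referenced in the replies for the supported deletion report or function module at your support-package level before relying on it):

```abap
* Sketch of a drop-and-rebuild wrapper for one InfoCube's BIA index.
* The deletion call is hypothetical -- look up the supported report or
* function module for your support-package level before using this.
REPORT z_bia_drop_and_rebuild.

PARAMETERS p_cube TYPE c LENGTH 30 OBLIGATORY.

* 1. Hypothetical deletion step (the FM name is an assumption):
*    CALL FUNCTION 'RSDDTREX_INDEX_DELETE'
*      EXPORTING i_infocube = p_cube.

* 2. Rebuild with the standard activate-and-fill program; supply the
*    InfoCube on its selection screen (check parameter names in SE38).
SUBMIT rsddtrex_bia_activate_fill AND RETURN.
```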
Thanks for the advice
Chris
Hi,
See if these notes are of help.
SAP Note 926609 - BIA index and metadata changes
SAP Note 917803 - Estimating the memory consumption of a BIA index
1012008 - BI accelerator index after activating the InfoCube
There may be few related notes attached to these notes.
check them also.
Hope this helps.
Thanks,
Jitu K -
ABAP to set "Switched on/off BIA indexes for queries" flag?
Hi everyone,
Does anyone know if the "Switch on/off BIA Indexes for Queries" flag can be set for a specific cube by a delivered ABAP program?
Here's my scenario. We go live with BIA next Monday, 9/22. I've indexed our cubes in the production BIA this past weekend and they're rolled up nightly. Until the go-live, I've manually set the "Switched Off for Queries" flag via RSDDBIAMON > BI Accelerator > Index Settings > Switch on/off BIA Indexes for Queries.
However, one indexed cube is deleted fully every night by the flag in the full load infopackage. In testing, I saw that the BIA fact table index was fully deleted when the E and F tables of the cube were truncated, and then reindexed when the full package is loaded and rolled up. This is all ok.
The issue is that after the delete, load, and rollup, the "Switched Off for Queries" flag is not set, and I have had to manually reset it to off in the morning. This won't be an issue (hopefully) once we go live, but it does seem like a bug and I'll likely submit a customer message. I would think the delete, index, and rollup process should not change the status of this flag.
Does anyone know if that flag can be programmatically set?
Thanks for any advice,
Doug
Thanks, Vitaliy!
I think you're exactly right. It seems to be a bug in the processing of just this specific type of index reprocessing, i.e. dropping and reindexing. The indexing process properly retains the INA status on the other delta-indexed cubes.
I'm barely ABAP OO literate, but do understand enough to find, review and test the class and method code. Thanks for finding it!
Also, I reviewed table RSDDTREXDIR and it is correct/current. It does spawn another question, though. The field ITYPE (Type of BIA Index) is consistently ICB except for 2 cubes/indexes, which are typed as PA2 and PA9. When I check the possible values on the domain RSDDTREX_TAGGR_TYPE, it only lists ICB and ICF (InfoCube - flat).
Any idea what types PA2 and PA9 are, and why these 2 cubes would be different? From my perspective, they're basic infocubes, same as all the others.
Thanks again,
Doug -
Error in creating BIA index after SPS upgrade
Hi BIA experts,
we have run into issues with BIA after our BI SPS upgrade (current level: SPS18):
when we try to create a new BIA index for a cube (a normal basic cube), we get error RSD_TREX121 (index type ICB). This issue has occurred since we completed the SPS18 upgrade.
Any help and feedback are highly appreciated!!!
Regards,
Sally
Hi Sally,
"BIA index has type ICB" is not an error message. It just indicates that the index type has been initialized or changed. Can you describe the symptoms more clearly or provide more details of the job and/or application log?
Regards,
Marc
SAP NetWeaver RIG -
Transport error failure with return code 12 for BIA indexed Cube
Hello,
I was trying to transport a few cubes from the Dev to the QA system. However, the transport failed repeatedly with return code 12. I noticed that the version of the cubes in the target system had BIA indexes loaded on them. So, I deleted those indexes and re-transported the cubes. To my surprise, the transport went fine without the BIA indexes. This opens up a new avenue for discussion: dropping and recreating BIA indexes for those cubes that need to be transported.
Any thoughts on this? Has anyone faced similar problems? I want to know your experiences before we take this issue to SAP.
Thanks,
Rishi
Rishi/Vitaly/Marc,
How do you transport cubes with BIA indexes?
Do you drop/recreate the BIA index before the transport?
In my case, the transport kicked off adjustment job exactly as described in [Note 1012008|https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1012008]
The indexes look fine once this job completes successfully. The transport does not fail.
Is this approach fine?
I see that most of the customers drop/recreate indexes before transporting cubes.
Can I run into data consistency issues with this approach?
Input required.
Thanks,
Saurabh -
Are BIA indexes transportable?
We are planning on implementing BIA for our BI system and are in the initial investigation stage. I wanted to know if BIA indexes created in Dev can be transported to the remaining systems (QA and Prod), or do we have to create them again in each system (like aggregates)? What's the correct strategy?
Thanks
Hi Smitha,
The correct strategy would be to recreate the indexes from scratch on your PROD system. Practically speaking, a BIA index is considered an aggregate on the BW side and so it needs more or less the same handling. Let me give you some more explanation:
From a technical point of view, it would be possible to exchange indexes between BIA installations by exporting/importing them. However, indexes copied this way would not be usable from the BW system, as this BW system will neither have nor get the necessary information about the existence of those indexes. That means the indexes would not be recognized as such. I cannot tell which steps would also have to be taken to manipulate the BW system into bypassing the actual indexing process. Furthermore, there would also be the problem of the different namespaces under which the indexes are kept on BIA, even though that is a minor issue: usually you will use a distinct SID on the QA, DEV, and PROD systems, e.g., BWQ, BWD, BWP.
Apart from the problems mentioned: can you really make sure that the data in the DEV BW system database is exactly the same as in the PROD BW system? If not, you would run into severe inconsistencies anyway.
So, the question is whether all the effort to be spent (exporting indexes on DEV BIA, importing them on PROD, and manipulating the BW PROD system so it knows about the existence of the indexes) and the danger of running into new, unforeseen problems is really less than just reindexing all the cubes on PROD. How much time will the indexing process take? Roughly speaking, BIA can index about 200 million records (with 15 to 20 attributes) per hour. With certain parameter adaptations on BW, it is also possible to increase this number.
Best regards,
Sascha. -
Error while deleting from BIA index
Hi experts, I am getting the errors below. Please give me some clarity on this:
1. A communication error occurred with the TREX TcpIp
2. Error while deleting from BIA index; reconstruction required
Thanks & regards,
Ramesh
Dear Ramesh,
Please check the connection with BIA. Please ask the Basis team to check the BW to BIA connection if they maintain it.
You can go to RSDDBIAMON2 and check the BIA connection availability for more details.
If that is right, please try redoing the step at which you faced the error, and it should help.
Please close the question if you have got the answer or solved it.
Regards,
Den -
BIA Index Loading into Memory (for SAP) ....
Hello all,
I went to www.sdn.sap.com and chose Business Intelligence from the left menu.
In the BI Knowledge Center (right side), I chose BI Accelerator, which is under the Key Topics section.
Then I opened the document "SAP Business Intelligence Accelerator (PDF 154 KB)".
Go to the section "SAP BI Accelerator at Work" on page 5. The third point says:
BI accelerator indexes are loaded into memory, where the query is processed. In memory, joins and aggregations are done at run time. Loading of indexes into memory happens automatically at the first query request, or it can be set for preloading whenever new data is loaded.
I had the understanding that the index would be loaded into memory only when a query was first executed. But this says it can also be set for preloading whenever new data is loaded.
My question is that where this setting can be done for preloading?
It says that the preloading option is available whenever new data is loaded. Does that mean it can only be set for roll-up of the BIA index, and that for the initial load, data will still be loaded into memory when the query is first executed?
I would appreciate it if somebody who has knowledge of this could answer, as it is a new technology.
Anyone else who has knowledge of it can also answer.
Your help will be greatly appreciated.
Thank you to everybody in advance.
Sume
Hi,
I found it.
It is an option on the BIA Index Property button in the wizard. It appears only after the initial load is done. There is a check box to keep the BIA index in the main store. I think it is only applicable to roll-up.
Thank you.
Sume -
BIA gurus..
Prior to our BIA implementation we had the drop and rebuild index process variants in our process chains.
Now after the BIA implementation we have the BIA index roll-up process variant included in the process chain.
Is it still required to have the drop and rebuild index process variants during data load?
Do the InfoCube fact table indexes ever get hit after the BIA implementation?
Thanks,
Ajay Pathak.
I think you still need the delete/create index variants, as they not only help query performance but also speed up the loads to your cubes.
Documentation in the Performance tab:
"Indexes can be deleted before the load process and recreated after the loading is finished. This accelerates the data loading. However, simultaneous read processes on a cube are negatively influenced: they slow down dramatically. Therefore, this method should only be used if no read processes take place during the data loading."
More details at:
[http://help.sap.com/saphelp_nw70/helpdata/EN/80/1a6473e07211d2acb80000e829fbfe/frameset.htm] -
Effect of Cube Compression on BIA index's
What effect does cube compression have on a BIA index?
Also does SAP recommend rebuilding indexes on some periodic basis and also can we automate index deletes and rebuild processes for a specific cube using the standard process chain variants or programs?
Thank you
Compression: DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
In the standard case, you could even completely forgo these processes. But please note the following aspects:
Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
a) data mart (for mass data extraction, BIA is not used)
b) real-time InfoProviders (with most-recent queries)
Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
Hope it Helps
Chetan
@CP.. -
Delete Full Requests from an InfoCube with a BIA index on it
Hello all
I need to delete certain random Full requests from a cube which has both DB indexes and a BIA index on it.
What are the steps I should follow with regard to deleting/rebuilding the indexes?
Regards
Sanjyot
Thanks Vitaliy, but I have multiple requests with huge amounts of data to be deleted.
So I was wondering if the following steps are correct
1. delete DB indices
2. delete BIA indices
3. delete requests from cube
4. rebuild DB indices
5. rebuild BIA indices
Are there any steps to ensure that indices are rebuilt correctly?
Regards
Sanjyot -
Error when creating BIA index for cube
Hi
I am trying to create a BIA index for a cube and I am getting the error
"An error occurred. Choose "Continue" to start again from the beginning" in the second step. Could anybody explain what this error means and how to correct it?
Also, when I press the BIA Monitor tab I get the following message:
An error occurred. Choose "Continue" to start again from the beginning
"BIA Monitor Is Called for First Time
The RFC destination for the BI accelerator is not yet specified in the
system. Without the relevant entry in RSADMINA, the BIA monitor cannot
be executed. Do you want to enter the RFC destination now?"
Thanks in Advance
Sarath
Edited by: sarath kumar on Aug 21, 2008 9:42 AM
Hi,
Is there any way I can check whether the BI accelerator is installed for our BI server? I contacted the Basis team, but they do not have any idea about this; however, I heard from my ex-colleague that it is installed. Also, reports from one cube run very fast compared to a recently created cube.
Thanks
Sarath -
Prerequisites for an InfoCube to create BIA indexes
Hi experts,
I am very new to BIA. I have just loaded some data into a cube and I want to create and fill BIA indexes. But before that, I doubt whether we need to do any must-be-done activities, such as rolling up or compressing that particular request. Please guide me with a step-by-step approach to creating BIA indexes, starting from when the load to the cube has been done. Thanks.
Hi,
I would like to add that though technically there are no prerequisites for an InfoCube to create BWA indexes, from a business side there are. This assumes that being on BW 7.x is not considered a prerequisite.
1. In a BW-only environment, modeling is not very critical, as space and selection are both cheap and manual.
2. Move to BWA and suddenly space is a very expensive commodity, not only at the time of purchase but also on an annual basis. In such an environment, if remodeling can reduce a cube's footprint by 30 to 60%, then that company has to spend 30-60% less on blades and annual maintenance costs, or can fit 30-60% more cubes into the BWA. This one seems to be a no-brainer.
3. A lot of the time we run our queries from MultiProviders and go ahead and index all the cubes. Then, in some cases, the user says that the query response is still the same. When we dive deeper, we discover that the query, or MultiProvider, is dependent on a reporting DSO. Well, this reporting DSO must now either be converted to an InfoCube or have one built on top of it to leverage the BWA. This is a common first-time mistake with some BWA installations - lessons get learnt quite rapidly.
But just at a high level, there are a lot of business reasons to reshape the cubes prior to BWA, or to create "right-modeled" cubes on top of reporting DSOs. Companies will save many times more in this process, and enhance performance, than the cost of deploying this solution. - It's all automated now.. -
Indexing issue: IdcAnalyzer and other tools
Hi All,
I am facing some indexing issues with my UCM instance. Some of the files get stuck in GenWWW revision status, and some show "up to date" in the index, but really are not until a re-index happens, etc.
I used IdcAnalyzer to check the indexing issues, but during analysis I got only an error pop-up saying "Error checking index", with no details in the log.
Are there any additional arguments/settings which can be used to get more details? Otherwise, this tool doesn't seem to be of much help in this case.
What other tools can be used to check and correct the health of the index?
Also, how can we purge unneeded revisions, history, and other data in UCM to "clean up" and remove bloat (using Archiver, etc.)?
Note: I am using ucm 11g with SSXA.
Edited by: PoojaC on Aug 1, 2012 10:26 PM
Hi,
The Analyzer tool should be used when there is a mismatch in the weblayout/vault files which causes the indexer, Archiver, etc. to fail.
Read more about this from the following links:
http://docs.oracle.com/cd/E23943_01/doc.1111/e10792/e01_interface.htm#CACFDIID
http://docs.oracle.com/cd/E23943_01/doc.1111/e10792/c03_processes.htm#sthref268
Hope this helps.
Thanks
Srinath -
Hi,
I was trying to add partners to a customer through XD02.
I need to add multiple Partners to a customer.
I recorded transaction but I have index issue .
For example when I recorded it is 6th line where i need to enter but for next record it should add in 7th line.
but 6th line is getting repalced.. how do i resolve this issue?
Should I code something in FIELDMAPPING?
Regards
PrasadHai Vara
I am doing a Material Master upload through the LSMW Direct Input method.
Just follow these steps.
Using Tcode MM01, maintain the source fields as follows:
1) mara-matnr char(18)
2) mara-mbrsh char(1)
3) mara-mtart char(4)
4) makt-maktx char(40)
5) mara-meins char(3)
The flat file format is as follows:
MAT991,C,COUP,Srinivas material01,Kg
MAT992,C,COUP,Srinivas material02,Kg
MAT993,C,COUP,Srinivas material03,Kg
MAT994,C,COUP,Srinivas material04,Kg
MAT995,C,COUP,Srinivas material05,Kg
goto Tcode LSMW
give Project Name
Subproject Name
object Name
Press Enter -
Press Execute Button
It gives 13 radio-Button Options
do the following 13 steps as follows
1) select radio-Button 1 and execute
Maintain Object Attributes
select Standard Batch/Direct Input
give Object -- 0020
Method -- 0000
save & Come Back
2) select radio-Button 2 and execute
Maintain Source Structures
select the source structure and click on the Create button
give source structure name & Description
save & Come Back
3) select radio-Button 3 and execute
Maintain Source Fields
select the source structure and click on create button
give
first field
field name matnr
Field Label material Number
Field Length 18
Field Type C
Second field
field name mbrsh
Field Label Industrial Sector
Field Length 1
Field Type C
Third field
field name mtart
Field Label material type
Field Length 4
Field Type C
fourth field
field name maktx
Field Label material description
Field Length 40
Field Type C
fifth field
field name meins
Field Label base unit of measurement
Field Length 3
Field Type C
save & come back
4) select radio-Button 4 and execute
Maintain Structure Relations
go to blue lines
select first blue line and click on create relationship button
select Second blue line and click on create relationship button
select Third blue line and click on create relationship button
save & come back
5) select radio-Button 5 and execute
Maintain Field Mapping and Conversion Rules
Select the Tcode field and click on the Rule button, where you select Constant,
and press the Continue button
give Transaction Code : MM01 and press Enter
after that
1) select the MATNR field, click on Source Field (this is the field mapping), select MATNR and press Enter
2) select the MBRSH field, click on Source Field (this is the field mapping), select MBRSH and press Enter
3) select the MTART field, click on Source Field (this is the field mapping), select MTART and press Enter
4) select the MAKTX field, click on Source Field (this is the field mapping), select MAKTX and press Enter
5) select the MEINS field, click on Source Field (this is the field mapping), select MEINS and press Enter
finally
save & come back
6) select radio-Button 6 and execute
Maintain Fixed Values, Translations, User-Defined Routines
Create FIXED VALUE Name & Description as MM01
Create Translations Name & Description as MM01
Create User-Defined Routines Name & Description as MM01
after that delete all the above three just created in the 6th step
FIXED VALUE --MM01
Translations --MM01
User-Defined Routines --MM01
come back
7) select radio-Button 7 and execute
Specify Files
select On the PC (Frontend) and click on the Create button (F5)
give the path of the file like "c:\material_data.txt"
description : -
select the Comma radio button as the separator
and press enter save & come back
8) select radio-Button 8 and execute
Assign Files
Save & come back
9) select radio-Button 9 and execute
Read Files
Execute
come back
come back
10) select radio-Button 10 and execute
Display Imported Data
Execute and press enter
come back
Come back
11) select radio-Button 11 and execute
Convert Data
Execute
come back
Come back
12) select radio-Button 12 and execute
Display Converted Data
Execute & come back
13) select radio-Button 13 and execute
Start Direct Input Program
select the Program
select continue button
go with via physical file
give the lock mode as 'E'
and execute
Thanks & regards
sreeni
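Coming back to Prasad's original question about the line index in the recording: a common approach is a global counter coded in the LSMW field mapping, sketched below (the screen field that carries the line index depends on your recording, so the names here are assumptions):

```abap
* In the __GLOBAL_DATA__ section of the LSMW field mapping:
DATA: g_line TYPE i VALUE 5.

* In the field mapping of the screen field holding the line index
* (the actual field name comes from your recording -- an assumption):
g_line = g_line + 1.
* Assign the counter so each new partner lands on the next free row
* instead of overwriting line 6, e.g.:
* <recorded_index_field> = g_line.
```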