Max_cells_one_index
Dear All,
Have you got any idea about how to calculate max_cells_one_index?
As a BIA parameter it is set to 125.000.000 (the default is 3.000.000). However, running a query that extracts a large volume of data leads to the error below.
"The following error 6952 occurred in BIA server
Amount of data to be read from the BIA server is too large
Error reading the data of InfoProvider ZXXXXX
Error while reading data; navigation is possible "
Now I am really curious about how this max_cells_one_index is calculated, since the number of records transferred to OLAP is only 491.326. Here are the statistics that may give you an idea.
(The numbers below were collected by running the query against the DB, not against BIA, since the BIA run terminates with the error.)
Total DBTRANS 5.765.977
Total DBSEL 12.964.300
OLAP: Data Selection 391,637331
OLAP: Read Data 7.879
OLAP: Data Transfer 491.326
Thanks in advance..
Berna
What Revision are you on?
max_cells_one_index is obsolete as of Rev 48.
Refer to [OSS Note 1002839|https://service.sap.com/sap/support/notes/1002839] for details.
New default limit is max_cells_one_index = 40000000.
This is the total number of cells in the result set that is transferred back to OLAP.
As I understand it, this is the product of the number of rows and the number of columns.
So in your case, BIA is sending back 5.765.977 rows. Depending on the number of key figures in the query (those that are read from BIA), you approach this limit.
How many key figures does your cube have? If you have 20+ KFs, then you are already approaching the limit even at 125M (20 * 5.7M is roughly 115M).
Note: my experience has been that you run out of memory even when you are merely close to the limit mentioned above.
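To make the rule of thumb above concrete, here is a minimal Python sketch of the estimate. The row count is the DBTRANS figure from this thread; the key-figure counts are illustrative assumptions, and the 125,000,000 limit is the poster's custom setting (newer revisions default to 40,000,000):

```python
# Rough estimate, per the explanation above:
# cells transferred from BIA to OLAP = rows * key figures read from BIA.

def estimate_cells(rows: int, key_figures: int) -> int:
    """Estimate the result-set cell count sent back to OLAP."""
    return rows * key_figures

MAX_CELLS_ONE_INDEX = 125_000_000  # the poster's custom setting
rows = 5_765_977                   # Total DBTRANS from the query statistics

# Key-figure counts below are illustrative, not from the thread.
for kf in (10, 20, 25):
    cells = estimate_cells(rows, kf)
    status = "over" if cells > MAX_CELLS_ONE_INDEX else "under"
    print(f"{kf} key figures -> {cells:,} cells ({status} the limit)")
```

With 20 key figures the estimate is already within about 10% of the 125M limit, which matches the observation that you can run out of memory before the limit is formally reached.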
As for OLAP: Data Transfer 491.326, this is a different event that takes place in OLAP.
Refer to [http://help.sap.com/saphelp_nw70/helpdata/EN/45/f0488a1aa03115e10000000a1553f7/frameset.htm] for complete details of each event.
Hope this helps.
Similar Messages
-
Max_cells_one_index's volume?
Dear Developer
I came across an error when executing a query.
The error messages are as below.
Error reading the data of InfoProvider ZSD_C03$X
The following error 2703 occurred in BIA server
Error executing physical plan:aggregation: not enough memory for executi
Error executing physical plan:aggregation:not en 2703
The following errors occurred during parallel processing of query 1
Error reading the data of InfoProvider ZSD_C21$X
Amount of data to be read from the BIA server is too large
The following error 6952 occurred in BIA server
Error executing physical plan:AttributeEngine: not enough memory:BwPopAg
Error executing physical plan:AttributeEngine: no 6952
The following errors occurred during parallel processing of query 7
Errors occurred during parallel processing of query 1, RC: 3
Error while reading data; navigation is possible
I found SAP Note '1002839'.
I'm going to change the value of 'max_cells_one_index'.
To determine the value of 'max_cells_one_index', I realize that I have to know the hash table size and the number of key figures,
because the note mentions that 'max_cells_one_index' = hash table size * key figures.
However, I have no idea what the hash table size means.
Can I find it in the file 'TrexIndexServerAlert.trc'?
Is there another way?

Hello,
You are trying to transfer a result set to OLAP that is bigger than the max_cells_one_index parameter allows.
I don't know what BWA version you have, but the default was 40M; you can find the current value in the TREXIndexServer.ini file.
I am not sure how you can calculate it exactly, but it depends on the number of columns in your query, not only the rows - including hidden KFs.
If you change it, you need to restart the TREX index server. But I don't recommend changing this parameter. If you raise this number, you may run out of memory, which can bring down the entire blade, not just the one query. So I recommend you redesign only the failing query.
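Since TREXIndexServer.ini is a plain INI file, the current value can also be read programmatically. Below is a hedged Python sketch; the section name `olap` and the sample content are assumptions for illustration only, so check your actual file for the real section that holds max_cells_one_index:

```python
# Sketch: read max_cells_one_index from INI-style content with the
# standard-library configparser. The section name "olap" is an
# assumption for illustration; inspect your TREXIndexServer.ini for
# the actual section.
import configparser

sample_ini = """
[olap]
max_cells_one_index = 40000000
"""

config = configparser.ConfigParser()
config.read_string(sample_ini)  # in practice: config.read(path_to_ini)
limit = config.getint("olap", "max_cells_one_index")
print(f"max_cells_one_index = {limit:,}")
```

In a real landscape you would point `config.read()` at the ini file on the blade rather than a sample string, and remember that a changed value only takes effect after restarting the TREX index server.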
Cheers
Tansu -
Error executing physical plan: aggregation: not enough memory
Hi Everyone,
I am getting error dumps while executing a report based on an InfoCube using BIA.
This is the first time we are facing this error; could anybody please suggest possible solutions to resolve it?
Thanks and regards,
Sonal Patel.

Hi Sonal,
The error is due to the cell limit restriction on the BWA (40 million cells by default). It is not necessarily the cells you are trying to output in your BEx query, but the cells created in the internal table when reading from BWA memory.
The note below should solve your issue. If you are familiar with the TREXAdmin Python tool, you can apply it yourself; if not, kindly forward the note to your BASIS team.
1740610 - BWA 7.00 and 7.20: Using dynamic max_cells_one_index
Thanks and hope it solves your issue
Sundeep -
Error while executing query using BWA
Hello BWA experts,
I am facing an issue while running a query using BIA. I get the error message "Program error in class SAPMSSY1 method: UNCAUGHT_EXCEPTION".
My query contains one basic key figure in the columns and a characteristic, say company code, in the rows. In the free characteristics, I have sales document number. The query executes without any issue at first. But when I go to 'sales document no' in the free characteristic area and select filter values, this error pops up. This document number cannot be removed due to customer requirements.
If I restrict the query output with a selection, then it executes and filtering works without any errors. Also, if I deactivate BIA usage in transaction RSDDBIAMON, the query executes and filtering works without any errors.
If I change the query & put 0doc_number in the drill down and execute for the same selection, I get error as follows,
Error Subfield access to table (row 3, column 0 to row 3, column 0) outside of
Error Serialization/Deserialization error
Error An exception with the type CX_TREX_SERIALIZATION occurred, but was neither
Error Error reading the data of InfoProvider XXXXXX
Error Error while reading data; navigation is possible.
I was advised by SAP to increase the max_cells_one_index parameter to 200 million. We are on revision 53, and the current value of max_cells_one_index is 40 million. They also caution that 'by changing this parameter it will bring more load to BW servers'.
I would like to know how the BW server load would increase by raising this parameter. Will there be any other impact on the BIA or BW servers?
Secondly, how is the value arrived at that produces the out-of-memory / serialization error?
Also, will this kind of issue be fixed in the next revision? I know that revision 54 is available.
Please help.
Thanks,
Sandeep

Hi Marc,
Thanks for the explanation.
I am not able to even evaluate the amount of data with this document number in the drilldown. When I run this report, I get an ABAP dump "TSV_TNEW_PAGE_ALLOC_FAILED", so I guess the number of documents must be very high. Is there a way to check in the BIA Monitor the number of documents returned after execution, for a particular selection?
However, I have come across note '1157582', which talks about splitting the result set into different data packets. Will this be helpful in any case? Currently I see that the parameter 'chunk_size' is set to zero.
Thanks,
Sandeep -
Hello,
We have one report which consists of several complex queries and summary details. For certain parameter values, the report output is more than 5000 pages. When the report is run for those parameter values, Report Designer crashes abruptly. Our Report Builder version is 6.0.8.8.3.
What could be the reason for this problem?
An early response is highly appreciated.
Thanks & Regards
SubbaRao

Hi,
When you run the report, go to transaction RSDDBIAMON2 and click on 'BIA LOAD MONITOR ACTIVATE' to have a look at how the memory parameters change during execution of this report.
You might get this error in the cases below:
1. Too much data is extracted.
2. Exception aggregation is used in the query.
If so, please redesign your query.
Another option is to implement OSS notes mentioned in thread below:
Re: max_cells_one_index
-Vikram -
We had this error in our portal from a user executing a query:
not enough memory.;in executor::Executor in cube: pb1_zbm_c009
The cube has 26M rows in the fact table and we found this in the TrexIndexServer.trc file
[1199630656] 2009-08-06 09:57:36.830 e QMediator QueryMediator.cpp(00324) : 6952; Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: pb1_zbm_c009
[1199630656] 2009-08-06 09:57:36.830 e SERVER_TRACE TRexApiSearch.cpp(05162) : IndexID: pb1_zbm_c009, QueryMediator failed executing query, reason: Error executing physical plan: AttributeEngine: not enough memory.;in executor::Executor in cube: pb1_zbm_c009
[1149274432] 2009-08-06 13:13:01.136 e attributes AggregateCalculator.cpp(00169) : AggregateCalculator returning out of memory with hash table size 11461113, key figures 4
[1216416064] 2009-08-06 13:13:01.199 e executor PlanExecutor.cpp(00273) : plan <plan_1247870610064+160@bwaprod2:35803> failed with rc 6952; AttributeEngine: not enough memory.
[1216416064] 2009-08-06 13:13:01.199 e executor PlanExecutor.cpp(00273) : -- returns for <plan_1247870610064+160@bwaprod2:35803>:
Our Basis group validated that the parameter max_cells_one_index = 40000000 was set, which is the default.
So what I'm wondering is: did my user actually request 11,461,113 x 4 = 45,844,452 cells, and so exceed the 40M limit?
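A quick way to check that arithmetic is to parse the AggregateCalculator trace line quoted above and multiply the two factors, following the hash-table-size * key-figures formula from note 1002839 (the 40,000,000 limit is the default mentioned in this thread):

```python
# Sketch: extract "hash table size" and "key figures" from the
# AggregateCalculator trace line and compare the product against the
# default max_cells_one_index of 40,000,000.
import re

trace_line = ("AggregateCalculator returning out of memory "
              "with hash table size 11461113, key figures 4")

m = re.search(r"hash table size (\d+), key figures (\d+)", trace_line)
hash_table_size, key_figures = int(m.group(1)), int(m.group(2))

cells = hash_table_size * key_figures
print(f"{hash_table_size:,} x {key_figures} = {cells:,} cells")
print("exceeds 40M limit:", cells > 40_000_000)
```

So yes, 11,461,113 x 4 = 45,844,452 cells, which is above the 40M default and consistent with the 6952 out-of-memory error.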
Mike

Hi Michael,
that's possible, but can you get more information about your memory consumption?
Go in the standalone tool TREXAdmin (at OS level) to Landscape -> Services -> Load.
If the data is too old, have a look at the latest alerts in transaction TREXADMIN or in the standalone tool mentioned above.
You can also check this query via transaction RSRT and debug it. Try the query with and without BIA.
Best regards,
Jens -
BIA Server Overload, Report terminates
Dear Experts,
We are getting a data-read error when executing a report because BIA does not have enough memory:
"amount of data to be read from the BIA server is too large"
How can I check how much memory this report uses via the TREXAdmin tool?
Best Regards,
Edited by: atakan yavuz on Jan 7, 2010 11:09 AM

Hi,
When you run the report, go to transaction RSDDBIAMON2 and click on 'BIA LOAD MONITOR ACTIVATE' to have a look at how the memory parameters change during execution of this report.
You might get this error in the cases below:
1. Too much data is extracted.
2. Exception aggregation is used in the query.
If so, please redesign your query.
Another option is to implement OSS notes mentioned in thread below:
Re: max_cells_one_index
-Vikram -
Hello,
We are getting the 6952 error when users execute a large report.
Our current setting is max_cells_one_index = 65000000, but the error occurred at hash table size 2256097, key figures 82, which is approximately 185,000,000 cells.
But if I set 185000000 as per note 1002839, BIA might become unstable.
Please advise.
-SM.

Hello SM,
We installed BWA in 2006; since then we have had many errors due to a large number of records in the result set, in some cases causing BWA outages. They are either due to certain users abusing the system by running wide-open queries, or to some legitimate reasons - such as downloading the result set to an Access database!
To me, these are exceptions and not best practices, which is why I didn't spend my time and energy answering them. They should be using exception reporting instead. If you think about it, how can you read 16M cells? Should they need such large result sets, for example to provide to auditors, those queries can be scheduled as batch queries without BWA, or extracted via APD, Open Hub, or ABAP code and dumped as CSVs, but not run as online, ad-hoc queries.
If you must run this query on BWA, then take a look at the query to see if the filters are right, and whether there are any unnecessary totals/subtotals/fields/hierarchy levels you can get rid of.
Cheers
Tansu