Cube compression failing
When compressing the Inventory Movements InfoCube, we get an ORA-01422 error. Does anybody know what could cause it?
Help will be highly appreciated
A more specific question:
When trying to find the cause of this issue, it turns out that DSI_IMTKST contains several dimension IDs with the value 31.12.9999 for the calendar date (which seems to be the marker).
The error is returned because the compression fetches more than one dimension ID containing the value 31.12.9999 for the calendar date.
Any ideas why this is happening (having more than one dimension ID with this calendar date value)?
Any help will be highly appreciated, as this is preventing us from compressing the cube.
Thanks,
Arturo
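For illustration: ORA-01422 ("exact fetch returns more than requested number of rows") is raised when a SELECT ... INTO expects exactly one row but finds several. A minimal Python sketch of the duplicate check the compression is effectively failing on (the row data and table layout here are invented, not the actual SAP dictionary objects):

```python
# The compression looks up THE single time-dimension entry for the marker
# date 31.12.9999; if more than one exists, the exact fetch raises ORA-01422.

# stand-in rows for the time dimension table: (DIMID, CALDAY)
time_dim_rows = [
    (1001, "31.12.9999"),
    (1002, "15.06.2010"),
    (1003, "31.12.9999"),  # duplicate marker entry -> ORA-01422
]

marker_dimids = [dimid for dimid, calday in time_dim_rows
                 if calday == "31.12.9999"]
if len(marker_dimids) > 1:
    print("duplicate marker dimension IDs:", marker_dimids)
```

Finding and repairing such duplicate marker entries in the real dimension table is what resolves the compression error.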
Similar Messages
-
Hi,
I am facing a problem while compressing the cube. The compression job fails with the message 'insufficient input parameter supplied; exception CX_SQL_EXCEPTION occurred (program: parameter is missing)'. There is a note for a similar error message, but it is basically for non-cumulative cubes. Has anybody faced this issue before? If so, please help.
Regards,
Raghavendra.
Issue resolved. The problem was with the index. If it is the first compression of a huge volume of data in a cube, you should delete the index and then compress.
Edited by: Raghavendra Padmanaban on Aug 10, 2010 10:24 PM
-
Compression failed - Inventory Cube - 2LIS_03_BX
Hi Experts,
the compression of the initial request from DataSource 2LIS_03_BX failed. The compression was done with "no marker update".
On the development system it works fine, but on the test system the compression fails. The time dimension uses 0FISCPER and 0FISCYEAR.
All checks with RSDV are successful.
Please find the job log below. Does anyone have an idea?
Thanks
Constantin
Message text | Message class | Message no. | Message type
Job started | 0 | 516 | S
Step 001 started (program RSCOMP1, variant &0000000002226, user ID xxx) | 0 | 550 | S
Performing check and potential update for status control table | RSM1 | 490 | S
FB RSM1_CHECK_DM_GOT_REQUEST called from PRG RSSM_PROCESS_COMPRESS; row 000200 | RSM | 53 | S
Request '3.439.472'; DTA 'TKGSPLB16'; action 'C'; with dialog 'X' | RSM | 54 | S
Leave RSM1_CHECK_DM_GOT_REQUEST in row 70; Req_State '' | RSM | 55 | S
RSS2_DTP_RNR_SUBSEQ_PROC_SET GET_INSTANCE_FOR_RNR 3439472 LINE 43 | RSAR | 51 | S
RSS2_DTP_RNR_SUBSEQ_PROC_SET GET_TSTATE_FOR_RNR 2 LINE 243 | RSAR | 51 | S
Status transition 2 / 2 to 7 / 7 completed successfully | RSBK | 222 | S
RSS2_DTP_RNR_SUBSEQ_PROC_SET SET_TSTATE_FURTHER_START_OK LINE 261 | RSAR | 51 | S
Aggregation of InfoCube TKGSPLB16 to request 3439472 | DBMAN | 396 | S
statistic data written | DBMAN | 102 | S
Requirements check completed: ok | DBMAN | 102 | S
enqueue set: TKGSPLB16 | DBMAN | 102 | S
compound index checked on table: /BIC/ETKGSPLB16 | DBMAN | 102 | S
ref. points have const. pdim: 0 | DBMAN | 102 | S
enqueue released: TKGSPLB16 | DBMAN | 102 | S
Prerequisite for successful compression not fulfilled | DBMAN | 378 | S
Collapse terminated: Data target TKGSPLB16, from to 3.439.472 | RSM | 747 | S
Aggregation of InfoCube TKGSPLB16 to request 3439472 | DBMAN | 396 | S
statistic data written | DBMAN | 102 | S
Requirements check completed: ok | DBMAN | 102 | S
enqueue set: TKGSPLB16 | DBMAN | 102 | S
compound index checked on table: /BIC/ETKGSPLB16 | DBMAN | 102 | S
ref. points have const. pdim: 0 | DBMAN | 102 | S
enqueue released: TKGSPLB16 | DBMAN | 102 | S
Prerequisite for successful compression not fulfilled | DBMAN | 378 | S
Collapse terminated: Data target TKGSPLB16, from to 3439472 | RSM | 747 | S
Report RSCOMP1 ended with errors | RSM1 | 798 | E
Job cancelled after system exception ERROR_MESSAGE | 0 | 564 | A
---> Compressing the request containing the opening stock that was just uploaded. Make sure the "No marker update" indicator is not set.
The problem is the time period 0FISCPER....
When I substitute it with 0CALMONTH and so on, it works! -
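To make the marker discussion in the thread above concrete, here is a minimal sketch (plain Python, not SAP code; function and variable names are invented) of how the reference point ("marker") of a non-cumulative cube behaves during compression:

```python
# The marker (reference point) stores the current stock level; at query time,
# movements before/after the marker are subtracted/added as needed.
def compress_request(marker, request_quantities, no_marker_update=False):
    """Fold one request into the E table; update the marker unless suppressed."""
    total = sum(request_quantities)
    if not no_marker_update:
        marker += total  # quantities roll into the reference point
    return marker

# Opening stock from 2LIS_03_BX must update the marker:
marker = compress_request(0, [50, 20], no_marker_update=False)   # marker -> 70
# Historical movements (e.g. 2LIS_03_BF init) are compressed WITH
# "no marker update", so the reference point stays unchanged:
marker = compress_request(marker, [10], no_marker_update=True)   # marker stays 70
```

This is why the opening-stock request must be compressed with the "No marker update" indicator unset, while historical movement requests are compressed with it set.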
Effect of Cube Compression on BIA Indexes
What effect does cube compression have on a BIA index?
Also, does SAP recommend rebuilding indexes on some periodic basis, and can we automate index deletes and rebuilds for a specific cube using the standard process chain variants or programs?
Thank you
Compression: DB statistics and DB indexes for the InfoCubes are less relevant once you use the BI Accelerator.
In the standard case, you could even completely forgo these processes. But please note the following aspects:
Compression is still necessary for inventory InfoCubes, for InfoCubes with a significant number of cancellation requests (i.e. high compression rate), and for InfoCubes with a high number of partitions in the F-table. Note that compression requires DB statistics and DB indexes (P-index).
DB statistics and DB indexes are not used for reporting on BIA-enabled InfoCubes. However for roll-up and change run, we recommend the P-index (package) on the F-fact table.
Furthermore: up-to-date DB statistics and (some) DB indexes are necessary in the following cases:
a)data mart (for mass data extraction, BIA is not used)
b)real-time InfoProvider (with most-recent queries)
Note also that you need compressed and indexed InfoCubes with up-to-date statistics whenever you switch off the BI accelerator index.
Hope it Helps
Chetan
@CP.. -
New field added to cube, delta DTP from DSO to cube is failing
Dear all,
Scenario in BI 7.0 is:
DataSource --delta InfoPackages--> DSO --delta DTP--> Cube.
Data is loaded daily using a process chain.
We added a new field to the cube; the transformation from DSO to cube is active, and the transport was successful.
Now, the delta from DSO to cube is failing.
Error is: Dereferencing of the NULL reference,
Error while extracting from source <DSO name>
Inconsistent input parameter (parameter: Fieldname, value DATAPAKID)
My conclusion: the system is unable to load the delta due to the new field, and it wants us to initialize again (am I right?).
Do I have only one choice, deleting the data from the cube and performing the init DTP again, or is there another way?
Thanks in advance!
Regards,
Akshay Harshe
Hi Durgesh / Murli,
Thanks for quick response.
@ Durgesh: we have mapped an existing DSO field to a new field in the cube. So yes, in the DTP I can see the field in the filter. So I have to do a re-init.
@ Murli: everything is active.
Actually there are further complications, as the cube has many more sources, so I wanted to avoid selective deletion.
Regards,
Akshay -
Cube refresh fails with an error below
Hi,
We are experiencing the problem below during the planning application database refresh. We have been refreshing the database every day, but all of a sudden the error below started appearing in the log:
Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
java.io.EOFException
When the database refresh is done manually from Workspace, it succeeds. But when triggered from a unix script, it throws the above error.
Is it related to some provisioning issue where the user has been removed from MSAD? Please help me out on this.
Thanks,
mani
Edited by: sdid on Jul 29, 2012 11:16 PM
I work with 'sdid' and here is a better explanation of what exactly is going on -
As part of our nightly schedule we have a unix shell script that executes refresh of essbase cubes from planning using the 'CubeRefresh.sh' shell script.
Here is what our shell invocation looks like -
/opt/hyperion/Planning/bin/CubeRefresh.sh /A:<cube name> /U:<user id> /P:<password> /R /D /FS
Here is what 'CubeRefresh.sh' looks like -
PLN_JAR_PATH=/opt/hyperion/Planning/bin
export PLN_JAR_PATH
. "${PLN_JAR_PATH}/setHPenv.sh"
"${HS_JAVA_HOME}/bin/java" -classpath ${CLASSPATH} com.hyperion.planning.HspCubeRefreshCmd $1 $2 $3 $4 $5 $6 $7
And here is what 'setHPenv.sh' looks like -
HS_JAVA_HOME=/opt/hyperion/common/JRE/Sun/1.5.0
export HS_JAVA_HOME
HYPERION_HOME=/opt/hyperion
export HYPERION_HOME
PLN_JAR_PATH=/opt/hyperion/Planning/lib
export PLN_JAR_PATH
PLN_PROPERTIES_PATH=/opt/hyperion/deployments/Tomcat5/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/classes
export PLN_PROPERTIES_PATH
CLASSPATH=${PLN_JAR_PATH}/HspJS.jar:${PLN_PROPERTIES_PATH}:${PLN_JAR_PATH}/hbrhppluginjar:${PLN_JAR_PATH}/jakarta-regexp-1.4.jar:${PLN_JAR_PATH}/hyjdbc.jar:${PLN_JAR_PATH}/iText.jar:${PLN_JAR_PATH}/iTextAsian.jar:${PLN_JAR_PATH}/mail.jar:${PLN_JAR_PATH}/jdom.jar:${PLN_JAR_PATH}/dom.jar:${PLN_JAR_PATH}/sax.jar:${PLN_JAR_PATH}/xercesImpl.jar:${PLN_JAR_PATH}/jaxp-api.jar:${PLN_JAR_PATH}/classes12.zip:${PLN_JAR_PATH}/db2java.zip:${PLN_JAR_PATH}/db2jcc.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/css-9_3_1.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/ldapbp.jar:${PLN_JAR_PATH}/log4j.jar:${PLN_JAR_PATH}/log4j-1.2.8.jar:${PLN_JAR_PATH}/hbrhppluginjar.jar:${PLN_JAR_PATH}/ess_japi.jar:${PLN_JAR_PATH}/ess_es_server.jar:${PLN_JAR_PATH}/commons-httpclient-3.0.jar:${PLN_JAR_PATH}/commons-codec-1.3.jar:${PLN_JAR_PATH}/jakarta-slide-webdavlib.jar:${PLN_JAR_PATH}/ognl-2.6.7.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/cls-9_3_1.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/EccpressoAll.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlm.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlmutil.jar:${HYPERION_HOME}/AdminServices/server/lib/easserverplugin.jar:${PLN_JAR_PATH}/interop-sdk.jar:${PLN_JAR_PATH}/HspCopyApp.jar:${PLN_JAR_PATH}/commons-logging.jar:${CLASSPATH}
export CLASSPATH
case $OS in
HP-UX)
    SHLIB_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${SHLIB_PATH:-}
    export SHLIB_PATH
    ;;
SunOS)
    LD_LIBRARY_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LD_LIBRARY_PATH:-}
    export LD_LIBRARY_PATH
    ;;
AIX)
    LIBPATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LIBPATH:-}
    export LIBPATH
    ;;
*)
    echo "$OS is not supported"
    ;;
esac
We have not made any changes to either the shell or 'CubeRefresh.sh' or 'setHPenv.sh'
From the past couple of days the shell that executes 'CubeRefresh.sh' has been failing with the error message below.
Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
java.io.EOFException
This error is causing our Essbase cubes to not get refreshed from Planning cubes through these batch jobs.
On the other hand, the manual refresh from within Planning works.
We are on Hyperion® Planning – System 9 - Version : 9.3.1.1.10
Any help on this would be greatly appreciated.
Thanks
Andy
Edited by: Andy_D on Jul 30, 2012 9:04 AM -
Compress Fails because Final Cut Quits
Every time I try to compress my movie from FCP to Compressor there is ALWAYS a problem. It is extremely frustrating. I would estimate that it would successfully compress once in five tries.
One error I most frequently run into is that Final Cut "quits unexpectedly" while compressing and compressing fails. Final Cut says that it may have been caused by the Pro Graphics plugin.
One error I just ran into was QuickTime error 0. One Apple Knowledge Base article says you need 1GB memory or you will get this error. I have 6 GB of RAM!
I'm just starting to use FC Studio, so I am still trying to understand why some things don't work. I do, however, need this project compressed for a DVD ASAP, and these problems are not helping. Thanks in advance!
Until you get the problem solved, one workaround would be to export a self-contained movie from FCP (File->Export->QuickTime Movie). This will create a file identical to your Sequence. If you've included chapter or compression markers in FCP, be sure to indicate that in the export dialog window. You can then close FCP and use the new file directly in Compressor.
-DH -
Hello Gurus,
we are seeing some strange behaviour with cube compression.
All requests are compressed, but in the F table we still have some records.
The same records are stored in the E table too, but when executing the BEx query we see correct results.
If we execute the query in debug mode in RSRT, with SQL code display, the query reads only from the F table or aggregates.
How is this possible?
We inserted the COMPNOMERGE object into the RSADMIN table, but only after the first compression. Do you think that initializing the cube and compressing again with the COMPNOMERGE object could solve our problem?
Could you help us?
Thanks in advance.
Regards.
Vito Savalli wrote:
> Hi Lars, thanks for your support.
> We don't have an open support message for this issue, but if it will be necessary, we will open it.
>
> I - The same records are stored in E table too, but with BEx query execution we can see correct result.
> You - The first part of this sentence is technically impossible. At least the request ID must be different in F- and E-fact table.
>
> Ok for the request ID, I know it. But, if we don't consider request ID (of course isn't equal) and we check the characteristics values by SID analysis, we find the same complete key both in F and in E table.
>
Well, but that's the whole point - the request ID!
That's why we do compression at all - to merge together the data for the same key figures if they exist in both tables.
It's completely normal to have this situation.
> I - If we execute query in debug on RSRT, with SQL code display, the query reads only from F table or aggregates. How it is possible?
> You - Easy - your statement about all requests being compressed is not true, and/or it reads the necessary data from the aggregates.
>
> I executed in RSRT one of the records which is in both tables.
Well, obviously there was some other implicit restriction that led to the selections made by OLAP.
Maybe the request read from the F fact table was neither rolled up nor compressed.
> Very helpful, thanks.
> Any others suggestions?
I'd check exactly the status of the requests and where they can be read from.
You may also try disabling the aggregate usage in RSRT to see whether or not the data is also read from the E fact table, and check the result of the query.
regards,
Lars -
Cube Compression - How it Affects Loading With Delete Overlapping Request
Hi guys,
Good day to all !!!
Our scenario is that we have a process chain that loads data to an InfoCube and has a delete overlapping request step. I just want to ask how cube compression affects loading with delete overlapping request. Is there any conflict/error that will be raised? Kindly advise.
Marshanlou
Hi,
In the scenario you have mentioned:
First, the InfoCube is loaded.
Next, when it reaches the delete overlapping request step, it checks whether the request is overlapping (with the same date, or according to the overlapping condition defined in the InfoPackage, if the data has been loaded).
Only if the request is overlapping does it delete the request; otherwise, no action is taken. In this way it ensures data is not loaded twice, which would result in duplication.
It has nothing to do with compression and in no way affects compression/loading.
Sasi -
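Sasi's explanation of the delete-overlapping-requests step can be sketched as follows (hypothetical request records; the real step works on request metadata from the InfoPackage, and compression state plays no role):

```python
# A new request replaces earlier requests whose selection overlaps with it;
# non-overlapping requests are left untouched.
def load_with_overlap_deletion(loaded_requests, new_request):
    kept = [r for r in loaded_requests
            if r["selection"] != new_request["selection"]]  # drop overlaps
    return kept + [new_request]

requests = [{"rnr": 1, "selection": "2024-01"},
            {"rnr": 2, "selection": "2024-02"}]
# loading a new request for the same selection as request 2:
requests = load_with_overlap_deletion(requests,
                                      {"rnr": 3, "selection": "2024-02"})
# request 2 was overlapping and is deleted; request 1 is kept
```

Note, however, that the step can only delete requests that still exist as separate requests; once a request is compressed, it can no longer be deleted individually, which is why recent requests are usually left uncompressed.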
Cube compression will affect any data availability
Hi,
I have an issue where a user runs exactly the same report with the same selection criteria but gets different results.
The report was run from backlog this morning at 09:56 and again at 10:23. Although the batch was delayed, the data was actually loaded prior to 09:45. However, there was a cube compress running between 09:45 and 10:11.
So, the first report was run during the compress, the second after the compress was complete.
Could the compress process affect data availability to the end users? I can find no other explanation for this behaviour.
Thanks,
R Reddy
Hi,
one thing in advance: The next only applies to oracle databases. I have no experience with other databases.
the compression will usually not affect the reported data. But in case of the user doing the reporting while the compression is ongoing, it is indeed possible that the query will deliver wrong results. The reason is, that the collapsing collects the data of the not yet collapsed infopackages into the F table. The query will usually start parallel processes on all the available infopackage E tables and on the F fact table. Because of the amount of data F- table is larger, so the job there will be the longest running. After collecting the results, the results are added up.
Depending on the timing of the collapse run and the timing of the query it is possible that the collapsed data package was already successfully packed in the fact table, but the deletion of the infopackage was not completed (result: Key figures to high). Or alternativly the infopackage was already deleted but the F-table not completly commited - because of the query (result Key figures to low).
All in all I would strongly recommend to do collapsing at times where no query is run on the cube.
Kind regards,
Jürgen -
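Jürgen's timing argument above can be illustrated with a toy model (invented table layout, not SAP internals): a query that sums E and F while the collapse is moving rows from F to E can see the same request twice, or not at all.

```python
f_table = {"REQ_1": 100}   # uncompressed request in the F fact table
e_table = {}               # compressed data (request ID 0) in the E fact table

def query_total():
    # the query reads both fact tables and adds up the results
    return sum(e_table.values()) + sum(f_table.values())

# collapse step 1: request merged into E, not yet deleted from F --
# a query running at this instant double-counts the request
e_table["0"] = e_table.get("0", 0) + f_table["REQ_1"]
assert query_total() == 200    # key figures too high

# collapse step 2: request deleted from F -> consistent again
del f_table["REQ_1"]
assert query_total() == 100
```

The symmetric failure (F row already deleted, E update not yet committed to the reader) gives key figures that are too low, which is why collapsing during reporting hours is risky.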
Hi gurus --
In my production environment, the ODS to cube delta failed.
What are the corrective actions to recover that delta?
Please provide step-by-step instructions; it's urgent.
Thanks and Regards,
Viswanath.
Generate an export datasource on the ODS by right-clicking it, and while creating the update rules, assign the ODS.
Start extraction, full or delta as required by you.
For delta, double-click the ODS and, under settings, tick the checkbox "update automatically into the data targets"; it will automatically initialize the delta for you.
If this is your design and the delta is still failing, then right-click the ODS, choose "maintain export datasource", save it, replicate the datasource, and activate the transfer rules (go to SE37, run the program RS_TRANSTRU_ACTIVATE_ALL, giving the name of the ODS with 8 as its prefix), execute, and then start loading again.
Assign points if relevant.
Regards
Gajendra -
Hi
The SSAS cube synch fails with below error...
<Synchronize xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Source>
<ConnectionString>Provider=MSOLAP.4;Data Source=DB1.ancag.local;ConnectTo=10.0;Integrated Security=SSPI;Initial Catalog=MW_SSCCA;Connect Timeout=600</ConnectionString>
<Object>
<DatabaseID>MW_SSCCA</DatabaseID>
</Object>
</Source>
<SynchronizeSecurity>CopyAll</SynchronizeSecurity>
<ApplyCompression>true</ApplyCompression>
</Synchronize>
Error
Backup metadata traversal started.
Error -1056833535: Backup metadata traversal failed.
Server: The current operation was cancelled because another operation in the transaction failed.
Backup and restore errors: An error occurred while synchronizing the 'MW_SSCCA' database.
Execution complete
Can you please identify what's going on with this?
Suman
Hi Suman,
According to your description, you get the above error when executing cube synchronization, right?
Based on your error message, it fails when doing the backup of the source server. In this scenario, please check whether the destination server has administrator rights on the source database. The destination server executes synchronize commands under its service account, so it will fail otherwise; if you use Windows authentication, you should run SSAS under this Windows account on the destination server, and you need to grant administrator rights on the source database.
Best Regards,
Simon Hou
TechNet Community Support -
Hi colleagues:
I have started a basic cube compression. Considering there are lots of delta load requests, the entire process will take more than 30 hours.
However, I need to run the process chain to load deltas into other data providers and also into the same basic cube that is being compressed.
May I run the delta loads in parallel to the compression process?
Best regards
Waldemar
I think Jkyle has probably identified the biggest concern.
This is a good example of why you should break up large processes into smaller pieces - I can't imagine a requirement to compress everything at once on a large InfoCube.
Always process manageable chunks of data whenever possible and benchmark before running larger processes, that way you can minimize:
- impacts to system availability.
- impact to system resources of large jobs. -
Cube compression and DB Statistics
Hi,
I am going to run Cube compressions on a number of my cubes and was wondering few facts about DB Statistics. Like:
1) How does the % of InfoCube space used for DB stats help? I know that the higher the percentage, the bigger the stats and the faster the access, but the stats run longer. Would increasing the default value of 10% make any difference to overall performance?
2) I will compress the cubes on a weekly basis, and most of them will have around one request per day, so I will probably compress 7 requests for each cube. Is it advisable to run stats also on a weekly basis, or can they be run bi-weekly or monthly? What factors does that depend on?
Thanks. I think we can have a good discussion on these apart from points.
What DB are we talking about?
Oracle provides so many options on when and how to collect statistics, even allowing Oracle itself to make the decisions.
At any rate, there is no point in collecting statistics more than weekly if you are only going to compress weekly. Is your plan to compress all the requests when you run, or are you going to leave the most recent requests uncompressed in case you need to back one out for some reason? We compress weekly, but only requests that are more than 14 days old, so we can back out a request if there is a data issue.
As for the sampling percent, 10% is good, and I definitely would not go below 5% on very large tables. My experience has been that sampling at less than 5% results in useful indexes not getting selected. I have never seen a recommendation below 5% in any data warehouse material.
Are you running the statistics on the InfoCube using the performance tab option or a process chain? I cannot speak to the process chain statistics approach, but I imagine it is similar. I do know that when you run the statistics collection from the performance tab, it not only collects the stats on the fact and dimension tables, but also goes after all the master data tables for every InfoObject in the cube. That can cause some long run times.
Cube compression WITH zero elimination option
We have tested turning on the switch to perform "zero elimination" when a cube is compressed. We have tested this with an older cube with lots of data in E table already, and also a new cube with the first compression. In both cases, at the oracle level we still found records where all of the key figures = zero. To us, this option did not seem to work. What are we missing? We are on Oracle 9.2.0.7.0 and BW 3.5 SP 17
Thanks, Peggy
I haven't looked at zero elimination in detail in the latest releases to see if there have been changes, but here's my understanding based on the last time I dug into it -
When you run compression with zero elimination, the process first excludes any individual F fact table rows with all KFs = 0. Then, if any of the summarized F fact table rows have all KFs = 0, those rows are excluded (you could have two facts with amounts that net to 0, in the same request or different requests, where all other DIM IDs are equal) and not written to the E fact table. Then, if an E fact table row is updated as a result of a new F fact table row being merged in, the process checks whether the updated row has all KF values = 0, and if so, deletes that updated row from the E fact table.
I don't believe the compression process has ever gone through and read all existing E fact table rows and deleted the ones where all KFs = 0.
Hope that made sense. We use Oracle, and it is possible that SAP has done some things differently on different DBs. It's also possible that the fiddling SAP has done over the last few years, trying to use Oracle's MERGE functionality at different SP levels, comes into play.
Suggestions -
I'm assuming that the E fact table holds a significant percentage of rows where all KFs = 0. If it doesn't, it's not worth pursuing.
Contact SAP; perhaps they have a standalone program that deletes E fact table rows where all KFs = 0. It could be a nice tool to have.
If they don't have one, consider writing your own program that deletes the rows in question. You'll need to keep downstream impacts in mind, e.g. aggregates (which would need to be refilled - probably not a big deal) and InfoProviders that receive data from this cube.
Another option would be to clone the cube and datamart the data to the new cube. Once in the new cube, compress with zero elimination - this should get rid of all your 0-KF rows. Then delete the contents of the original cube and datamart the cloned cube's data back to the original cube.
You might be able to accomplish the same by datamarting the original cube's data to itself, which might save some hoop jumping. Then you would have to run a selective deletion to get rid of the original data; or, if the datamarted data went through the PSA, you could just delete all the original data from the cube, then load the datamarted data from the PSA. Once the new request is loaded, compress with zero elimination.
If you happen to have built all your reporting on this cube from a MultiProvider on the cube, rather than directly on the cube, you could just create a new cube, export the data to it, then swap the old and new cubes in the MultiProvider. This is one of the benefits of always using a MultiProvider on top of a cube for reporting (an SAP- and consultant-recommended practice): you can literally swap underlying cubes with no impact to the user base.
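The merge behaviour described above can be sketched as follows (invented data structures, not SAP code): zero rows are eliminated while merging F into E, and E rows that are updated to zero are deleted, but pre-existing all-zero E rows are never swept.

```python
# e_table: key -> key figure total (compressed data)
# f_rows:  key -> key figure total (uncompressed rows being collapsed)
def compress_with_zero_elimination(e_table, f_rows):
    for key, kf in f_rows.items():
        merged = e_table.get(key, 0) + kf
        if merged == 0:
            e_table.pop(key, None)   # updated row nets to zero -> deleted
        elif kf != 0:
            e_table[key] = merged    # normal merge; all-zero F rows are skipped
    return e_table

e_table = {"A": 0, "B": 5}           # "A" is a pre-existing all-zero E row
e_table = compress_with_zero_elimination(e_table, {"B": -5, "C": 0, "D": 7})
# "B" netted to zero and was removed; the zero F row "C" was never written;
# but the old zero row "A" survives untouched -- exactly Peggy's observation
```

This matches what was seen at the Oracle level: zero elimination only acts on rows touched by the current compression run, so all-zero rows already sitting in the E table remain.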