Loading time to cube

Hello all,
I am facing a unique problem.
When loading to the ODS, the load completes successfully, but activation fails; in the monitor the request still shows yellow and I am unable to make out where the error is occurring. Also, when I load from the ODS to the cube, it seems to get stuck when inserting into /bi0/smaterial, and I am not sure what needs to be done. I ran RSRV on the InfoCube and checked the SIDs.
Any thoughts are greatly appreciated
thanks
amit

Hi all,
I checked the status tab of the monitor and it doesn't show me any errors. But it keeps failing when I try to activate the data. I went to transaction ST22 and saw a runtime error.
It says DBIF_RSQL_INVALID_RSQL.
I also checked RSRV for 0MATERIAL and it reports the following error: "1 value from S table does not exist in P table". There is no option to correct the error in RSRV.
Any help is greatly appreciated.
thanks
amit

Similar Messages

  • Cube Load Times

    Hi there,
    We have one BSO cube implemented on Essbase 7.1.2 and we load data from a SQL Server source every night. My question is to do with the time taken to load/calculate the cube.
    It started off at around 20 mins; however, in the space of a few months this time has risen a LOT. It's now taking up to 2.5 hours to complete the load/calculation process.
    If we restructure the cube, the next day's load takes around 1 hour.
    My question is basically: can anyone recommend any tips/tricks to optimise the load process?
    One thing we have noticed is that the data compression on the cube is set to RLE. Is this the right one, or should we be using one of the others?
    Any help appreciated!
    Brian

    With the assumptions that (a) you perform a full calc of the database, and (b) the full calc itself isn't an exceptionally long process, you can typically get a much better overall performance gain by doing the following:
    1) Export the input-level data (level 0 if you have agg missing turned on).
    2) Clear the database.
    3) Load the data from your Essbase export.
    4) Load the data from your SQL source.
    5) Run your full calc/calc all.
    The reason you can get an overall performance gain is two-fold:
    1) RLE compression tends to cause a lot of fragmentation within the page files as blocks are written and re-written.
    2) The calc engine tends to run faster when it doesn't have to read the upper-level blocks in before calculating them.
    Both of the above are simplified explanations, and results vary greatly based on the outline/hierarchies and the calc required. One thing to note, however, is that if you run any type of input scrubbing/modification script (e.g. allocation, elimination, etc.), the optimum sequence can be quite different from the above (I won't go into the hows and whys here).
    Now on to the other ways to optimize your load. In the Database Administrator's Guide (DBAG), there is a good section on ways to optimize a file for loading. Basically, the closer the order of the members in your load file matches the order they appear in the outline, the better. For instance, if the columns in the file are dense and the rows are sorted in the same order as the sparse dimensions, you load MUCH faster than if you do the opposite. This can have a big impact on the fragmentation that occurs during a load, enough so that if you can be pretty sure the sort of your SQL export is optimum for loading, the above approach (Export/Clear/Load/Calc) won't get you any real benefit -- at least from the fragmentation aspect.
    Of course, the outline itself can be less than optimum, so any fixes you make should start with the outline, then move to the SQL file layout, then the load process. The caution here is to be sure that changes you make to align the outline with your data load don't adversely affect your calc and retrieval performance too much.
    Most of this is covered in the DBAG, in greater detail, just not laid out in an order that can be easily followed for your needs. So if you need further details on any of the above, start there.
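    A hedged MaxL sketch of that Export/Clear/Load/Calc sequence (the application/database name Sample.Basic, the file names, and the SQL user are placeholders, not from this thread):
    /* 1) Export the input-level (level 0) data */
    export database Sample.Basic level0 data to data_file 'level0.txt';
    /* 2) Clear the database */
    alter database Sample.Basic reset data;
    /* 3) Reload the exported data (export-format files need no rules file) */
    import database Sample.Basic data from server data_file 'level0.txt'
        on error write to 'reload.err';
    /* 4) Load the new data from the SQL source via a rules file */
    import database Sample.Basic data connect as hypuser identified by 'password'
        using server rules_file 'sqlload' on error write to 'sqlload.err';
    /* 5) Run the full calc */
    execute calculation default on database Sample.Basic;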

  • Data load taking very long time between cube to cube

    Hi
    In our system, data loading between cubes using a DTP in BI 7.0 is taking a very long time;
    most of the time is spent in the "start of extraction" step alone. Can anybody help in reducing the time spent at the start of extraction, please?
    Thanks
    Kiran

    Kindly elaborate your issue a little. How is the mapping between the two cubes: is it a one-to-one mapping, or is there a routine in the transformation? Any filter or routine in the DTP? Also, did you delete the indexes on the target cube before loading the data?
    Regards,
    Sushant

  • Real-Time i-Cube load behavior changed after transport

    Dears
    After the transports completed, the real-time InfoCube load behavior changed. All the cubes set to 'can be loaded, planning not allowed' were changed to 'can be planned, load not allowed'. What happened? How do I include this type of real-time InfoCube in a transport? Any suggestions are appreciated.
    Best regards,
    Gerald

    Hi Gerald,
    you can switch it back to enable loading of data by using the "Switch Transactional InfoCube" option you get when you right-click on the InfoCube.

  • Simulate load times

    Hi all,
    Does anyone here know if we can simulate load time in BW?
    Example, i have a DSO that goes to a Cube for reporting.
    I would like to simulate how long 100,000 records will take to get from ECC to the Cube.
    I might later want to test how long 250,000 records will take.
    RSA3 is not useful as it only tests how long the extraction itself takes.
    Is such a thing possible?

    Hi Ganesan,
    1) For DSO-to-cube loading: you have to create a DTP.
       Execute the DTP in debugging mode (simulation).
       Then in the monitor you can find the time taken.
    2) For loading from ECC to BI: you have to create an InfoPackage.
       But the InfoPackage does not give any option for simulation.
    cheers.

  • Comparing load times w/ and w/o BIA

    We are looking at the pros and cons of BIA for implementation.  Does anyone have data comparing load times, loads with compression, vs. BIA index build time?

    I haven't seen numbers comparing load times. Loads to your cubes and compression continue whether you have BIA or not. Rollup time would be eliminated, as you would no longer need aggregates. No aggregates should also reduce change run time, perhaps a lot or only a little, depending on whether you have large aggregates with navigational attributes in them. All of that is offset to some degree by the time to update the BIA index.
    Make sure you understand all the licensing costs, not just SAP's but also the hardware vendor's per-blade licensing costs. I talked to someone just the other day who was not expecting per-blade licensing; the list price of the license per blade was $75,000.

  • Loading data from Cube to Planning area

    Hi,
             If I am loading data from a cube to a planning area using transaction TSCUBE,
    does the system load data into planning area for the combinations that exist in the cube or does it load for all CVCs?
    For example,
    I have my CVC as Plant, Material, Customer
    If there are 4 CVCs in the POS that were previously generated as
    Plant--Material--Customer
    01--M1--C1
    01--M2--C3
    01--M2--C2
    01--M4--C5
    If the cube has data like this:
    Plant--Material--Customer----Qty.
    01--M1--C1----10
    01--M2--C3----20
    01--M2--C2----5
    (it does not have the last combination), then if I use the TSCUBE transaction to load data into the planning area from this cube,
    is the data loaded as
    Plant--Material--Customer----Qty.
    01--M1--C1----10
    01--M2--C3----20
    01--M2--C2----5
    i.e. only for the 3 combinations that exist in the cube, with nothing loaded for the last one,
    OR
    is the data loaded as
    Plant--Material--Customer----Qty.
    01--M1--C1----10
    01--M2--C3----20
    01--M2--C2----5
    01--M4--C5----0
    i.e. all 4 combinations loaded, with 0 sent for the combination the cube does not have?
    Hope I am clear on this question.
    Thanks.

    Thanks a lot Vinod, Srinivas and Harish. The reason I am asking is that we have a scenario where we get this situation.
    We initially get data from R/3 to BW to APO like the below:
    Plant--Material--Customer----Qty.
    01--M1--C1----10
    Later, when the customer changes or is bought out by somebody, C1 is changed to C2. Sometimes, when the business does not initially know who the customer is, they just put C1 as a dummy and after some time replace it with C2. Then the new record coming in is as follows:
    Plant--Material--Customer----Qty.
    01--M1--C2----10
    BW can identify changes in transaction data but not in master data. What I mean by this is that when the Qty. changes from 10 to 20, the system can identify it in deltas.
    If the customer (master data) changes from C1 to C2, the system thinks it's a new record altogether, so if I use delta loads, it gets me the following:
    Plant--Material--Customer----Qty.
    01--M1--C1----10
    01--M1--C2----10
    If I am looking at Plant and Material level, my data is doubled.
    So we are planning to do a full load that works like this:
    1. Initial data like the below:
    Plant--Material--Customer----Qty.
    01--M1--C1----10
    The CVC is created and the planning area has Qty. 10.
    Then we delete the contents of the cube and do a full load into the cube with the changed customer:
    Plant--Material--Customer----Qty.
    01--M1--C2----10
    This time a new CVC is created. Then we have another 10 loaded into the planning area.
    If the system loads all CVCs, then it would send
    Plant--Material--Customer----Qty.
    01--M1--C1----0
    01--M1--C2----10
    If the system loads only the combinations in the cube,
    then it loads
    Plant--Material--Customer----Qty.
    01--M1--C2----10
    But the system already has another 10 for customer C1, duplicating the values.
    We are in trouble in the second case.
    We had to go for this solution instead of realignment, as our business has no way of knowing that C1 was replaced by C2.
    Hope I am clear.

  • How to tune data loading time in BSO using 14 rules files ?

    Hello there,
    I'm using Hyperion Essbase Administration Services v11.1.1.2 and the BSO option.
    In a nightly process using MaxL, I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times via SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on the Oracle homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads - supports up to 8 rules files via temporary load buffers.
    In an older thread John said:
    As it is version 11, why not use parallel SQL loading; you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I also use it in my MaxL for BSO? Is there a sample?
    What else can be done to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column1 --- column2 --- column3 --- column4 --- ... ---column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    The following are 3 of the 14 (load) rules files used to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS mentioned as a "Data" column concept, which only allows a single numeric data value?
    In this case I can't tag two or more columns as "Data fields". I can only tag one column as a "Data field"; the other data fields I have to tag as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or may I skip this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
    I just want to be sure that I understand your suggestions.
    Many thanks for the awesome help,
    Zeljko
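    If the view is reconstructed with an Account column as sketched above, a single rules file (and therefore a single import) could in principle load all 14 measures in one pass. A hedged MaxL sketch, where the application/database name, rules file name and SQL user are placeholders:
    /* One SQL-source load instead of 14: the rules file maps the Account */
    /* column to the Accounts dimension and the single data column to the */
    /* data field.                                                        */
    import database Sample.Basic data
        connect as hypuser identified by 'password'
        using server rules_file 'allacct'
        on error write to 'allacct.err';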

  • Last load time in Query

    Dear All,
    Can I somehow get the last load time in a query? I mean, if a query runs on an InfoCube, I want to see when data was last loaded into that InfoCube, and I want to see this information in the query.
    Regards,
    Sohil Shah.

    Hi,
    Do you want to know only the status of the data for that query, or the last load time of all the cubes in general?
    If you want to know the status of the data for that query, check the link below. You can get it from the query constants.
    http://help.sap.com/saphelp_nw70/helpdata/EN/06/ad1578a5a9487da87495c1960f5a2d/content.htm
    If you need to view this for all cubes, then you may need to install the technical content cubes, which contain data about the status of each cube.
    Hope this gives you an idea.
    Regards
    akhan

  • Loading ODS to Cube Creates two records

    I am using one ODS to load data into Cube 1 and Cube 2.
    Cube 1 is a delta load and Cube 2 is a full load. The design is that Cube 1 will have all the daily updates from R/3 and Cube 2 will be loaded with a snapshot of the ODS data at midnight. When the snapshot is loaded into Cube 2, an update rule will change the characteristic InfoObject "Snap-Shot Date" to the current date. So, Cube 2 will contain all the nightly snapshots with different dates.
    The initial load of Cube 1 runs fine and loads 1488 records. When I run Cube 2's full load, it adds 2976 records (double). When I look at the Cube 2 data, I see one record with a blank snap-shot date and one with the current date in it.
    I have to click on a key figure that has "Update Type = Addition" to get to the characteristic update rule for "snap-shot date". Is the fact that the key figure is additive causing the creation of two records?
    Regards,
    Mike...

    Yes, that was my problem: I didn't have the update rule applied to all the key figures. When I put in the update rule and saved it, I said "no" to the pop-up "Apply to all key figures".
    I just re-edited the update rule and this time clicked "yes" to apply it to all key figures. The load is working fine now...
    Thanks,
    Mike...

  • Issue with Data Loading between 2 Cubes

    Hi All
    I have a Cube A which has a huge amount of data, around 7 years' worth. This cube is on BWA. In order to free up space in this cube we have created a new Cube B.
    We have now started to load data from Cube A to Cube B based on the "created on" date. But we are facing a lot of memory issues, so we are unable to load even a week's worth of data. As of now we are loading one date at a time, which is not practical as it will take a lot of time to load 4 years of data.
    Can you propose some alternate way by which we can make this data transfer between the 2 cubes faster? I thought of loading Cube B from the DSO under Cube A, but that's not possible as the DSO does not have data that old.
    Please share your thoughts.
    Thanks
    Prateek

    Hi SUV / All,
    I have tried running with parallel processes, as there are 4 on my system. There are no routines between the cubes. There is already an MP for this cube. I just want to shift 4 years of data from this cube into another.
    1) Data packet size 10,000: 8 out of some 488 packets failed.
    2) Data packet size 20,000: 4 out of some 244 packets failed.
    3) Data packet size 50,000: waited for some 50 min with no extraction, so I killed the job.
    Error : Dump: Internal session terminated with runtime error DBIF_RSQL_SQL_ERROR (see ST22)
    In ST22:
    Database error text........: "ORA-00060: deadlock detected while waiting for resource"
    Can you help resolve this issue, or give some pointers?
    Thanks
    Prateek

  • Movie Load Time Too Slow

    Hello! 
    I've created three movies for a new website; all are photo slide shows with 10-14 photos and some text.  The photos have all been optimized in Photoshop and saved for web...most are under 100 KB; however, the movies are taking a long time to load on the web pages.  Is there anything I can do through Edge Animate to reduce the load time?  Even with the preloader, the load time is way too long.  I've inserted the movies into the HTML pages using an iframe.  Any suggestions are much appreciated.  Thanks!

  • PowerView (SharePoint 2013) Load time too slow

    I am using PowerView in SharePoint 2013, and when I access the SSAS 2012 Tabular data source through a "Report Data Source", the PowerView canvas with the field list takes a long while to load and the users get restless seeing the blue spinning wheel. I tested this on different tabular models and am seeing different results based on the complexity of the model:
    Complex Model - 13 secs.
    Adventure Works Model - 9 secs.
    Very Simple Model (1 fact, 1 dim) - 8 secs.
    I think that if the model is complex, the engine takes longer to render the field list. How do I get around this limitation and bring the blank canvas load time to under 5 secs? I am curious to know if anyone has ever seen lower load times for the blank PowerView canvas and field list in their environment.
    Alternatively, if the field list loading takes time, is it possible to disable it? I have already created dashboards for the end users in PowerView and do not want to load the field list if it is slowing down the entire experience.
    Appreciate any guidance.
    Thanks, Ashish Singh

    Hi Simon,
    I want to reduce the Power View blank canvas / PowerView report load time from an SSAS Tabular source in the SharePoint 2013 portal.
    I have observed that a PowerView report with 1 view loads faster than a PowerView report with multiple (4) views, so I think that your statement "Power View only retrieves the data it needs at any given time for a data visualization" might be incorrect.
    I have read the link you have provided and have all the patches applied; besides, I am not using a Power Pivot source.
    My tabular cube is complex and has about 200 measures, and the blank Power View canvas takes about 13 seconds to load at the SharePoint 2013 URL from the web browser. Appreciate it if you can provide any insights here, please.
    Thanks, Ashish Singh

  • Load time depends on index files

    Hi All,
    (on BSO)
    I read somewhere that if more index files exist, the load time increases because of the search for the right combination to load each data value. Is that correct?
    I restructured (level 0) the dense members and ran the calculations, so my index files became 3; after that, a general load on that database took 1 hr.
    I then did a sparse restructure (all data), noticed the index files became 2, and a general load on that database took 30 minutes.
    Let me know whether I am correct or not.
    Sorry, I'm not good at explaining things. :)
    Regards,
    Prabhas

    You say you first load into a test cube, then export and use that to load into prod. Is the dimension order and dense/sparse configuration the same?
    The test cube is a copy of the prod cube, so everything must be the same.
    If your periods (or time) dimension is dense, then loading a single month will still cause fragmentation, as it has to read the blocks and rewrite them. I'm guessing things speed up after a restructure because you are getting rid of fragmentation. What you think is a sparse restructure is actually a dense restructure.
    Please note that we have the "Years" dimension as sparse, not dense. I did defragmentation during maintenance and loaded, but it took much more time than usual. The very next day I added a new sparse member, did an all-data restructure, cleared the particular month's data and loaded, and that completed faster than the previous day's load.
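    As a hedged illustration of the restructure point above (a sketch only; Sample.Basic stands in for the actual BSO application/database), fragmentation can be inspected and removed explicitly in MaxL rather than waiting for a restructure triggered by outline changes:
    /* Show block statistics, including fragmentation-related figures */
    query database Sample.Basic get dbstats data_block;
    /* Force a full restructure to rewrite the page and index files,  */
    /* removing fragmentation; run this in a maintenance window.      */
    alter database Sample.Basic force restructure;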

  • Load Issue to Cube

    Hi All,
    I need to load data to a cube; there are two update rules.
    Can we load data from 2 update rules at the same time into the same cube? Please suggest ASAP.
    Thanks
    Karuna

    Hi,
    We can update data through more than one update rule. If you take sales, i.e. the 0SD_C03 InfoCube, it gets data from 5 to 6 DataSources/update rules. In this case you need to take care of the mappings in the update rules, i.e. which field should map to which field.
    Eg:
    0SD_C03 gets the data from the following DataSources/Update Rules...
    2LIS_13_VDITM
    2LIS_11_VAHDR
    2LIS_11_VAITM
    2LIS_12_VCHDR
    2LIS_13_VDHDR
    See the SAP help..
    http://help.sap.com/saphelp_nw70/helpdata/en/71/1769372b2b7d20e10000009b38f842/frameset.htm
    Thanks
    Reddy
