Cache Settings - Data Load

Hello All,
Do we have to set the cache while performing a data load?
Defragmentation - no cache settings needed
Calculation - set the cache to reduce calculation time (max 2 GB for index cache + data cache)
Data Load - ???
Amarnath

Hi Amarnath,
There are some configuration settings that can affect data load performance:
1. DLTHREADSPREPARE - Specifies how many threads Essbase may use during the data load stage that codifies and organizes the data in preparation for being written to blocks in memory.
2. DLTHREADSWRITE - Specifies how many threads Essbase may use during the data load stage that writes data to the disk. High values may require allocation of additional cache.
3. DLSINGLETHREADPERSTAGE - Specifies that Essbase use a single thread per stage, ignoring the values of the DLTHREADSPREPARE and DLTHREADSWRITE settings.
If you set a high value for the second setting, you may need to increase the cache size too.
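For reference, a minimal essbase.cfg sketch; the application/database names and thread counts are placeholders, not recommendations, and essbase.cfg changes only take effect after the Essbase Server is restarted:
    ; hypothetical example - Sample Basic stands in for your own application and database
    DLTHREADSPREPARE Sample Basic 3
    DLTHREADSWRITE   Sample Basic 3
    ; must be FALSE for the two settings above to be honored
    DLSINGLETHREADPERSTAGE Sample Basic FALSE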
Hope that answers your question.
Regards,
Atul Kushwaha

Similar Messages

  • Cache settings in BI7. Data still not displayed correctly in RSRT

    Good day
    Please assist. I scanned SDN for a similar problem and checked the cache settings in RSRT, etc.
    My problem is: if I check the data in the DSO, the updated information that was loaded yesterday is displayed. When I run the query, it does not display the last loaded information. I have run the query in RSRT and the updated information does not display there either. I have already changed the cache properties of this query to different settings and run the query from RSRT, with no success.
    I also checked the query design itself; it is not restricted to anything. The user selects 0CALMONTH as the only variable. I have logged out and in again after each cache setting change, with no success in displaying the updated DSO information.
    Can anyone please tell me why the query does not pick up the latest information from the DSO?
    Thanks in advance.
    Cj

    Hi Cj,
    Is the last request marked as Ready for Reporting? This needs to be the case before the latest data comes up in the query.
    Hope this helps!
    Regards,
    Saurabh

  • Some problem with IDoc's Settings while loading data

    Hey Experts,
    Previously I tried data loading using LIS and LO, but I was not able to load data; I assumed it was a transfer structure problem.
    But just today I tried to load master data, and it displays the same error. I feel it is something related to IDoc/Basis settings; please see the error described below and try to help.
    Error when updating IDocs in Business Information Warehouse
    Diagnosis
    Errors have been reported in Business Information Warehouse during IDoc update:
    Could not find code page for receiving system
    When I check the details, it says:
    Transfer (IDocs and TRFC): Missing messages or warnings
    Request IDoc: sent, not arrived; error passing data to port...
    Please suggest the solution?
    Thanks.....

    Hey Roberto,
    In the Details tab of RSMO, before that IDoc, I am also getting this error message:
    1. Requests (messages): Everything OK
    1.1 Data request arranged
    1.2 Confirmed with: OK
    2. Extraction (messages): Missing messages
    2.1 Missing message: Request received
    2.2 Missing message: Number of sent records
    2.3 Missing message: Number of sent records
    After that it shows:
    Transfer (IDocs and TRFC): Errors occurred
    Maybe this adds something extra toward finding the solution.
    Message was edited by: BI Project

  • Data Cache Settings

    Hello,
    I am getting a spreadsheet retrieval error very frequently: Essbase error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the current cache settings on my databases. Can somebody please help me understand whether they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file size (file 1): 2147475456
    Page file size (file 2): 500703056
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files x 2 GB = 42 GB
    If this might not be the issue then please let me know what might be causing it?
    Thanks!
    Edited by: user4958421 on Dec 15, 2009 10:43 AM

    Hi,
    1. For error 1130203, here are the possible problems and solutions, straight from the documentation.
    Try any of the following suggestions to fix the problem. Once you fix the problem, check to see whether the database is corrupt.
    1. Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    2. If you are on a UNIX computer, check the user limit profile.
    3. Check the block size of the database. If necessary, reduce the block size.
    4. Check the data cache and data file cache settings. If necessary, decrease the data cache and data file cache sizes.
    5. Make sure that the Analytic Services computer has enough resources. Consult the Analytic Services Installation Guide for a list of system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Analytic Services needs.
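    If you do end up resizing the caches (suggestion 4), a minimal MaxL sketch, assuming Sample.Basic stands in for your application and database and that the sizes are purely illustrative:
        /* hypothetical values - size each cache against your own index/page files and available RAM */
        alter database Sample.Basic set index_cache_size 64mb;
        alter database Sample.Basic set data_file_cache_size 32mb;
        alter database Sample.Basic set data_cache_size 100mb;
    The new sizes generally take effect the next time the database is started.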
    Sandeep Reddy Enti
    HCC
    http://hyperionconsultancy.com/

  • LR 5.5 - Camera RAW Cache settings reset non-stop

    Lightroom doesn't remember the folder path in the Camera Raw Cache Settings section of the File Handling tab in the Preferences window. I want to set my own Camera Raw cache folder and its maximum size, but every time Lightroom resets the path to the default (C:/Users/Name/AppData/Local/Adobe/CameraRaw/Cache/) and the maximum size to 1 GB. When I set all the Camera Raw cache parameters and click OK, switch between the tabs in the Preferences window, or restart Lightroom and return to the File Handling tab, everything is reset to the defaults. The same happens when I try to change the Camera Raw cache parameters from Adobe Bridge and Photoshop.
    Second question:
    When I use the DNG file format with the Embed Fast Load Data option, the .dat files don't appear in the Camera Raw cache folder. Is this connected with the use of the Embed Fast Load Data option? When I use the original NEF files from my Nikon, the .dat files do appear.
    [Win 7 64-bit, LR 5.5, Web&Design CS6]

    I have no trouble placing my Camera Raw cache where I want to put it. Perhaps deleting your Preferences file would help with that problem.
    Yes, if you embed fast-load data in your DNGs, LR will use that instead of the CR Cache.
    Hal

  • Properly and accurately calculating application cache settings

    Hello everyone.
    We are running Hyperion Planning 11.1.2.1, and one of the data forms we have set up is quite heavy (it includes several Dynamic Calc members with a considerable amount of scripting) and fails to load from time to time, just saying "Cannot open dataform X. Check logs" and such.
    I have tried increasing the cache sizes in the databases of the Essbase application (right-click on each database > Edit > Properties > Caches), as well as the buffer sizes and commit blocks.
    Little by little I have managed to improve performance by modifying the above-mentioned caches (it crashes less often), but I still think it's nuts to increase the caches indefinitely just to make sure it works.
    So, my question is: Is there a way to calculate the accurate cache settings for a given application?
    My current settings are:
    Buffer size: 200 KB
    Short buffer size: 200 KB
    Index cache setting: 30720 KB
    Data file cache setting: 61440 KB
    Data cache setting: 51200 KB
    Commit blocks: 30000
    Do you think these are accurate?
    Thanks a lot,
    G.S.Feliu

    You haven't really provided enough information, to be honest; for example, are you running a 64-bit system?
    But that is rhetorical; as usual, the first port of call is the DBAG. I don't see why a link should be posted; you must have access to it already if you're administering a production system. It will point out things like the data file cache setting only being relevant if direct I/O is used, and the index cache ideally being at least as big as the index file if memory allows.
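    If it helps, a minimal MaxL sketch for reviewing and then adjusting those caches; the app.db name and sizes below are placeholders, not recommendations:
        /* hypothetical application.database and sizes - check them against your own index and page file sizes first */
        display database Planapp.Plantype;
        alter database Planapp.Plantype set index_cache_size 64mb;
        alter database Planapp.Plantype set data_cache_size 128mb;
    Cache changes generally take effect the next time the database is started, so benchmark before and after.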
    Commit blocks is interesting; personally I have set it to 0 in some projects and seen some improvement, but as usual testing is the key.
    However, there is a performance tuning guide out there that you may find very useful:
    https://blogs.oracle.com/pa/entry/epm_11_1_2_epm1
    It focuses on the infrastructure a bit more. It's a bit complicated, though, and I would thoroughly recommend recording a set of benchmarks before applying any of those changes, and applying them one by one, or you'll never know what is good and what is not so good.
    Learn from the pain others have already endured ;-)
    Good Luck
    Steve

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I have tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I added oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it did work, I am also not sure the streaming mode is going to provide a much faster alternative to non-streaming mode. What I'd like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took me 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24 B. Assuming that in my real application the number of blocks created is going to be at least 1,000 times greater than this, I need to change some settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing cache settings, buffer size or multiple threads help to increase the performance? Or what would you suggest I do?
    Thank you very much.

    Hello user13695196 ,
    (sorry I no longer remember my system number here)
    Before attempting any optimisation in the Essbase (or Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create a view in your source DB schema from your SQL statement (the one behind your data load rule).
    2. Query this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the returned number of rows for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then:
    a) check DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very quickly.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast at the database (fast enough for you, or your DBA says it is the best you can achieve) should you move your effort over to Essbase.
    One additional hint:
    We often had problems when using views for data loads (not only performance but also other strange behavior). That is the reason why I prefer to build directly on (persistent) tables.
    Just keep in mind: if nothing helps, create a table from your view and then query your data from that table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre
    Edited by: andreml on Mar 17, 2012 4:31 AM

  • Performance Tuning Data Load for ASO cube

    Hi,
    Can anyone help with how to fine-tune data loads on an ASO cube?
    We have an ASO cube which loads around 110 million records from a total of 20 data files.
    18 of the data files have 4 million records each and the last two have around 18 million records.
    On average, loading 4 million records takes 130 seconds.
    The data files have 157 data columns representing the period dimension.
    With a BSO cube, sorting the data file normally helps, but with ASO it does not seem to have any impact. Any suggestions on how to improve data load performance for an ASO cube?
    Thanks,
    Lian

    Yes TimG it sure looks identical - except for the last BSO reference.
    Well nevermind as long as those that count remember where the words come from.
    To the Original Poster and to 960127 (come on create a profile already will you?):
    The sort order WILL matter IF you are using a compression dimension. In this case the compression dimension acts just like a BSO Dense dimension. If you load part of it in one record then when the next record comes along it has to be added to the already existing part. The ASO "load buffer" is really a file named <dbname.dat> that is built in your temp tablespace.
    The most recent x records that fit in the ASO cache are retained there, so if the record is still in the cache it will not have to be reread from the disk drive. So you could (instead of sorting) create an ASO cache as large as your final .dat file; then the record would already be in the cache.
    But wait before you go raising your ASO cache. All operating systems use memory-mapped I/O, so even if a record is not in the ASO cache it will likely still be in "Standby" memory (the dark blue memory as seen in Resource Monitor); this continues until the system runs out of "Free" memory (light blue in Resource Monitor).
    So in conclusion if your system still has Free memory there is no need (in a data load) to increase your ASO cache. And if you are out of Free memory then all you will do is slow down the other applications running on your system by increasing ASO Cache during a data load - so don't do it.
    Finally, if you have enough memory so that the entire data file fits in StandBY + Free memory then don't bother to sort it first. But if you do not have enough then sort it.
    Of course you have 20 data files so I hope that you do not have compression members spread out amongst these files!!!
    Finally, you did not say whether you are using parallel load threads. If you need to have 20 files, read up on parallel load buffers and parallel load scripts; that will make it faster.
    But if you do not really need 20 files and just broke them up to load in parallel, then create one single file and raise your DLTHREADSPREPARE and DLTHREADSWRITE settings. Heck, these will help even if you do go parallel, and they really help if you don't but still keep 20 separate files.
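    For the load-buffer route, a minimal MaxL sketch; ASOsamp.Sample and the file names below are placeholders for the real application, database and data files:
        /* hypothetical names - several files go into one buffer so the database is materialized only once, at commit */
        alter database ASOsamp.Sample initialize load_buffer with buffer_id 1;
        import database ASOsamp.Sample data from data_file 'file01.txt' to load_buffer with buffer_id 1 on error abort;
        import database ASOsamp.Sample data from data_file 'file02.txt' to load_buffer with buffer_id 1 on error abort;
        import database ASOsamp.Sample data from load_buffer with buffer_id 1;
    Truly parallel loads use several buffers filled from separate sessions and committed together, but even the single-buffer form avoids writing to the database once per file.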

  • Data Load Optimization

    Hi,
    I have a cube with the following dimension information, and it requires optimization for data load. Its data is cleared and loaded every week from a SQL data source using a load rule. It loads 35 million records, and the load is so slow that the data load alone, excluding calculation, takes 10 hours. Is that common? Is there any change I should make to the structure to speed up the load, such as changing Measures to sparse or changing the position of the dimensions? Also, the block size is large, 52,920 B, which is kind of absurd. I have also listed the cache settings below, so please take a look and give me suggestions.
    Dimension   Density   Type       Members
    MEASURE     Dense     Accounts   245
    PERIOD      Dense     Time       27
    CALC        Sparse    None       1
    SCENARIO    Sparse    None       7
    GEO_NM      Sparse    None       50
    PRODUCT     Sparse    None       8416
    CAMPAIGN    Sparse    None       35
    SEGMENT     Sparse    None       32
    Cache settings :
    Index Cache setting : 1024
    Index Cache Current Value : 1024
    Data File Cache Setting : 32768
    Data file Cache Current Value : 0
    Data Cache Setting : 3072
    Data Cache Current Value : 3049
    I would appreciate any help on this. Thanks!

    10 hrs is not acceptable even for that many rows. For my discussion, I'll assume a BSO cube.
    There are a few things to consider:
    First, what is the order of the columns in your load rule? Can you post the SQL? Is the SQL sorted as it comes in? Optimal for a load would be to have your sparse dimensions first, followed by the dense dimensions (preferably having one of the dense dimensions as columns instead of rows), for example with your periods going across like Jan, Feb, Mar, etc.
    Second, do you have parallel data loading turned on? Look in the config for DLTHREADSPREPARE and DLTHREADSWRITE. With multithreading you can get better throughput.
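    As a sketch, those parallel-load settings live in essbase.cfg and need an Essbase Server restart; the application/database names and thread counts below are placeholders to tune and test, not recommendations:
        ; hypothetical example
        DLTHREADSPREPARE Sample Basic 4
        DLTHREADSWRITE   Sample Basic 4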
    Third, how does the data get loaded? Is there any summation of data before it is loaded, or do you have the load rule set to additive? Doing the summation in SQL would speed things up a lot, since each block would only get hit once.
    I have also seen network issues cause this, as transferring this many rows would be slow (as Krishna said), and have seen cases where the number of joins in the SQL caused massive delays in preparing the data. Out of interest, how long does the actual query take if you just execute it from a SQL tool?

  • Dynamic action - Cache server data

    APEX 4.2.2
    Is there a way to cache server-side data in a global (page/document-level) JavaScript structure (a bunch of key-value pairs: an associative array, an array of objects, or some such)? That way the data can be used by subsequent dynamic actions' JavaScript code without querying the server over and over.
    I see that apex_util.json_from_sql is still undocumented after all these years. Are there any examples of using that API to do this sort of thing? Can someone please share?
    Thanks

    https://apex.oracle.com/pls/apex/f?p=57688:24
    I took a shot at this and it seems to work well using Javascript global objects for data storage. Hope this helps someone.
    1. Page attributes - JavaScript global variables:
       var gData, gLookup = {};
    2. On-demand application process Get_Emps:
       apex_util.json_from_sql('select empno,ename from emp');
    3. Page Load dynamic action to invoke the application process and cache the data:
       var get = new htmldb_Get(null, $v('pFlowId'), 'APPLICATION_PROCESS=Get_Emps', $v('pFlowStepId'));
       var retval = get.get();
       get = null;
       gData = apex.jQuery.parseJSON(retval);
       $.each(gData.row, function(i, obj) {
          gLookup[obj.EMPNO] = obj.ENAME;
       });
    4. EMPNO page item - standard OnChange dynamic action to SetValue of the ENAME page item using the cached data - JavaScript expression:
       gLookup[$v(this.triggeringElement)]
    "We document something when it is ready to be supported. This specific procedure isn't something we have invested in, and thus, per our discretion, we have elected not to document or support it at this time." Fair enough.

  • Data Load to BI (7.0 SP 9) from R3(ECC 6.0 SP-Basis 9)

    Dear All,
    We have a new instance of a development BW system on version 7.0, with R/3 upgraded to ECC 6.0. We connected the source system. When we extract the data through a DTP, the data load is successful with 0 records.
    This is the case with all the extractors.
    The database is Oracle 10.2.
    Observations on this:
    0) The source system connection check is OK.
    1) When I test the same extract in RSA3, I can fetch some data there.
    2) I could transfer the global settings.
    3) I cannot see any IDocs generated in BW or received in R/3.
    4) No background job is generated in R/3 (SM37).
    5) I can extract data from another source system (SEM) instance based on 3.5 technology.
    As progress on this issue, I could load the data successfully with the 3.x methodology (using update rules) but not with the BI 7.0 methodology (using transformations).
    As a standard set by the client we have to use the 7.0 methodology, so I still need to find a solution for this.
    I have no clue what is going wrong or how to solve it. Please help me solve this issue.
    Thanks in Advance,
    PV
    Message was edited by:
            USERPV

    I am not sure whether you have followed all the necessary steps to do a data load to the InfoCube. I also wish I had more information about your system and the error message you are getting. A data load can fail for a variety of reasons, depending on the BW version, system settings and the procedure you followed. Please use the data load monitor (transaction RSMO), identify the error message and take the necessary action.
    If this is useful, reward points are appreciated.

  • Data Loading Wizard in Apex 4.1.1

    I am having an issue utilizing the out of the box APEX Data loading wizard.
    My data (separated by \t) has double quotes (") in it, and no variation of the "Enclosed by" or "Separated by" item settings allows the "Parse Uploaded Data" process to parse the information correctly.
    Am I missing something obvious? Has anyone got it to work with data that contains double quotes?

    Hi Usul,
    I am able to parse data with double quotes, separated by a tab (\t). Can you double-check that your file is tab-separated and contains double quotes? Or could you share the file with me so that I can check what is going on?
    Regards,
    Patrick

  • Master data load

    Hi Experts,
    I am using data source 0MAT_PLANT_ATTR for an attribute data load in a 3.5 data flow using a direct-update InfoSource.
    On the Processing tab of the InfoPackage, the settings are as follows:
    "Only PSA" is selected, alongside it "Update Subsequently in Data Targets" is checked, and "Ignore Double Data Records" is also checked.
    For Error Handling it is "No Update, No Reporting".
    I always run a delta load.
    The problem is that the delta load only brings data up to the PSA. I always have to manually process it from the PSA into the InfoObject. Why is this so?
    Good answers would be really appreciated.
    Thanks,
    Mihir

    Hi,
    All the posts above imply the same thing: the settings you mentioned pertain to the InfoPackage (IP).
    1> Loading to PSA.
    2> Subsequently to the data target.
    With this setting the IP will load data into the PSA only, and then you get a "process manually" option to load the data to the subsequent data target. If you want this to happen automatically, you should switch to another selection (i.e. PSA and then into data targets, etc.).
    But if you are running the loads through a process chain, there are two different variants for the two processes mentioned above, i.e. the IP with load to PSA, and "Update from PSA". This step, "Update from PSA", needs to be included in the process chain if you want to automate the loading further from the PSA to the target (i.e. reconstruction, pushing of data).
    This can be done in the change mode of the process chain in RSPC: click on "Process Types", and under "Load Process and Post Processing" you will find "Update from PSA to Data Target". Add this variant under the IP for which you made the setting, and the rest will go through.
    Hope this is helpful.
    Thanks & regards
    Vaibhave Sharma

  • Using the cache in Data Integrator

    Hi,
    I'm trying to understand whether it is possible to restrict the use of cache in Data Integrator when loading the initial data source, in order to limit the use of server resources.
    I understand from the manual that I can set the data flow cache type option to Pageable or In-Memory. This option does not solve my problem; I would like to avoid loading the entire cached data source.
    Is it possible to query objects directly without first loading all the data sources into tables?

    ...base /open SQL Statement etc.) The first time I defined the system everything was fine, but when I changed the database (using MS Access) the "Open SQL Statement" would show the tables but not their columns. I'm using Win98 Second Edition / TestStand 1.0.1.
    Hello Kitty -
    Certainly it is unusual that you can still see the tables available in your MS Access database but cannot see the columns. I am assuming you are configuring an Open Statement step and are trying to use the ring control to select columns from your table?
    Can you tell me more about the changes you made to your file when you "changed" it with MS Access? What version of Access are you using? What happens if you try to manually type an SQL string into the Open Statement dialog, such as...
    "SELECT UUT_RESULT.TEST_SOCKET_INDEX, UUT_RESULT.UUT_STATUS, UUT_RESULT.START_DATE_TIME FROM UUT_RESULT"
    Is it able to find the columns even if it can't display them? I am worried that maybe you are using a version of MS Access that is too new for the version of TestStand you are running. Has anything else changed aside from the file you are editing?
    Regards,
    -Elaine R.
    National Instruments
    http://www.ni.com/ask

  • Roll back data load

    Hi All,
    I am using Essbase 7.1.5. Is there any way to roll back the database if the load hits an error? For example, if the source file (a flat file) contains 1000 records and at record 501 Essbase finds an error, rejects that record (501), and aborts the load, is there a way to roll the database back to its state before the 500 records were loaded? If so, where should I make those settings? Please advise.
    Thanks in advance.
    Hari

    A 6-hour data load? I've never heard of such a thing. It sounds to me like the data should be sorted for a more optimal data load.
    As for the two-stage approach, this assumes that you are loading variances (deltas) against the existing values, rather than replacing the data itself. It further assumes that you can create an input-level cube to handle the conversion. You have your base data in one scenario and your variances/deltas in another. You can reload your deltas (as absolutes) at any time, and derive the absolutes from the sum of the base and variance values.
    - Scenario
    -- Base (+) <--- this gets recalculated when the deltas are considered "good"
    -- Delta (+) <--- this gets loaded for changes only, and reset to zero when the base is recalculated
    You export the modified data from this cube to your existing/consolidation cube. If you can "pre-load" the changes (outside of your calc window for the main cube), you can optimize the calculation window, although if it takes 6 hours to load the database your calc window is probably shot no matter what you do.
    However, if you mean that the load AND calc take 6 hours, and it takes a relatively short time to load the data alone, this can be a performance enhancement, because you can recalc this "input cube" in seconds from a new/complete load, relative to the "in place" reset-and-reload-changes approach in your existing cube.
    You are simply redirecting your data into a staging table, essentially, and the staging table handles the conversion of variances to absolutes, so you can make the process more efficient overall (it is often more efficient to break the process up into smaller pieces).
