PARALLEL DATA LOADING IN DIRECT MODE

Product    : ORACLE SERVER
Written on : 1999-08-10
Parallel data loading in direct mode
====================================
SQL*Loader supports parallel data loads into the same table in direct mode.
By running several sessions that load data in direct mode at the same time,
the load time for large volumes of data can be reduced considerably. The
gain is even larger when the data files are placed on physically separate
disks.
1. Restrictions
- The target table must have no indexes.
- Only APPEND mode can be used (REPLACE, TRUNCATE, and INSERT are not supported).
- The Parallel Query option must be installed.
2. How to use it
Create one control file for each data file to be loaded, then start the
loads one after another. The trailing & runs each load in the background so
that the sessions overlap; a sample control file is sketched below the
commands.
$ sqlldr scott/tiger control=load1.ctl direct=true parallel=true &
$ sqlldr scott/tiger control=load2.ctl direct=true parallel=true &
$ sqlldr scott/tiger control=load3.ctl direct=true parallel=true
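Each control file is an ordinary SQL*Loader control file; the only special
requirement is APPEND. A minimal sketch, assuming a hypothetical table
EMP_BIG and data file load1.dat (names and columns are illustrative only):

-- load1.ctl (sketch)
LOAD DATA
INFILE 'load1.dat'
APPEND                      -- parallel direct load works only in APPEND mode
INTO TABLE emp_big
FIELDS TERMINATED BY ','
(empno, ename, sal)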
3. Constraints
- If the ENABLE parameter is used, the constraints are re-enabled
  automatically once all of the load jobs have finished. Because this
  re-enabling sometimes fails, always check the constraint status afterwards.
- If a primary key or unique key constraint exists on the table, re-enabling
  it automatically after the load can take a long time because the
  supporting index has to be built. For better performance, load only the
  data in parallel direct mode and then build the index separately in
  parallel.
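A minimal sketch of that approach, assuming a hypothetical table EMP_BIG
with a primary key constraint EMP_BIG_PK on column EMPNO (adjust names and
the degree of parallelism to your environment):

-- Disable the PK so the table has no index during the parallel direct load
ALTER TABLE emp_big DISABLE CONSTRAINT emp_big_pk;

-- ... run the parallel SQL*Loader sessions here ...

-- Build the supporting index in parallel, then re-enable the constraint
CREATE UNIQUE INDEX emp_big_pk ON emp_big (empno) PARALLEL (DEGREE 4);
ALTER TABLE emp_big ENABLE CONSTRAINT emp_big_pk;

-- Verify that the constraint really came back ENABLED
SELECT constraint_name, status
  FROM user_constraints
 WHERE table_name = 'EMP_BIG';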
4. How storage is allocated, and points to watch
A direct load allocates space as follows:
- Temporary segments are created based on the storage clause of the target
  table.
- When the last load job finishes, the unused part of the last extent that
  was allocated is trimmed.
- The header information of the extents belonging to the temporary segments
  is updated and the high-water mark (HWM) is adjusted, so that the extents
  become part of the target table.
This allocation scheme causes the following problems:
- A parallel data load never uses the INITIAL extent that was allocated when
  the table was created.
- The normal extent allocation rules are not followed. Each process
  allocates an extent of the size defined by NEXT and starts loading into
  it; when a new extent is needed, its size is derived from PCTINCREASE, and
  each process performs this calculation independently of the others.
- Severe fragmentation can result; the query sketched below is one way to
  check the extents after the load.
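A simple way to see the effect, assuming the same hypothetical table
EMP_BIG as above, is to list its extents once the load has finished:

-- Count and size of the extents allocated to the table after the load
SELECT segment_name, extent_id, bytes
  FROM user_extents
 WHERE segment_name = 'EMP_BIG'
 ORDER BY extent_id;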
To reduce fragmentation and allocate storage efficiently:
- Create the table with a small INITIAL extent, roughly 2-5 blocks.
- In release 7.2 and later, specify the storage parameters in the OPTIONS
  clause of the control file. It is best to give INITIAL and NEXT the same
  size:
  OPTIONS (STORAGE=(MINEXTENTS n
                    MAXEXTENTS n
                    INITIAL n K
                    NEXT n K
                    PCTINCREASE n))
- When the OPTIONS clause is written in the control file, it must appear
  after the INTO TABLE clause, as in the sketch below.
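A control file sketch showing this placement (table, file, and sizes are
illustrative placeholders only; check the SQL*Loader reference of your
release for the exact syntax):

-- load1.ctl (sketch): the OPTIONS (STORAGE=...) clause follows INTO TABLE
LOAD DATA
INFILE 'load1.dat'
APPEND
INTO TABLE emp_big
OPTIONS (STORAGE=(INITIAL 100K NEXT 100K
                  MINEXTENTS 1 MAXEXTENTS 121
                  PCTINCREASE 0))
FIELDS TERMINATED BY ','
(empno, ename, sal)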


Similar Messages

  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MAXL could generate multiple error files during parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files as this -  rule1.err, rule2.err (Error files with rule file name included). Is this possible in MAXL? 
    In fact, when I hard-code the error file name like that, it gives me error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt" Essbase actually produces multiple error files and appends a number?
    Tim. 
    Yes, it appends a number, just as I described.
    Out of interest, though - why do you want to do this?  The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have 6 - 7 rules files with which the data is pulled from SQL and loaded into Essbase. I am not saying it is impossible to track which rule an error record came from.
    Regardless, the only way I can think of to have total control of the error file name is to use the 'manual' parallel load approach.  Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer.  Then commit them all together.  This gives you most of the parallel load benefit, albeit with more complex scripting.
    I had the same thought of calling multiple instances of MaxL from a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS

  • Does Essbase System 9 Support Parallel Data Loads?

    Hi,
    Can anyone tell me whether Essbase System 9 supports parallel data loads or not?
    If it does, how many load rules can be executed in parallel? If any of you know, please tell me; it would be a great help.
    Thanks a lot.

    Hi Atul Kushwaha,
    Are you sure that Essbase System 9 supports parallel data loads? The New Features guide for Essbase 11.1.1 states that Essbase 11.1.1 supports parallel data loads, with up to 8 rules files.
    So please confirm where I can find this information, or send me a link to it.
    Thank You.

  • Parallel data loading

    Hi,
    I am in need of some help. I am currently designing a new extractor for transactional data that needs to be able to handle a high volume (> 1 mil.) of records.
    We have a function module that already fetches the records in the desired structure. So...I could use this FM as the extractor.
    However. This FM is not the most performant. For this reason we have a prefetch FM that, based on the selection criteria, pre-fetches the data and puts it in a buffer. The first FM I mentioned then reads from the buffer instead of the DB.
    So, I would need to call this pre-fetch FM once during the initialization and at the record fetching I would use the other FM...right?
    Now...I saw that I can set-up the BW system smart enough so that it will load data in parallel.
    Imagine I create an InfoPackage in which I define as selection options that students 1 - 100.000 need to be loaded. I start the data load. What selection criteria are passed to the extractor? 1 - 100.000, right?
    If 3 parallel threads are started, is the initialization done by only 1 request?
    The problem I am then facing is that the buffering might take X minutes and the buffer is bypassed as the needed records are not in there yet.
    I am not sure how to do this properly. Can anyone advise?
    Thanks.
    Jeroen

    First, thanks for the hints. In the meanwhile I found some other documentation regarding my issue.
    As far as I understand, if I want to load in parallel, I have to create multiple InfoPackages and split up the records in the selection criteria, e.g.:
    - InfoPackage 1, Students 1 - 10.000
    - InfoPackage 2, Students 10.001 - 20.000
    ...and so on.
    Following that I need to create a Process Chain that starts loading all packages at the same point in time.
    Now...when the extractor is called, there are two parts that it runs through:
    - Initialization of the extractor
    - Fetching of records
    ( via flag i_initflag in the extractor ).
    In the initialization I want to run the pre-fetch module; I have already worked out everything regarding that. Only when the pre-fetch is finished will the actual data load start.
    What I am not sure about is: Is this flag (the i_initflag mentioned above) passed for each InfoPackage that is started?
    Jeroen

  • Direct vs Indirect data load from ODS - InfoCube

    We know that there are two options to load data from ODS to InfoCube:
    1. Direct load:
    If the ODS option "Update data targets from ODS object automatically" is checked, then data loaded into the ODS is also passed on to the InfoCube. In this way only one InfoPackage is needed: it loads the data into the ODS, and the InfoCube is then filled automatically.
    2. Indirect load:
    If that option is NOT checked, then in addition to the InfoPackage that loads the data into the ODS, a second InfoPackage that loads data from the exported DataSource of the ODS into the InfoCube has to be created in order to move the data from the ODS to the InfoCube.
    I wonder what the pros and cons of these two loading methods from ODS to InfoCube are.
    Any input from BW experts is welcome!

    Hi Kevin,
    Direct loads, or rather automated loads, are usually used where you need to load data automatically into the final data target and there are no dependencies. This approach involves less maintenance, and you only have to execute a single InfoPackage to send data to the final data target.
    Indirect loads are usually used when you have a number of dependencies and the ODS object is one step in a process chain. If you need the ODS data load to wait until some other event has been executed, then indirect loads through process chains are used.
    Regards,
    JAsprit

  • Is bi-directional parallel data transfer possible with the DAQ 6009?

    I want to interface an AVR to the DAQ 6009 via a parallel 8-bit bus, but I need it to be bi-directional. I am stuck at this point since I am not sure whether bi-directional data transfer is possible or not.

    First of all, a serial port is available on Win7; it is silly to say that it is not. Since it is a single bit, mentioning it is irrelevant here, as is USB.
    Second, the lines of the 6009 can be either input or output. A line cannot be both so you would need to stop your task, change the line direction, and start a new task.
    Third, you say there is no protocol for the AVR yet you ask for an example. If you don't know the protocol, you need to read the data sheet and understand it. I'm going to assume that you mean this AVR.
    For digital I/O examples, see Help > Find Examples. As mentioned, the 6009 is strictly software-timed, so you will be limited to 'bit-banging' within the device's specs; if the AVR has a minimum clock rate, find other hardware.

  • Data Load

    We are trying to load 2LIS_03_BF data from SAP R/3 into SAP BW.
    The following steps were followed in the process.
    1. Delete data from the inventory queue (LBWQ, MCEX03 entries).
    2. Delete the setup tables (LBWG).
    3. Check the data in the extractor (RSA3); 0 records should be there.
    4. Fill the setup tables for 2LIS_03_BX (MCNB): termination date = next day, transfer structure = 2LIS_03_BX, only valuated stock (with posting block) on 14th August.
    5. Fill the setup tables for 2LIS_03_BF (OLI1BW), with the data restricted by posting date 01.01.1999 - 14.08.2007.
    6. Generate the initial status for 2LIS_03_BX in RSA1 (BW); done within the posting block.
    7. Collapse the data with marker update.
    8. Start/schedule the control job on the R/3 side for BF to run every 2 hours (LBWE), as suggested by an external consultant.
    9. Initialize the delta process for 2LIS_03_BF in RSA1; started on 15th August but failed due to a termination in R/3.
    10. So we started full updates in two parallel data loads into BW, 3 months at a time. Each load took 2 days to bring in 2 million records.
    11. This load of the data up to 14th August 2007 finished on 4th September 2007.
    12. An init load without data transfer was done successfully (to activate the delta for BW).
    13. The delta to BW was scheduled and it transferred 0 from 0 records.
    14. Check for data in the R/3 delta queue (RSA7): data records are shown from 01.09.2007 - 04.09.2007; we are unable to find data from 15.08.2007 to 31.08.2007.
    15. We performed a full data load from 15.08.2007 till date (in order to get the data for the missing days) in RSA1; 0 from 0 records were transferred.
    We are looking for any advice on getting the data records from 15th August until today.
    This is a very critical issue, because we are unable to provide our business with any production reports or stock reports.
    Please help us resolve this issue as early as possible.

    Hi,
    I have a suggestion you can try.
    As you said, your delta init failed on 15th August, but a later init ended successfully at today's date, right?
    If your init and delta activation were successful, the system should have started capturing the data through whichever update mode you set.
    So first go to RSA7 and check whether you have any delta records there.
    If you find none, go to transaction LBWQ and check the entries against 'MCEX03'; you should be able to see the number of records.
    Step 2: double-click on that entry and check the value in the status field;
    if it is anything other than 'Ready', change the status to 'Ready'
    and revert back to me for further steps.

  • Data Load Optimization

    Hi,
    I have a cube with the dimension information below, and its data load needs optimization. The data is cleared and reloaded every week from a SQL data source using a load rule. It loads 35 million records, and the data load alone (excluding calculation) takes 10 hours. Is that common? Is there any change in the structure I should make to speed up the load, such as changing Measures to sparse or changing the position of the dimensions? Also, the block size is large, 52,920 B, which seems absurd. The cache settings are listed below as well, so please have a look and give me your suggestions.
    Dimension    Density    Type        Members
    MEASURE      Dense      Accounts    245
    PERIOD       Dense      Time        27
    CALC         Sparse     None        1
    SCENARIO     Sparse     None        7
    GEO_NM       Sparse     None        50
    PRODUCT      Sparse     None        8416
    CAMPAIGN     Sparse     None        35
    SEGMENT      Sparse     None        32
    Cache settings :
    Index Cache setting : 1024
    Index Cache Current Value : 1024
    Data File Cache Setting : 32768
    Data file Cache Current Value : 0
    Data Cache Setting : 3072
    Data Cache Current Value : 3049
    I would appreciate any help on this. Thanks!

    10 hrs is not acceptable even for that many rows. For this discussion I'll assume a BSO cube.
    There are a few things to consider.
    First, what is the order of the columns in your load rule? Can you post the SQL? Is the SQL sorted as it comes in? Optimal for a load would be to have your sparse dimensions first, followed by the dense dimensions (preferably with one of the dense dimensions as columns instead of rows), for example with the periods going across like Jan, Feb, Mar, etc.
    Second, do you have parallel data loading turned on? Look in the config for DLTHREADSPREPARE and DLTHREADSWRITE. With multithreading you can get better throughput.
    Third, how does the data get loaded? Is there any summation of the data before it is loaded, or do you have the load rule set to additive? Doing the summation in SQL would speed things up a lot, since each block would only get hit once.
    I have also seen network issues cause this, as transferring this many rows is slow (as Krishna said), and I have seen cases where the number of joins in the SQL caused massive delays in preparing the data. Out of interest, how long does the actual query take if you just execute it from a SQL tool?

  • Master data loading failed: error "Update mode R is not supported by the extraction API"

    Hello Experts,
    I load master data for 0Customer_Attr through a daily process chain, and it was running successfully.
    For the last 2 days the master data load for 0Customer_Attr has failed with the following error message:
    "Update mode R is not supported by the extraction API"
    Can anyone tell me what that error means and how to resolve this issue?
    Regards,
    Nirav

    Hi
    An update mode R error comes up in the following case:
    You are running a delta (for master data) which fails due to some error. To resolve that error, you set the load to red and try to repeat it.
    This time the load fails with update mode R,
    because a repeat delta is not supported.
    So the only thing you can do now is to re-init the delta (as described in the posts above) and then proceed. The earlier problem has nothing to do with update mode R.
    For example, your first delta failed with a replication issue.
    Replicating and repeating alone will not solve the update mode R error;
    you have to do both the replication of the DataSource and the re-init to get past update mode R.
    One more thing I would like to add:
    if the delta that failed the first time (not with update mode R) had already picked up records,
    you have to do an init with data transfer;
    if it failed without picking up any records,
    then an init without data transfer is enough.
    Hope this helps
    Regards
    Shilpa

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement to increase the number of parallel processes during the data load from R/3 to BI. I want to modify this for a particular DataSource and check the effect. Could the experts help with the following questions?
    1) When a load is running, or has finished, where can we see how many parallel processes that particular load used?
    2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
    3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
    Expecting Experts help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information provided. The following was my observation.
    From the posts in this forum I understood that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I did the same and found that there is no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the above point, i.e.:
    1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
    Actually we wanted to increase the package size, but we failed because I could not understand what values have to be maintained. Can you explain this in detail?
    Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • "Error code 0xc000035a Attempting to load a 64-bit application, however this cpu is not compatible with 64-bit mode"  when I load Windows 7 [64bit] on an older iMac(with Parallels 6)? Any solutions that don't mean buying a new iMac?

    "Error code 0xc000035a Attempting to load a 64-bit application, however this cpu is not compatible with 64-bit mode" 
    when I load Windows 7 [64bit] on an older iMac(with Parallels 6)? Any solutions that don't mean buying a new iMac?

    You should probably post your question on the Parallels forums:  http://forums.parallels.com

  • Segmentation fault error during data load in parallel with multiple rules

    Hi,
    I'm trying to do a SQL data load in parallel with multiple rules files (4 or 5 rules, maybe), and I'm getting a "segmentation fault" error. I tested with 3 rules files and it worked fine. We're using Essbase System 9.3.2, with UDB (v8) as the SQL data source. The ODBC driver is DataDirect 5.2 DB2 Wire Protocol Driver (ARdb222). Please let me know if you have any information on this.
    thx.
    Y

    Hi Thad,
    I was wondering: whether the system is Unicode or non-Unicode should not matter for the amount and currency fields, as currencies are defined by SAP and the currency code part (3 characters) is plain English.
    Could this be because of some inconsistency in the data?
    I would like to know which currency had special characters in that particular record.
    Hope that helps.
    Regards
    Mr Kapadia

  • Data load to DSO takes long time to finish

    Dear All,
    We have a data load from a DataSource to a standard DSO. The load takes 5 hours to complete 6,000 records in a single data package, which is a long time.
    The process monitor shows a yellow status for a long time at the step "No message: Transformation End", and after approximately 5 hours it completes successfully.
    Please find the snapshot of the process monitor (attached file: Process monitor.png).
    There is an end routine, and the transformation has direct mapping except for one target field, exchange rate, which is a master data lookup on a DSO (attached file: Transformation rule.png).
    The lookup DSO /BI0/AFIGL_DS00 in the code below has DOCNUM as a primary key, but not POSKY. Since one of the fields is not a primary key, a secondary index was created on the lookup DSO. Still, it takes a huge amount of time to finish the last step mentioned in the snapshot.
    The setting for parallel processes is 1.
    DTP --> Update tab --> Error handling --> "No update, no reporting". There is also an error DTP present, which I believe is of no use when "No update, no reporting" is chosen.
    Can you please suggest the reason for such a long runtime, and how to find the exact place where it consumes so much time?
    End routine Logic:
        IF NOT RESULT_PACKAGE IS INITIAL.
          REFRESH IT_FIG.
          SELECT DOCNUM  POSKY DEBCRE LOCC
          FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
                POSKY = '02'.
        LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
            READ TABLE IT_FIG INTO WA_FIG WITH KEY
                       DOCNUM = <RESULT_FIELDS>-BILNO.
            IF SY-SUBRC EQ 0.
              <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
              <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
            ENDIF.
        ENDLOOP.
        ENDIF.
    Thanks in advance
    Regards
    Pradeep

    Hi,
    Check the code below and try the load with it.
    IF RESULT_PACKAGE IS NOT INITIAL.
      " Fetch the lookup records for all billing numbers in this package
          SELECT DOCNUM
                 POSKY
                 DEBCRE
                 LOCC
          FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
                POSKY = '02'.
        LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
            READ TABLE IT_FIG INTO WA_FIG WITH KEY
                       DOCNUM = <RESULT_FIELDS>-BILNO.
            IF SY-SUBRC EQ 0.
              " Map the lookup result to the target fields
              <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
              <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
            ENDIF.
        ENDLOOP.
        ENDIF.
    If you still get errors, please let us know. Also:
    1. Decrease the data packet size in the DTP, e.g. to 10,000 or 20,000.
    2. Increase the number of parallel processes at DTP level.
    Thanks,
    Phani.

  • Comparison of Data Loading techniques - Sql Loader & External Tables

    Below are two techniques that can be used to load data from flat files into Oracle tables.
    1)     SQL Loader:
    a.     Place the flat file (.txt or .csv) in the desired location.
    b.     Create a control file, for example:
    LOAD DATA
    INFILE 'Mytextfile.txt'     -- file containing the table data; specify the path correctly, it could be a .csv as well
    APPEND                      -- or TRUNCATE, based on the requirement
    INTO TABLE oracle_tablename
    FIELDS TERMINATED BY ','    -- or whatever delimiter the input file uses
    OPTIONALLY ENCLOSED BY '"'
    (field1, field2, field3)
    c.     Now run the sqlldr utility of Oracle from the OS command prompt:
    sqlldr username/password control=<controlfile>.ctl
    d.     The data can be verified by selecting the data from the table.
    Select * from oracle_table;
    2)     External Table:
    a.     Place the flat file (.txt or .csv) in the desired location.
    abc.csv
    1,one,first
    2,two,second
    3,three,third
    4,four,fourth
    b.     Create a directory
    create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
    c.     After granting appropriate permissions to the user, we can create the external table as below.
    create table ext_table_csv (
      i Number,
      n Varchar2(20),
      m Varchar2(20)
    )
    organization external (
      type oracle_loader
      default directory ext_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
      )
      location ('file.csv')
    )
    reject limit unlimited;
    d.     Verify data by selecting it from the external table now
    select * from ext_table_csv;
    The external tables feature is a complement to the existing SQL*Loader functionality.
    It allows you to –
    •     Access data in external sources as if it were in a table in the database.
    •     Merge a flat file with an existing table in one statement.
    •     Sort a flat file on the way into a table you want compressed nicely.
    •     Do a parallel direct path load without having to split up the input file yourself.
    Shortcomings:
    •     External tables are read-only.
    •     No data manipulation language (DML) operations or index creation is allowed on an external table.
    Because an external table load is just an INSERT ... SELECT, it also lets you –
    •     Load the data from within a stored procedure or trigger (it is an INSERT, not sqlldr)
    •     Do multi-table inserts
    •     Flow the data through a pipelined PL/SQL function for cleansing/transformation (see the sketch below)
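    As a hedged illustration of the last point, the sketch below assumes the ext_table_csv definition from above plus a hypothetical target_table with the same three columns; the object type, the cleansing rule and all names are made up for the example.

    -- Object and collection types describing one cleansed row (illustrative)
    create type ext_row_t as object (i number, n varchar2(20), m varchar2(20));
    /
    create type ext_row_tab as table of ext_row_t;
    /
    -- Pipelined function: reads rows from a cursor, cleanses them, pipes them out
    create or replace function cleanse_rows(p_cur sys_refcursor)
      return ext_row_tab pipelined is
      l_row ext_table_csv%rowtype;
    begin
      loop
        fetch p_cur into l_row;
        exit when p_cur%notfound;
        -- example transformation: trim and upper-case the name column
        pipe row (ext_row_t(l_row.i, upper(trim(l_row.n)), l_row.m));
      end loop;
      return;
    end;
    /
    -- Load the cleansed rows into the (hypothetical) target table
    insert into target_table
    select * from table(cleanse_rows(cursor(select * from ext_table_csv)));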
    Comparison for data loading
    To make the loading operation faster, the degree of parallelism can be set to any number, e.g. 4.
    So, when you created the external table, the database will divide the file to be read by four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient. To parallelize this load using SQL*Loader, you would have had to manually divide your input file into multiple smaller files.
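    A minimal sketch of such a parallel external-table load (target_table and the degree of 4 are illustrative assumptions, not part of the example above):

    -- Allow parallel DML in this session and read the file with 4 processes
    alter session enable parallel dml;
    alter table ext_table_csv parallel 4;

    -- Direct-path, parallel insert from the external table into the target
    insert /*+ append parallel(t, 4) */ into target_table t
    select * from ext_table_csv;
    commit;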
    Conclusion:
    SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables to Oracle Tables using DB links.

    Please let me know your views on this.

  • Essbase Studio Performance Issue : Data load into BSO cube

    Hello,
    Having successfully built my outline by member loading through Essbase Studio, I have tried to load data into my application, again with Studio. However, I was never able to complete the data load because it takes forever. Each time I tried to work with Studio in streaming mode (hoping to increase the query speed), the load was terminated with the following error: Socket read timed out.
    In the Studio properties file I set oracle.jdbc.ReadTimeout=1000000000, but the result has not changed. Even if it had worked, I am not sure that streaming mode would provide a much faster alternative to non-streaming mode. What I would like to know is which Essbase settings I can change (either on the Essbase or the Studio server) in order to speed up my data load. I am loading into a block storage database with 3 dense, 8 sparse and 2 attribute dimensions. I filtered some dimensions and tried to load data to see exactly how long it takes to create a certain number of blocks. With the ODBC setting in Essbase Studio, it took 2.15 hours to load data into my application, where only 153 blocks were created with a block size of 24 B. Assuming that in my real application the number of blocks created will be at least 1,000 times more than this, I need to change some settings. I am transferring the data from an Oracle database, with 5 tables joined to a fact table (view) from the same data source. All the cache settings in Essbase are at their defaults. Would changing the cache settings, the buffer size or multiple threads help to increase the performance? What would you suggest I do?
    Thank you very much.

    Hello user13695196 ,
    (sorry I no longer remember my system number here)
    Before attempting any optimisation in the Essbase (or Studio) environment, you should definitely make sure that your source data query performs well on the Oracle DB.
    I would recommend:
    1. Create a view in your DB source schema from your SQL statement (the one behind your data load rule).
    2. Query against this view with any GUI (SQL Developer, TOAD, etc.) to fetch all rows and measure the time it takes to complete. Also count the number of returned rows, for your information and for future comparison of results.
    If your query runs longer than you think is acceptable, then
    a) check the DB statistics,
    b) check and/or consider creating indexes,
    c) if you are unsure, kindly ask your DBA for help. Usually they can help you very quickly.
    (Don't be shy - a DBA is a human being like you and me :-) )
    Only when your SQL runs fast enough (for you, or your DBA says it is the best you can achieve) at the database should you move your effort over to Essbase.
    One hint in addition:
    We have often had problems when using views for data loads (not only performance, but also other strange behaviour). That is the reason I prefer to set the load up directly on (persistent) tables.
    Just keep in mind: if nothing else helps, create a table from your view and then query your data from this table for your Essbase data load. Normally, however, this should be your last option.
    Best Regards
    (also to you Torben :-) )
    Andre
