Parallel data loading

Hi,
I am in need of some help. I am currently designing a new extractor for transactional data that needs to handle a high volume (> 1 million) of records.
We have a function module that already fetches the records in the desired structure, so I could use this FM as the extractor.
However, this FM is not the most performant. For this reason we have a prefetch FM that, based on the selection criteria, pre-fetches the data and puts it in a buffer. The first FM I mentioned then reads from the buffer instead of the database.
So I would need to call this prefetch FM once during initialization, and use the other FM for the record fetching, right?
Now, I saw that the BW system can be set up to load data in parallel.
Imagine I create an InfoPackage in which I define as selection options that students 1 - 100,000 need to be loaded, and I start the data load. The selection criteria passed to the extractor are 1 - 100,000, right?
If 3 parallel threads are started, is the initialization done by only 1 request?
The problem I am then facing is that the buffering might take X minutes, and the buffer would be bypassed because the needed records are not in it yet.
I am not sure how to do this properly. Can anyone advise?
Thanks.
Jeroen

First, thanks for the hints. In the meantime I found some other documentation regarding my issue.
As far as I understand, if I want to load in parallel, I have to create multiple InfoPackages and split up the records in the selection criteria, e.g.:
- InfoPackage 1, Students 1 - 10,000
- InfoPackage 2, Students 10,001 - 20,000
...and so on.
After that I need to create a process chain that starts loading all packages at the same point in time.
Now, when the extractor is called, there are two phases it runs through:
- Initialization of the extractor
- Fetching of records
(distinguished via the flag i_initflag in the extractor).
In the initialization I want to run the prefetch module; I have already worked out everything regarding that. Only when the prefetch is finished will the actual data load start.
What I am not sure about is: is this flag (the i_initflag mentioned above) passed for each InfoPackage that is started?
Jeroen
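
(To illustrate the design Jeroen describes, a minimal sketch of such an extractor skeleton; the Z* names are hypothetical, and the interface follows the usual generic-extractor template with i_initflag:)

FUNCTION z_bw_extract_students.
*" IMPORTING
*"   VALUE(I_INITFLAG) TYPE SRSC1-INITFLAG OPTIONAL
*" TABLES
*"   I_T_SELECT TYPE SRSC_T_SELECT OPTIONAL
*"   E_T_DATA STRUCTURE ZSTUDENT_EXTRACT OPTIONAL
*" EXCEPTIONS
*"   NO_MORE_DATA

  IF i_initflag = sbiwa_c_flag_on.
    " Initialization call: runs before any data package is requested.
    " Fill the buffer here, based on the selections of this request.
    CALL FUNCTION 'Z_PREFETCH_STUDENTS'   " hypothetical prefetch FM
      TABLES
        i_t_select = i_t_select.
  ELSE.
    " Data call: return the next package from the buffer.
    CALL FUNCTION 'Z_READ_STUDENTS_FROM_BUFFER'   " hypothetical read FM
      TABLES
        e_t_data = e_t_data.
    IF e_t_data[] IS INITIAL.
      RAISE no_more_data.
    ENDIF.
  ENDIF.

ENDFUNCTION.

(If each InfoPackage triggers its own initialization call, each one would run the prefetch for its own selection range, which would avoid the "buffer not filled yet" problem.)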

Similar Messages

  • PARALLEL DATA LOADING IN DIRECT MODE

    Product: ORACLE SERVER
    Date written: 1999-08-10
    Parallel data loading in direct mode
    ====================================
    SQL*Loader supports parallel data loads in direct mode against the same table. By loading data from several sessions simultaneously in direct mode, the load speed for large data volumes can be improved considerably. Placing the data files on physically separate disks increases the benefit even further.
    1. Restrictions
    - Loading is only possible into tables without indexes.
    - Only APPEND mode is supported (replace, truncate, and insert modes are not).
    - The Parallel Query option must be installed.
    2. Usage
    Create a control file for each data file to be loaded, then start the loads one after another:
    $sqlldr scott/tiger control=load1.ctl direct=true parallel=true&
    $sqlldr scott/tiger control=load2.ctl direct=true parallel=true&
    $sqlldr scott/tiger control=load3.ctl direct=true parallel=true
    3. Constraints
    - If the ENABLE parameter is used, constraints are re-enabled automatically once the data load has finished. However, enabling occasionally fails, so you must always check the constraint status afterwards.
    - If a primary key or unique key constraint exists, re-enabling it after the load can take a long time because the index has to be built. For better performance, load only the data in parallel direct mode first, and then create the indexes separately, in parallel.
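    (For example, the index build after the load could itself run in parallel; table, column, and degree here are illustrative:)
    CREATE INDEX emp_pk_idx ON emp (empno) PARALLEL 4;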
    4. Storage allocation and caveats
    When data is loaded in direct mode, the following steps are performed:
    - A temporary segment is created based on the STORAGE clause of the target table.
    - After the last load finishes, the empty (i.e. unused) part of the last allocated extent is trimmed.
    - The header information of the extents belonging to the temporary segment is changed and the HWM (high-water mark) is adjusted, so that the extents become part of the target table.
    This extent allocation method causes the following problems:
    - A parallel data load does not use the INITIAL extent that was allocated when the table was created.
    - The normal extent allocation rules are not followed. Each process allocates an extent of the size defined by NEXT and starts loading; when a new extent is required, it is allocated based on the PCTINCREASE value, calculated independently by each process.
    - Severe fragmentation can occur.
    To reduce fragmentation and allocate storage efficiently:
    - Create the table with a small INITIAL extent, around 2-5 blocks.
    - In version 7.2 and later, specify the storage parameters in the OPTIONS clause. It is advisable to give INITIAL and NEXT the same size:
    OPTIONS (STORAGE=(MINEXTENTS n
                      MAXEXTENTS n
                      INITIAL n K
                      NEXT n K
                      PCTINCREASE n))
    - If the OPTIONS clause is written in the control file, it must appear after the insert into tables clause.


  • Need to generate multiple error files with rule file names during parallel data load

    Hi,
    Is there a way that MaxL could generate multiple error files during a parallel data load?
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using multiple rules_file 'rule1' , 'rule2'
      to load_buffer_block starting with buffer_id 100
      on error write to "error.txt";
    I want to get error files like this - rule1.err, rule2.err (error files with the rule file name included). Is this possible in MaxL?
    I also faced a situation where, if I hard-code the error file name like above, it gives me error file names error1.err and error2.err. Is there any solution for this?
    Thanks,
    DS

    Are you saying that if you specify the error file as "error.txt" Essbase actually produces multiple error files and appends a number?
    Tim. 
    Yes, it's appending the way I said.
    Out of interest, though - why do you want to do this?  The load rules must be set up to select different 'chunks' of input data; is it impossible to tell which rule an error record came from if they are all in the same file?
    I have 6 - 7 rule files using which the data is pulled from SQL and loaded into Essbase. I don't say it's impossible to track the error record.
    Regardless, the only way I can think of to have total control of the error file name is to use the 'manual' parallel load approach.  Set up a script to call multiple instances of MaxL, each performing a single load to a different buffer.  Then commit them all together.  This gives you most of the parallel load benefit, albeit with more complex scripting.
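    (A minimal sketch of that approach, assuming an ASO database and two hypothetical rule files; each MaxL script loads one rule file into its own buffer, and a final script commits all buffers together:)
    /* load_rule1.msh -- load_rule2.msh is analogous, using buffer_id 101 and rule2.err */
    login admin 'password' on localhost;
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 100;
    import database AsoSamp.Sample data
      connect as TBC identified by 'password'
      using rules_file 'rule1'
      to load_buffer with buffer_id 100
      on error write to 'rule1.err';
    logout;
    /* commit.msh -- commits the contents of both buffers in one step */
    login admin 'password' on localhost;
    import database AsoSamp.Sample data from load_buffer with buffer_id 100, 101;
    logout;
    # driver.sh -- start the loads in parallel, wait for both, then commit
    essmsh load_rule1.msh &
    essmsh load_rule2.msh &
    wait
    essmsh commit.msh
    Since each import statement names its own error file, this gives you one error file per rule file, which is the naming control you were after.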
    I had the same thought of calling multiple instances of MaxL using a shell script. Could you please elaborate on this process? What sort of complexity is involved in this approach? Has anyone tried it before?
    Thanks,
    DS

  • Does Essbase System 9 Support Parallel Data Loads?

    Hi,
    Can anyone tell me whether Essbase System 9 supports parallel data loads or not?
    If it does, how many load rules can be executed in parallel? If any of you know, please tell me. This would be a great help to me.
    Thanks a lot.

    Hi Atul Kushwaha,
    Are you sure that Essbase System 9 supports parallel data loads? In the New Features guide of Essbase 11.1.1, they state that Essbase 11.1.1 supports parallel data loads, and that it supports up to 8 rule files only.
    So please confirm where I can find this information, or send me the link where I can find it.
    Thank You.

  • Data Load

    We are trying to load the data of 2LIS_03_BF from SAP R/3 into SAP BW.
    The following steps were followed in the process:
    1. Delete data from the inventory queue (LBWQ, MCEX03 entries).
    2. Delete the setup tables (LBWG).
    3. Check the data in the extractor (RSA3); 0 records should be there.
    4. Fill the setup tables for 2LIS_03_BX (MCNB): termination date = next day, transfer structure = 2LIS_03_BX, only valuated stock (with posting block) on 14th August.
    5. Fill the setup tables for 2LIS_03_BF (OLI1BW): data restricted by posting date 01.01.1999 - 14.08.2007.
    6. Generate the initial status for 2LIS_03_BX in RSA1 (BW); done within the posting block.
    7. Collapse the data with marker update.
    8. Start/schedule the control job on the R/3 side for BF to run every 2 hrs (LBWE), as suggested by an external consultant.
    9. Initialize the delta process for 2LIS_03_BF (RSA1); started on 15th of August but failed due to a termination in R/3.
    10. So we started full updates in two parallel data loads into BW, 3 months at a time. Each load took 2 days to bring in 2 million records.
    11. This load of data up to 14th of August 2007 finished on 4th Sep 2007.
    12. An init without data transfer was then done successfully (to activate the delta for BW).
    13. The delta to BW was scheduled and it transferred 0 from 0 records.
    14. Check for data in the R/3 delta queue (RSA7): data records are shown from 01.09.2007 - 04.09.2007. We are unable to find data from 15.08.2007 to 31.08.2007.
    15. Performed a full data load from 15.08.2007 till date (in order to get the data for the missing days) in RSA1; 0 from 0 records were transferred.
    We are looking for advice on getting the data records from 15th of August until today.
    This is a very critical issue, because we are unable to provide our business with production reports or stock reports.
    Please help us resolve the issue as early as possible.

    Hi,
    I have a suggestion you can try.
    You said your delta init failed on 15th Aug, but the later init (without data transfer) completed successfully as of today's date, right?
    So, if your init and delta activation were successful, the system should have started capturing the data through whichever update mode you set.
    Therefore, first go to RSA7 and check whether you have any delta records there.
    If you find none, go to transaction LBWQ and check the entries against 'MCEX03'; you should be able to see the number of records.
    Step 2: double-click on that entry and check the value in the status field; if it is anything other than 'Ready', change the status to 'Ready'.
    Then revert back to me for further steps.

  • Data Load Optimization

    Hi,
    I have a cube with the following dimension information, and it requires optimization for data load. Its data is cleared and reloaded every week from a SQL data source using a load rule. It loads 35 million records, and the data load alone (excluding calculation) takes 10 hrs. Is that common? Is there any change I should make to the structure to speed up the load, such as changing Measures to sparse or changing the position of the dimensions? Also, the block size is large, 52,920 B, which seems excessive. I have also listed the cache settings below, so please have a look and give me your suggestions.
    MEASURE      Dense     Accounts    245 (no. of members)
    PERIOD       Dense     Time         27
    CALC         Sparse    None          1
    SCENARIO     Sparse    None          7
    GEO_NM       Sparse    None         50
    PRODUCT      Sparse    None       8416
    CAMPAIGN     Sparse    None         35
    SEGMENT      Sparse    None         32
    Cache settings :
    Index Cache setting : 1024
    Index Cache Current Value : 1024
    Data File Cache Setting : 32768
    Data file Cache Current Value : 0
    Data Cache Setting : 3072
    Data Cache Current Value : 3049
    I would appreciate any help on this. Thanks!

    10 hrs is not acceptable even for that many rows. For this discussion, I'll assume a BSO cube.
    There are a few things to consider.
    First, what is the order of the columns in your load rule? Can you post the SQL? Is the SQL sorted as it comes in? Optimal for a load would be to have your sparse dimensions first, followed by the dense dimensions (preferably having one of the dense dimensions as columns instead of rows), for example with your periods going across like Jan, Feb, Mar, etc.
    Second, do you have parallel data loading turned on? Look in the config for DLTHREADSPREPARE and DLTHREADSWRITE. With multithreading you can get better throughput.
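    (For reference, a minimal sketch of those settings in essbase.cfg; the application/database names and thread counts are illustrative and need tuning to your hardware. DLSINGLETHREADPERSTAGE must be FALSE for the thread settings to take effect.)
    DLSINGLETHREADPERSTAGE FALSE
    DLTHREADSPREPARE Sample Basic 4
    DLTHREADSWRITE Sample Basic 4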
    Third, how does the data get loaded? Is there any summation of the data before it is loaded, or do you have the load rule set to additive? Doing the summation in SQL would speed things up a lot, since each block would only get hit once.
    I have also seen network issues cause this, as transferring this many rows can be slow (as Krishna said), and I have seen cases where the number of joins in the SQL caused massive delays in preparing the data. Out of interest, how long does the actual query take if you just execute it from a SQL tool?

  • Number of parallel process definition during data load from R/3 to BI

    Dear Friends,
    We are using BI 7.00. We have a requirement to increase the number of parallel processes during data load from R/3 to BI. I want to modify this for a particular DataSource and check the effect. Can the experts provide helpful answers to the following questions?
    1) When a load is taking place, or has taken place, where can we see how many parallel processes that particular load used?
    2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
    3) How does the system work, and what is the net result of increasing or decreasing the number of parallel processes?
    Expecting the experts' help.
    Regards,
    M.M

    Dear Des Gallagher,
    Thank you very much for the useful information. Here is what I observed.
    From the posts in this forum I understood that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I did this and found that there is no change in the load, i.e., the system by default takes only one parallel process even though I maintained 6.
    Can you kindly explain the above point, i.e.:
    1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, from which transaction does the system derive the single parallel process?
    Actually we wanted to increase the package size, but we failed because I could not understand which values have to be maintained; can you explain this in detail?
    Can you clarify my doubt and provide a solution?
    Regards,
    M.M

  • Segmentation fault error during data load in parallel with multiple rules

    Hi,
    I'm trying to do a SQL data load in parallel with multiple rules files (4 or 5 rules, maybe), and I'm getting a "segmentation fault" error. I tested with 3 rules files and it worked fine. We're using Essbase System 9.3.2, with UDB (v8) as the SQL data source. The ODBC driver is DataDirect 5.2 DB2 Wire Protocol Driver (ARdb222). Please let me know if you have any information on this.
    thx.
    Y

    Hi Thad,
    I was wondering: whether the system is Unicode or non-Unicode should not matter for the amount and currency fields, as currencies are defined by SAP and the currency code part (3 characters) is in plain English.
    Could this be because of some inconsistency in the data?
    I would like to know which currency had special characters in that particular record.
    Hope that helps.
    Regards
    Mr Kapadia

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields to optimize these tables for reading and reduce extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Data load to DSO takes long time to finish

    Dear All,
    We have a data load from a DataSource to a standard DSO. The data load takes 5 hours to complete 6,000 records in a single data package, which is a long time.
    The process monitor shows yellow status at one step for a long time ("No message: Transformation End"), and after approximately 5 hours it completes successfully.
    Please find the snapshot of the process monitor (attached file: Process monitor.png).
    There is an end routine, and the transformation is a direct mapping except for one target object, exchange rate, which is a master data lookup on a DSO (attached file: Transformation rule.png).
    The lookup DSO /BI0/AFIGL_DS00 in the code below has DOCNUM as a primary key, but not POSKY. Since one of the fields is not a primary key, a secondary index was created for the lookup DSO. Still, it takes a huge amount of time to finish the last step shown in the snapshot.
    The setting for parallel processes is 1.
    DTP --> Update tab --> Error handling --> "No update, no reporting". There is also an error DTP present, which I believe is of no use when "No update, no reporting" is chosen.
    Can you please suggest the reason for such a long runtime? Also, please suggest how to find the exact place where the time is being consumed.
    End routine logic:
        IF NOT RESULT_PACKAGE IS INITIAL.
          REFRESH IT_FIG.
          SELECT DOCNUM  POSKY DEBCRE LOCC
          FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
          FOR ALL ENTRIES IN RESULT_PACKAGE
          WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
                POSKY = '02'.
        LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
            READ TABLE IT_FIG INTO WA_FIG WITH KEY
                       DOCNUM = <RESULT_FIELDS>-BILNO.
            IF SY-SUBRC EQ 0.
              <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
              <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
            ENDIF.
        ENDLOOP.
        ENDIF.
    Thanks in advance
    Regards
    Pradeep

    Hi,
    check the code below and try the load again. The main change is sorting the lookup table once and using a binary search in the READ, so each lookup is fast.
    IF RESULT_PACKAGE IS NOT INITIAL.
      REFRESH IT_FIG.
      SELECT DOCNUM
             POSKY
             DEBCRE
             LOCC
        FROM /BI0/AFIGL_DS00 INTO TABLE IT_FIG
        FOR ALL ENTRIES IN RESULT_PACKAGE
        WHERE DOCNUM = RESULT_PACKAGE-BILNO AND
              POSKY = '02'.
      " Sort once so the READ below can use a fast binary search
      SORT IT_FIG BY DOCNUM.
      LOOP AT RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
        READ TABLE IT_FIG INTO WA_FIG
             WITH KEY DOCNUM = <RESULT_FIELDS>-BILNO
             BINARY SEARCH.
        IF SY-SUBRC EQ 0.
          <RESULT_FIELDS>-DEB = WA_FIG-DEBCRE.
          <RESULT_FIELDS>-LOC_CURRC2 = WA_FIG-LOCC.
        ENDIF.
      ENDLOOP.
    ENDIF.
    If you are getting any errors, please let us know.
    1. Decrease the data packet size in the DTP, e.g. to 10,000 or 20,000.
    2. Increase the number of parallel processes at the DTP level.
    Thanks,
    Phani.

  • How to tune data loading time in BSO using 14 rules files?

    Hello there,
    I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
    In a nightly process using MaxL I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times via SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load the 14 accounts one after the other.
    Now my question: how can I minimize this data loading time?
    This is what I found on Oracle Homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads - supports up to 8 rules files via temporary load buffers.
    In an older thread John said:
    As it is version 11, why not use parallel SQL loading; you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I also use it in my MaxL for BSO? Is there a sample?
    What else can be done to tune up the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I'm just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column1 --- column2 --- column3 --- column4 --- ... ---column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Here are 3 of the 14 (load) rules files used to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS referred to as the "Data" column concept, which only allows a single numeric data value?
    In that case I can't tag two or more columns as "Data fields". I can tag just one column as "Data field"; the other data columns I have to tag as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or may I skip this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to restructure the table/view (and the rules files) as follows, so as to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
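    (For what it's worth, a view in that account-in-rows shape could be generated from the original wide view with Oracle's UNPIVOT operator, available in 11g and later; the view and column names below are assumed from the sample above. On older releases, a UNION ALL of 14 SELECTs achieves the same result.)
    CREATE OR REPLACE VIEW v_sales_unpivoted AS
    SELECT account, region, id, product, data
    FROM v_sales_wide
    UNPIVOT (data FOR account IN (sales AS 'sales',
                                  cogs AS 'cogs',
                                  discounts AS 'discounts',
                                  amount AS 'amount'));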
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
    I just want to be sure that I understand your suggestions.
    Many thanks for awesome help,
    Zeljko

  • Error while data loading

    Hi Gurus,
    I am getting an error while loading data. On the BI side, when I check the error, it says "Background Job Cancelled", and when I check on the R/3 side I get the following error:
    Job started
    Step 001 started (program SBIE0001, variant &0000000065503, user ID R3REMOTE)
    Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
    DATASOURCE = 2LIS_11_V_ITM
             Current Values for Selected Profile Parameters               *
    abap/heap_area_nondia......... 2000683008                              *
    abap/heap_area_total.......... 4000317440                              *
    abap/heaplimit................ 40894464                                *
    zcsa/installed_languages...... ED                                      *
    zcsa/system_language.......... E                                       *
    ztta/max_memreq_MB............ 2047                                    *
    ztta/roll_area................ 6500352                                 *
    ztta/roll_extension........... 3001024512                              *
    4 LUWs confirmed and 4 LUWs to be deleted with function module RSC2_QOUT_CONFIRM_DATA
    ABAP/4 processor: DBIF_RSQL_SQL_ERROR
    Job cancelled
    Please help me out; what should I do?
    Regards,
    Mayank

    Hi Mayank,
    The log says the job went to a short dump due to a temp space issue. As it is a source system job, check the temp tablespace on the source system side, and check on the BI side as well.
    Check with your Basis team regarding the temp PSA tablespace; if it is out of space, ask them to increase the tablespace and then repeat the load.
    Check the below note
    Note 796422 - DBIF_RSQL_SQL_ERROR during deletion of table BWFI_AEDAT
    Regards
    KP

  • Data Load Error - Unknown Item

    Under the advisement of an Oracle engineer, our users were instructed to export the data from one of their databases using "Level 0 Data" and then import the data back into the database. This process was needed to allow us to restore compression on our DBs, as it had been lost during a patching process.
    In any event,
    I've exported the data to a .txt file and now I am attempting to load the file back into the database. I am receiving the error below:
    Parallel dataload Enabled: [1] block prepare threads, [1] block write threads.
    Unknown Item[-1. #IND] in Data Load, [360639] Records Completed.
    Unexpected Essbase error 1003000
    I did some searching on the web and located a discussion topic dealing with this type of error. It indicated that the error is related to an attempt to write a value to the DB that does not have a corresponding "table" in the DB. I kind of see that from the error message, but I don't understand how it could occur if the data export and import happen on the exact same application DB.
    Any assistance would be greatly appreciated.
    Thanks!

    If it is a Planning database, log on to My Oracle Support and have a read of:
    Error "1003000 Unknown Item [1.#QNAN] or [-1.#IND] in Data Load, [xxxx] Records Completed" [ID 961934.1]
    Even if it is not, there is invalid data that has been exported.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Inventory snapshot scenario - Data load frequency?

    Hi,
    I have gone through the "How to" document for inventory snapshot extraction.
    Here are a few questions for which I could not find answers in the document:
    1. Process 1 loads the initial stock using the BX DataSource into the ODS.
    2. Then process 2 - I assume there are two steps in this:
    a) Init using the BF/UM DataSource - if this is done, the historical movements get added to the initial stock in this ODS, which yields wrong results. So, is mapping to the ODS required while doing the init from the BF/UM DataSources? The init (process 3) adds stock into the snapshot cube for all the months from the date of movement to the current (system) date.
    b) Delta using the BF/UM DataSource - adding the delta to the snapshot ODS makes sense. Is it also required to load this in parallel to the snapshot cube (as mentioned in process 3)?
    No intention to confuse anybody; I just wanted to know which of the following data load scenarios yields perfectly correct results. The document is not self-explanatory on this topic.
    I assume that 0IC_C03 is being updated in parallel.
    1. Initial stock load using the BX DataSource into the ODS. Then initialize the BF/UM DataSource into the snapshot cube ONLY (does this actually store a historical snapshot?). Then do the delta from BF/UM to the ODS ONLY. Then follow the rest of the steps.
    2. Initial stock load using the BX DataSource into the ODS. Then initialize the BF/UM DataSource into the snapshot cube and the ODS. Then do the delta from BF/UM to the snapshot cube AND the ODS. Then follow the rest of the steps.
    3. Initial stock load using the BX DataSource into the ODS. Initialize the BF/UM DataSource WITHOUT DATA TRANSFER. Then start the delta into the snapshot cube AND the ODS.
    Any help on this will be greatly appreciated.
    Thanks and Regards,
    Anup

    Hi,
    Ensure that the 3 key figures are included in the communication structure (of course they will get included, as it is a standard DataSource). When you create the update rules, these 3 key figures will be set to "no update", and then you have to populate the 3 KFs using a routine.
    Hope this helps.
    Thanks, Ramoji.

  • Comparison of Data Loading Techniques - SQL*Loader & External Tables

    Below are 2 techniques by which data can be loaded from flat files into Oracle tables.
    1)     SQL Loader:
    a.     Place the flat file (.txt or .csv) in the desired location.
    b.     Create a control file, e.g.:
    LOAD DATA
    INFILE 'Mytextfile.txt'    -- file containing the table data; specify the path correctly, it could be a .csv as well
    APPEND                     -- or TRUNCATE, based on the requirement
    INTO TABLE oracle_table
    FIELDS TERMINATED BY ','   -- or whatever delimiter the input file uses
    OPTIONALLY ENCLOSED BY '"'
    (field1, field2, field3)
    c.     Now run the sqlldr utility of Oracle from the OS command prompt:
    sqlldr username/password control=filename.ctl
    d.     The data can be verified by selecting the data from the table.
    Select * from oracle_table;
    2)     External Table:
    a.     Place the flat file (.txt or .csv) in the desired location.
    abc.csv
    1,one,first
    2,two,second
    3,three,third
    4,four,fourth
    b.     Create a directory
    create or replace directory ext_dir as '/home/rene/ext_dir'; -- path where the source file is kept
    c.     After granting appropriate permissions to the user, we can create external table like below.
    create table ext_table_csv (
      i number,
      n varchar2(20),
      m varchar2(20)
    )
    organization external (
      type oracle_loader
      default directory ext_dir
      access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
      )
      location ('abc.csv')
    )
    reject limit unlimited;
    d.     Verify data by selecting it from the external table now
    select * from ext_table_csv;
    The external tables feature is a complement to the existing SQL*Loader functionality.
    It allows you to:
    •     Access data in external sources as if it were in a table in the database.
    •     Merge a flat file with an existing table in one statement.
    •     Sort a flat file on its way into a table you want compressed nicely.
    •     Do a parallel direct-path load without splitting up the input file.
    Shortcomings:
    •     External tables are read-only.
    •     No data manipulation language (DML) operations or index creation are allowed on an external table.
    Using SQL*Loader you can:
    •     Load the data from a stored procedure or trigger (insert is not sqlldr).
    •     Do multi-table inserts.
    •     Flow the data through a pipelined PL/SQL function for cleansing/transformation.
    Comparison for data loading:
    To make the loading operation faster, the degree of parallelism can be set to any number, e.g. 4.
    When you create the external table with a degree of parallelism, the database divides the file to be read among four processes running in parallel. This parallelism happens automatically, with no additional effort on your part, and is really quite convenient. To parallelize this load using SQL*Loader, you would have to manually divide your input file into multiple smaller files.
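    (A rough sketch of such a parallel direct-path load from the external table; the target table name and the degree of 4 are illustrative and assume the external table created above:)
    -- Enable parallel DML for the session, then insert directly from the external table
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO oracle_table t
      SELECT /*+ PARALLEL(e, 4) */ * FROM ext_table_csv e;
    COMMIT;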
    Conclusion:
    SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table. However, we can always copy the data from external tables to Oracle Tables using DB links.

    Please let me know your views on this.
