Data load times

Hi,
I have a question regarding data loads. We have a process chain which includes 3 ODS objects and a cube.
Basically ODS A gets loaded from R/3, and then from ODS A the data loads into 2 other ODS objects (ODS B, ODS C) and Cube A.
When I went to the monitor screen of this load, ODS A -> ODS B, ODS C, Cube A, the total time shows as 24 minutes.
We have some other steps in the process chain where ODS B -> ODS C and ODS C -> Cube 1.
When I go to the monitor screen of these data loads, the total time shown for the loads is 40 minutes.
I am surprised, because the total run time for the chain itself is 40 minutes, and the chain also includes data extraction from R/3, ODS activations, indexes...
Can anybody throw some light on this?
Thank you all
Edited by: amrutha pal on Sep 30, 2008 4:23 PM

Hi All,
I am not asking which steps need to be included in which chain.
My question is: when I look at the process chain run time it says the total time is 40 minutes, and when I go to RSMO to check the time taken for the data load from one ODS to 3 other data targets, it also shows 40 minutes.
The process chain additionally includes ODS activation, building indexes, and extracting data from R/3.
So what times do we see when clicking on a step in the process chain and displaying its messages, and what time do we see in RSMO?
Let's take an example:
In process chain A there is a step LOAD DATA from ODS A -> ODS B, ODS C, Cube A.
When I right-click and display the messages for the successful load, it shows messages like
Job started...
Job ended...
The total time here shows 15 minutes.
When I go to RSMO for the same step it shows 30 minutes...
I am confused...
Please help me.

Similar Messages

  • How to tune data loading time in BSO using 14 rules files ?

    Hello there,
    I'm using Hyperion-Essbase-Admin-Services v11.1.1.2 and the BSO Option.
    In a nightly process using MaxL I load new data into one Essbase cube.
    In this nightly update process 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times via SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on the Oracle homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads- Supports up to 8 rules files via temporary load buffers.
    In an Older Thread John said:
    As it is version 11 why not use parallel sql loading, you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I use it in my MaxL for BSO as well? Is there a sample?
    What else is possible to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column1 --- column2 --- column3 --- column4 --- ... --- column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Following 3 out of 14 (load-) rules files to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS mentioned as a "Data" column concept, which only allows a single numeric data value?
    In this case I can't tag two or more columns as "Data fields". I can only tag one column as "Data field"; the other data columns I have to tag as "ignore fields during data load". Otherwise, when I validate the rules file, an error occurs: "only one field can contain the Data Field attribute".
    Or may I skip this error message and just try to tag all 14 fields as "Data fields" and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) like follows to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
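    For what it's worth, the reshaping sketched above (one row per account member instead of 14 data columns) is a plain unpivot. A minimal Python sketch of the idea, using made-up column names and two of the rows above:

    ```python
    # Unpivot a wide fact table (one column per measure) into the long
    # Account/Region/ID/Product/data layout that lets all measures be
    # loaded in a single pass. Column names here are illustrative only.
    WIDE_ROWS = [
        # Region, ID,  Product,  sales, cogs, discounts, amount
        ("West", "D1", "Coffee", 11001, 1322, 10789, 548),
        ("East", "S2", "Coffee", 15563, 1458, 10991, 876),
    ]
    ACCOUNTS = ("sales", "cogs", "discounts", "amount")

    def unpivot(rows, accounts):
        """Yield (account, region, id, product, value) tuples."""
        for region, id_, product, *values in rows:
            for account, value in zip(accounts, values):
                yield (account, region, id_, product, value)

    long_rows = list(unpivot(WIDE_ROWS, ACCOUNTS))
    # Each wide row becomes len(ACCOUNTS) long rows.
    ```

    In Oracle the same reshape can often be done in the view itself (UNPIVOT, or a UNION ALL of one SELECT per measure), so only the view and a single rules file need to change.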
    And the third way is to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE)
    I just want to be sure that I understand your suggestions.
    Many thanks for awesome help,
    Zeljko

  • How to find Data load time ?

    Hi,
    Where do I look for the total data load time, from data source to PSA?

    Hi Honar,
    1) Go to the monitor of the InfoPackage; in the header tab you can find the runtime of the IP.
    2) Since you are loading data from a source system into BW: in the IP header tab copy the request number, then go to the source system from which the data is loading.
    In SM37, enter the request number with the prefix BI; you will find the total run time of the job in the job log.
    Hope this helps.

  • How can data load time can be optimized using routine

    Hi,
    Data load is taking too much time. How can data load time be optimized using a routine?
    Regards,
    Vivek

    Hi Vivek,
    If you have code written, please post it, so that I can help you optimise it.
    General guidelines:
    1. Don't use a SELECT statement inside a loop.
    2. Where possible, use a READ statement with the BINARY SEARCH option and avoid a nested LOOP statement.
    3. Try to avoid joins if you are using any; instead use FOR ALL ENTRIES.
    4. When you use FOR ALL ENTRIES, make sure that your internal table is not initial.
    5. SORT and DELETE ADJACENT DUPLICATES from internal tables wherever needed.
    6. Use the CLEAR statement wherever required.
    Regards,
    Nanda.S
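    The guidelines above are ABAP-specific, but the idea behind points 2 and 5 - sort the lookup table once, then use a binary search per row instead of a linear scan inside the loop - can be sketched in Python (table contents here are invented):

    ```python
    import bisect

    # Sketch of ABAP guidelines 2 and 5: sort the lookup table once,
    # then do an O(log n) binary search per row instead of an O(n)
    # scan inside the loop.
    lookup = [(20, "IT"), (30, "Sales"), (10, "HR")]  # (deptno, name)

    lookup.sort()                     # SORT itab
    keys = [k for k, _ in lookup]     # sorted key column

    def dept_name(deptno):
        """READ TABLE ... WITH KEY ... BINARY SEARCH equivalent."""
        i = bisect.bisect_left(keys, deptno)
        if i < len(keys) and keys[i] == deptno:
            return lookup[i][1]
        return None
    ```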

  • Data Load time

    I have 12 files which are similar in size. When they were loaded on one server, the load time was pretty consistent: about 100 seconds for each of the 12 files. On the other server, however, the data load time increases dramatically for later files, i.e., 100s, 120s, 180s... The Essbase version is the same on both servers (Windows 2000). What could cause this problem? Is it something to do with server settings or Essbase settings? Thanks for your response.
    Lin

    Hi,
    Can you please specify the version of the Essbase server and the development tool (App Manager, Admin Services)? Have you checked the dense/sparse configuration?
    Grofaty

  • Data Load Times, etc.

    Hello Forums:
    I saw this website:
    http://www.s-w-h.com/index.php?language=en
    And found that a website like this loaded very quickly and
    seemed to fit almost any resolution.
    I was wondering how they made it fit dynamically, and how
    they managed to reduce load times so significantly.
    I kind of wondered if it's possible to develop a massive forum
    or online community purely in Flash. Would that be
    possible? And with lots of user interaction, can the load
    times still be cut down significantly?

    Does it load all the data while connected on the 5 GHz frequency? What is the IP address that you are getting from the router? You can check the IP address in the following manner:
    To find your own IP address, open a command prompt by clicking Start -> All Programs -> Accessories -> Command Prompt. In the black pop-up box, type "ipconfig /all", then check the IP address and Default Gateway under LAN.
    # Open the browser and in the address bar type the IP address of the Default Gateway; that will open the router setup page.
    # On that page, change the MTU size from Auto to 1400.
    # In the 2.4 GHz channel settings, set Channel Width to 20 MHz only and Channel to 6, 9, or 11, then click Save Settings.
    Then check the connectivity.

  • How to decrease the dynamic table data loading time

    Hi,
    I have a problem with a dynamic table.
    When I execute the table with a query, it takes a lot of time to load the table data (about 30 seconds for every 100 rows).
    Please help me overcome this problem.
    Thanks in advance.

    Yes, this is an Oracle application...
    We can move to another tablespace as well, but the concern is how to improve the performance of the ALTER TABLE MOVE command.
    Is there any specific parameter apart from NOLOGGING and parallel servers?
    If it is taking 8 hours, does anyone have experience with how much time NOLOGGING will save, and is there any risk in doing this in production?
    Regards

  • Problem Occurred in Data Loading (Time Stamp Error)

    Hi,
    I scheduled an InfoPackage and afterwards it shows a time stamp error in the monitor; the header status is red and the details show 0 records: a time stamp error in the source system.
    My situation is that all my transaction data sources are generic. I checked in R/3 in RSA3 and RSA6; I executed those particular data sources, they are active, and they also contain some records.
    Before loading the data I replicated all data sources, and they were all active in the BI system. But once the InfoPackage was scheduled, all data sources went into inactive status.
    I tried to execute the program RS_TRANSTRUCTURE_ACTIVATE via transaction SE38 in BW,
    but I don't have authorisation to execute this program, and if I request authorisation I have to wait 3 days.
    Please suggest another solution.
    Regards,
    Diwa (Devakar Tati Reddy)

    Hi,
    I am not sure which BW version you are using; I assume 3.5 or lower. After datasource replication you need to activate your transfer structure / InfoSource. I understand that you don't have authorization to run the SE38 program to activate the transfer structure, but you can also try to activate each transfer structure/InfoSource individually through its edit mode. This is the only way out if you do not have authorization for SE38.
    Regards
    Kishore

  • Data load time out error.

    Hi Gurus,
    We have initialized one of our infocube, its having more then 14lakhs of data. each PSA data packet(90 in number)contains 15k odd record. Out of the 90 PSA data packets,10 successeded while yesterday night load scheduling, Now I am scheduling each packet individually through manual upload by right clicking on the data packet, every time its resulting in runtime error.
    Itu2019s a highly time consuming load, so canu2019t schedule it again.
    Do you have any other solution?????
    Thanks in advance,
    Kironmoy Banerjee.

    Hi,
    Try to see which step in the package is taking the time... is it the start routines?
    Just see if there are start routines in the update rules.
    You could have reduced the size of the data package when you first loaded, but since you are already loading from PSA that is not possible now.
    If you can delete the request and reload it via PSA, this time reduce the package size to maybe 5000 or 4000; then it should load.
    You can do the settings in the InfoPackage which you are using to schedule the load: go into the InfoPackage -> Schedule -> DataS. source option...
    Change the size for delta as well as for full loads.
    Then schedule the loads again: do an init without data transfer and a full repair without selections this time.
    Thanks
    Ajeet

  • Data loading time too long

    Hi there,
    I have two InfoPackages - one for year 2003 and one for year 2004. For year 2003 I use the interval 200301-200312 and for year 2004 the interval 200401-200412 as data selection. Then I started to load data from R/3. For year 2003 we have 4.9 million records and the load took 6 hours; for year 2004 we have 5.5 million records and it took 46 hours. The interesting thing was that when I tried to use InfoPackage 2003 with the interval 200401-200412 as data selection for loading the 2004 data, it took only 7.x hours! Why? Is something wrong with the InfoPackage for 2004? Any idea or suggestion is greatly appreciated.
    Weidong
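    A quick back-of-the-envelope check of the throughputs described above (the "7.x hours" run is assumed here to be 7.5 hours) shows the anomaly is per-record speed, not data volume:

    ```python
    # Throughput (records per hour) for the three loads described above.
    loads = {
        "2003 via IP-2003": (4_900_000, 6.0),
        "2004 via IP-2004": (5_500_000, 46.0),
        "2004 via IP-2003": (5_500_000, 7.5),  # "7.x hours", assumed 7.5
    }
    rates = {name: recs / hours for name, (recs, hours) in loads.items()}
    # The 2004 InfoPackage is several times slower per record than the
    # other two runs, which points at that package's settings rather
    # than at the amount of data being loaded.
    ```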

    Hi Weidong,
    Check the processing type in both InfoPackages. Maybe one of them has "PSA and then into Data Targets" and the other has "PSA and Data Targets in parallel".
    Bye
    Dinesh

  • EC-CS Data Loading Strategy

    Hello. We are currently in the process of implementing the Enterprise Consolidation Transaction Data InfoCube (0ECCS_C01).
    We are attempting to develop a data loading strategy for the InfoCube. Since this InfoCube does not have a delta process, reloading the entire cube on a daily basis is not feasible due to the length of time. We would like to set it up to load the current month for actuals plus future periods for forecast dollars.
    Has anyone established a data loading process for their consolidated accounting InfoCube that works well and keeps data loading time to a minimum?
    Best regards,
    Lynn

    Hi,
    You could prepare two packages:
    one with which you upload all the data from previous years/months;
    a second with OLAP variables (for example 0DAT) with which you upload data only from the present day/month/year - depending on which variable you select (add this package to the chain).
    If the second package crashes, you have to repeat the procedure.
    Regards,
    Dominik

  • Two issues: activation of transfer rules and data load performance

    hi,
    I have two problems I face very often and would like to get some more info on these topics:
    1. Transfer rules activation. I just finished transporting my cubes, ETL etc. to the productive system and started filling the cubes with data. Very often during data load it turns out that transfer rules need to be activated, even though I transported them active and (I think) did not change anything after transport. Then I again create transfer rule transports on dev, transport the changes to prod, and have to execute the data load again.
    It is very annoying. What do you suggest for this problem? Activate all transfer rules again before executing the process chain?
    2. Differences between dev and prod systems in data load time.
    On the dev system (a copy of production made about 8 months ago) I checked how long it takes to extract data from the source system: about 0.5 h for 50,000 records. But when I executed the load on production it was 2 h for 200,000 records, so it was twice as slow as dev!
    I thought it would be at least as fast as the dev system. What can influence data load performance, and how can I predict it?
    Regards,
    Andrzej

    Aksik,
    1. How frequently does this activation problem occur? If it is a one-off, replicate the datasource and activate the transfer structure (although, as you know, activation of the transfer structure should in general happen automatically after transport of the object).
    2. One reason for the time difference is environmental: in a production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case both systems are actually performing equally. You said that in dev 50,000 records took half an hour, and in production 200,000 records took 2 hours, so there are more records in production and it took proportionally longer. If it is really causing a problem then you have to do some performance tuning.
    Hope this helps
    Thanks
    Sat
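    A quick records-per-hour check of the numbers in the question supports the point that the two systems are performing equally:

    ```python
    # Records-per-hour check for the dev vs. production comparison above.
    dev_rate = 50_000 / 0.5     # records per hour on dev
    prod_rate = 200_000 / 2.0   # records per hour on production
    # Both come out to 100,000 records/hour: the systems perform
    # identically; production simply moved four times the data.
    ```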

  • Route and Loading Time

    All Gurus:
    I am trying to create a route for a specific shipping point (one plant) where the loading time is different for every route.  I am looking at configuration for defining shipping point:
    IMG > Enterprise Structure > Definition > Logistics Execution > Define, copy, delete, check shipping point.  Under "Determine Times", there is "Determine load. times" and 4 options:
         No loading time determination
    A     Route dependent
    B     Route independent (Route '      ')
    C     Default from shipping point
    If I select "C" and specify 11 days for "Loading Time with Days", the system will always default 11 days on the sales order schedule lines between the "Loading Date" and the "Goods Issue Date".
    What I need is more flexibility in the number of days of loading time. If I select A or B, can you please let me know what that would do and where I can configure it so that the system gives me more flexibility in the number of loading days?
    Thanks in advance,
    Ta

    Tal -
    When the loading date and GI date are the same, you can change the GI date by adding a number of days in the field "TransLdTm". This time gets calculated as loading time. In each route definition this may vary according to requirement; this gives you a different transit time determination for a single shipping point.
    In configuration do the settings as below, e.g.:
    TransitTime               10
    Trav.dur.
    TransLdTm.               1
    TrLeadTimeHrs
    With this, the system will calculate backwards from the delivery date to the customer based on the route transit time:
    material availability date + loading time (1) + transit time (10) - this is the calculation.
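    The backward scheduling described above is simple date arithmetic. A minimal Python sketch, assuming whole days and ignoring the factory calendar (which the real system respects):

    ```python
    from datetime import date, timedelta

    # Backward scheduling sketch: start from the requested delivery
    # date and subtract transit time and loading time to find the
    # earlier dates. Real SAP scheduling also respects factory
    # calendars, shifts, and pick/pack time (ignored here).
    TRANSIT_DAYS = 10   # route transit time
    LOADING_DAYS = 1    # TransLdTm

    delivery_date = date(2008, 9, 30)
    goods_issue_date = delivery_date - timedelta(days=TRANSIT_DAYS)
    loading_date = goods_issue_date - timedelta(days=LOADING_DAYS)
    material_availability_date = loading_date  # no pick/pack time assumed
    ```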
    Hope this answers your question. Let me know if you need anything else on this.
    Regards
    Amol

  • How to load date and time from text file to oracle table through sqlloader

    hi friends
    I need you to show me what I missed in loading date and time from a text file into an Oracle table through SQL*Loader.
    This is my data in this path (c:\external\my_data.txt):
    7369,SMITH,17-NOV-81,09:14:04,CLERK,20
    7499,ALLEN,01-MAY-81,17:06:08,SALESMAN,30
    7521,WARD,09-JUN-81,17:06:30,SALESMAN,30
    7566,JONES,02-APR-81,09:24:10,MANAGER,20
    7654,MARTIN,28-SEP-81,17:24:10,SALESMAN,30
    My table in the database is emp2:
    create table emp2 (empno number,
                      ename varchar2(20),
                      hiredate date,
                      etime date,
                      ejob varchar2(20),
                      deptno number);
    The control file code is in this path (c:\external\ctrl.ctl):
    load data
    infile 'C:\external\my_data.txt'
    into table emp2
    fields terminated by ','
    (empno, ename, hiredate, etime, ejob, deptno)
    This is the error:
    C:\>sqlldr scott/tiger control=C:\external\ctrl.ctl
    SQL*Loader: Release 10.2.0.1.0 - Production on Mon May 31 09:45:10 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 5
    C:\>
    Any help is greatly appreciated.
    thanks
    Edited by: user10947262 on May 31, 2010 9:47 AM

    load data
    infile 'C:\external\my_data.txt'
    into table emp2
    fields terminated by ','
    (empno, ename, hiredate, etime, ejob, deptno)
    Try
    load data
    infile 'C:\external\my_data.txt'
    into table emp2
    fields terminated by ','
    (empno, ename, hiredate, etime "to_date(:etime,'hh24:mi:ss')", ejob, deptno)
    this is the error :
    C:\>sqlldr scott/tiger control=C:\external\ctrl.ctl
    SQL*Loader: Release 10.2.0.1.0 - Production on Mon May 31 09:45:10 2010
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 5
    C:\>
    That's not an error; you can see any actual errors in the log and bad files.
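    As an aside, the reason the explicit mask matters: a time-only string such as 09:14:04 does not match the default date format, so it must be parsed with an explicit mask - which is exactly what to_date(:etime,'hh24:mi:ss') does in the control file above. The same parse, sketched in Python:

    ```python
    from datetime import datetime

    # Parse one line of the data file above. 'hh24:mi:ss' in Oracle
    # corresponds to '%H:%M:%S' in Python's strptime.
    row = "7369,SMITH,17-NOV-81,09:14:04,CLERK,20".split(",")
    hiredate = datetime.strptime(row[2], "%d-%b-%y")   # date column
    etime = datetime.strptime(row[3], "%H:%M:%S")      # time column

    # A DATE value always carries a date part too; a time parsed on
    # its own lands on a default day (in Python, 1900-01-01).
    ```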

  • Takes Long time for Data Loading.

    Hi All,
    Good morning. I am new to SDN.
    Currently I am using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields. I am using around 40 fields in my report.
    Can I hide the remaining fields at the datasource level itself (transaction RSA6)?
    Currently data loading takes a long time to load the data from PSA to ODS (ODS 1).
    Also, right now I am pulling some data from another ODS (ODS 2) as a lookup. It takes a long time to update the data in the active data table of the ODS.
    Can you please suggest how to improve the data loading performance in this case?
    Thanks & Regards,
    Siva.

    Hi,
    Yes, you can hide them: just check the hide box for those fields. Are you on BI 7.0 or BW? Either way, is the number of records huge?
    If so, you can split the records and execute; I mean use the same InfoPackage, but execute it with different selections.
    Check in ST04 whether there are any locks or lock waits. If so, go to SM37 and check whether any long-running job is there, then check whether that job is progressing: double-click on the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
    Also check the system log in SM21 and short dumps in ST22.
    To improve performance, you can try to increase the virtual memory or servers, if possible; that will increase the number of work processes, since if many jobs run at a time there will otherwise be no free work processes to proceed.
    Regards,
    Debjani
