0FI_AR_4 Initialization - millions of records

Hi,
We are planning to initialize the 0FI_AR_4 DataSource, for which there are millions of records in the source system.
While checking in the Quality system we realised that extracting just a single fiscal period already takes hours, and in the Production system we have data for the last 4 years (about 40 million records).
The trace results (ST05) show that most of the time is spent fetching data from the BKPF_BSID / BKPF_BSAD view.
I can see an index available on tables BSID/BSAD - Index 5, "Index for BW extraction" - which is not yet created on the database.
This index has 2 fields: BUKRS and CPUDT.
I am not sure whether this index will help in extracting the data.
What can be done to improve the performance of this extraction so that the initialization of 0FI_AR_4 completes in a reasonable time?
Appreciate your inputs, experts.
Regards,
Vikram.
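(For reference, activating that dictionary index on the database - via SE14 or a transport - essentially creates a secondary index along the lines of the sketch below. The exact index name, the leading MANDT column and the storage clauses depend on the installation, so treat this purely as an illustration.)
CREATE INDEX "BSID~5" ON BSID (MANDT, BUKRS, CPUDT);
Whether it pays off depends on the extractor's selection, but this standard "Index for BW extraction" exists precisely because the FI line-item extractors restrict on CPUDT, so on large BSID/BSAD tables it usually makes a noticeable difference.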

We are planning to change the existing FI_AR line-item load from a full load of the current fiscal year to delta. As of now, FI_AR_4 is loaded full from R/3 for certain company codes and fiscal year/period 2013001 - 2013012. Now the business wants historical data, and going forward the extractor should bring only the changes (delta).
We would like to perform the steps below:
1. Initialisation without data transfer on comp_code and FY/period 1998001 - 9999012
2. Repair full loads for all the historical data, fiscal year/period by fiscal year/period (1998001-1998012, 1999001-1999012, ... current year 2013001 - 2013011), up to PSA
3. Load these to the DSO
4. Activate the requests
5. Now do a delta load from R/3 to BW up to PSA for the new selection 1998001 - 9999012
6. Load up to the DSO
7. Activate the load
Please let me know whether the above steps will bring in all the data for the FI_AR_4 line items, and whether any data could be missing once I do the delta load after the repair full loads.
Thanks

Similar Messages

  • How can I read millions of records and write them to a *.csv file

    I have to return some set of column values (based on the current date) from the database (could be millions of records). DBMS_OUTPUT can accommodate only 20,000 records. (I am retrieving through a procedure using a cursor.)
    I should write these values to a file with the extension .csv (comma-separated file). I thought of using UTL_FILE, but I heard there is some restriction on the number of records even in UTL_FILE.
    If so, what is the restriction? Is there any other way I can achieve it (BLOB or CLOB)?
    Please help me in solving this problem.
    I have to write to the .csv file the values from the cursor, which I have concatenated with ","; right now it returns the values to the screen (using DBMS_OUTPUT, temporarily). I have to redirect the output to the .csv,
    and the .csv should be in some physical directory, and I have to upload (FTP) the file from that directory to the website.
    Please help me out.

    Jimmy,
    Make sure that UTL_FILE is properly installed. Make sure that the utl_file_dir parameter is set in the init.ora file and that the database has been restarted so that it takes effect. Make sure that you have sufficient privileges granted directly, not through roles, including privileges on the file and directory that you are trying to write to. Then add the exception block below to your procedure to narrow down the source of the exception, and test again. If you still get an error, please post a cut and paste of the exact code that you ran and any messages that you received.
    exception
        when utl_file.invalid_path then
            raise_application_error(-20001,
                'INVALID_PATH: File location or filename was invalid.');
        when utl_file.invalid_mode then
            raise_application_error(-20002,
                'INVALID_MODE: The open_mode parameter in FOPEN was invalid.');
        when utl_file.invalid_filehandle then
            raise_application_error(-20003,
                'INVALID_FILEHANDLE: The file handle was invalid.');
        when utl_file.invalid_operation then
            raise_application_error(-20004,
                'INVALID_OPERATION: The file could not be opened or operated on as requested.');
        when utl_file.read_error then
            raise_application_error(-20005,
                'READ_ERROR: An operating system error occurred during the read operation.');
        when utl_file.write_error then
            raise_application_error(-20006,
                'WRITE_ERROR: An operating system error occurred during the write operation.');
        when utl_file.internal_error then
            raise_application_error(-20007,
                'INTERNAL_ERROR: An unspecified error in PL/SQL.');
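    For completeness, here is a minimal sketch of the CSV write itself with UTL_FILE (the directory object, file name, table and columns are purely illustrative; on releases that still use utl_file_dir, the first argument can also be a path listed there):
    declare
        l_file utl_file.file_type;
    begin
        -- 'CSV_DIR' is an assumed directory object, e.g. CREATE DIRECTORY csv_dir AS '/tmp'
        l_file := utl_file.fopen('CSV_DIR', 'output.csv', 'w', 32767);
        for r in (select employee_id, last_name, salary from employees) loop
            -- one comma-separated line per row
            utl_file.put_line(l_file,
                r.employee_id || ',' || r.last_name || ',' || r.salary);
        end loop;
        utl_file.fclose(l_file);
    exception
        when others then
            if utl_file.is_open(l_file) then
                utl_file.fclose(l_file);
            end if;
            raise;
    end;
    /
    As far as I know there is no hard limit on the number of lines UTL_FILE can write; the relevant limit is the maximum line size (32767 bytes per line), so writing millions of rows this way is mainly a question of runtime.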

  • Need help / advice: managing millions of records daily - please help me :)

    Hi all,
    I have only 2 years of experience as an Oracle DBA, and I need advice from the experts :)
    To begin with, the company I work for has decided to save about 40 million records daily into a single table (a user table) in our Oracle database. These records are to be imported daily from CSV or XML feeds into that one table.
    This is a project that needs:
    - a performance study
    - a study of what is required in terms of hardware
    As the market leader, Oracle is the only DBMS that could support this volume of data, but what is Oracle's limit in this case? Can Oracle support and manage 40 million records daily, and for many years? We need all the data in this table; we cannot assume that after some period the history is no longer needed. We have to keep all the data, without purging the history, for many years - imagine 40 million records a day, for many years!
    Then we need to consolidate this table into different views (or maybe materialized views) for each department and business unit inside the company - another project that needs study!
    My questions are (using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    1. Can Oracle support and properly manage 40 million records daily, and for many years?
    2. Performance: which solutions and techniques could I use to improve the performance of
    - loading 40 million records daily from CSV or XML files?
    - consolidating and managing the different views / materialized views from this big table daily?
    3. What is required in terms of hardware, features and technologies (maybe clusters...)?
    I hope the experts can help and advise me. Thank you very much for your attention :)

    1. Can Oracle support and properly manage 40 million records daily, and for many years? Yes.
    2. Which solutions and techniques could I use to improve the performance? Send me your email and I can send you a performance tuning methodology PDF.
    You can see my email on my profile.
    - Daily loading of 40 million records from CSV or XML files? Direct load.
    - Daily consolidation / managing of different views / materialized views from this big table? You can use table partitions, one partition for each day.
    Regards,
    Francisco Munoz Alvarez
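    A rough sketch of those two suggestions combined - one partition per day plus a direct-path load - is shown below. All object names are illustrative, and note that automatic interval partitioning needs 11g; on the 10.2 release mentioned above the daily partitions would have to be pre-created or added by a scheduled job.
    -- one partition per day
    CREATE TABLE daily_feed (
        feed_date DATE NOT NULL,
        payload   VARCHAR2(4000)
    )
    PARTITION BY RANGE (feed_date)
    INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
    (PARTITION p_initial VALUES LESS THAN (DATE '2010-01-01'));
    -- direct-path insert from an external table mapped onto the CSV feed
    INSERT /*+ APPEND */ INTO daily_feed (feed_date, payload)
    SELECT feed_date, payload
    FROM   daily_feed_ext;
    COMMIT;
    The materialized views for the departmental consolidations can then be refreshed partition by partition, which keeps the daily maintenance window predictable.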

  • Having millions of records in a table, how can we reduce the execution time?

    We have developed a report that runs as a monthly background job and takes eighteen hours, because the tables involved contain millions of records and the program also uses loops. Could you please help me: how can I read the records in chunks of a million and process them in parallel to reduce the runtime?

    Moderator message - Welcome to SCN.
    Please search the forums before asking a question.
    Also, please read "The Forum Rules of Engagement", "How to post code in SCN, and some things NOT to do..." and "Asking Good Questions in the Forums to get Good Answers" (/people/rob.burbank/blog/2010/05/12/asking-good-questions-in-the-forums-to-get-good-answers) before posting again.
    Thread locked.
    Rob

  • How to update millions of records in a table

    I have a table which contains millions of records.
    I want to update and commit periodically, say after every 10,000 records. I don't want to do it in one stroke, as I may end up with rollback segment issue(s).
    Any suggestions please!
    Thanks in Advance

    Group your updates.
    1.) Look for a good grouping criterion in your table; an index on it is recommended.
    2.) Create a PL/SQL cursor with the grouping criterion in the WHERE clause:
    cursor cur_updt (p_crit_id number) is
        select * from large_table
        where crit_id > p_crit_id;
    3.) Now you can commit all your updates in a serial loop.
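    A minimal sketch of that pattern (table, column and criterion names are illustrative): fetch the keys in batches of 10,000 with BULK COLLECT and commit after each batch.
    declare
        cursor cur_updt is
            select rowid as rid
            from   large_table
            where  status = 'OLD';          -- the grouping criterion
        type t_ridtab is table of rowid index by pls_integer;
        l_rids t_ridtab;
    begin
        open cur_updt;
        loop
            fetch cur_updt bulk collect into l_rids limit 10000;
            exit when l_rids.count = 0;
            forall i in 1 .. l_rids.count
                update large_table
                   set status = 'NEW'
                 where rowid = l_rids(i);
            commit;                         -- commit after every 10,000 rows
        end loop;
        close cur_updt;
    end;
    /
    Committing inside the loop keeps the undo/rollback usage small; for restartability, the criterion should be written so that already-updated rows are no longer selected (as above, where the update itself changes the status).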

  • What's the best way to delete 2.4 million records from a table?

    We have two tables: one is the production table and the other is a temp table whose data we want to insert into the production table. The temp table has 2.5 million records, while the production table has billions of records. What we want to do is simply delete the records that already exist and then insert the remaining records from the temp table into the production table.
    Can anyone guide what's the best way to do this?
    Thanks,
    Waheed.

    Waheed Azhar wrote:
    The production table is live and data is being appended to it on a random basis. If I go to insert the data from the temp table into the prod table, a PK violation exception occurs because a record already exists in the prod table that we are about to insert from temp.
    If you really just want to insert the records and don't want to update the matching ones, and you're already on 10g, you could use the "DML error logging" facility of the INSERT command, which logs all failed records but succeeds for the remaining ones.
    You can create a suitable exception table using the DBMS_ERRLOG.CREATE_ERROR_LOG procedure and then use the "LOG ERRORS INTO" clause of the INSERT command. Note that you can't use the "direct-path" insert mode (APPEND hint) if you expect to encounter UNIQUE constraint violations, because these can't be logged and cause the direct-path insert to fail. Since this is a "live" table you probably don't want to use the direct-path insert anyway.
    See the manuals for more information: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9014.htm#BGBEIACB
    Sample taken from 10g manuals:
    CREATE TABLE raises (emp_id NUMBER, sal NUMBER
       CONSTRAINT check_sal CHECK(sal > 8000));
    EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');
    INSERT INTO raises
       SELECT employee_id, salary*1.1 FROM employees
       WHERE commission_pct > .2
       LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;
    SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;
    ORA_ERR_MESG$                                  ORA_ERR_TAG$   EMP_ID   SAL
    ORA-02290: check constraint (HR.SYS_C004266)   my_bad         161      7700
    violated
    If the number of rows in the temp table is not too large and you have a suitable index on the large table for the lookup, you could also try a NOT EXISTS clause in the INSERT command:
    INSERT INTO <large_table>
    SELECT ...
    FROM TEMP A
    WHERE NOT EXISTS (
    SELECT NULL
    FROM <large_table> B
    WHERE B.<lookup> = A.<key>
    );
    But you need to check the execution plan, because a hash join using a full table scan on the <large_table> is probably something you want to avoid.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • What would be the best approach to migrate millions of records from an on-premises SQL Server to Azure SQL DB?

    Team,
    In our project we have a data migration requirement. We have the following scenario, and I would really appreciate any suggestions from you all on the implementation part of it.
    Scenario:
    We have millions of records to be migrated to the destination SQL database after some transformation.
    The source SQL Server is on premises in the partner's domain and the destination server is in Azure.
    Can you please suggest what would be the best approach to do so?
    thanks,
    Bishnu
    Bishnupriya Pradhan

    You can use SSIS itself for this.
    Have batch logic that identifies data batches within the source, and then include data flow tasks to transfer the data to Azure. The batch size should be chosen based on the available buffer memory, the number of tasks executing in parallel, etc.
    You can use an ODBC or ADO.NET connection to connect to Azure.
    http://visakhm.blogspot.in/2013/09/connecting-to-azure-instance-using-ssis.html
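    One simple way to derive the batches on the source side (table and column names are illustrative) is to carve the clustering key into ranges and feed each range to one iteration of the package's loop:
    -- derive, say, 100 key ranges; each range drives one data flow execution
    SELECT batch_no,
           MIN(id) AS range_from,
           MAX(id) AS range_to
    FROM (
        SELECT id,
               NTILE(100) OVER (ORDER BY id) AS batch_no
        FROM   dbo.source_table
    ) AS t
    GROUP BY batch_no
    ORDER BY batch_no;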
    Please Mark This As Answer if it solved your issue
    Please Vote This As Helpful if it helps to solve your issue
    Visakh
    My Wiki User Page
    My MSDN Page
    My Personal Blog
    My Facebook Page

  • WebInterfaces for Millions of records - Transactional InfoCube

    Hi Gerd,
    Could you please suggest which one I should use when dealing with millions of records - large amounts of data
    (displaying data from planning folders or the Web Interface Builder)?
    Right now I am using the Web Interface Builder for planning where the user is allowed to enter values - for millions of records, for example revenue forecast planning on sales orders.
    Thanks in advance,
    Thanks for your time,
    Saritha.

    Hello Saritha,
    Well - technically there is no big difference between using web interfaces and planning folders. All the data has to be selected from the database, processed by BPS, and the information has to be transmitted to the PC and displayed there. So both front ends should have roughly the same speed.
    Sorry, but one question - is it really necessary to work with millions of data records online? The philosophy of BPS is that you should limit the number of records you use online as much as possible - it should be an amount the user can actually handle online, i.e. manually working with every record (which is probably not possible when handling a million records). If a large number of records has to be calculated or manipulated, this should be done in a batch job - i.e. a planning sequence that runs in the background. This prevents the system from terminating the operation due to a long runtime (the usual time-out for an online transaction occurs after about 20 minutes) and also gives you more options for controlling memory use or parallelizing processes (see note 645454).
    Best regards,
    Gerd Schoeffl
    NetWeaver RIG BI

  • Fast record search among 70 million records in a database

    Hi All,
    Could you please give me some idea how I can do a fast search among 70 million records in a database? I have tried Lucene but was unable to get the desired result.
    -Roy D

    lucene? What's that?
    Don't know, but it reminds me of Lucille ;-)
    sings a certain bluesy song
    To OP:
    Could you please give me some idea how I can do a fast search among 70 million records in a database?
    First you need to give us a clearer idea of what's going on.
    Can you post your execution plan and describe your table(s), indexes, database-version etc.?
    See: How to post a SQL statement tuning request - http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0
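    For reference, producing the execution plan that is being asked for is as simple as the following (the query is just a placeholder):
    EXPLAIN PLAN FOR
    SELECT *
    FROM   big_table
    WHERE  customer_id = :b1;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Together with the table and index definitions and the database version, that output is what makes a tuning question answerable.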

  • How to DELETE millions of records. How to make it fast.

    Hi
    I need to delete about 134 million records from some tables.
    How can I make this faster? Any trick, any settings?
    I am using Oracle 9i on a Linux box.
    If I use TRUNCATE, does it also remove the objects defined on the table, like constraints, indexes etc.?
    Thanks,
    Kuldeep

    hi
    SQL> create table te as select * from all_objects;
    Table created.
    SQL> create index te_ind on te ( owner);
    Index created.
    SQL> truncate table te;
    Table truncated.
    SQL> select index_name , status from user_indexes where table_name = 'TE';
    INDEX_NAME                     STATUS
    TE_IND                         VALID
    SQL> create table ti as select * from all_objects;
    Table created.
    SQL> create index ti_ind on ti ( owner);
    Index created.
    SQL> drop table ti;
    Table dropped.
    SQL> select index_name , status from user_indexes where table_name = 'TI';
    no rows selected
    SQL>
    As the demo shows, TRUNCATE keeps the table's indexes and constraints (the index stays VALID), whereas DROP removes them together with the table.
    regards
    Taj

  • TopLink causes out-of-memory issue when millions of records need to be updated

    Hello everyone,
    I am using TopLink 9.0.4 in a batch process. The batch process reads from a temp table (the temp table has millions of records - one month's worth of data - which need to be updated). The database being used is SQL Server 2005. Below is a snippet of the code. It works for 6-7 hours and then crashes due to running out of memory:
    ExpressionBuilder expressionBuilder = new ExpressionBuilder();
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select * from tablename where field = 'done'");
    while (rs != null && rs.next()) {
        // vo is the value object built from the current ResultSet row
        // (construction omitted in the original post)
        if (updateInfo(vo, user, expressionBuilder)) {
            logger.info("updated : " + rs.getString("col_name"));
            projCount++;
        }
    }
    rs.close();
    stmt.close();
    private boolean updateInfo(ProjectVO vo, YNUser tcUser, ExpressionBuilder expressionBuilder) {
        boolean updated = false;
        try {
            // ... TopLink update of the single record ...
            updated = true;
        } catch (Exception e) {
            logger.warn("update: caused exception, " + e.getMessage());
        }
        return updated;
    }
    Edited by: user8981696 on Jan 14, 2010 1:00 PM

    Thanks for your reply.
    Please find below the answers to your suggestions/concerns:
    You seem to be using raw JDBC to select all of the records in a single result set; not sure if this may be causing a memory issue. You could try paging through the results instead.
    Ans: I have modified the code to fetch 1,000 records at a time, and I am now obtaining the ResultSet through a PreparedStatement instead of a regular Statement object.
    What type of caching are you using?
    Ans: No caching is being used. If you have some thoughts on caching, please suggest something or post some sample code. Again, there is no app server being used; it's just a regular Java (batch) process, so I don't know how to do caching in a plain Java process.
    You may also wish to try the latest 9.0.4 patch release, or try the 10.1.3 version, or the latest EclipseLink 2.0 release.
    Ans: Where can I find the latest 9.0.4 patch release?
    Any help/suggestion is really appreciated!

  • Inserting millions of records - please help!

    Hi All,
    I have a scenario where I have to query MARA and filter out some articles, then query the WLK1 table (article/site combination) and insert the records into a custom (Z) table. The result may be millions of records.
    Can anyone tell me an efficient way to insert such a large number of records? This is urgent - please help.
    Warm Regards,
    Sandeep Shenoy

    This is sample code I am using in one of my programs. You can try a similar approach and insert into the custom table with every loop pass.
    I am processing 2000 records at a time; you can decide the number and code accordingly.
      if not tb_bkpf[] is initial.
        " fetch the data from BSEG for every 2000 entries in BKPF
        " to reduce the overhead of the database extraction
        clear l_lines.
        describe table tb_bkpf lines l_lines.
        if l_lines >= 1.
          clear: l_start, l_end.
          do.
            l_start = l_end + 1.
            l_end = l_end + 2000.
            if l_end > l_lines.
              l_end = l_lines.
            endif.
            append lines of tb_bkpf from l_start to l_end to tb_bkpf_temp.
            " populate tb_bseg_tmp in the order of the database table
            select bukrs
                   belnr
                   gjahr
                   buzei
                   shkzg
                   dmbtr
                   hkont
                   matnr
                   werks
              from bseg
              appending table tb_bseg_tmp
              for all entries in tb_bkpf_temp
              where bukrs = tb_bkpf_temp-bukrs and
                    belnr = tb_bkpf_temp-belnr and
                    gjahr = tb_bkpf_temp-gjahr and
                    hkont in s_hkont.
            refresh tb_bkpf_temp.
            if l_end >= l_lines.
              exit.
            endif.
          enddo.
        endif.
      endif.

  • Performance across millions of records

    Hi,
    I have millions of records in the database. I need to retrieve these records from multiple master data tables, perform validations, and post the error messages in some format. Please let me know a way to complete the whole process within 15 minutes and without running into a short dump. I really expect the performance to be excellent.

    Hi,
    I would go for a different concept - in other words: forget it. Let's say you have 2 million records ("millions" wasn't very specific, but it could be much more). 15 minutes (the usual time-out already occurs after 10 minutes!) is 900 seconds. Divide this by 2 million -> 0.45 milliseconds per entry.
    In this time you want to select the entry and perform a check. I doubt this will be possible - you might select the entries in this time, you might run one loop, maybe one READ TABLE - but all of it together (avoiding memory problems, using only index accesses, gathering error messages...) has a small chance.
    I guess you will rather spend a lot of time and not succeed - or you have fewer entries to test than you said in the first place.
    Of course I cannot estimate the exact runtime - even if you had given the exact requirement - but just run some tests with very small numbers and see for yourself whether you can come close to the time per entry that you need.
    Regards,
    Christian

  • How to update millions of records in an Oracle database?

    How can I update millions of records in an Oracle database?
    The table has constraints and an index. How should I do this mass update? A normal UPDATE takes several hours.

    LostWorld wrote:
    How can I update millions of records in an Oracle database? The table has constraints and an index. How should I do this mass update? A normal UPDATE takes several hours.
    Please refer to Tom Kyte's answer to your question:
    [How to Update millions or records in a table|http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6407993912330]
    Kamran Agayev A. (10g OCP)
    http://kamranagayev.wordpress.com
    [Step by Step install Oracle on Linux and Automate the installation using Shell Script |http://kamranagayev.wordpress.com/2009/05/01/step-by-step-installing-oracle-database-10g-release-2-on-linux-centos-and-automate-the-installation-using-linux-shell-script/]
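    For what it's worth, the approach usually recommended in that AskTom thread is to rebuild the table instead of updating it in place. A rough sketch (all names are illustrative; indexes, constraints and grants have to be recreated on the new table afterwards):
    CREATE TABLE big_table_new
    PARALLEL NOLOGGING
    AS
    SELECT col1,
           CASE WHEN col2 = 'OLD' THEN 'NEW' ELSE col2 END AS col2,  -- the "update"
           col3
    FROM   big_table;
    DROP TABLE big_table;
    RENAME big_table_new TO big_table;
    For tables of this size, a nologging/parallel CTAS is often much faster than an UPDATE that touches every row and generates undo and redo for each of them.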

  • Database table with potentially millions of records

    Hello,
    We want to keep track of users' transaction history from the performance database. The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended only for temporary purposes and the data in these tables is deleted every few months, we lose all of the users' historical records.
    We want to keep track of the following in a table that we can query later:
    User ID      - Length 12
    Transaction  - Length 20
    Date         - Length 8
    With over 20,000 end users in production, this can translate into thousands of records being inserted into this table daily.
    What is the best way to store this type of information? Is there a specific table type designed for storing massive quantities of data? Also, over time (a few years) this table can grow to millions or even hundreds of millions of records. How can we manage that in terms of performance and storage space?
    If anyone has worked with database tables with very large numbers of records and would like to share their experiences, please let us know how we could/should structure this function in our environment.
    Best Regards.

    Hi SS
    Alternatively, you can use a cluster table. For more help, refer to the F1 help on the "IMPORT TO / EXPORT FROM DATABASE" statements.
    Or you can store the data as a file on the application server using the "OPEN DATASET, TRANSFER, CLOSE DATASET" statements.
    You can also choose to archive data older than some cut-off date.
    You can also mix these alternatives for recent and archived data.
    --Serdar [ BC ]
