Performance in processing 80,000 records.

Hi
I am working on a module where I have to upload a file of 80,000 records, process them, and then send them to a web service.
I am uploading the file by simply parsing the request:
items = upload.parseRequest(request);
After this I traverse the entire file line by line, process each record with my logic, and save the results to a Vector.
In a second servlet I fetch these records and upload them through the web service (WSDL).
This process takes some time.
I am facing a few problems/questions here:
Question 1:
After 30 minutes or so, the browser displays "This page cannot be displayed".
While debugging this code with breakpoints, I noticed that the code is actually still executing when the browser shows that message.
Can I increase a browser setting so that it waits longer before displaying that message, so that my Java code can complete its execution?
Question 2:
I am using a Vector to store all 80,000 records in one go. Will using an ArrayList or some other collection type improve performance?
Question 3:
What if I break the Vector into parts?
i.e. instead of keeping one single Vector of 80,000 records, I store 10,000 records each in different Vectors and then process them separately.
Please comment.
Thanks.

money321 wrote:
Question 1:
After 30 minutes or so, the browser displays "This page cannot be displayed".
While debugging this code with breakpoints, I noticed that the code is actually still executing when the browser shows that message.
Can I increase a browser setting so that it waits longer before displaying that message, so that my Java code can complete its execution?
It is the request timeout, which is a web server setting, not a web browser setting. Even though the request times out, the code should still continue to execute until the process finishes; you just don't get the response in your browser.
Question 2:
I am using a Vector to store all 80,000 records in one go. Will using an ArrayList or some other collection type improve performance?
Probably yes, because a Vector is thread-safe while an ArrayList is not. It is a similar situation to StringBuffer vs. StringBuilder.
Question 3:
What if I break the Vector into parts?
i.e. instead of keeping one single Vector of 80,000 records, I store 10,000 records each in different Vectors and then process them separately.
It wouldn't make much of a difference, I'd say. The biggest performance hit is the web service call, so try to save as much time as you can there. By the way, are you doing one web service call, or 80,000?
Please comment.
Thanks.
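Building on the replies above, here is a minimal sketch (class and method names are illustrative, not the poster's actual code) of one way to address both problems: the servlet hands the parsed records to a background thread and returns immediately, so the browser gets a response instead of timing out, and the records are kept in an ArrayList and sent to the web service in batches rather than one call per record.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical helper class, for illustration only.
class RecordBatchUploader {

    private static final int BATCH_SIZE = 1000;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Called from the servlet after parsing the upload; returns immediately
    // so the browser gets a response instead of waiting 30+ minutes.
    public void uploadAsync(final List<String> records) {
        executor.submit(new Runnable() {
            public void run() {
                List<String> batch = new ArrayList<String>(BATCH_SIZE);
                for (String record : records) {
                    batch.add(process(record));           // your per-record logic
                    if (batch.size() == BATCH_SIZE) {
                        callWebService(batch);            // one call per 1,000 records
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) {
                    callWebService(batch);                // send the final partial batch
                }
            }
        });
    }

    private String process(String record) { return record; }        // placeholder
    private void callWebService(List<String> batch) { /* stub */ }  // placeholder
}

The batch size and the single-thread executor are assumptions; the point is simply to return the HTTP response early and to cut the number of web service round trips. Check whether the target web service actually accepts multiple records per call before batching this way.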

Similar Messages

  • Performance while uploading 50,000 records

    Hi
    I have to create an application which reads records from a file.
    The records can number up to 50,000.
    These records are then to be processed through a web service.
    Now, I need to design an optimized solution.
    The simple design would be to read all records from the file, store them in the context, and then loop them through the web service model.
    I think there has to be a more optimal solution.
    Even ahead of performance comes the runtime memory issue (what if it falls short of holding all 50,000 records in the context at the same time?).
    How can I break up this application?
    Thanks

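    A hedged sketch of one way to keep memory bounded for this kind of load (file name, chunk size and the web service call are placeholders, not part of the original design): stream the file and send fixed-size chunks, so at most one chunk of records is ever held in memory.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedFileProcessor {

        private static final int CHUNK_SIZE = 5000;   // assumed; tune to available memory

        public static void main(String[] args) throws IOException {
            List<String> chunk = new ArrayList<String>(CHUNK_SIZE);
            BufferedReader reader = new BufferedReader(new FileReader("records.txt"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    chunk.add(line);
                    if (chunk.size() == CHUNK_SIZE) {
                        sendToWebService(chunk);   // only CHUNK_SIZE records are ever in memory
                        chunk.clear();
                    }
                }
                if (!chunk.isEmpty()) {
                    sendToWebService(chunk);       // last partial chunk
                }
            } finally {
                reader.close();
            }
        }

        // Placeholder for the actual web service model call.
        private static void sendToWebService(List<String> records) { /* stub */ }
    }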

  • How can I modify this to process 80,000 records at a time until finish.

    Hello, it's me again.
    Without a rownum limit in my cursor declaration, my record set is around 10 million records, which causes problems in our environment. Is there a way I can loop the following code and only do 80,000 at a time?
    1. process 80,000
    2. process the next 80,000
    How would I re-declare the cursor in a loop and grab the next 80,000?
    Thanks again
    Steve
    SET SERVEROUTPUT ON
    DECLARE
       CURSOR vt_mlr_cursor IS
          SELECT master_key, tn, user2
            FROM vt_mlr
           WHERE user2 IS NOT NULL AND rownum < 80001;
       USERFIELD VARCHAR2(100);
       R_count   NUMBER := 0;
       Field1    VARCHAR2(20);
       Field2    VARCHAR2(20);
       key       VARCHAR2(10);
       phone     VARCHAR2(11);
    BEGIN
       FOR vt_mlr_record IN vt_mlr_cursor
       LOOP
          BEGIN
             key       := vt_mlr_record.master_key;
             phone     := vt_mlr_record.tn;
             USERFIELD := vt_mlr_record.user2;
             Field1    := SUBSTR(vt_mlr_record.user2,12,4);
             Field2    := SUBSTR(vt_mlr_record.user2,28,4);
             UPDATE vt_mlr
                SET line_medr = Field1,
                    line_aidr = Field2
              WHERE master_key = vt_mlr_record.master_key;
             R_count := R_count + 1;
          EXCEPTION
             WHEN OTHERS THEN
                INSERT INTO temp_reject (REJECT_KEY, REJECT_TN, REJECT_VALUE)
                VALUES (key, phone, 'USER2 ' || USERFIELD);
                R_count := R_count - 1;
          END;
       END LOOP;
       COMMIT;
    EXCEPTION
       WHEN OTHERS THEN
          DBMS_OUTPUT.PUT_LINE('Error code ' || sqlcode || ' Error desc' || SUBSTR(sqlerrm,1,200));
    END;

    Add a "last_update" or "modified" column to your table. Then rerun a block like this until no rows remain to be updated:
    DECLARE
       begin_date CONSTANT DATE := SYSDATE;
       CURSOR vt_mlr_cursor IS
          SELECT master_key, tn, user2
            FROM vt_mlr
           WHERE user2 IS NOT NULL
             AND rownum < 80001
             AND NVL(modified, 'N') != 'Y';   -- or: AND (last_update IS NULL OR last_update < begin_date)
       Field1 VARCHAR2(20);
       Field2 VARCHAR2(20);
    BEGIN
       FOR vt_mlr_record IN vt_mlr_cursor
       LOOP
          Field1 := SUBSTR(vt_mlr_record.user2,12,4);
          Field2 := SUBSTR(vt_mlr_record.user2,28,4);
          UPDATE vt_mlr
             SET line_medr = Field1,
                 line_aidr = Field2,
                 modified  = 'Y'              -- or: last_update = SYSDATE
           WHERE master_key = vt_mlr_record.master_key;
       END LOOP;
       COMMIT;
    END;

  • BKPF 6000 records  and bsid 300,000 records

    Which one is better?
    1.
    LOOP bkpf (6,000 records)
             READ bsid (300,000 records)
             READ .....
             READ .....
             READ .....
             READ .....
       ENDLOOP.
    And the other thing that concerns me about (1.) is that perhaps I have to use
    LOOP bkpf (6,000 records)
             loop bsid (300,000 records) where .....
       ENDLOOP.
    2.
    LOOP bsid (300,000 records)
           READ bkpf (300,000 records)
       ENDLOOP.
    Right now my program uses (2.), but performance is quite bad: it times out (1 hour) on PRD.
    Actually I have many internal tables to read, not only bkpf, but that is not my main concern; for all of the READs I SORT the table and use BINARY SEARCH.
    Thank you in advance

    Try the below code.
    sort it_bsid by bukrs belnr gjahr.
    loop at it_bkpf.
      clear: lv_indx.
      read table it_bsid with key bukrs = it_bkpf-bukrs
                                  belnr = it_bkpf-belnr
                                  gjahr = it_bkpf-gjahr
                                  binary search.
      if sy-subrc = 0.
        lv_indx = sy-tabix.
        loop at it_bsid from lv_indx.
          if it_bsid-bukrs = it_bkpf-bukrs and
             it_bsid-belnr = it_bkpf-belnr and
             it_bsid-gjahr = it_bkpf-gjahr.
            << read other internal tables and do the necessary processing >>
          else.
            clear: lv_indx.
            exit.
          endif.
        endloop.
      endif.
    endloop.
    Hope this helps your time out issue.
    Thanks,
    Balaji

  • Need to commit after every 10 000 records inserted ?

    What would be the best way to commit after every 10,000 records inserted from one table to the other using the following script:
    DECLARE
       l_max_repa_id x_received_p.repa_id%TYPE;
       l_max_rept_id x_received_p_trans.rept_id%TYPE;
    BEGIN
       SELECT MAX (repa_id)
         INTO l_max_repa_id
         FROM x_received_p
        WHERE repa_modifieddate <= ADD_MONTHS (SYSDATE, -6);
       SELECT MAX (rept_id)
         INTO l_max_rept_id
         FROM x_received_p_trans
        WHERE rept_repa_id = l_max_repa_id;
       INSERT INTO x_p_requests_arch
          SELECT *
            FROM x_p_requests
           WHERE pare_repa_id <= l_max_rept_id;
       DELETE FROM x_p_requests
        WHERE pare_repa_id <= l_max_rept_id;
       COMMIT;
    END;

    1006377 wrote:
    we are moving between 5 and 10 million records from the one table to the other table and it takes forever.
    Please could you provide me with a script just to commit after every x amount of records? :)
    I concur with the other responses.
    Committing every N records will slow down the process, not speed it up.
    The fastest way to move your data (and 10 million rows is nothing, we do those sorts of volumes frequently ourselves) is to use a single SQL statement, i.e. an INSERT ... SELECT ... statement (or a CREATE TABLE ... AS SELECT ... statement as appropriate).
    If those SQL statements are running slowly then you need to look at what's causing the performance issue of the SELECT statement and tackle that issue, which may be a case of simply getting the database statistics up to date, applying a new index to a table, or re-writing the select statement to tackle the query in a different way.
    So, deal with the cause of the performance issue; don't try and fudge your way around it, which will only create further problems.

  • Need to post Full Load data (55,000 records) to the target system.

    Hi All,
    We are getting the data from the SAP HR system and we need to post this data to the partner system, so we configured a Proxy (SAP) to File (Partner) scenario. We need to append the data of each message to the target file. Since this is a very critical interface, we have used dedicated queues. The scenario is working fine in D. When the interface was transported to Q, they tested it with a full load, i.e. 55,000 messages. All messages were processed successfully in the Integration Engine, but processing in the Adapter Engine took nearly 37 hours. We need to post all 55,000 records within 2 hours.
    The design of this interface is simple: we use a direct mapping and the size of each message is 1 KB, but we need to append all messages to one file on the target side. We are using the Advantco sFTP adapter as the receiver and a proxy as the sender.
    Could you please suggest a solution to process all 55,000 messages within 2 hours?
    Thanks,
    Soumya.

    Hi Soumya,
    I understand your scenario as: HR data has to be sent to a third-party system once a day. I guess they are synchronizing employee (55,000) data in the third-party system with SAP HR data daily.
    I would design this scenario as follows:
    I would ask an ABAPer to write an ABAP program which runs at 12:00, picks up the 55,000 records from the SAP HR tables and places them in one file. That file will be placed in the SAP HR file system (you can see it using AL11). At 12:30, a PI file channel will pick up the file and transfer it to the third-party target system as it is, without any transformation: a file-to-file pass-through scenario (no ESR objects). Then ask the target system people to take the file and run their program (they should have some SQL routines); that SQL program will insert these records into the target system tables.
    If the 55,000 records make a huge file on the SAP HR system, ask the ABAPer to split it into parts. PI will pick them up in sequence based on the file name.
    In this approach, I would ask both the SAP HR (sender) and third-party (target) system people to be flexible. Otherwise, I would say it is not technically possible with the current PI resources. In my opinion, PI is middleware, not a system in which huge computations can be done. If messages are coming from different systems, then collecting them in the middleware makes sense. In your case, collecting a large number of messages from a single system at high frequency is not advisable.
    If the third-party target system people are not flexible, then go for a File to JDBC scenario. Ask the SAP HR ABAPer to split the input file into more files (10-15; your PI system should be able to handle that). At the receiver JDBC side, use native SQL. You need a Java mapping to construct the SQL statements in PI. Don't convert the flat file to the JDBC XML structure; in your case PI cannot handle a huge XML payload.
    You should note that a hardware upgrade is very difficult (you need a lot of approvals depending on your client's process) and very costly. In my experience a hardware upgrade takes 2-3 months.
    Regards,
    Raghu_Vamsee

  • Not able to update more than 10,000 records in CT04 for a characteristic

    Hi all,
    We are not able to update more than 10,000 records in CT04 for a certain characteristic.
    Is there any possible way to do this?
    Please advise... it's a production issue.
    Thanks.

    Hello,
    Please consider using a check table for the characteristic involved if you are working with a large number of assigned values.
    With a check table you can work with a huge number of values, and the performance should also improve.
    Please refer to the link
    http://help.sap.com/saphelp_erp60_sp/helpdata/en/ec/62ae27416a11d1896d0000e8322d00/frameset.htm
    Section - Entering a Check Table
    Hopefully the information helps
    Thanks
    Enda.

  • Temporary performance degradation after inserting many records

    I run into the following performance problem:
    I'm accessing Oracle from Java (JDBC).
    2 tables:
    DISPLAYS
    PAYLOADINFO
    PAYLOADINFO contains a foreign key to DISPLAYS.
    A process writes 100,000 records into PAYLOADINFO in a single transaction. The transaction is committed. Each of the records places the SAME id in the foreign key (i.e. all payload info originates from the same display).
    After the commit succeeds, a query (on other tables referencing the DISPLAYS table) is launched.
    1) With the foreign key, queries become VERY slow for a while. After this 'while', queries get back up to speed.
    2) With NO foreign key linking the two tables, subsequent queries keep their original performance.
    I read somewhere that this could be caused by unbalanced indexing. Is this the case? Where should I look for a solution?
    best regards,
    Geert

    Can you define "a while"? Are we talking minutes? hours? days?
    When are statistics gathered in the database?
    Do you see a change in the query plan? Or do you see other tasks consuming system resources?
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Script logic record more than 300,000 record

    Hi Expert,
    When I run my logic I get this error in my formula log:
    (More than 300,000 records. Details are not being logged)
    Ignoring Status
    Posting ok
    I checked my script; it pulls out 422,076 records in total.
    Does this mean I cannot go over 300,000 records?
    Is there anywhere I can set the MAX number of records a single script run can generate?
    Thanks..

    You should use
    *XDIM_MAXMEMBERS <dimension> = <number of members to be processed at a time>
    For example:
    *XDIM_MAXMEMBERS Entity = 50
    Figure out which dimension has the most members and use that one; this splits your script logic execution into sections.
    I hope that helps
    Leandro Brasil

  • Performance issue in update new records from Ekko to my ztable

    I'm making changes to an existing program.
    In this program I need to copy any new purchase orders created in EKKO-EBELN into my ztable-ebeln.
    I need to update my ztable with the new records created on that particular date.
    This is a daily running job.
    This is the code I wrote. I'm getting 150,000 records into this loop and it has a performance problem; can anyone suggest how to avoid it?
    loop at tb_ekko.
        at new ebeln.
          read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
          if sy-subrc <> 0.
            tb_ztable-ebeln = tb_ekko-ebeln.
            tb_ztable-zlimit = ' '.
            insert ztable from tb_ztable.
          endif.
        endat.
      endloop.
    Thanks
    Hema.

    Modify your code as follows:
    loop at tb_ekko.
      at new ebeln.
        read table tb_ztable with key ebeln = tb_ekko-ebeln binary search.
        if sy-subrc <> 0.
          tb_ztable_new-ebeln = tb_ekko-ebeln.
          tb_ztable_new-zlimit = ' '.
          append tb_ztable_new.
          clear tb_ztable_new.
        endif.
      endat.
    endloop.
    insert ztable from table tb_ztable_new.
    Regards,
    ravi

  • Predefined Performance Management Process and Infotype 7409 'Performance'

    Hi,
    I hope, that anyone can answer my question.
    I have configured a predefined Performance Management Process in SAP 6.0 EHP4 and finished the process with an overall assessment of the performance of the employees. However, no data has been written into Infotype 7409 "Performance". Shouldn't the data from such an assessment be written into it?
    The SAP documentation for IT7409 says the following:
    "An employee's manager can use a form to assess the performance of the employee. In a talent review meeting, the participants can confirm this performance assessment or reclassify the employee regarding his or her performance.
    For each performance assessment that is stored in the talent profile or made in the talent review meeting, the system creates a record for the Performance infotype in the background and stores the ID of the employee's central person in the record (object type CP)."
    (http://help.sap.com/erp2005_ehp_04/helpdata/EN/a7/2bbb6a72aa4812b834a5ac777f630b/frameset.htm)
    However, to be able to add a talent to a talent review meeting, the talent already needs to have an entry in IT7409.
    Hence my question: How does the IT7409 get filled?
    Kind regards
    Daniel

    It turns out that my question was in fact a known problem; it seems to have been solved with OSS note 1338701 just yesterday.
    Kind regards
    Daniel

  • BPC 7.5 NW -- Data Manager Import Fails When Loading 40,000 Records?

    Hi Experts,
    Can't believe I'm posting this, because the Data Manager component has always been one of the best parts of BPC. But since we got SP04 applied last week, every IMPORT process that I've run that has more than 40,000 records fails.
    The result logs show that the CONVERT task completes just fine, but they don't really show the LOAD task. ...Not exactly sure what's going on here. So far I've taken the following two steps to try to resolve it:
    (1.) Re-added the IMPORT package in Organize Package List from the Library to have a "fresh" version. Didn't help.
    (2.) In the "Modify Package" screens, there is a PACKAGESIZE parameter that is 40,000 by default. I was able to locate it in BI using transaction RSA1 and changed it to 10,000,000. Saved it. Tried it. Didn't help either.
    Has anyone seen this kind of behavior before?
    Thanks,
    Garrett

    Update -- This problem may now be resolved.
    I have been able to conduct test IMPORTs of 48,000, then 96,000 and then 1.7 million records. All were fine.
    It turns out that the difference is that the text files were sorted by amount in the ones that failed. They were sorted by GLAccount in column A in the ones that succeeded.
    Edit: Yep, all files loaded normally when re-sorted by GLACCOUNT, etc. on the left-hand side. Apparently, loading a lot of records sorted the other way confuses the system or something.
    Edited by: Garrett Tedeman on Nov 18, 2010 11:41 AM

  • Batch processing of file records(FTP) with validation logic to IDOC

    Hi,
    We have a scenario where we're expecting 10,000 records in XML file format, and validation has to be done against multiple R/3 tables at field level before IDoc posting.
    Can anyone suggest whether we should go for an RFC lookup or call proxies to do the mapping validation? Performance is a major concern.
    regards,
    vivian

    In my experience, doing it through a proxy is faster and easier in terms of performance and ease of use. You should go through the SDN blog
    /people/michal.krawczyk2/blog/2006/04/19/xi-rfc-or-abap-proxy-abap-proxies-with-attachments
    See if this solves your problem.
    --Nilkanth.

  • Try to insert 10.000 records, but stop at 500 records

    I am trying to insert 10,000 records into a 4-column table on Sun Solaris Oracle from a Visual Basic application, but it stops at 500 records; when I try to insert record 501, it never succeeds.
    Is there a limitation on inserts in the Oracle database?

    Hi,
    There is no such limitation in the Oracle database. The insertion process is still going on; it just looks like it is hanging. You can do one thing to trace what is happening:
    1. Add a progress bar item to your screen.
    2. Set Min = 1.
    3. Set Max = the total number of records in the source table (where the 10,000 records are).
    4. You probably have a Do While loop that inserts records into the target table. Within that loop, increase the value of the progress bar, so that the progress bar value changes while each record is inserted.
    That way you can trace whether the process is running or not.
    I think this will help you trace the process.
    N.Swaminathan

  • Update performance on a 38 million records table

    Hi all,
    I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned and I just have to update one CHAR(1 byte) field and set it to 'N'.
    The database is 10g R2 running on Unix Tru64.
    The script I created loops over a CURSOR that bulk-collects 200,000 records per pass and does a FORALL to update the table by ROWID.
    The problem is that in the performance tests this method took about 20 minutes to update 1 million rows, which means it should take about 13 hours to update the whole table.
    My question is: is there any way to improve the performance?
    The Script:
    DECLARE
       CURSOR C1 IS
          SELECT ROWID
            FROM RTG.TCLIENTE_RTG;
       TYPE rowidtab IS TABLE OF ROWID;
       d_rowid rowidtab;
       v_char  CHAR(1) := 'N';
    BEGIN
       OPEN C1;
       LOOP
          FETCH C1
          BULK COLLECT INTO d_rowid LIMIT 200000;
          FORALL i IN d_rowid.FIRST..d_rowid.LAST
             UPDATE RTG.TCLIENTE_RTG
                SET CLI_VALID_IND = v_char
              WHERE ROWID = d_rowid(i);
          COMMIT;
          EXIT WHEN C1%NOTFOUND;
       END LOOP;
       CLOSE C1;
    END;
    Kind Regards,
    Fabio

    I'm just curious... Is this a new varchar2(1) column that has been added to that table? If so, will the value of this column remain 'N' for the majority of the rows in that table in the future?
    Has this column been introduced specifically to support one of the business functions in your application, i.e. will it not be used everywhere the table is currently in use?
    If your answers to the above questions are mostly yes, then why did you choose to add a column that needs to be initialized to 'N' for all existing rows?
    Why not add a new single-column table for this requirement, the single column being the pk column(s) of the existing table? The meaning would be: if a pk is present in this new table, then the "CLI_VALID_IND" for this client is 'yes'; if it is not present, then the "CLI_VALID_IND" for this client is 'no'.
    That way you only have to add the new table and do nothing more. Of course the SQL statements supporting the business logic of this new function will have to use, and maybe join, this new table. But is that really a huge disadvantage?
