Saving 100,000 records in a loop too slow

I have an explicit cursor, a FOR loop over it, and an UPDATE statement inside the loop. It works, but it performs 100,000 individual updates, which is bad for performance. Any ideas on how to speed it up?
My loop is:
FOR code_rec IN cur_codes_table
LOOP
    code_loop := code_loop + 1;
    UPDATE <table name> T1
    SET    T1.error_indicator = 'Err?' || code_rec.REC_ROWNUM
    WHERE  T1.REC_ROWNUM = code_rec.REC_ROWNUM;
END LOOP;

You haven't really provided us with enough information to go on, but I would suggest removing the FOR LOOP logic altogether and using a single UPDATE statement.
If you can provide CREATE / INSERT scripts to describe your situation, as well as the desired result, the folks here can probably develop a solution for you.
Here is an example; it may or may not apply to your situation:
UPDATE  TABLE_T T1
SET     ERROR_INDICATOR = (
                SELECT  'Err?'||MT.REC_ROWNUM
                FROM    MAIN_TABLE MT
                WHERE   T1.REC_ROWNUM = MT.REC_ROWNUM
        )
WHERE   EXISTS (
                SELECT  NULL
                FROM    MAIN_TABLE MT
                WHERE   T1.REC_ROWNUM = MT.REC_ROWNUM
        );
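If row-by-row processing really is needed in PL/SQL (for example, to apply per-row logic that cannot be expressed in plain SQL), bulk binding is the usual compromise. This is only a minimal sketch, reusing the same hypothetical MAIN_TABLE / TABLE_T names as the example above:

DECLARE
    -- collect the driving keys once, then apply them in one bulk DML call
    TYPE t_rownum_tab IS TABLE OF MAIN_TABLE.REC_ROWNUM%TYPE;
    l_rownums t_rownum_tab;
BEGIN
    SELECT REC_ROWNUM
    BULK COLLECT INTO l_rownums
    FROM   MAIN_TABLE;

    -- FORALL sends the whole collection to the SQL engine in one pass
    -- instead of 100,000 separate context switches
    FORALL i IN 1 .. l_rownums.COUNT
        UPDATE TABLE_T T1
        SET    T1.ERROR_INDICATOR = 'Err?' || l_rownums(i)
        WHERE  T1.REC_ROWNUM = l_rownums(i);

    COMMIT;
END;
/

Even so, when no per-row PL/SQL logic is involved, the single correlated UPDATE (or a MERGE) will normally be the fastest option.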

Similar Messages

  • RECORDING ERROR MESSAGE-'DISC TOO SLOW'

    Every once in a while, after I have recorded around 12+ tracks, I get a "Disc too slow" message with a black block at the end of the recording. The black block, when played back, is just a loud noise. What is this, and what setting might I have wrong? I have the new iMac 27" with quad cores. The cores meter is barely moving and I hardly have any processors on. HELP.

    sand box wrote:
    Erik.......do you feel it is better to record direct to my external HD or the Imac HD?
    External. IF it is Firewire. USB is less suitable, but it'll do too. The startup disk is also busy handling RAM for virtual memory, so it is doubly stressed when recording to it, and will 'refuse' to record, or drop out, or say "disk too slow" sooner.
    What are the best settings for buffer and the other recording parameters?
    Depends on your machine; rule of thumb: keep it as low as possible when recording for minimum latency (32, 64 or 128 or 256), and set it higher when playing back, (256-1024) to avoid "system overload".
    The best recording format imo is 24 bit/44.1 KHz (or 48 KHz). If you record 10+ tracks that are all subtle acoustic recordings, it may ever so slightly improve sound/mix quality to go with 24/96 KHz, but that will also double the overhead for the CPU. And imo the difference even then is hardly perceptible, save for the most highly trained professional ears.
    I record basic rock with 16 tracks or less and not overly complicated effects. Also I still don't understand what "flattening" of a track means. Thanx.
    Okay, so 24/44.1 is enough for that. Flattening a track means that you make a new audio file (solo the track, and bounce) that includes the plugin effects you applied to it. It is an (old fashioned) way of freeing up CPU (because you can switch off the plugins afterwards and use the 'flattened' audio file).
    Freezing provides a better alternative for freeing up CPU though. Look it up in the manual, it is a simple and effective feature.
    regards, Erik.

  • APO - SNP Alert Macros running too slow

    Hi,
    We have created alert macros which run for the next 27 weeks for 38,000 product-location combinations. They are running too slowly.
    We are deleting alerts externally using program /sapapo/amon_reorg and then running this macro with "ADD". This macro slows down once it starts filling up the alert table with 100,000 records. What are the best practices for writing the macro so that it runs fast?
    Thanks.

    Is it necessary to write all the alerts? What I mean is, do you filter the results when you view them through the Alert Monitor?
    If your alert profile has a minimum threshold set for an alert type, then you can read this threshold in the macro and only write alerts that fall below the threshold.
    e.g.
      Step: Get Threshold Values : ( 1 Iterations :Initial;Initial )
        Action Box: Get Threshold Value
          LAYOUTVARIABLE_SET( 'Alert_Thresh' ;
          ALERT_PROFILE_THRESH( 'SDP_ALERT_PROFILE' ; '4100' ; 'I' ) )
    Where SDP_ALERT_PROFILE is the profile that contains the minimum threshold you want to use and 4100 is the alert type. The 'I' is for Information; you can use either 'W' for Warning or 'E' for Error as well.
    Then use the variable Alert_Thresh to check if your value falls below this, only then write the alert.
    Regards
    Ian

  • Extraction gets too slow at night (finding the bottleneck)

    Hi all,
    I'm loading historic data from R/3 to BW. The extraction process usually takes 15 minutes per data package, but at night some packages take about 1 hour.
    Can you help me understand in which step of the extraction the gap is?
    Here is a piece of the extraction log from R/3, where you can see the 1-hour gap:
    05:19:12 Call up of customer enhancement BW_BTE_CALL_BW2
    05:19:12 Result of customer enhancement: 100.000 records
    05:19:12 Call up of customer enhancement EXIT_SAPLRSAP_0
    05:19:12 Result of customer enhancement: 100.000 records
    05:19:14 Asynchronous sending of data package 88 in task
    05:19:16 tRFC: Data package 87, TID = AC14030C07D946E10C
    05:19:16 tRFC: Begin 07.09.2007 04:29:35, end 07.09.2007
    06:46:15 Call up of customer enhancement BW_BTE_CALL_BW2
    06:46:15 Result of customer enhancement: 100.000 records
    06:46:15 Call up of customer enhancement EXIT_SAPLRSAP_0
    06:46:15 Result of customer enhancement: 100.000 records
    06:46:17 Asynchronous sending of data package 89 in task
    06:46:21 tRFC: Data package 88, TID = AC14030C07D846E118
    06:46:21 tRFC: Begin 07.09.2007 05:19:30, end 07.09.2007
    The gap is between the two blocks of log entries above (05:19 to 06:46); I need to know what process of the extraction
    is running in there.
    Thanks a lot in advance !!

    Found it, the gap is in the extraction process.

  • I want to delete approx 100,000 million records and free the space

    Hi,
    I want to delete approx. 100,000 million records and free up the space. How do I do it?
    Can somebody suggest an optimized way of archiving the data?

    user8731258 wrote:
    Hi,
    i want to delete approx 100,000 million records and free the space.
    I also want to free the space.How do i do it?
    Can somebody suggest an optimized way of archiving data.
    To archive, back up the database.
    To delete and free up the space, truncate the table/partitions and then shrink the datafile(s) associated with the tablespace.
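    For reference, a minimal SQL sketch of that approach, using placeholder object names and a placeholder datafile path:

    -- Remove all rows and release the space held by the table's segment
    TRUNCATE TABLE big_table DROP STORAGE;

    -- Then hand the freed space back to the OS by resizing the datafile(s)
    -- of the tablespace (this only works down to where the file still holds data)
    ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf' RESIZE 500M;

    If only part of the data is to be removed, DELETE followed by ALTER TABLE big_table ENABLE ROW MOVEMENT and ALTER TABLE big_table SHRINK SPACE is an alternative before resizing.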

  • Disk is too slow (Record)(-10004) error..so sick of this.

    Hello all,
    I can no longer record more than three tracks in Logic without getting the error message "Disk is too slow (Record)(-10004)". When this happens, recording is stopped.
    At first I suspected my drive was faulty, maybe slowing down. So, being in the middle of a session that took me 2 hours to set up, I called for a break and rushed off and bought an external Seagate 7200 rpm FireWire 800 drive. I installed it and set it as the recording path for the project. There was no change; the same error occurred.
    I then switched the target drive to another internal one I use for Time Machine - same problem occurred.
    It seems to me that this problem has nothing to do with my drives. I am at a loss to explain it. I have looked for hours online for a solution but while many have experienced this there seem to be few answers out there.
    Unless I find a solution this will be my last project with Logic. I tried and tried for the last 5 years to use this program but things like this keep happening. It's glitchy with UAD cards, Duende, RME interfaces, Midi controllers, Hard Drives, RAM and external clocks. I've had problems with them all over the years. I will most likely switch to Cubase which I feel is inferior for editing and loops, but at least it seems to be stable.
    If anyone has any insight I'll try and fix it, but I just can't keep shelling out money for a program that just doesn't work.

    I am experiencing a similar problem & have been receiving the same messages, even while recording as little as one track and playback has become an issue as well. However, THIS WAS NOT ALWAYS THE CASE. I have heard of people with this same problem, where they receive this message out of nowhere after logic has been working perfectly for them.
    I also would like to note that I am running all settings in logic for optimized recording and playback (audio & buffer settings etc etc etc)
    THIS IS NOT A HARDWARE ISSUE, at least in my situation as I am running a fast internal HD & have ample memory. Please reach out if you feel like you have a pragmatic solution to this issue.
    This may be a possible lead on the fix... I remember reading this post from a user "soundsgood" in 2008 who was having a similar issue. I don't completely understand his solution, but if someone could enlighten me, I feel that this might be the solution to our issue:
    +"Okay - forgive me 'cause I'm a newbie on this forum and if somebody else has already figgered this out, I'm sorry.... I've been having the same problem all of the sudden after many years of crash-free and error-free recording. I've read everything. I've pulled my hair out. I've done dozens of clean installs. I've repaired permissions so many times I can do it blindfolded. And sitting here tonight, it dawned on me.... there are TWO places Logic is sucking data from: wherever you've got your SONG files stashed, of course.... but it ALSO NEEDS TO ACCESS THE STARTUP DRIVE (or wherever else you might have your Apple Loops installed). I was watching my drives being accessed during a playback of a fairly complicated tune (most tracks were frozen of course), and both of the afore-mentioned drives were going berserck with accesses. We're all focusing on our dedicated audio drives, but forgetting about our boot drives (where Logic usually resides along with most or all of our loops). I carbon copy cloned my boot / operating system (including Logic) to a different (in my case, an external firewire) drive and the problem disappeared. Could've been because the cloning process de-fragged all the loops & stuff, or maybe my OS just likes snatching its sample/loop info from an external drive. Worked for me... so far....... let me know if it works for others....."+

  • In BDC, I Have 10,000 Records Which Method do I Select? and Why?

    Hi all,
    In BDC, I have 10,000 records for the Material Master application. Should I go with the Session Method or the Call Transaction Method? Which method do I select, and why?

    Hi..
    You have to go for the session method, because:
    1. The session method has an automatic error handling option, so if there is an error in, say, the 100th record from the end, it just skips that record and the remaining part completes.
    2. It is an offline method, meaning the formatting of the data and passing it to the SAP layer can be done in two steps, so your 10,000 records can be updated in the expected time compared with the call transaction method.
    Get back to me if you are not satisfied with the above reasons.
    Thanks,
    Naveen.I

  • Performance in processing 80,000 records.

    Hi
    I am working on a module where I have to upload a file of 80,000 records, process them and then upload them to a web service.
    I am uploading the file by simply parsing the request:
    items = upload.parseRequest(request);
    After this I am traversing the entire file line by line, processing individual records with my logic, and then saving them to a Vector.
    In second servlet I am fetching these records and then uploading them to WSDL file.
    This process will take some time.
    I am facing few problems/questions here :
    Question 1:
    After 30 minutes or so, the browser displays "This page cannot be displayed".
    While I am debugging this code and setting breakpoints, I noticed that the code is actually still executing when the browser displays the "This page cannot be displayed" message.
    Can I increase browser settings so that it waits some more time before displaying the above message,
    so that my Java code can complete its execution?
    Question 2 :
    I am using vector to store all 80,000 records at one go. Will the use of ArrayList or some other collection type increase performance.
    Question 3 :
    What if I break vector in parts.
    i.e. instead of keeping 1 single vector of 80,000 records, if I store 10,000 records each in different vectors and then process them separately.
    Please comment.
    Thanks.

    money321 wrote:
    Question 1:
    After 30 minutes or so.. the browser displays "This page cannot be displayed".
    While I am debugging this code and setting breakpoints, I noticed that code is actually executing when browser displays "This page cannot be displayed" message.
    Can I increase browser settings so that It can wait for some more time before displaying above message.
    So, that my java code can complete its execution.
    It is the request timeout; it is a webserver setting, not a web browser setting. Even though the request times out, the code should still continue to execute until the process finishes; you just don't get the response in your browser.
    Question 2 :
    I am using vector to store all 80,000 records at one go. Will the use of ArrayList or some other collection type increase performance.
    Probably yes, because a Vector is thread safe while the ArrayList is not. It is a similar situation as StringBuffer/StringBuilder.
    Question 3 :
    What if I break vector in parts.
    i.e. instead of keeping 1 single vector of 80,000 records, if I store 10,000 records each in different vectors and then process them separately.
    Wouldn't make much of a difference, I'd say. The biggest performance hit is the webservice call, so try to save as much time as you can there. By the way, are you doing one webservice call, or 80,000?
    Please comment.
    Thanks.

  • Conversion Problem in Unit of measure 1PAC=100,000 EA

    Hi Experts,
    I am facing the problem in conversion of Unit of Measures.
    If some material has the conversion unit when we order to Vendor.
    The Unit of measure is EA, the Order unit is PAC.
    The conversion is 1PAC=100,000 EA
    But the system only allows 1 PAC = 10,000 EA; it does not allow entering 100,000 EA and throws the error
    Entry too long (enter in the format __,___)
    Message no. 00089
    How can we increase the number of characters for the unit of measure conversion? We have some 50 materials like this, and I need to increase the length of the conversion fields. How do we do it? Please guide me.
    rgds
    Swamy

    Hi
    You can check this out: the denominator factor is stored in structure SMEINH, field UMREN.
    The best option is to maintain a decimal place for the UOM, which is easier to maintain.
    The same is also explained if you go through the note. Please read below regarding too-large numerators and denominators:
    When 120,000 CM3 = 0.2 tons (TO), you can no longer save the numerator and denominator of the conversion ratio 600,000 CM3 = 1 TO, as the numerator and denominator may have at most five digits.
    Here, you must either select a larger volume unit or a smaller unit of weight: With DM3 the conversion ratio would be 600 DM3 = 1 TO, with KG the conversion ratio would be 600 CM3 = 1 KG.
    Generally, the alternative units of measure and the base unit of measure should result in quantities that are in the same dimension since the conversion factors may not be larger than 99999/1 and not smaller than 1/99999.
    Thanks & Regards
    KK
    Edited by: Kishore Kumar Chiluka on Apr 22, 2008 8:25 AM

  • BPC 7.5 NW -- Data Manager Import Fails When Loading 40,000 Records?

    Hi Experts,
    Can't believe I'm posting this because the Data Manager component has always been one of the best parts of BPC.  But, since we got SP04 applied last week, every IMPORT process that I've run that has more than 40,000 records fails out.
    The result logs show that the CONVERT task completes just fine, but it doesn't really show the LOAD task.  ...Not exactly sure what's going on here.  So far I've also taken the following two steps to try for resolution:
    (1.)  Re-added the IMPORT package in Organize Package List from the Library to have a "fresh" version.  Didn't help.
    (2.)  In the "Modify Package" screens, there is a PACKAGESIZE parameter that is 40,000 by default...  I've been able to locate that in BI using transaction RSA1 and have changed it to 10,000,000. Saved it. Tried it. Didn't help either.
    Has anyone seen this kind of behavior before?
    Thanks,
    Garrett

    Update -- This problem may now be resolved.
    I have been able to conduct test IMPORTs of 48,000, then 96,000 and then 1.7 million records.  All were fine.
    It turns out that the difference is that the text files were sorted by amount in the ones that failed. They were sorted by GLAccount in column A for the ones that succeeded.
    Edit:  Yep, all files loaded normally when re-sorted by GLACCOUNT, etc. on the left-hand side. Apparently, when you're doing a lot of records, that might confuse the system or something.
    Edited by: Garrett Tedeman on Nov 18, 2010 11:41 AM

  • Exporting BI Publisher 11g ouput to Excel 2007 with over 65,000 records

    Hi All,
    I have seen information that BI Publisher 11.1.1.5 allows sending BI Publisher reports directly into native Excel 2007. Will this automatically allow reports with over 65,000 records to be saved in a single Excel 2007 sheet? If so, is it possible to do this in BI Publisher 11.1.1.3 which I am currently using or must I upgrade to 11.1.1.5 to get this functionality?
    Will BI Publisher 11.1.1.5 also work with Excel 2010 to be able to load reports with over 65,000 records into a single Excel 2010 sheet?
    Thanks for any information that will help me find a solution to this issue.
    Barry

    Well, I did some research, and indeed, to be able to output as .xlsx you would need to update to 11.1.1.4, as it is the first version that supports Excel 2007/10.

  • Performance while uploading 50,000 records

    Hi
    I have to create an application which reads records from a file.
    The records can exceed upto 50,000.
    Then these records are to be processed using a web service.
    Now, I need to design an optimized solution.
    A simple design would be to read all records from the file, store them in the context, and then loop them through the web service model.
    I think there has to be a more optimal solution.
    Even ahead of performance comes the runtime memory issue! (What if there is not enough memory to hold all 50,000 records in the context at the same time?)
    How can I break up this application?
    Thanks

  • BKPF 6000 records  and bsid 300,000 records

    Which one is better ?
    1.
    LOOP  bkpf (6000 records )
             READ  bsid (300,000 records )
             READ  .....
             READ  .....
             READ  .....
             READ  .....
       ENDLOOP.
    and the other thing that concerns me in (1.) is that perhaps I have to use
    LOOP  bkpf (6000 records )
             loop bsid (300,000 records ) where .....
       ENDLOOP.
    2.
    LOOP bsid(300,000 records)
           READ bkpf(300,000 records )
       ENDLOOP.
    Now in my program I use (2.), but performance is quite bad; it times out (1 hour) on PRD.
    Actually I have many internal tables to read, not only bkpf, but that is not my concern; for all of the READs I SORT and use BINARY SEARCH.
    Thank you in advance

    Try the below code.
    sort it_bsid by bukrs belnr gjahr.
    loop at it_bkpf.
      clear: lv_indx.
      read table it_bsid with key bukrs = it_bkpf-bukrs
                                  belnr = it_bkpf-belnr
                                  gjahr = it_bkpf-gjahr
                                             binary search.
      if sy-subrc = 0.
        lv_indx = sy-tabix.
        loop at it_bsid from lv_indx.
          if it_bsid-bukrs = it_bkpf-bukrs and
             it_bsid-belnr = it_bkpf-belnr and
             it_bsid-gjahr = it_bkpf-gjahr.
            << read other internal tables and do the necessary processing>>
          else.
             clear: lv_indx.
             exit.
          endif.
        endloop.
      endif.
    endloop.
    Hope this helps your time out issue.
    Thanks,
    Balaji

  • Display 100,000 rows in Table View

    Hi,
    I have received a strange requirement from a customer, who wants a report, based on a Direct Database Request, that returns about 100,000 rows.
    I understand that OBIEE is not an extraction tool, and that any report which has more than 100-200 rows is not very useful. However, the customer is insistent that such a report be generated.
    The report returns about 97,000 rows and has about 12 columns and is displayed as a Table View.
    To try to generate the report, I have set the ResultRowLimit in the instanceconfig.xml file to 150,000 and restarted the services. I have also set the query limits in the RPD to 150,000, so this is not the issue either.
    When running the report, the session log shows the record count as 97,452, showing that all the records are available in the BI Server.
    However, when I click on the "display all rows" button at the end of the report, the browser hangs after about 10 minutes with nothing being displayed.
    I have gone through similar posts, but there was nothing conclusive mentioned in them. Any input to fix the above issue will be highly appreciated.
    Thanks,
    Ab
    Edited by: obiee_user_ab on Nov 9, 2010 8:25 PM

    Hi Saichand,
    The client wants the data to be downloaded in CSV, so the row limit in the Excel template, that OBIEE uses is not an issue.
    The 100,000 rows are retrieved after using a Dashboard Prompt with 3 parameters.
    The large number of rows is because these are month-end reports, which are more like extractions.
    The customer wants to implement this even though OBIEE does not work well with large numbers of rows, as there are only a couple of reports like this and it would be an expensive proposition to use a different reporting system for only 3-4 reports.
    Hence, I am on the lookout for a way to implement this in OBIEE.
    The other option is to download the report directly into CSV, without having to load all the records onto the browser first. To do the same, I read a couple of blog entries, but the steps mentioned were not clear, so any help on this front would also be great.
    Thanks,
    Ab

  • Trying to insert 10,000 records, but it stops at 500 records

    I am trying to insert 10,000 records into a 4-column table on Sun Solaris Oracle from a Visual Basic application, but it stops at 500 records, and when I try to insert record 501, it never succeeds.
    Is there a limitation in the Oracle database on the insertion procedure?

    Hi,
    There are no such limitations in Oracle Database. The insertion process is going on, but it looks like it is hanging. You can do one thing to trace what is happening:
    1. Place a progress bar item on your screen.
    2. Set Min = 1.
    3. Set Max = total number of records in the source table (where the 10,000 records are).
    4. You probably have a Do While loop structure that inserts a record into the target table. Within that loop, increase the value of the progress bar, so that while inserting a record the progress bar value changes.
    That way, you can trace whether the process is running or not.
    I think this will help you trace the process.
    N.Swaminathan
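    Another way to check the same thing is from the database side rather than from the Visual Basic client; this is only a sketch, and the schema name is a placeholder:

    -- See whether the inserting session is still active or stuck waiting
    -- (for example on a lock), judged by its current wait event
    SELECT sid, status, event, seconds_in_wait
    FROM   v$session
    WHERE  username = 'MY_APP_USER';   -- placeholder: the account the VB app connects as

    If the session shows a long wait on a lock-related event, something else is blocking the insert rather than the database refusing row 501.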
