Single record insert performance problems

Hi,
we have, in our production environment, a Java-based application that makes approximately 40,000 single-record inserts per hour into one table.
We have traced the performance of this insert and the average time is 3 ms, which is fine. Our Java architecture is based on WebSphere Application Server and we access Oracle 10g through a WAS datasource.
But we have detected that 3 or 4 times a day, for approximately 30 seconds, the Java service is unable to perform any insert into that table. Then it suddenly executes all the "queued" inserts in only 1 second. That pause in the inserts causes navigation problems, because the top layer is a web application.
We are sure it is not a problem with WAS or the Java code. We believe it is a problem with the Oracle configuration, or some tuning action for this kind of application that we don't know about. We first thought it could be a problem with a sequence field in the table, or a problem when the redo log switch occurs, but we have checked with our DBA and that is not the problem.
Does anybody have any idea what could be the origin of this strange behaviour?
Thanks a lot in advance.
Jose.

There are a couple of things you'd need to look at to diagnose this - as Joe says, it's not really a JDBC issue from what we know.
I've seen issues with Oracle's automatic SGA resizing causing sporadic latency in OLTP systems. Another suspect would be log file sync wait events, which are associated with commits. Don't discount the impact of well-meaning people using tools like TOAD to query the DB - they can sometimes cause more harm than good.
Right now I'd suggest you run AWR at 10-minute intervals and compare reports from when you had the problem with a time when you didn't.
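If it helps, here is a minimal sketch of what I mean (run as a suitably privileged user; note AWR requires the Diagnostics Pack licence, Statspack is the no-cost alternative):
-- take AWR snapshots every 10 minutes (interval is given in minutes)
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 10);
-- force an extra snapshot if you catch a problem window live
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- then generate reports interactively and compare a good period with a bad one
-- @?/rdbms/admin/awrrpt.sql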

Similar Messages

  • Recurrent single record Insert problems

    If by one single disk you mean a single physical disk, then spreading the online redo logs onto multiple disks would be a good idea. Log Group A should be on a separate disk from Log Group B, and the members of each log group should also be on separate disks.
    Note that Oracle does not stop processing when it does a checkpoint. Here is basically how it works: when online redo LogA fills, a checkpoint is signaled and Oracle immediately starts writing to LogB. The checkpoint is processed by ckpt and dbwr.
    If LogB fills and a switch to LogC occurs before the checkpoint signaled by the switch from LogA to LogB has completed, then you get the "checkpoint not complete" informational message. Oracle will continue processing, though it has more work to do, having to combine two checkpoints. If the problem continues then you get 3 and 4 checkpoints backed up. This will bog Oracle down, as the buffer cache has to be pretty full at this point; you cannot read data unless you have a clean buffer to read into.
    The "checkpoint not complete" message normally means your online redo logs are too small and are filling too quickly. In the absence of Data Guard configured to use log shipping, you should look at how many log switches you are doing per day. If it is more than 24 - 48, then increase the size of the online redo logs.
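    A quick way to check the daily log switch count (a minimal sketch using v$log_history):
    select trunc(first_time) as day, count(*) as log_switches
    from   v$log_history
    group  by trunc(first_time)
    order  by 1;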
    Database writer performance issues are the next area you would want to check, after properly sizing the online redo logs and verifying the IO performance of log writer. Pretty much all you can do to help log writer IO is put enough physical disk under the logs to spread the IO out.
    If your problem is caused by the activity level during peak periods then setting MTTR probably will not help since dbwr will be busy writing anyway.
    You really should run a statspack or AWR report to help clarify the issue.
    If you determine you are switching online redo logs too many times and increase their size, take a set of Statspack snapshots both before and after you make the change. This will help you see the effect on the database as a whole.
    HTH -- Mark D Powell --

  • Jdbc thin driver bulk binding slow insertion performance problem

    Hello All,
    We have a third-party application reporting slow insert performance. I traced the session and found that most of the elapsed time for one insert execution is spent on "SQL*Net more data from client"; it appears bulk binding is being used here, because one execution inserts 200 rows. I am wondering whether this has something to do with their JDBC thin driver (version 10.1.0.2) and our database version 9.2.0.5. Do you have any similar experience with this? What other possible directions should I explore?
    Here is the trace report from a 10046 event; I have hidden the table name for privacy reasons.
    Besides, I tested bulk binding in PL/SQL to insert 200 rows in one execution, with no problem at all. The network folks confirm that the network should not be an issue either; ping time from the app server to the DB server is sub-millisecond and they are in the same data center.
    INSERT INTO ...
    values
    (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17,
    :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32,
    :33, :34, :35, :36, :37, :38, :39, :40, :41, :42, :43, :44, :45)
    call        count    cpu    elapsed   disk   query   current    rows
    -------    ------  -----   --------  -----  ------  --------   -----
    Parse           1   0.00       0.00      0       0         0       0
    Execute         1   0.02      14.29      1      94      2565     200
    Fetch           0   0.00       0.00      0       0         0       0
    total           2   0.02      14.29      1      94      2565     200
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 25
    Elapsed times include waiting on following events:
    Event waited on                          Times Waited   Max. Wait   Total Waited
    --------------------------------------   ------------   ---------   ------------
    SQL*Net more data from client                      28        6.38          14.19
    db file sequential read                             1        0.02           0.02
    SQL*Net message to client                           1        0.00           0.00
    SQL*Net message from client                         1        0.00           0.00
    ********************************************************************************
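    For reference, a minimal PL/SQL sketch of the kind of bulk-bind test mentioned above (a FORALL insert of 200 rows in one execution); the table and column names here are made up for illustration:
    DECLARE
      TYPE t_id_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
      l_ids t_id_tab;
    BEGIN
      -- fill a collection with 200 values
      FOR i IN 1 .. 200 LOOP
        l_ids(i) := i;
      END LOOP;
      -- bulk-bind all 200 rows into a single INSERT execution
      FORALL i IN 1 .. l_ids.COUNT
        INSERT INTO demo_table (id) VALUES (l_ids(i));
      COMMIT;
    END;
    /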

    I have exactly the same problem. I tried to find out what is going on and changed several JDBC drivers on AIX, but with no luck. I also ran the process on my laptop, which produced better and faster performance.
    Therefore I made a special (not practical) workaround by creating flat files and defining the data as an external table; Oracle then reads the data in those files as if it were data inside a table. This gave me very fast insertion into the database, but I am still looking for an answer to your question here. Using Oracle on an AIX machine is normal business practice followed by a lot of companies, and there must be a solution for this.
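    For reference, a rough sketch of the external-table workaround described above (the directory path, file name and column names are made up for illustration, not from the original post):
    -- expose the flat file as an external table, then insert from it
    CREATE OR REPLACE DIRECTORY load_dir AS '/u01/app/loads';
    CREATE TABLE staging_ext (
      id  NUMBER,
      txt VARCHAR2(100)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY load_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('data.csv')
    );
    INSERT /*+ APPEND */ INTO target_table SELECT * FROM staging_ext;
    COMMIT;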

  • Actual time recording (computer performance problem?)

    I'm a beginner.
    I'm writing to the output file every second, but the time in the output file is different from the actual time.
    I'm wondering if this is due to the computer's performance (450 MHz, 128 MB) or if the VI itself can be improved. I have attached the file I use. The problem comes up when I use a VI with 50 channel outputs, but not if I reduce the channels to around 20.
    Thank you.
    Attachments:
    LODS_MFC2.vi (460 KB)

    It sounds like you are certainly running into timing issues. I see that in your VI, you use AI Sample Channel to acquire data from a single channel every time the loop iterates. This is not the best way to go about it, because each time the VI is called it has to set up the operation, then perform the read, and then clear the operation. What you should be doing is setting up the operation outside the loop, performing only reads inside the while loop, and then clearing the task after you are complete. LabVIEW ships with several examples that behave just this way; you should look in the Example Finder for these.
    Additionally, if you read the help on AI Sample Channel.vi, it states it does a single, untimed measurement of a channel. Since you posted with the title "actual time recording", I feel that timing might be of importance to you.
    You never did mention what version of LabVIEW you have, or the hardware and drivers you are using, so I am leaving this post rather general. If you have some more questions after reading through this post and looking through some of the examples, post back to this discussion with more information, and I will try to help you out further.
    Sincerely,
    Jared A.

  • Record management performance problem

    Hi,
    We are implementing CRM 5.0 Case Management. We have assigned a record model of around 700 nodes to a case type, but case performance on the portal is affected: it becomes very slow. When we switch to another record model with only a few nodes, the performance is much better.
    Is there a way to improve case performance when a large record model is assigned?
    Note: performance in the back end is not affected by the size of the record model the way the portal is.
    Regards
    Khaled Fahim

    Hello Marco,
    I recommend posting this query to the BusinessObjects Enterprise Administration (BI Platform) forum.
    This forum is dedicated to topics related to administration and configuration of BusinessObjects Enterprise, BusinessObjects Edge, and Crystal Reports Server.
    It is monitored by qualified technicians and you will get a faster response there.
    Also, all BOE Administration queries remain in one place and thus can be easily searched in one place.
    Best regards,
    Falk

  • Performance problem inserting lots of rows

    I'm a software developer; we have a J2EE-based product that works against Oracle or SQL Server. As part of a benchmarking suite, I insert about 70,000 records into our auditing table (using jdbc, going over the network to a database server). The database server is a smallish desktop Windows machine running Windows Server 2003; it has 384M of RAM, a 1GHz CPU, and plenty of disk.
    When using Oracle (9.2.0.3.0), I can insert roughly 2,000 rows per minute. Not too shabby!
    HOWEVER -- and this is what's making Oracle look bad -- SQL Server 2000 on the SAME MACHINE is inserting roughly 8,000 rows per minute!
    Why is Oracle so slow? The database server is using roughly 50% CPU, so I assume disk speed is an issue. My goal is to get Oracle to compare favorably with SQL Server on the same hardware. Any ideas or suggestions? (I've done a fair amount of Oracle tuning in the past to get SELECTs to run faster, but have never dealt with INSERT performance problems of this magnitude.)
    Thanks,
    Daniel Rabe

    I've tried using a PreparedStatement and a CallableStatement, always with bind variables. (I use a sequence to populate one of the columns, so initially my code was doing the insert, then a select to get the last inserted row. This was fast on SQL Server but slow on Oracle, so I conditionalized my code to use a pl/sql block that does INSERT... RETURNING so I could get the new rowid without doing the extra select - that required switching from PreparedStatement to CallableStatement). The Performance Manager shows "Executes without Parses" > 98%.
    Performance Manager also shows Application I/O Physical Reads approx 30/sec, and Background Process I/O Physical Writes approx 60/sec.
    File Write Operations showed most of the writes going to my tablespace (which is sized plenty big for the data I'm writing), but with occasional writes to UNDOTBS01.DBF as well.
    The database is in NOARCHIVELOG mode.
    I'm NOT committing very often - I'm doing all 70,000 rows as one transaction. BTW, I realize this isn't a real-life scenario - this is just the setup I do so that I can run some benchmarks on various queries that our application performs. Once I get into those, I'm sure I'll have a whole new slew of questions for the group. ;-)
    I'll look into SQL TRACE and TKPROF - time to refresh some skills I haven't used in a while...
    Thanks,
    --Daniel Rabe
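    For reference, a minimal sketch of the INSERT ... RETURNING pattern described above, as it would be sent through a CallableStatement (the table, sequence and bind names are made up for illustration):
    -- :new_id is registered as an OUT parameter on the JDBC side
    BEGIN
      INSERT INTO audit_log (id, payload)
      VALUES (audit_seq.NEXTVAL, :payload)
      RETURNING id INTO :new_id;
    END;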

  • Search for and edit single record in database

    I am creating a few forms that access an Access database and that will be used to enter data into the database. I am able to open records from the database and scroll through them one at a time, and I have added features to search for and display a single record. The problem I am having is that when I load a single record and then edit it, I am unable to save any changes made to the record; in other words, the record doesn't update in the database.
    I can add new records and edit records as long as I scroll to them using .next(), .last(), .first(), and .previous() commands; however, when I load a single record I can't figure out how to save changes to that record in the database.

    OK... so I think I can get it to work by doing something like:
    xfa.sourceSet.DataConnection.open();
    xfa.sourceSet.DataConnection.first();
    var oRecordList = ???????????
    var nCount = oRecordList.length;
    for (var i = 0; i < nCount; i++){
        if (CurrentRecord.rawValue != SearchField.rawValue){
            xfa.sourceSet.DataConnection.next();
        }
    }
    where CurrentRecord is a text field that shows the index of the current record and SearchField is the field where a user enters the record that they are searching for.
    I think I just need to figure out how to count the number of records in the database.  Any ideas?

  • Multiple record insert problem in Oracle Procedure that uses a cursor

    Dear X-pert guies,
    I have an Oracle procedure that uses a cursor. I repeatedly query the first table using the cursor value and insert the queried value (from the first table) into a second table,
    y_summary. y_summary has a composite primary key PK_Y_SUM (BILL_DATE, TRUNK_MGR, IDD_FLAG, PK_FLAG, PREFIX).
    When I run the procedure as explicit2('201001'); it gives me the error: begin explicit2('201001'); end;
    ORA-00001: unique constraint (PRM.PK_Y_SUM) violated
    ORA-06512: at "PRM.EXPLICIT2", line 413
    ORA-06512: at line 1
    But when I remove the composite primary key from the y_summary table, the procedure runs OK and makes many duplicate entries in y_summary.
    However, I want each record to be inserted only once into y_summary, so I would appreciate your help.
    The structure of the y_summary table and the procedure code are given below.
    Table:
    -- Create table
    create table Y_SUMMARY
    (
      BILL_DATE VARCHAR2(10) not null,
      TRUNK_MGR VARCHAR2(20) not null,
      IDD_FLAG  VARCHAR2(10) not null,
      PK_FLAG   NUMBER(2)    not null,
      OUTCALLS  NUMBER(20,2),
      OUTDUR    NUMBER(20,2),
      PREFIX    VARCHAR2(10) not null
    )
    tablespace TBS_PRM_D01
      pctfree 10
      pctused 40
      initrans 1
      maxtrans 255
      storage
      (
        initial 64K
        minextents 1
        maxextents unlimited
      );
    -- Create/Recreate primary, unique and foreign key constraints
    alter table Y_SUMMARY
      add constraint PK_Y_SUM primary key (BILL_DATE, TRUNK_MGR, IDD_FLAG, PK_FLAG, PREFIX)
      using index
      tablespace TBS_PRM_D01
        pctfree 10
        initrans 2
        maxtrans 255
        storage
        (
          initial 64K
          minextents 1
          maxextents unlimited
        );
    Procedure:
    create or replace procedure explicit2( month_val in varchar2) is
    cursor explicit_cur is select dest_code from y_table where dest_code like '44%' order by dest_code desc;
    dummy varchar2(100);
    lv_length Number(9);
    sqlstr varchar2(2500);
    rec_count1 number;
    rec_count2 number;
    rec_count3 number;
    begin
    open explicit_cur;
    LOOP
    fetch explicit_cur into dummy;
    EXIT WHEN explicit_cur%NOTFOUND;
    rec_count1 :=0;
    rec_count2 :=0;
    rec_count3 :=0;
    lv_length := length(dummy);
    sqlstr := 'select count(*) from y_table_data1 t where t.fee_dur_1_1 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''00'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||'''';
    execute immediate sqlstr into rec_count1;
    sqlstr := 'select count(*) from y_table_data1 t where t.fee_dur_1_2 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'',
    ''ITAX1B'',''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''00'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||'''';
    execute immediate sqlstr into rec_count2;
    sqlstr := 'select count(*) from y_table_data1 t where t.fee_dur_1_1 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'',
    ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''012'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||'''';
    execute immediate sqlstr into rec_count3;
    if(rec_count1>0) then
    sqlstr := 'insert into y_summary(BILL_DATE ,PREFIX, TRUNK_MGR,OUTCALLS , OUTDUR , IDD_FLAG , PK_FLAG )
    select '|| month_val||' ,substr(t.orig_called_num,1,'||lv_length||'),t.trunkout_operator ,count(*) , round(sum(ceil(t.duration / 15) * 15) / 60, 0),''00'',''1'' from y_table_data1 t where t.fee_dur_1_1 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''00'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||''''|| ' group by t.trunkout_operator,substr(t.orig_called_num,1,'||lv_length||')';
    execute immediate sqlstr;
    commit;
    sqlstr :='DELETE from y_table_data1 t where t.fee_dur_1_1 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''00'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||'''';
    execute immediate sqlstr;
    commit;
    end if ;
    if(rec_count2>0) then
    sqlstr :='insert into y_summary(BILL_DATE ,PREFIX, TRUNK_MGR,OUTCALLS , OUTDUR , IDD_FLAG , PK_FLAG )
    select '|| month_val||' ,substr(t.orig_called_num,1,'||lv_length||'),t.trunkout_operator ,count(*) , round(sum(ceil(t.duration / 15) * 15) / 60, 0),''00'',''0'' from y_table_data1 t where t.fee_dur_1_2 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''00'' group by t.trunkout_operator,substr(t.orig_called_num,1,'||lv_length||')';
    execute immediate sqlstr;
    commit;
    sqlstr :='DELETE from y_table_data1 t where t.fee_dur_1_2 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''00'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||'''';
    execute immediate sqlstr;
    commit;
    end if;
    if(rec_count3>0) then
    sqlstr :='insert into y_summary(BILL_DATE ,PREFIX, TRUNK_MGR,OUTCALLS , OUTDUR , IDD_FLAG , PK_FLAG )
    select '|| month_val||',substr(t.orig_called_num,1,'||lv_length||'),t.trunkout_operator ,count(*) , round(sum(ceil(t.duration / 15) * 15) / 60, 0),''012'',''0'' from y_table_data1 t where t.fee_dur_1_1 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''012'' group by t.trunkout_operator,substr(t.orig_called_num,1,'||lv_length||')';
    execute immediate sqlstr;
    commit;
    sqlstr :='DELETE from y_table_data1 t where t.fee_dur_1_1 <> 0
    and t.out_trunk in (''MHISRM'', ''GEISRM'', ''GEIMRP'', ''MHIMRP'', ''13'', ''ITAX1B'', ''ITAX3B'',''ITAX5B'', ''ITAX6B'', ''ITAX7B'', ''BTIMRP'', ''BTI5RP'', ''BTI6RP'', ''BTI7RP'')
    and t.service_code = ''012'' and substr(t.orig_called_num,1,'||lv_length||')='||''''||dummy||'''';
    execute immediate sqlstr;
    commit;
    end if;
    end loop;
    close explicit_cur;
    end explicit2;
    Edited by: user10951541 on 25.4.2010 12.08

    Dear all,
    Really sorry for not formatting the listing; I am new to this forum.
    My answers to your points:
    1. I have tested my SQL statements manually in SQL*Plus; they run OK.
    2. "Cursor loops, such as the one you have coded here, have been obsolete in Oracle since version 8i 12+ years ago.
    Look up BULK COLLECT and FORALL in the docs and use them instead."
    I am trying to make use of BULK COLLECT and FORALL in the proper locations.
    3. "Your procedure never performs a commit so no work actually takes place" - I need to understand why this is said.
    4. "On what basis was the decision made to use the default PCTFREE and PCTUSED values of 10 and 40?"
    Is there any problem if the defaults are used? Any suggestions are welcome.
    5. "You did not format your listing using the CODE tags as explained in the FAQ making your listing unreadable ... so I've not read it.
    Please read the FAQ and use the proper way to post code so we can understand it. Then perhaps we can help you further." Really sorry for not making it readable; I will try from the next post.
    I will really try to follow the conventions.
    My aim is to query Table A using cursor values like ('4422', '442', '4411', '441', '44') and get some data for each of those values, then put the data into another Table B. At the same time I need to delete the corresponding records from Table A. For example: I compute the value for '4422' from Table A and put the computed value into Table B, then I delete the records from Table A where the prefix is '4422', so that the computed value for the next prefix '442' only covers 442[0-1] and 442[3-9]. The same happens for '4411', '441', '44', and so on.
    Thanks in advance.
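    One option worth considering (a hedged suggestion, not from the original thread): if the goal is one row per (BILL_DATE, TRUNK_MGR, IDD_FLAG, PK_FLAG, PREFIX) even when the same key is produced more than once, a MERGE can accumulate into Y_SUMMARY instead of a plain INSERT, so a repeated key updates the existing row rather than raising ORA-00001. A minimal sketch, with bind variables standing in for the computed values:
    MERGE INTO y_summary s
    USING (SELECT :bill_date bill_date, :prefix prefix, :trunk_mgr trunk_mgr,
                  :outcalls outcalls, :outdur outdur, :idd_flag idd_flag, :pk_flag pk_flag
           FROM dual) v
    ON (    s.bill_date = v.bill_date
        AND s.trunk_mgr = v.trunk_mgr
        AND s.idd_flag  = v.idd_flag
        AND s.pk_flag   = v.pk_flag
        AND s.prefix    = v.prefix)
    WHEN MATCHED THEN UPDATE
      SET s.outcalls = s.outcalls + v.outcalls,
          s.outdur   = s.outdur   + v.outdur
    WHEN NOT MATCHED THEN INSERT
      (bill_date, prefix, trunk_mgr, outcalls, outdur, idd_flag, pk_flag)
      VALUES (v.bill_date, v.prefix, v.trunk_mgr, v.outcalls, v.outdur, v.idd_flag, v.pk_flag);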

  • SQL: System locks up and runs slow after performing simple DML record insert

    SQL Version:  2008 R2
    I am having a serious problem.  I ran the following code to perform a simple table record insert which ran successfully.  However, after running this code I could no longer access the related table.  I couldn't run a query against it. 
    I tried to delete all the records and that wouldn't work either.  When attempting to run any DML statements (i.e. Select * From vPCCertificateTypes) against this table SQL gets locked up and never returns anything and no error messages.  I have
    to manually stop the query.  Now the entire SQL system is running slow.
    Any help would be greatly appreciated.  The code I ran to originally insert the records is below.
    Regards,
    Bob Sutor
    CODE: 
    Begin TRAN
     INSERT INTO vPCCertificateTypes (VendorGroup, CertificateType, Description, Active, Category)
     SELECT HQCo, 'MBE', 'Minority-owned Business Enterprise', 'Y', 'Affirmative Action' 
     FROM HQCO
     Where HQCo IS NOT NULL
      AND HQCo <> 100
      AND HQCo <> 99
     IF @@ROWCOUNT <> 44 ROLLBACK ELSE COMMIT
    Bob Sutor

    The problem was an open uncommitted transaction against the table, as you suspected. I ran your code and it cleared the transaction. In the past I would intentionally leave out the COMMIT statement and then, if the transaction ran fine, I would
    simply highlight and run 'COMMIT' and it would close the transaction. Apparently that is not the correct approach. I'll use your suggested code from now on.
    Thanks for the help.
    Bob Sutor

  • How to insert sales text (MM02) into a single record of a Ztable.

    Hi,
    I'm extracting data from different database tables and populating a Z-table which has MATNR as the primary key and sales text as a field.
    I have already used READ_TEXT to display the text, and it comes back in multiple records, which in turn leads to duplication of material numbers.
    Now I want to avoid duplication of records (MATNR), as this is the primary key, and display the sales text of a particular material number in one single record.
    Can anyone tell me how to get the sales text (from transaction MM02) into one single record?
    Thanks,
    Govind

    Sorry, I am not completely clear about your requirement...
    As far as I understand it, here is an explanation.
    Suppose your itab contains repeating MATNR values:
    matnr
    1
    1
    2
    2
    2
    3
    3
    like this.
    data : text(200),
             matnr like mara-matnr.
    loop at itab.
    call READ_TEXT fnmodule.
    loop at tline.
    concatenate text tline-tdline into text.
    endloop.
    matnr = itab-matnr.
    at end of matnr.
    itab1-matnr = matnr.
    itab1-text = text.
    append itab1.
    clear text.
    endat.
    endloop.
    NB change the code as per your requirement
    regards
    shiba dutta

  • Performance problem with selecting records from BSEG and KONV

    Hi,
    I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables contain a large amount of data, the selects are taking a lot of time. Can anyone help me improve the performance? Thanks in advance.
    Regards,
    Prashant

    Hi,
    Some steps to improve performance
    Some steps used to improve performance:
    1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
    2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
    3. Design your query to use as many index fields as possible, from left to right, in your WHERE clause.
    4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
    5. Avoid using nested SELECT statement SELECT within LOOPs.
    6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
    7. Avoid using SELECT * and Select only the required fields from the table.
    8. Avoid nested loops when working with large internal tables.
    9. Use LOOP ... ASSIGNING (field symbols) instead of INTO in loops over internal tables with large work areas.
    10. When in doubt call transaction SE30 and use the examples and check your code
    11. Whenever using READ TABLE use BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before binary search. This is a general thumb rule but typically if you are sure that the data in internal table is less than 200 entries you need not do SORT and use BINARY SEARCH since this is an overhead in performance.
    12. Use "CHECK" instead of IF/ENDIF whenever possible.
    13. Use "CASE" instead of IF/ENDIF whenever possible.
    14. Use "MOVE" with individual variable/field moves instead of "MOVE-
    CORRESPONDING" creates more coding but is more effcient.

  • Performance Problem : High CPU on a simple Insert

    I have a simple INSERT ( INSERT into TABLE values ( :1, :2 ) etc. )
    But this statement is exhibiting following characteristics
    Optimizer Cost : 1
    Buffer Gets : 26 million !!!
    Rows : 750,000
    Gets/Execution : 35,000
    This is causing a performance problem on our production system.
    Any ideas would be appreciated.
    thanks

    897208 wrote:
    I have a simple INSERT ( INSERT into TABLE values ( :1, :2 ) etc. )
    But this statement is exhibiting following characteristics
    Optimizer Cost : 1
    Buffer Gets : 26 million !!!
    Rows : 750,000
    Gets/Execution : 35,000
    This is causing performance problem on our production.
    Any ideas will be appreciated ?
    thanks
    Thread: HOW TO: Post a SQL statement tuning request - template posting
    HOW TO: Post a SQL statement tuning request - template posting
    Post the EXPLAIN PLAN and the results from SQL_TRACE for this statement.
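    For completeness, one way to gather what that template asks for (a hedged sketch; the INSERT shown is only a placeholder for the real statement):
    -- optimizer plan for the statement
    EXPLAIN PLAN FOR INSERT INTO some_table VALUES (:1, :2);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- 10046-style trace with waits and binds for the current session
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run the workload, then ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;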

  • Insert multiple rows into a same table from a single record

    Hi All,
    I need to insert multiple rows into the same table from a single record. Here is what I am trying to do, and I need your expertise. I am using Oracle 11g.
    DataFile
    1,"1001,2001,3001,4001"
    2,"1002,2002,3002,4002"
    The data needs to be loaded as
    Field1      Field2
    1               1001
    1               2001
    1               3001
    1               4001
    2               1002
    2               2002
    2               3002
    2               4002
    Thanks

    You could use SQL*Loader to load the data into a staging table with a varray column, then use a SQL insert statement to distribute it to the destination table, as demonstrated below.
    SCOTT@orcl> host type test.dat
    1,"1001,2001,3001,4001"
    2,"1002,2002,3002,4002"
    SCOTT@orcl> host type test.ctl
    load data
    infile test.dat
    into table staging
    fields terminated by ','
    ( field1
    , numbers varray enclosed by '"' and '"' (x))
    SCOTT@orcl> create table staging
      2    (field1  number,
      3     numbers sys.odcinumberlist)
      4  /
    Table created.
    SCOTT@orcl> host sqlldr scott/tiger control=test.ctl log=test.log
    SQL*Loader: Release 11.2.0.1.0 - Production on Wed Dec 18 21:48:09 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
    Commit point reached - logical record count 2
    SCOTT@orcl> column numbers format a60
    SCOTT@orcl> select * from staging
      2  /
        FIELD1 NUMBERS
             1 ODCINUMBERLIST(1001, 2001, 3001, 4001)
             2 ODCINUMBERLIST(1002, 2002, 3002, 4002)
    2 rows selected.
    SCOTT@orcl> create table destination
      2    (field1  number,
      3     field2  number)
      4  /
    Table created.
    SCOTT@orcl> insert into destination
      2  select s.field1, t.column_value
      3  from   staging s, table (s.numbers) t
      4  /
    8 rows created.
    SCOTT@orcl> select * from destination
      2  /
        FIELD1     FIELD2
             1       1001
             1       2001
             1       3001
             1       4001
             2       1002
             2       2002
             2       3002
             2       4002
    8 rows selected.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) passing a batch of records , or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN    varchar2(20),
      XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE        varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
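    For what it's worth, here is a rough (untested) sketch of pushing the code lookups into the relational join via XMLTABLE, using the simplified tables above, in case it helps the optimizer pick a join method other than the nested-loop/full-scan pattern shown:
    SELECT r.ssn, x.code1, c1.description, x.code2, c2.description
    FROM   records r,
           XMLTABLE('/Root/Element' PASSING r.xmlrec
                    COLUMNS code1 VARCHAR2(4) PATH 'Subelement1/Code',
                            code2 VARCHAR2(4) PATH 'Subelement2/Code') x,
           codes c1, codes c2
    WHERE  r.ssn   = '10000'
    AND    c1.code = x.code1
    AND    c2.code = x.code2;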

  • Need to commit after every 10 000 records inserted ?

    What would be the best way to commit after every 10,000 records inserted from one table to the other using the following script:
    DECLARE
    l_max_repa_id x_received_p.repa_id%TYPE;
    l_max_rept_id x_received_p_trans.rept_id%TYPE;
    BEGIN
    SELECT MAX (repa_id)
    INTO l_max_repa_id
    FROM x_received_p
    WHERE repa_modifieddate <= ADD_MONTHS (SYSDATE, -6);
    SELECT MAX (rept_id)
    INTO l_max_rept_id
    FROM x_received_p_trans
    WHERE rept_repa_id = l_max_repa_id;
    INSERT INTO x_p_requests_arch
    SELECT *
    FROM x_p_requests
    WHERE pare_repa_id <= l_max_rept_id;
    DELETE FROM x__requests
    WHERE pare_repa_id <= l_max_rept_id;
    END;

    1006377 wrote:
    we are moving between 5 and 10 million records from the one table to the other table and it takes forever.
    Please could you provide me with a script just to commit after every x amount of records? :)
    I concur with the other responses.
    Committing every N records will slow down the process, not speed it up.
    The fastest way to move your data (and 10 million rows is nothing, we do those sorts of volumes frequently ourselves) is to use a single SQL statement to do an INSERT ... SELECT ... statement (or a CREATE TABLE ... AS SELECT ... statement as appropriate).
    If those SQL statements are running slowly then you need to look at what's causing the performance issue of the SELECT statement, and tackle that issue, which may be a case of simply getting the database statistics up to date, or applying a new index to a table etc. or re-writing the select statement to tackle the query in a different way.
    So, deal with the cause of the performance issue, don't try and fudge your way around it, which will only create further problems.
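    For reference, a minimal sketch of the single-statement approach recommended above, using the table names from the original post (the APPEND hint and the bind variable are additions for illustration, not from the thread):
    INSERT /*+ APPEND */ INTO x_p_requests_arch
    SELECT *
    FROM   x_p_requests
    WHERE  pare_repa_id <= :l_max_rept_id;
    COMMIT;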
