INSERTS slow

I have a batch process that completes in 3 hours max if I process all the rows in one shot, something like this:
insert into dest_table
select * from
source_table;
But when I process the same data at an entity level, passing in the entity pk, the process takes around 20 minutes on average for each entity.
insert into dest_table
select * from
source_table
where entity_pk = pk;
Why is this slow?
Thanks

1) I assume that at 20 minutes per entity, the total runtime ends up much longer than 3 hours.
2) If something can be done in SQL, it will generally be more efficient to do it in SQL. Oracle doesn't have to do a lot of context switches between the SQL and PL/SQL engines, and it can often optimize a query differently when it knows it is going to read every row from source_table rather than do a lot of single-row reads. It's far more efficient to slurp up every row in the table via multi-block reads than to search an index and do a single-block read for each and every row.
3) 20 minutes seems awfully long for the PL/SQL approach. Is the query using the primary key index?
Justin
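To check point 3, one quick sanity test is to look at the plan Oracle picks for the filtered insert. A minimal sketch, reusing the dest_table/source_table/entity_pk names from the question:

```sql
-- Show the plan for the per-entity insert; an INDEX RANGE SCAN on an
-- entity_pk index is what you would hope to see here.
EXPLAIN PLAN FOR
  INSERT INTO dest_table
  SELECT * FROM source_table WHERE entity_pk = :pk;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the plan shows a full scan of source_table, the per-entity loop effectively re-reads the whole table once per entity, which would go a long way toward explaining the 20 minutes.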

Similar Messages

  • Insert slow into Global Temporary Table...

I am working with a stored procedure that does a select into a global temporary table, and it is really slow. I have read up on the APPEND hint and know that it is not a solution, since GTTs are in the temporary tablespace and are thus always appended and never logged.
    Is there something else that I need to know about performance for GTT? I find it hard to believe that Oracle would find it acceptable to take 50 seconds to insert 3300 rows.

    My apologies in advance as my skill level with Oracle is not as high as I would like for this type of analysis and remediation.
I had thought it might be the select as well, but if I run it by itself it takes about 1 second. The interesting part is that when I explain plan on it with the insert, the SQL plan changes.
    Here is the Non-insert explain plan:
    Plan hash value: 3474166068
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 51 | 8 (38)| 00:00:01 |
    | 1 | HASH GROUP BY | | 1 | 51 | 8 (38)| 00:00:01 |
    | 2 | VIEW | VM_NWVW_1 | 1 | 51 | 7 (29)| 00:00:01 |
    | 3 | HASH UNIQUE | | 1 | 115 | 7 (29)| 00:00:01 |
    | 4 | NESTED LOOPS | | | | | |
    | 5 | NESTED LOOPS | | 1 | 115 | 6 (17)| 00:00:01 |
    | 6 | NESTED LOOPS | | 1 | 82 | 5 (20)| 00:00:01 |
    | 7 | SORT UNIQUE | | 1 | 23 | 2 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID| PEAKSPEAKDAYSEG$METERMASTER | 1 | 23 | 2 (0)| 00:00:01 |
    |* 9 | INDEX RANGE SCAN | IDX_PDSEG$MTR_SEGID | 1 | | 1 (0)| 00:00:01 |
    |* 10 | TABLE ACCESS BY INDEX ROWID | FC_FFMTR_DAILY | 1 | 59 | 2 (0)| 00:00:01 |
    |* 11 | INDEX RANGE SCAN | FC_FFMTRDLY_IDX10 | 2461 | | 2 (0)| 00:00:01 |
    |* 12 | INDEX UNIQUE SCAN | FC_METER_PK | 1 | | 0 (0)| 00:00:01 |
    | 13 | TABLE ACCESS BY INDEX ROWID | FC_METER | 1 | 33 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    9 - access("SM"."SEGID"=584)
    10 - filter(TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')>=TO_DATE(' 2002-01-01 00:00:00',
    'syyyy-mm-dd hh24:mi:ss') AND TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')<TO_DATE(' 2003-01-01
    00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND ("V"."ADJUSTED_TOTAL_VOLUME"<>0.0 OR
    ROUND("V"."ADJUSTED_TOTAL_ENERGY",3)<>0.0))
    11 - access("V"."METER_NUMBER"="SM"."METER_ID")
    12 - access("M"."METER_NUMBER"="V"."METER_NUMBER")
    Here is the Insert explain plan:
    Plan hash value: 4282493455
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | INSERT STATEMENT | | 39 | 2886 | | 7810 (1)| 00:01:34 |
    | 1 | LOAD TABLE CONVENTIONAL | PEAKDAY_TEMP_CONSECUTIVEVALUES | | | | | |
    | 2 | HASH GROUP BY | | 39 | 2886 | | 7810 (1)| 00:01:34 |
    |* 3 | HASH JOIN RIGHT SEMI | | 39 | 2886 | | 7809 (1)| 00:01:34 |
    | 4 | VIEW | VW_NSO_1 | 1 | 10 | | 2 (0)| 00:00:01 |
    | 5 | NESTED LOOPS | | 1 | 27 | | 2 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | PK_PEAKSPEAKDAYSEG | 1 | 4 | | 0 (0)| 00:00:01 |
    | 7 | TABLE ACCESS BY INDEX ROWID | PEAKSPEAKDAYSEG$METERMASTER | 1 | 23 | | 2 (0)| 00:00:01 |
    |* 8 | INDEX RANGE SCAN | IDX_PDSEG$MTR_SEGID | 1 | | | 1 (0)| 00:00:01 |
    | 9 | VIEW | PEAKS_RP_PEAKDAYMETER | 3894 | 243K| | 7807 (1)| 00:01:34 |
    | 10 | SORT UNIQUE | | 3894 | 349K| 448K| 7807 (1)| 00:01:34 |
    | 11 | NESTED LOOPS | | | | | | |
    | 12 | NESTED LOOPS | | 3894 | 349K| | 7722 (1)| 00:01:33 |
    | 13 | TABLE ACCESS FULL | FC_METER | 637 | 21021 | | 18 (0)| 00:00:01 |
    |* 14 | INDEX RANGE SCAN | FC_FFMTRDLY_IDX11 | 6 | | | 10 (0)| 00:00:01 |
    |* 15 | TABLE ACCESS BY INDEX ROWID| FC_FFMTR_DAILY | 6 | 354 | | 12 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("METER_ID"="METER_ID")
    6 - access("GS"."SEGID"=584)
    8 - access("SM"."SEGID"=584)
    14 - access("M"."METER_NUMBER"="V"."METER_NUMBER")
    filter(TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')>=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
    hh24:mi:ss') AND TO_DATE(TO_CHAR("V"."MEASUREMENT_DAY"),'YYYYMMDD')<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
    hh24:mi:ss'))
    15 - filter("V"."ADJUSTED_TOTAL_VOLUME"<>0.0 OR ROUND("V"."ADJUSTED_TOTAL_ENERGY",3)<>0.0)
    As you can see there is a real spike in the cost, and yet the only change was the addition of the insert into the GTT. From what I can ascertain, the solution may be alternate SQL, or finding some way to push Oracle into running the query as it did in the first (non-insert) execution.
    I tried creating a simple view out of the SELECT statement to see if that would precompile it but in the end it ran exactly the same way.
    The next thing that I am going to try is removing the PEAKS_RP_PEAKDAYMETER view by going more direct.
    I have not done the trace file analysis yet. Should I still do that?
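    One lighter check before full trace-file analysis is to capture the actual runtime plan and row counts for both the bare SELECT and the INSERT. A sketch, assuming 10g or later where DBMS_XPLAN.DISPLAY_CURSOR is available:

    ```sql
    -- Run the statement with runtime statistics collected, then pull the
    -- actual (not estimated) plan for the most recent cursor in the session.
    ALTER SESSION SET statistics_level = ALL;
    -- ... run the SELECT, or the INSERT into the GTT, here ...
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    ```

    Comparing estimated versus actual row counts in that output usually shows which step the optimizer misjudged when the insert changed the plan.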

  • DBMS_MONITOR and trcsess, inserts slow.

    We have a multi-tier application connecting to the database using a connection pool. There is one DB that suddenly started performing very slowly, especially on inserts, and I wanted to pin down exactly why this is happening. CPU is pegged to the max while the inserts are happening. Since there are many threads running selects and doing batch inserts, I was unable to trace or monitor any one specific session. So I wanted to use this to gather everything happening on the DB and analyze exactly what is going on.
    EXECUTE DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits => TRUE, binds => FALSE, instance_name => 'DB03');
    After we ran for a while I had a bunch of .trc files in the user_dump dir and now I wanted to run trcsess to combine all the .trc files so that I can run it through TKPROF.
    trcsess output=testoutput.trc service=DB03.domainname.com
    This does not give me anything. The output file seems empty. I also tried
    trcsess output="testoutput.trc" service="SYS$USERS"
    trcsess output=testoutput.trc action=INSERT
    etc
    and nothing seems to work. Anyone knows what I'm doing wrong?
    Also any other suggestion on trying to trace the problem or inputs would be great. Since I may not have thought of all possible scenarios.

    I used SERVICE_NAME.
    select service_name from v$session where sid = sys_context('userenv','sid');
    SYS$USERS
    SQL> show parameter service_names
    NAME                                 TYPE        VALUE
    service_names                        string      DB03.testy.com
    00:25:07 [udump] oracle@db-prod32 865$ lsnrctl services
    LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 18-DEC-2007 00:28:02
    Copyright (c) 1991, 2005, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.32)(PORT=1521)))
    Services Summary...
    Service "DB03" has 1 instance(s).
      Instance "DB03", status UNKNOWN, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:1328 refused:0
             LOCAL SERVER
    Service "DB03.testy.com" has 1 instance(s).
      Instance "DB03", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:5 refused:0 state:ready
             LOCAL SERVER
    Service "DB03_XPT.testy.com" has 1 instance(s).
      Instance "DB03", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:5 refused:0 state:ready
             LOCAL SERVER
    The command completed successfully.
    Earlier I was running all this from the database itself; I ssh into the box and start the monitor, etc., from the DB itself.
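    Two things may help here (a sketch, assuming the DB03 instance from the listing above): check which service names the sessions actually report, since that exact string is what trcsess filters on, and remember to switch the database-wide trace off once the files are collected:

    ```sql
    -- Service names as the sessions themselves record them in the trace files
    SELECT DISTINCT service_name, module, action FROM v$session;

    -- Database-wide tracing is expensive; disable it when finished
    EXEC DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name => 'DB03');
    ```

    If the sessions show SYS$USERS rather than DB03.domainname.com, that is the service value trcsess needs.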

  • Insert slow in Oracle RAC

    Dear friends,
    I implemented Oracle RAC on 2 Windows server nodes, with iSCSI as the storage disk for ASM. Select performance is almost twice that of a standalone Oracle machine, and the load is split almost equally. But when I tried INSERT, UPDATE and DELETE, performance degraded and is poorer than on the standalone machine. Is this due to waiting for disk access?
    Regards,
    Aravind K R

    Aravind K R wrote:
    Dear friends,
    I implemented Oracle RAC on 2 Windows server nodes, with iSCSI as the storage disk for ASM. Select performance is almost twice that of a standalone Oracle machine, and the load is split almost equally. But when I tried INSERT, UPDATE and DELETE, performance degraded and is poorer than on the standalone machine. Is this due to waiting for disk access?
    It may or may not be due to disk access. In general, for selects, RAC gives better performance, but for DMLs that kind of scalability and speedup generally won't be there. Since you are on RAC, there are lots of interconnect messages shared between the nodes, so besides looking at disk access, make sure to check the performance of the HBAs since they, in addition to the disks, are among the most important points that can affect performance.
    HTH
    Aman....
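    A rough way to test this is to compare the global cache (interconnect) wait events against the I/O waits. A sketch; these views are per instance, so run it on each node:

    ```sql
    -- 'gc ...' events point at interconnect/cache-fusion overhead,
    -- 'db file ...' events point at disk access.
    SELECT event, total_waits, time_waited
      FROM v$system_event
     WHERE event LIKE 'gc%' OR event LIKE 'db file%'
     ORDER BY time_waited DESC;
    ```

    If the gc events dominate during DML, the interconnect rather than the iSCSI storage is the first place to look.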

  • Ellapsed time too much in tkprof output

    Hi All,
    I don't know exactly how to interpret a tkprof output file, but I have a performance problem inserting data into one table: it takes around 1 minute where before it was 10 seconds. I traced the program, analyzed the output with tkprof, and got the following portion:
    insert /*+ APPEND +*/ into T_NAME(COL1,  COL2, COL3, .....)
    values
    (:s1 ,:s2 ,:s3 ,:s4 ,:s5 ,:s6 ,:s7 ,:s8 ,:s9 ,:s10 ,:s11 ,:s12 ,:s13 ,:s14 ,
    :s15 ,:s16 ,:s17 ,:s18 ,:s19 ,:s20 ,:s21 ,:s22 ,:s23 ,:s24 ,:s25 ,:s26 ,
    :s27 ,:s28 ,:s29 ,:s30 ,:s31 ,:s32 ,:s33 ,:s34 ,:s35 ,:s36 ,:s37 ,:s38 ,
    :s39 ,:s40 ,:s41 ,:s42 ,:s43 ,:s44 ,:s45 ,:s46 ,:s47 ,:s48 ,:s49 ,:s50 ,
    :s51 ,:s52 ,:s53 ,:s54 ,:s55 ,:s56 ,:s57 ,:s58 ,:s59 ,:s60 ,:s61 ,:s62 ,
    :s63 ,:s64 ,:s65 ,:s66 ,:s67 ,:s68 )
    call     count    cpu    elapsed     disk    query   current     rows
    Parse        2   0.00       0.00        0        0         0        0
    Execute      3   2.94     292.24    12144     1501     57125     4728
    Fetch        0   0.00       0.00        0        0         0        0
    total        5   2.94     292.24    12144     1501     57125     4728
    Misses in library cache during parse: 2
    Misses in library cache during execute: 3
    Optimizer mode: ALL_ROWS
    Parsing user id: 103 (USERNAME)
    Rows Execution Plan
    0 INSERT STATEMENT MODE: ALL_ROWS
    When interpreting this, I see huge values in the elapsed and disk columns for the execute phase: the statement was executed 3 times and took 292 seconds in total, about 97 seconds per execution. I think this is what is making my insert slow.
    So, if I'm not wrong, how do I avoid this insertion problem and reduce the elapsed time? What is causing the slow table inserts?
    Please help, because it is affecting our business.
    Thanks for your help
    Raitsarevo

    raitsarevo wrote:
    Hi,
    Will gathering statistics affect performance? I mean, when I gather statistics on my table now, will this affect operations on the table while they run? Will the table be locked, and the indexes too? Can users work on the table during stats gathering?
    Can anybody give me a script to gather stats for a partitioned table, please?
    Gathering statistics is a good idea in general, but it's very unlikely to help in your particular case. The data needs to be inserted into the table and the indexes need to be maintained; this is independent of any statistics. Still, it's a good idea in general to refresh statistics if, e.g., bulk inserts increased the size and number of rows significantly.
    Have you followed up the advices already given so far regarding further checks and running your statement with tracing enabled at a higher level?
    You can enable this using the following instead of using sql_trace = true:
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    and switch it off like that:
    ALTER SESSION SET EVENTS '10046 trace name context off';
    For more information regarding this, e.g. enabling trace in another session, see e.g. here:
    http://www.juliandyke.com/Diagnostics/Trace/EnablingTrace.html
    raitsarevo wrote:
    Hi guys,
    Reading output from Enterprise Manager, I found that this insert statement is consuming too much "db file sequential read". I know that is causing a wait event. So my question is how to reduce this consumption.
    Thanks
    Raitsarevo.
    That's very likely caused by the required index maintenance, and there is not much you can do about it apart from dropping the indexes. As already mentioned by Jonathan, you might be hitting bugs in ASSM space management, therefore it would also be good to know whether the tablespace the object resides in uses ASSM or not (check the columns EXTENT_MANAGEMENT, ALLOCATION_TYPE and SEGMENT_SPACE_MANAGEMENT of DBA_TABLESPACES).
    Try to generate the extended trace as advised and post the tkprof output here.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
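    The ASSM check mentioned above can be done directly. A sketch; substitute the tablespace your table actually lives in for the hypothetical name:

    ```sql
    SELECT tablespace_name, extent_management,
           allocation_type, segment_space_management
      FROM dba_tablespaces
     WHERE tablespace_name = 'USERS_DATA';  -- hypothetical tablespace name
    ```

    SEGMENT_SPACE_MANAGEMENT = 'AUTO' means the tablespace uses ASSM.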

  • How to split one scene in two in iMovie HD 06

    Hi there, is iMovie HD 06 able to split one scene in two? I am trying to insert slow motion for part of the scene but not the whole. Can I do that? Are there alternatives?

    Easy. Select the clip you want to split, therefore highlighting it, move the playhead to the point that you want to start your slo mo, go up to EDIT and select Split Video at Playhead. Repeat for the end point of that slo mo bit and you are left with three separate clips.

  • Generate random oscillating waves

    Hello,
    I'm stuck on something that may be pretty simple to do... I'm trying to get waves that look like this: http://www.scientificamerican.com/media/inline/000C600B-7F33-1C71-9EB7809EC588F2D7_arch1.gif to show up on a waveform graph.
    The thing is I need to set an upper and lower limit and then have them go up and down randomly. I've looked in the simulate signal and I can't find anything that'll make it oscillate randomly between the two limits that I set.
    Thank you for your time.
    -Tom

    I agree with Lynn - pure sine waves will give you a good indication of whether your code is working properly and not corrupting the signal in any way; it's much tougher to spot problems in what is essentially a random signal.  You can also insert slow square waves (like 0.1 Hz) to see if your code correctly passes both DC and fast edges.
    Steve
    Visit the NI Biomedical User Group at:
    www.ni.com/biomedusers

  • Powermac 6100 Dead Mouse

    I have a 6100 and the adb mouse has stopped moving the cursor. The keyboard still works and I can open files etc. I have tried several mice, and several keyboards. I am fairly sure it is not an extensions problem as I have booted up from 2 different hard drives. Ok, here is what may have happened, (prepare to insert slow shaking of head here), it might have gotten wet due to a leaky humidifier on the furnace prior to the onset of problems. Is there a fuse or something that could short out on the adb bus? Why would the keyboard function but not the mouse?
    Any thoughts? I still use the 6100 for the kids and as a printer server for using my old serial laserprinter with a G4 Imac.
    Thanks, Colin

    MC,
    I think Grant may be right about buying another mouse but if that does not work, you will have a true mystery on your hands.
    You ask "what kind of idiot lets his computer get wet?"
    I will tell you. A graduate student brought her laptop into the store and said that two years of her phd thesis was on her laptop hard drive and it would not work. The tech opened the laptop and water ran out. The HDD would not work in a repair sled so he opened the hard drive and water ran out. He started up the drive again and recovered the data. He also made a backup copy.
    The store is long gone but the memories live on. What happened? She would take her laptop out to the car, leave it in the freezing cold, haul it into a warm house, back to the car, into school, around campus and home again. It had filled up with condensation. I think she learned her lesson about backing up files.
    Jim

  • SQL: System locks up and runs slow after performing simple DML record insert

    SQL Version:  2008 R2
    I am having a serious problem.  I ran the following code to perform a simple table record insert, which ran successfully.  However, after running this code I could no longer access the related table; I couldn't even run a query against it.
    I tried to delete all the records and that wouldn't work either.  When attempting to run any statement (e.g. SELECT * FROM vPCCertificateTypes) against this table, SQL Server locks up and never returns anything, with no error messages.  I have to stop the query manually.  Now the entire SQL Server instance is running slow.
    Any help would be greatly appreciated.  The code I ran to originally insert the records is below.
    Regards,
    Bob Sutor
    CODE: 
    Begin TRAN
     INSERT INTO vPCCertificateTypes (VendorGroup, CertificateType, Description, Active, Category)
     SELECT HQCo, 'MBE', 'Minority-owned Business Enterprise', 'Y', 'Affirmative Action' 
     FROM HQCO
     Where HQCo IS NOT NULL
      AND HQCo <> 100
      AND HQCo <> 99
     IF @@ROWCOUNT <> 44 ROLLBACK ELSE COMMIT
    Bob Sutor

    The problem was an open uncommitted transaction against the table, as you suspected.  I ran your code and it cleared the transaction.  In the past I would intentionally leave out the COMMIT statement, and then, if the transaction ran fine, I would simply highlight and run 'COMMIT' to close the transaction.  Apparently that is not the correct approach; I'll use your suggested code from now on.
    Thanks for the help.
    Bob Sutor
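    For anyone hitting the same blocking symptom, a quick way to spot an open transaction in SQL Server 2008 R2 is (sketch):

    ```sql
    -- Oldest active transaction in the current database
    DBCC OPENTRAN;

    -- Sessions holding open transactions (legacy view, still present in 2008 R2)
    SELECT spid, open_tran, status, loginame
    FROM sys.sysprocesses
    WHERE open_tran > 0;
    ```

    Once the offending spid is identified, committing or rolling back in that session (or killing it) releases the locks.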

  • In Adobe Reader (11.0) the typing is very slow when I insert sticky notes.

    I bought a new laptop and I just installed the latest version of Adobe Reader.
    When I insert sticky notes, typing is very slow. This doesn't happen with the
    other program. I've uninstalled and reinstalled. I still have the same problem.

    This phenomenon can happen with PDF files that have errors in them.  Adobe Reader or Acrobat may be able to fix these errors, but if you close the application, it will ask to save the corrected file.  If you save it once, that should no longer occur when you open it again.
    This is just one explanation, but it doesn't explain why that happens with every PDF you have.
    Test: does it also happen with a PDF that is not supposed to contain any errors: http://helpx.adobe.com/pdf/adobe_reader_reference.pdf ?

  • Slow inserts

    We suddenly started seeing slow inserts on some test boxes. All boxes are on RedHat 5 and 10.2.0.1.
    We have a multi-tier web application connecting to the DB using a connection pool and inserting about 100,000 records into a table. It first does a select and, if the row is not found, an insert. So, to take the other pieces out of the puzzle, I wrote a small PL/SQL block which does a select and then an insert based on some randomly generated data.
    When I ran this on my test system (db1) it completed in about 37 seconds and CPU usage was very low. When I ran it on the system that was reported to be slow (db2), it ran for about 30 minutes and the CPU was pegged.
    Then we left the system alone, and when I tried today on db2 it completed in about 35 seconds. This was weird. So I re-created the schema (dropped the user) and ran the test again; it was slow again and the CPU was pegged.
    So I thought that maybe, since the system had not been touched (no inserts or any activity), the stats jobs had run, etc., and that could have been the reason. So I ran stats on the database and the schema, but no use; it was still slow.
    So then I ran the PL/SQL block on another system (db3) where the schema was created but there had been no inserts or any activity for at least a few days. It was extremely slow on this box as well and the CPU was pegged.
    All these boxes have the same table, indexes, they are all identical. Any reason why this could be happening and what else should I be looking for?
    SET timing on;
    DECLARE
       uname    NVARCHAR2 (60);
       primid   NVARCHAR2 (60);
       num      NUMBER;
    BEGIN
       FOR i IN 1 .. 100000
       LOOP
          uname := DBMS_RANDOM.STRING ('x', 20);
          primid := DBMS_RANDOM.STRING ('x', 30);
          DBMS_OUTPUT.put_line (uname || ' ==> ' || primid);
          SELECT COUNT (*)
            INTO num
            FROM test
           WHERE (deldate IS NULL)
             AND id = 0
             AND (   primid = primid
                  OR UPPER (username) = uname
                  OR uiname = uname);
          INSERT INTO test
               VALUES (0, uname, uname, 1,
                       uname, primid);
          IF MOD (i, 200) = 0
          THEN
             COMMIT;
             DBMS_OUTPUT.put_line ('Commited');
          END IF;
       END LOOP;
    END;
    /

    So I used
    execute sys.dbms_support.start_trace_in_session(175, 9629, waits=>true, binds=>false);
    Other than a couple of these, I don't see any other wait events. Should I be looking for something else, or am I doing the trace wrong? I think this is equivalent to what you suggested.
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      log file sync                                   1        0.00          0.00
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      log file switch completion                      3        0.00          0.00
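    The sys.dbms_support call is the older interface; on 10.2 the supported equivalent is DBMS_MONITOR. A sketch, using the same SID and serial# as above:

    ```sql
    -- Level-8-style trace (waits, no binds) for one session
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 175, serial_num => 9629, waits => TRUE, binds => FALSE);
    -- ... reproduce the slow run, then switch tracing off ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 175, serial_num => 9629);
    ```

    The resulting trace file in user_dump_dest can then be run through tkprof as before.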

  • Insert too slow

    Hi DB Gurus,
    Our application inserts 60-70K records into a table in each transaction. When multiple sessions are open on this table, users face performance issues: application response is too slow.
    Regarding this table:
    1.Size = 56424 Mbytes!
    2.Count = 188,858,094 rows!
    3.Years of data stored = 4 years
    4.Average growth = 10 million records per month, 120 million each year! (has grown 60 million since end of June 2007)
    5.Storage params = 110 extents, Initial=40960, Next=524288000, Min Extents=1, Max Extents=505
    6.There are 14 indexes on this table all of which are in use.
    7. Data is inserted through bulk insert
    8. DB: Oracle 10g
    The sheer size of this table (56G) and its rate of growth may be the culprits behind the performance issue. But to ascertain that, we need to dig out more facts so that we can decide conclusively how to nail this issue.
    So my questions are:
    1. What other facts can be collected to find out the root cause of bad performance?
    2. Looking at given statistics, is there a way to resolve the performance issue - by using table partition or archiving or some other better way is there?
    We've already thought of dropping some indexes, but it looks difficult since they are used in reports based on this table (along with other tables).
    3. Any guess what else can be causing this issue?
    4. How many records per session can be inserted in a table? Is there any limitation?
    Thanks in advance!!
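    On question 2, with 14 indexes the per-row index maintenance during each bulk insert is a prime suspect. Before dropping anything, 10g can record which indexes are actually used; a sketch, where my_index is a placeholder name:

    ```sql
    -- Turn on usage monitoring for a candidate index
    ALTER INDEX my_index MONITORING USAGE;

    -- Later, check whether it was touched (query as the index owner)
    SELECT index_name, monitoring, used
      FROM v$object_usage;
    ```

    On question 4, there is no fixed per-session limit on inserted rows; the practical constraints are undo/redo generation and index maintenance.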

    You didn't like the responses from your same post - DB Performance issue

  • BLOB column inserts are slow

    Hi All,
    Have a table with 7 columns where 4 columns are of Varchar2 type, 2 columns are of NUMBER type and 1 column is of type BLOB.
    I am inserting the values into the table from a Java program. Insertion into the VARCHAR2 and NUMBER columns is very fast, but insertion into the BLOB column is dead slow (the BLOB values are about 10KB each).
    Please help me speed up the BLOB inserts.
    Regards/Sreekeshava S

    Sreekeshava S wrote:
    Running JAVA program in the same server as that of DB.
    Connecting how? IPC? TCP? Dedicated server? Shared server?
    Calling Oracle how? Doing a SQL statement prepare per insert? Reusing the SQL cursor handle? Binding variables?
    And inserting 250 records/sec during peak load and 50 records/sec during normal load, where each record is about 10K (the BLOB column size).
    And what is slow? You have NOT yet provided ANY evidence that points to the actual INSERT being slow.
    As I have already explained, there are a number of layers from client to server - and any, or all of these, could be contributing to the problem.
    Use your web browser and look up what instrumentation is. Apply it. Instrument your code. On the client. On the server. So you have evidence (call stats and metrics) to use to determine what and where the performance problem is. And not have to guess - and like most developers point your finger at the database in the false belief that your client code, client design, and client usage of the database, are all perfect.

  • Insert query slows in Timesten

    Hello DB Experts ,
    I am inserting bulk data with the ttBulkCp command; my PermSize is 20 GB. The inserts get slow. Can anyone help me maximize throughput with ttBulkCp?
    Regards,

    Hi Chris, thanks for your reply.
    I have uncommented the memlock parameter and it is working now. I will not use a System DSN from now on; thanks for that suggestion.
    1.    The definition of the table you are loading data into, including indexes.
    My comments: table definition. The table does not have any primary key or any indexes.
    create table TBLEDR
    (snstarttime number,
    snendtime number,
    radiuscallingstationid number,
    ipserveripaddress varchar2(2000) DEFAULT '0',
    bearer3gppimsi varchar2(2000) DEFAULT '0',
    ipsubscriberipaddress  varchar2(2000),
    httpuseragent  varchar2(2000) DEFAULT '0',
    bearer3gppimei  varchar2(256) DEFAULT '0',
    httphost varchar2(2000) DEFAULT '0',
    ipprotocol  varchar2(256) DEFAULT '0',
    voipduration varchar2(256) DEFAULT '0',
    traffictype varchar2(256) DEFAULT '0',
    httpcontenttype varchar2(2000) DEFAULT '0',
    transactiondownlinkbytes number DEFAULT '0',
    transactionuplinkbytes number DEFAULT '0',
    transactiondownlinkpackets number  DEFAULT '0',
    transactionuplinkpackets number DEFAULT '0',
    radiuscalledstationid  varchar2(2000) DEFAULT '0',
    httpreferer varchar2(4000) DEFAULT '0',
    httpurl varchar2(4000) DEFAULT '0',
    p2pprotocol  varchar2(4000)  DEFAULT '0'
    );
    2.    Whether the indexes (if any) are in place while you are loading the data.
    My comments: No indexes are there.
    3.    The CPU type and speed.
    Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz .32 core .
    4.    The type of disk storage you are using for the filesystem containing the database.
    We are not using any external storage. we are using linux ext3 filesystem.
    5.   The location of the CSV file that you are loading - is it on the same filesystem as the database files?
    My comment - the database files reside on the /opt partition, and yes, the CSV files are placed on the same filesystem; those files are in /opt/Files.
    6.   The number of rows of data in the CSV file.
    My comment - each CSV file has around 50,000 records.
    7.   Originally you said 'I am only getting 15000 to 17000 TPS'. How are you measuring this? Do you mean TPS (i.e. commits per second) or 'rows inserted per second'? Note that by default ttBulkCp commits every 1024 rows, so if you are measuring commits then the insert rate is 1024 x that.
    My comment - I now time it at the bash prompt: when I run ./ttBulkCp I note down the start time, and when the command completes I note down the end time, and from that I calculate the TPS. Further to this, I have one file for ttBulkCp with 50,000 records, of which around 38,000 records succeed, and that is how I am calculating TPS.

  • Insert statements using function are slow in 10.2.0.3

    I am migrating our data warehouse server to newer more powerful hardware, both server and disk are far superior. As part of the migration the oracle rdbms version is also changing from 10.1.0.3 to 10.2.0.3 on RHEL4.
    We have some jobs that run and populate some fact tables, and when the insert runs on the new server it is at least 6 times slower than on the older hardware. The job that does the inserts uses some functions to populate some fields, which quite possibly seem to be the culprit.
    I can insert many rows quickly into the same tablespace with all of the same storage parameters, and this goes really fast, but when we call the functions it slows down to a crawl.
    Is it possible this could be a bug in the 10.2.0.3 RDBMS version?

    You have not provided substantial amounts of information required to help you.
    Here is a partial list.
    1. Has anyone collected system statistics on the new system?
    http://www.morganslibrary.org/reference/system_stats.html
    2. Has anyone produced Explain Plan reports with DBMS_XPLAN to see if the plans are different?
    http://www.morganslibrary.org/reference/explain_plan.html
    3. Has anyone run StatsPacks to compare the systems?
    http://tahiti.oracle.com
    We can't help you without knowing what is different.
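    As a starting point for items 1 and 2, the checks can look like this (a sketch; fact_table, my_function and staging_table are hypothetical names standing in for your actual objects):

    ```sql
    -- 1. Gather no-workload system statistics on the new server
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'NOWORKLOAD');

    -- 2. Capture the plan for the slow insert, on both old and new servers
    EXPLAIN PLAN FOR
      INSERT INTO fact_table
      SELECT my_function(col) FROM staging_table;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    ```

    Diffing the two DBMS_XPLAN outputs is usually the fastest way to see whether the 10.2.0.3 optimizer chose a different plan or whether the function calls themselves got slower.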
