Slow database insert condition

I have a situation where we have multiple sessions inserting data into a partitioned table. At any given time we could have multiple sessions. The problem we are facing is that for some reason the insert takes really long (say 10 seconds or greater) for some transactions. Most of the time the inserts are sub-second transactions. This slow insert condition has no pattern. The only thing I have noticed is that it occurs when there are a lot of sessions open (50-60).
My question is: what can I do in a 9i instance to help pinpoint why a session is taking this long to insert? These are simple insert statements and there is no table locking involved.
Your help is greatly appreciated.

Since I have about 50-60 sessions doing inserts and this is a very busy system, and the problem might not occur for 10 days, session tracing seems impractical.
I am looking for a solution where the DB does some internal recording: if an insert takes 10 seconds or more, the DB should record some stats. Is there anything like that?
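There is no built-in threshold recorder in 9i (Active Session History only arrived in 10g), but one low-cost workaround is a periodic sampling job that snapshots long-running calls into a table you can review later. A minimal sketch, assuming a suitably privileged account; all object names here are made up:

-- Log table for samples of long-running calls
CREATE TABLE slow_call_log (
  sample_time  DATE,
  sid          NUMBER,
  serial_no    NUMBER,
  sql_hash     NUMBER,
  event        VARCHAR2(64),
  active_secs  NUMBER
);

-- Record any user session whose current call has been active for 10 seconds
-- or more, together with the event it is currently waiting on.
CREATE OR REPLACE PROCEDURE log_slow_calls IS
BEGIN
  INSERT INTO slow_call_log
  SELECT SYSDATE, s.sid, s.serial#, s.sql_hash_value, w.event, s.last_call_et
  FROM   v$session s, v$session_wait w
  WHERE  s.sid = w.sid
  AND    s.status = 'ACTIVE'
  AND    s.type = 'USER'
  AND    s.last_call_et >= 10;
  COMMIT;
END;
/

-- Run the sampler every 10 seconds via DBMS_JOB (available in 9i)
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(:jobno, 'log_slow_calls;', SYSDATE, 'SYSDATE + 10/86400');
  COMMIT;
END;
/

Note that a stored procedure selecting from V$ views needs direct grants on the underlying V_$ views (grants through a role are not enough), and the captured sql_hash can be joined to V$SQL.HASH_VALUE afterwards to see the statement text. Alternatively, a logon trigger enabling event 10046 trace for the insert sessions would give full wait detail, at the cost of a lot of trace files on a busy system.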

Similar Messages

  • How to use BAPI_RE_CN_CHANGE to insert conditions

    Hi,
    I need to insert conditions in an RE contract. I found that the BAPI BAPI_RE_CN_CHANGE could do this, but it's not working: the BAPI return is OK, but no records are inserted in the contract.
    Can anyone send me the parameters that I have to pass to the BAPI, or example code for it?
    Thanks a lot.

    Hello,
    I have solved the problem; I was sending an invalid parameter to the BAPI.
    Now the condition is being inserted into the contract.
    Thanks

  • Slow database access across network

    I have a PostgreSQL database set up on a server machine and have written a Java program in NetBeans that accesses it. The problem I have is that running the jar created by NetBeans from any other machine on the network gives really slow database access, but running the program from NetBeans itself is fine. Running the jar on the server machine is also fine.
    Anyone got any idea as to why this could be?
    Tom

    I don't know if this will help; I'm just wading into writing my own Java apps to hit a SQL 2K db. We purchased a SQL2k/JSP-driven backend on which we extended their OM to meet our needs. It was built as an extranet product but ran very poorly across the 'Net.
    The biggest annoyance was not the actual transaction time of the app but the delay between the end user clicking a button and the start of the transaction.
    After much testing I determined it was the JSP.
    So what we have done to alleviate the issue is to have multiple regional Tomcats that 'throw' their transactions to a central colocation where the SQL servers sit (so the Tomcats do the JSP management, in a sense, locally to the end user). For big clients we actually place a Linux/Tomcat box at the edge of their firewall. For whatever reason (I'm far too busy to determine the how, now that I get the responses I need), this has completely eradicated the annoying pause between user interaction and the actual transaction to the DB server. Extended testing showed the SQL server running in the sub-half-second range to process a command, but the Tomcat rendering could take up to 2 to 5 seconds in the worst case for no discernible reason. Moving the Tomcat front ends to the network segments where the clients are solved it.
    Of course I'm digging into Swing now too, and it has some slowness that the Eclipse zealots say SWT fixes, but I haven't the depth or experience to properly judge the camps' arguments yet.
    hth...

  • Sql queries - slow database

    hi all,
    is there a general but specific way to generate a list of SQL queries that are consuming the most resources and causing slow database performance?
    thanks.

    There are very few ways in this world that are "general but specific".
    You can use "general" tools like StatsPack and AWR.
    You can write "general" queries on V$SQL, V$SQLAREA, V$SQLSTATS.
    You can use "specific" methods like Tracing.
    You can use "specific" methods like Client side (or Application Server side) Logs.

  • Error - '.' expected - while database insert thru JSP & javabeans & JDBC

    I get the following error when I try to compile the code below in JDeveloper 10.1.2
    Error : '.' expected
    ===================
    I have an error in my code. I am trying to do a simple database insert program using JavaBeans and JSP. I get the value to be inserted through a text box on the JSP page and would like it to be inserted into the database using a bean. The connection to the database and the MySQL query are in the Java file.
    Here is the code.
    <%@page contentType="text/html"%>
    <%@page pageEncoding="UTF-8"%>
    <%@ page language="Java" import="java.sql.*" %>
    <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
    <%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>
    <html>
    <head>
    </head>
    <body>
    <form name="form1" action="beancode" method="POST">
    Emp ID: <input type="text" name ="emplid"> <br><br><br>
    <input type = "submit" value="Submit">
    <jsp:useBean id="sampl" class="beancode" scope="page">
    <jsp:setProperty name="sampl" property="*"/>
    </jsp:useBean>
    </form>
    </body>
    </html>
    I know I might have made a mistake here in using the bean. Here is the Java code which does the insert part.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    /** @author Trainees */
    public class beancode {
        private String employid;
        private Connection con = null;
        private ResultSet rs = null;
        private PreparedStatement st = null;

        /** Creates a new instance of beancode */
        public beancode() {
            try {
                Class.forName("org.gjt.mm.mysql.Driver");
                Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/sample?user=user&password=password");
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }

        public void setemployid(String empid) {
            employid = empid;
        }

        public String getemployid() {
            return (employid);
        }

        public void insert() {
            try {
                String s1 = "insert into samp values('" + employid + "')";
                st = con.prepareStatement(s1);
                st.executeUpdate();
                st.clearParameters();
                st.close();
            } catch (Exception m) {
            }
        }
    }

    It's pretty hard to spot any errors the way it's currently formatted. But you're trying to call beancode when submitting your form, and beancode isn't a servlet. Try to find a working example, run it, and then change it to implement your requirements.
    The following example shows how to use the JSTL SQL tags: http://www.oracle.com/technology/sample_code/tech/java/codesnippet/jsps/jstlsql.html. If you want to do an insert when a user submits a form, you need 2 pages. The first will contain your form, the second will insert the record.
    (once you've got that running, look for some examples using an mvc framework, and how to use bind variables with jdbc)

  • Cross Database Insert on rows

    Is there a standard way to insert rows into a table and get the last inserted value for the identity column back ?
    e.g. Oracle doesn't support IDENTITY or AUTONUMBER on insert; a sequence is required instead. This creates a huge problem for writing cross-database insert code.
    Most of the time the last value inserted for the identity column has to be returned so it can be used later in the database transaction.
    Say User wants to insert data into an User table which has the following columns...
    user_id int (PK) Auto generated number
    firstname VARCHAR(20)
    lastname VARCHAR(20)
    In Sybase or SQL Server the user would just call
    insert into User (firstname,lastname) values ('John', 'Doe') and then call select @@identity to get the last value of the PK inserted.
    However, in Oracle the insert code first needs to know the name of the sequence from which the value of the PK is to be obtained. Then it has to use that value to insert the row into the table.
    If the sequence is Seq_User
    select Seq_User.nextval from dual and then
    insert into User (user_id,firstname,lastname) values (last_value, 'John', 'Doe')
    Now if a user has written code using JDBC for Sybase and the database is migrated to Oracle, he has to go and change all his inserts!
    Is there a way to write cross database code so that user does not have to worry about changing his code for all the inserts?
    Any help is greatly appreciated.
    Thanks,

    This has been discussed here already.
    As far as I know there is no standard way.
    Maybe you can make your own "standard" solution by putting your logic in a stored procedure which returns the new ID value.
    So you can move all DBMS specific stuff into the stored procedure.
    Of course there's a need of adapting this to each target DBMS.
    Can you reduce the number of needed SPs by making them more generic?
    A very similar solution, if you like it more, would be to make generic methods in a base class or interface and override/reimplement them for each target DBMS.
    There could be good use of inheritance and polymorphism in this task, which stored procedures don't provide (except for those DBMSes that allow Java-coded stored procedures, but that would be platform dependent again).
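    Purely as an illustration of the stored-procedure idea above (the table, sequence and procedure names are invented), the Oracle implementation could hand the generated key back with a RETURNING clause, so the calling code never needs to know the sequence name:

    CREATE SEQUENCE seq_user;

    CREATE OR REPLACE PROCEDURE add_user (
      p_firstname IN  app_user.firstname%TYPE,
      p_lastname  IN  app_user.lastname%TYPE,
      p_user_id   OUT app_user.user_id%TYPE
    ) IS
    BEGIN
      -- The sequence supplies the key; RETURNING sends it straight back to the caller.
      INSERT INTO app_user (user_id, firstname, lastname)
      VALUES (seq_user.NEXTVAL, p_firstname, p_lastname)
      RETURNING user_id INTO p_user_id;
    END;
    /

    The JDBC code then calls the procedure (a CallableStatement with an OUT parameter) on every platform, and only the procedure body differs per DBMS.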

  • How to insert condition on PO once GR has been created

    Hi all,
    I want to allow users to insert conditions on a Purchase Order once the Goods Receipt has been created. How do I do that?
    Thanks,
    Nas

    Hi
    You can add conditions in change mode (ME22N), but it will not change your material documents that are already posted. The conditions are applicable to the balance quantity.
    You have to cancel your material documents and do a fresh GR to reflect the changes for the quantity already received.
    Regards
    Ramakrishna

  • Flexible database insert

    hi all!
    After hours of trials and of searching this forum and others, now the very important question: I use the Database Connectivity Toolkit with LabVIEW 8.5 and want to create a flexible database-insert procedure. I simply want to insert a 2-D array row by row, knowing the specific number of columns of the chosen table.
    My idea is a GUI with a database-table view and an editor function for any table in the database. What I did is: you can currently view each table in the database and change something inside. Now I buffer the whole content of the changed table, delete the content in the database, and write the changed buffered content back using the INSERT VI.
    I always get the error message that the number of columns is not equal to the number of parameters. I don't want to put table-specific information in the code, because my program should work with any table in the database.
    Is there anyone who is able to help me out? I'm going crazy over this...
    I added the VI in the attachment; unfortunately it is a bit in German/English, sorry for that.
    thx a lot !
    energymix
    Attachments:
    tabellen_admin.vi (58 KB)

    The database will return an error if you write an empty string to it, and in the code the table is written directly without parsing the table string for empty strings. I suspect that there are some extra empty cells in the table that are creating the problem. So just parse the entire table for blank cells, truncate the table to the size the database expects, and then update the data in the database.
    Hope this helps.
    With regards,
    JK
    (Certified LabVIEW Developer)

  • Duplicate messages/database inserts despite XA

    I have a simple test application which reads a single message at a time from WebSphere MQ, calls a tux service to insert into a database and writes an output MQ message (again via a tux service). The app is duplicating database inserts and MQ puts despite it being configured to use XA.
    I am running with AUTOTRAN=Y and can see the XA transactions in the trace.
    If I cause a DB failure (Oracle 11g -> Data Guard with fast-start failover), the servers exit and are restarted (this appeared to be the best way of cleaning up); however, I get duplicates, despite the XA trace implying the global transaction was rolled back:
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ...MQ Read->GPU POC TEST MSG18313<0> <<<<<<MQGET here
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ...................calling <getu>
    124530.tuxtestmachine!server.864336.1.0: gtrid x0 x4aec30de x8d3: getu started
    124530.tuxtestmachine!server.864336.1.0: gtrid x0 x4aec30de x8d3: ..Input<GPU POC TEST MSG18313>
    124530.tuxtestmachine!server.864336.1.0: gtrid x0 x4aec30de x8d3: Conver string
    124530.tuxtestmachine!server.864336.1.0: gtrid x0 x4aec30de x8d3: .............................getu<GPU POC TEST MSG18313->GPU POC TEST MSG18313>
    124530.tuxtestmachine!server.864336.1.0: gtrid x0 x4aec30de x8d3: getu complete
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ..returned from getu
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ..calling mqwrite
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ...................calling <mqwrite>
    124530.tuxtestmachine!s_mqwrite.651446.1.0: gtrid x0 x4aec30de x8d3: TRACE:at: { tpservice({"writemq", 0x10, 0x20077930, 22, 0, -2147483648, {0, -2, -1}})
    124530.tuxtestmachine!s_mqwrite.651446.1.0: gtrid x0 x4aec30de x8d3: ...in mqwrite
    124530.tuxtestmachine!s_mqwrite.651446.1.0: gtrid x0 x4aec30de x8d3: +++++Writing to MQ<GPU POC TEST MSG18313><GPU POC TEST MSG18313> <<<This is the MQPUT
    124530.tuxtestmachine!s_mqwrite.651446.1.0: gtrid x0 x4aec30de x8d3: TRACE:at: { tpreturn(2, 0, 0x20077930, 0, 0x0)
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ..insert to database
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ..............Calling dbinsert with <GPU POC TEST MSG18313><21>
    124530.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: ...................calling <db_insert>
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { /ò!(0x20019b74, 0, 0x0)
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } /ò! = -7
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { xa_open(0x2002035c, 0, 0x0)
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } xa_open = 0
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { xa_start(0x20019b74, 0, 0x0)
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } xa_start = 0
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: ...Calling dbinsert<GPU POC TEST MSG18313>
    124530.tuxtestmachine!s_ora.897234.1.0: gtrid x0 x4aec30de x8d3: Database failure : ORA-25408: can not safely replay call <<<<As a result of the DB failure
    124640.tuxtestmachine!MQXA.778310.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { xa_rollback(0x200196b4, 0, 0x0)
    124640.tuxtestmachine!TMS_ORA.1085652.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { xa_rollback(0x2001a454, 0, 0x0)
    124640.tuxtestmachine!s_mq.1016002.1.0: gtrid x0 x4aec30de x8d3: !!!Error calling<db_insert><13>
    124640.tuxtestmachine!MQXA.778310.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } xa_rollback = 0 <<<< This looks to me like a successful rollback by the TMS
    124640.tuxtestmachine!TMS_ORA.1085652.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } xa_rollback = -7 <<Resource manager unavailable, the DB is still unavailable here
    124640.tuxtestmachine!TMS_ORA.1085652.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { xa_open(0x20020c3c, 0, 0)
    124640.tuxtestmachine!TMS_ORA.1085652.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } xa_open = 0
    124640.tuxtestmachine!TMS_ORA.1085652.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: { xa_rollback(0x2001a454, 0, 0x0)
    124640.tuxtestmachine!TMS_ORA.1085652.1.0: gtrid x0 x4aec30de x8d3: TRACE:xa: } xa_rollback = 0
    $
    Anyone with any ideas as to where I go from here? Your time is much appreciated.
    Thanks.

    Hi Dan,
    Thanks for the question.
    Currently the Messaging Pattern implementation does not use/implement JTA/XA or Coherence transactions. It's much like regular JMS in that it provides its own transaction framework, which uses neither JTA nor XA. While the JMS specification provides an XA API, implementing it is optional, as are Queues and Topics.
    The original goal of the Messaging Pattern implementation was to demonstrate that JMS-like messaging is possible with Coherence. When we looked at constructing the Messaging example, most people we talked to said they didn't want JMS, and especially they didn't want transactions. They simply wanted a simple messaging implementation. So its initial goal was to demonstrate a highly scalable messaging layer. Unfortunately that meant no "real" transaction support, especially in terms of things like XA and/or JTA.
    That said, given the way the pattern is currently implemented, one could easily implement an XA resource so that it could be used as a last resource. That's probably the quickest way to get what you would like at the moment.
    Of course I'm certainly interested in implementing XA support, but we know that is non-trivial. It's not currently on our roadmap for the pattern, but it could be. Our next release will provide persistent messaging, but we could investigate XA after that.
    -- Brian
    Brian Oliver | Architect | Oracle Coherence Engineering
    Oracle Fusion Middleware

  • EXTREMELY SLOW XQUERY PERFORMANCE AND SLOW DOCUMENT INSERTS

    Resolution History
    12-JUN-07 15:01:17 GMT
    ### Complete Problem Description ###
    A test file is being used to do inserts into a schemaless XML DB. The file is inserted and then links are made to 4 different collection folders under /public. The inserts are pretty slow (about 15 per second, and the file is small), but the XQuery doesn't even complete when there are 500 documents to query against.
    The same XQuery has been tested on a competitor's system and it has lightning-fast performance there. I know it should likewise be fast on Oracle, but I haven't been able to figure out what is going on, except that I suspect the query on Oracle somehow produces a Cartesian product.
    ### SQLXML, XQUERY, PL/SQL syntax used ###
    Here is the key PL/SQL code that calls the DBMS_XDB procedures:
    CREATE OR REPLACE TYPE "XDB"."RESOURCEARRAY" AS VARRAY(500) OF VARCHAR2(256);
    PROCEDURE AddOrReplaceResource(
      resourceUri        VARCHAR2,
      resourceContents   SYS.XMLTYPE,
      public_collections IN ResourceArray
    ) AS
      b                  BOOLEAN;
      privateResourceUri path_view.path%TYPE;
      resource_exists    EXCEPTION;
      PRAGMA exception_init(resource_exists, -31003);
    BEGIN
      /* Store the document in the private folder */
      privateResourceUri := GetPrivateResourceUri(resourceUri);
      BEGIN
        b := dbms_xdb.createResource(privateResourceUri, resourceContents);
      EXCEPTION
        WHEN resource_exists THEN
          DELETE FROM resource_view WHERE equals_path(res, privateResourceUri) = 1;
          b := dbms_xdb.createResource(privateResourceUri, resourceContents);
      END;
      /* Add a link in /public/<collection-name> for each collection passed in */
      FOR i IN 1 .. public_collections.count LOOP
        BEGIN
          dbms_xdb.link(privateResourceUri, public_collections(i), resourceUri);
        EXCEPTION
          WHEN resource_exists THEN
            dbms_xdb.deleteResource(concat(concat(public_collections(i), '/'), resourceUri));
            dbms_xdb.link(privateResourceUri, public_collections(i), resourceUri);
        END;
      END LOOP;
      COMMIT;
    END;

    FUNCTION GetPrivateResourceUri(
      resourceUri VARCHAR2
    ) RETURN VARCHAR2 AS
    BEGIN
      RETURN concat('/ems/docs/', REGEXP_SUBSTR(resourceUri, '[a-zA-z0-9.-]*$'));
    END;
    ### Info for XML Querying ###
    Here is the XQuery and a sample of the output follows:
    declare namespace c2ns="urn:xmlns:NCC-C2IEDM";
    for $cotEvent in collection("/public")/event
    return
    <cotEntity>
    {$cotEvent}
    {for $d in collection("/public")/c2ns:OpContextMembership[c2ns:Entity/c2ns:EntityIdentifier
    /c2ns:EntityId=xs:string($cotEvent/@uid)]
    return
    $d
    </cotEntity>
    Sample output:
    <cotEntity><event how="m-r" opex="o-" version="2" uid="XXX541113454" type="a-h-G-" stale="2007-03-05T15:36:26.000Z" start="2007-03-05T15:36:26.000Z" time="2007-03-05T15:36:26.000Z"><point ce="" le="" lat="5.19098483230079" lon="-5.333597827082126" hae="0.0"/><detail><track course="26.0" speed="9.26"/></detail></event></cotEntity>

    19-JUN-07 04:34:27 GMT
    UPDATE
    =======
    Hi Arnold,
    you wrote -
    Please use Sun JDK 1.5 java to perform the test case.
    Right now I have -
    $ which java
    /usr/bin/java
    $ java -version
    java version "1.4.2"
    gcj (GCC) 3.4.6 20060404 (Red Hat 3.4.6-3)
    Sorry, as I told you before, I am not very knowledgeable in Java. Can you tell me what settings I need to change to make use of Sun JDK 1.5? Please note I am testing on Linux. Do I need to test this on a Sun box? Can it not be modified to run on Linux?
    Thanks,
    Rakesh
    STATUS
    =======
    @CUS -- Waiting for requested information

  • Database.insert.vi

    I would like to know if the data saved using
    database.insert.vi
    are created/allocated statically or dynamically?

    Well, I for one do not understand your question. Whatever data you have is written to the table you specify. What do you mean by dynamic or static? Are you using the NI toolkit?

  • Slow record insertion when using millions of queries in one transaction

    For test purposes, we run a table creation scenario (no indexes) under multiple conditions: we insert records in bulk mode or one by one, with or without transactions, etc.
    In general, the record insertion is OK, but not when we try to insert 1 million records one by one (sending 1 million INSERT commands) in a single transaction: in this case, the insertion is quick enough for the first 100,000 records approximately, but then it becomes extremely slow, so that it would take several days to complete. This does not happen without the transaction.
    We were not able to find the database parameters to change to get better performance: rollback? transactions? undo? What else?
    Does anybody have an idea of the parameters to modify?
    Thank you in advance.

    >
    For test purposes, we run a table creation scenario (no indexes) under multiple conditions: we insert records in bulk mode or one by one, with or without transactions, etc.
    In general, the record insertion is OK, but not when we try to insert 1 million records one by one (sending 1 million INSERT commands) in a single transaction: in this case, the insertion is quick enough for the first 100,000 records approximately, but then it becomes extremely slow, so that it would take several days to complete. This does not happen without the transaction.
    >
    Hi
    How are you inserting the one million records when you do it one at a time? If it's within a loop, you are probably doing a COMMIT inside the loop as well. This will cause 'log file sync' waits: each commit forces LGWR to write the redo entries to the online redo logs, and the session has to wait until that write completes. This is a serial process, and it slows down the insert as well as the overall performance of the database. This is expected behaviour.
    You can do the following methods to insert bulk records
    a) INSERT /*+ APPEND */ into table select * from stage;
    This will cause a direct-path load, bypassing the buffer cache and generating far less undo (and less redo too if the table is NOLOGGING). However, if the table has enabled foreign keys, it will silently ignore the direct-path directive and do a conventional load.
    b) FORALL ... with BULK COLLECT. This is a good method when you are doing it from PL/SQL (see the sketch after this list).
    c) When you do it within a loop:
        Declare
        v_commit_cnt Number := 0;
       Begin
        For i in (select col1, col2, col3 from billion_record_table)
        Loop
         v_commit_cnt := v_commit_cnt + 1;
         Insert into target values (i.col1, i.col2, i.col3);
         If (v_commit_cnt >= 50000) Then
          commit;
          v_commit_cnt := 0;
         End If;
       End Loop;
       COMMIT;
    End;
    /
    d) If the target table is a staging table, you can do a CTAS (CREATE TABLE ... AS SELECT),
    and many more options if you plan well in advance.
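    A minimal sketch of option (b), FORALL with BULK COLLECT, committing per batch rather than per row; the table names follow the example above and are purely illustrative:

    DECLARE
      CURSOR c IS SELECT col1, col2, col3 FROM billion_record_table;
      TYPE t_rows IS TABLE OF c%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_rows LIMIT 10000;  -- fetch in batches of 10,000
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT                  -- one execution per batch, not per row
          INSERT INTO target VALUES l_rows(i);
        COMMIT;                                        -- commit once per batch
      END LOOP;
      CLOSE c;
    END;
    /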

  • Slow database/schema - network OK

    We have an arrangement at a user's site that has an IIS web server connected to an Oracle database schema. The ping (<1 ms) and trace route between them are very healthy. The web server is OK too. Yet when the server reads and writes data to the database, the process is very slow.
    We've even taken the web server out of the equation and got the DBA to run a small script that inserts and deletes data and captures some timings. The result is that the reads and writes (select, update and delete) take at least 200% longer than in most other environments we ran the same test in. All in all, the bottleneck is the Oracle 9 server.
    The database server itself is supposed to be very powerful. Any suggestions?

    I don't think there is anything wrong with the actual SQL of the application; under normal circumstances it is quite efficient, and we have it running on several servers with no problems. The test script was devised to prove it wasn't the SQL of the application or the application server itself. It creates a table with two columns and, using loops, inserts, updates and deletes the data. The test table is indexed on the ID column.
    Using this test script on other servers, the time it took to complete the operations was very quick, except for this one Oracle 9 server. We understand the server is hosting several Oracle databases, each containing several users and schemas; also, in the afternoons the time the operation takes can be even longer, pointing towards some sort of load management issue.
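    Not the original script, but a minimal sketch of that kind of timing harness (object names are made up); DBMS_UTILITY.GET_TIME counts in hundredths of a second, so dividing by 100 gives seconds:

    CREATE TABLE perf_test (id NUMBER PRIMARY KEY, val VARCHAR2(100));

    SET SERVEROUTPUT ON
    DECLARE
      t0 NUMBER;
    BEGIN
      t0 := DBMS_UTILITY.GET_TIME;
      FOR i IN 1 .. 10000 LOOP
        INSERT INTO perf_test VALUES (i, 'row ' || i);
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('insert: ' || ((DBMS_UTILITY.GET_TIME - t0) / 100) || ' s');

      t0 := DBMS_UTILITY.GET_TIME;
      FOR i IN 1 .. 10000 LOOP
        UPDATE perf_test SET val = 'updated ' || i WHERE id = i;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('update: ' || ((DBMS_UTILITY.GET_TIME - t0) / 100) || ' s');

      t0 := DBMS_UTILITY.GET_TIME;
      FOR i IN 1 .. 10000 LOOP
        DELETE FROM perf_test WHERE id = i;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('delete: ' || ((DBMS_UTILITY.GET_TIME - t0) / 100) || ' s');
      COMMIT;
    END;
    /

    Running the same block on a healthy server and on the suspect one makes the 200% gap easy to reproduce and hand to the DBA.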

  • Slow Database Problem.....Plz Help

    We have an Oracle 8i (8.1.6.0) database on Windows NT 4.00.1381 with 512 MB RAM, on an ACER server with 2 disks of 80 GB.
    Users are complaining about slow insert activity.
    The slow insert activity occurs in some master tables such as:
    coll90 - 389,565 rows
    bnkcol - 260,281
    ftrn - 70,500
    bnk_pay - 145,000
    adj_pay - 230,000
    The tables are not partitioned.
    These are some of the tables used in the accounting entry module.
    The ERP is written in D2K 6.0.
    In the entry program they use a query to find the max of the company's record_no, add 1 to that number, and then call an insert statement to add the record to the master tables.
    As soon as accountants start entering data, Task Manager in Windows NT shows above 90% CPU usage.
    How do I find where the problem is?
    Some queries depending on these master tables are also performing slowly.
    Please guide me.
    Thanks in advance.
    regards
    mits

    >
    In the entry program they use a query to find the max of the company's record_no, add 1 to that number, and then call an insert statement to add the record to the master tables.
    >
    Use trace, as others have suggested, to be sure where the real problem is, but as you've described this scenario it looks like it will never perform well. Getting the max value and then incrementing it will be slow and is subject to possible collisions in a multi-user environment, because when doing things simultaneously a few users may get the same max value and each increment it by 1. The real way to go here is sequences. They generate unique numbers, though there may be gaps. You have two choices: either use sequences, have gaps, and perform well, or do some kind of serialization using MAX or a separate table, have no gaps, but perform slowly.
    Gints Plivna
    http://www.gplivna.eu
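    A minimal sketch of the sequence approach described above (not from the original reply; object and column names are invented):

    CREATE SEQUENCE rec_no_seq CACHE 100;

    -- Instead of SELECT MAX(record_no) + 1 ... followed by the insert,
    -- let the sequence supply the number atomically:
    INSERT INTO master_table (record_no, company_code, amount)
    VALUES (rec_no_seq.NEXTVAL, :company_code, :amount);

    Each session gets its own value with no full scan of the master table and no collisions; the trade-off, as noted above, is that the numbering may have gaps.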

  • Slow database link behaviour

    I wrote an accounting program 15 years ago that runs on Developer, with excellent performance overall. Now I'm trying to write an APEX program that uses a database link to access the data on a 10.1.0.3.0 SE database. Two things are happening: 1) every once in a while, when I navigate a view in an XE report (the usual "next 15 records" option), APEX asks if I want to "save" the file (f.bin) and then it gets really slow! Too slow for commercial use... I mean, it takes minutes to move between pages. 2) every two pages that I navigate, it gets really, really slow! It goes from the first page to the next fast; fast again; and then sloooow; fast, fast and then slow; fast, fast, and then: save the page. It is driving me nuts. I've tried using a 10g SE table@, a 10g SE view@, and an XE view on a 10g SE table, but I am having the same results... However, if I build an XE table based on a 10g SE view and do not use the database link, keeping everything local (using only XE), it gets very fast and the strange behaviour ends! I wanted to use a database link because it would be easier, much simpler to write the program, and very dynamic. If I have to transfer data between databases it will be much harder to do.
    Any ideas? Has anyone come across such behaviour? Do you believe it could be a bad installation? By the way, the 10g SE runs on RHES4 and the XE runs on Debian; I'm using APEX 2.0. Thanks.

    An update on the problem: I've decided to go with an all-XE solution, and I wrote another application that works just fine! It's normal, I mean, fast! That is, until I transfer data between databases; then it gets really slow until the user logs out. Interesting, isn't it? Later on I'll try the other way around, connecting to SE and transferring the data from XE. I am using something like:
    insert into table@dblink select * from table; delete from table;
    insert into table select * from table@dblink; delete from table@dblink;
