Update Query Taking 23 seconds

I am not sure if I am in the correct location or if I should be in the SQL forum, but here is my question:
I have an update statement that goes from one SQL Server 2000 instance through a local linked server to another SQL Server 2000 instance on my machine. When I run the update in Query Analyzer it takes less than a second. When I run it from my C# code using the SqlCommand object and parameters it takes ~23 seconds. If I remove one of the parameters it drops to ~15 milliseconds. Has anyone heard of this happening?
The parameter that I remove is a simple char(10) column that isn't the primary key and is used in the WHERE clause. There isn't an index on the field.
23 seconds:
Update table Set column = @val WHERE field = @field AND other = @other
15 milliseconds:
Update table Set column = @val WHERE field = 'values' AND other = @other

Evan:
Try executing the queries that follow and post the results:
set showplan_text on
  Update table Set column = @val WHERE field = @field AND other = @other
set showplan_text off
set statistics profile on
  Update table Set column = @val WHERE field = @field AND other = @other
set statistics profile off
The SHOWPLAN_TEXT will give the estimated query plan and the STATISTICS PROFILE will give the actual query plan for your query. You can also try the same thing for your other query. Also, give us the table definition along with (1) all indexes associated with the target table, (2) the PRIMARY KEY of the table, and (3) any UNIQUE constraints associated with the table. This will give a strong view of what is going on with the query.
Dave

Similar Messages

  • Insert and update query to calculate the opening and closing balance

    create table purchase(productid number(5) ,dateofpurchase date,
    qty number(5));
    create table inventory(invid number(5),productid number(5),
    idate date,openingqty number(5),closingqty number(5));
    Records in inventory:
    1,1,'01-jan-2009', 10, 20
    2,1,'03-jan-2009', 20, 30
    3,1,'04-jan-2009', 40, 50
    when I enter the purchase invoice for 15 qty on 02-jan-2009
    after, say, '15-jan-09', a new record should get inserted
    with opening balance = (closing balance before 02-jan-2009),
    and all the opening and closing balances for that product should
    be updated.
    If the invoice for 20 qty is entered for the existing date say
    '03-jan-2009' in inventory , then the closing balance
    for 03-jan-2009 should get updated and all the following records
    should get affected.
    I need the insert for the first one and update query for the
    second one.
    Vinodh

    <strike>You can do this in one statement by using the merge statement</strike>
    Hmm, maybe I spoke too soon.
    Edited by: Boneist on 25-Sep-2009 13:56
    Thinking about it, why do you want to design your system like this?
    Why not simply have your purchases table hold the required information and then either work out the inventory on the fly, or have a job that calls a procedure to add a row for the previous day?
    If you continue with this design, you're opening yourself up to a world of pain - what happens when the data doesn't match the purchases table? Also, when is the inventory cut-off to reset the opening/closing balances? Monthly? Annually? Weekly? If it's set to one of those, what happens when the business requests the inventory for a particular week?
    Edited by: Boneist on 25-Sep-2009 13:59

  • Is there an update to iOS 6? I've been having several technical issues since the upgrade. Uncontrolled zoom of the entire screen, Siri taking 25 seconds to respond and not recognizing names, locking up of the phone. All requiring a reset.


    I would restore with backup and if that doesn't fix it try as new
    http://support.apple.com/kb/HT4137

  • Update query in sql taking more time

    Hi
    I am running an update query which is taking a long time. Any help to make it run faster?
    update arm538e_tmp t
    set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
    where m.vndr#=t.vndr#
    and m.cust_type_cd=t.cust_type
    and m.cust_type_cd<>13
    and m.yymm between 201301 and 201303
    group by m.vndr#,m.cust_type_cd;
    help will be appreciable
    thank you

    This is the Reports forum. Ask this in the SQL and PL/SQL forum.

  • Update query which taking more time

    Hi
    I am running an update query which is taking a long time. Any help to make it run faster?
    update arm538e_tmp t
    set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
    where m.vndr#=t.vndr#
    and m.cust_type_cd=t.cust_type
    and m.cust_type_cd<>13
    and m.yymm between 201301 and 201303
    group by m.vndr#,m.cust_type_cd;
    help will be appreciable
    thank you
    Edited by: 960991 on Apr 16, 2013 7:11 AM

    960991 wrote:
    Hi
    I am running an update query which takeing more time any help to run this fast.
    update arm538e_tmp t
    set t.qtr5 =(select (sum(nvl(m.net_sales_value,0))/1000) from mnthly_sales_actvty m
    where m.vndr#=t.vndr#
    and m.cust_type_cd=t.cust_type
    and m.cust_type_cd<>13
    and m.yymm between 201301 and 201303
    group by m.vndr#,m.cust_type_cd;
    help will be appreciable
    thank you
    Updates with subqueries can be slow. Get an execution plan for the update to see what SQL is doing.
    Some things to look at ...
    1. Are you sure you posted the right SQL? I could not "balance" the parenthesis - 4 "(" and 3 ")"
    2. Unnecessary "(" ")" in the subquery "(sum" are confusing
    3. Updates with subqueries can be slow. The t.qtr5 value seems to evaluate to a constant. You might improve performance by computing the value beforehand and using a variable instead of the subquery
    4. Subquery appears to be correlated - good! Make sure the subquery is properly indexed if it reads < 20% of the rows in the table (this figure depends on the version of Oracle)
    5. Is qtr5 part of an index? It is a bad idea to update indexed columns

  • For Update Query

    Hi All,
    I have 8000 records at block level. When I scroll down from the first record to the last record, it takes a long time.
    I observed in tkprof that two select statements are running while scrolling:
    1) the pre-query at block level
    2) the FOR UPDATE query
    FOR UPDATE query -> How is it formed? Any property or something else?
    I am not able to find where the second query comes from, or how to restrict it.
    Query Array size - 10
    Number of records buffered - 10
    Number of records Displayed - 10
    Query all records - No
    Locking mode - Immediate
    Key mode - Automatic
    Version - Oracle 10g
    Please... any help?

    The FOR UPDATE query is generated by Forms when it is locking the record. If you didn't change anything in the record "by hand", check if there is some code in the POST-QUERY trigger which forces the record to be locked. If it's the POST-QUERY, you can issue the following command at the end of the POST-QUERY to avoid the lock:
    SET_RECORD_PROPERTY(:SYSTEM.TRIGGER_BLOCK, :SYSTEM.TRIGGER_RECORD, STATUS, QUERY_STATUS);

  • Query taking lot of time to execute..

    Hi,
    I have a very complicated query which I am executing using JDBC. The query has an insert statement. This query takes 15 mins to complete. I'm running the query as a stand-alone Java program. Can someone suggest the best way to debug this? I need to find out why the query is taking that long. I'm using Oracle 10g with SQL Developer.
    ps = con.prepareStatement(query);
    ps.setString(1,date);
    ps.setString(2,code);
    timeStart = System.currentTimeMillis();
    ps.executeUpdate();
    timeEnd = System.currentTimeMillis();     
    System.out.println("Time Taken::"+(timeStart -timeEnd)+" ms");
    Thanks in advance
    Ajoo

    Perhaps you should post the query so we can see what you are doing.
    In the meantime, try writing a simple update query and run it. If it runs quickly, your original query has problems. If it runs slowly, it's caused by something other than your original query.
    P.S.:
    should be:
    System.out.println("Time Taken::" + (timeEnd - timeStart) + " ms");
    and not this:
    System.out.println("Time Taken::" + (timeStart - timeEnd) + " ms");
    Edited by: njbt7y on Jan 19, 2011 12:07 PM
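    The respondent's point as a runnable sketch (end minus start, not start minus end; `Thread.sleep` is a stand-in for the real `executeUpdate` call):

```java
public class TimeTaken {
    // The fix: end minus start. The original (start minus end) prints a negative number.
    static long elapsed(long timeStart, long timeEnd) {
        return timeEnd - timeStart;
    }

    public static void main(String[] args) throws InterruptedException {
        long timeStart = System.currentTimeMillis();
        Thread.sleep(50);                 // stand-in for ps.executeUpdate()
        long timeEnd = System.currentTimeMillis();
        System.out.println("Time Taken::" + elapsed(timeStart, timeEnd) + " ms");
    }
}
```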

  • Query taking so long to execute.

    I have one table with 211 rows. When I execute Delete from TEHSIL_TBL; it takes too long to delete the 211 rows. I ran explain plan and got the following results.
    SQL> explain plan for delete from TEHSIL_TBL;
    Explained.
    SQL> @C:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3350021484
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | DELETE STATEMENT | | 205 | 1435 | 1 (0)| 00:00:01 |
    | 1 | DELETE | TEHSIL_TBL | | | | |
    | 2 | INDEX FULL SCAN| PK_TEH_ID | 205 | 1435 | 1 (0)| 00:00:01 |
    Please suggest why that query is taking so long to execute.
    Thanks in Advance...
    Asmit

    966523 wrote:
    I have one table with 211 rows, When i am executing Delete from TEHSIL_TBL; its taking too long time to delete 211 rows. I execute explain plan then i am getting the following results.
    SQL> explain plan for delete from TEHSIL_TBL;
    Explained.
    SQL> @C:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3350021484
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | DELETE STATEMENT | | 205 | 1435 | 1 (0)| 00:00:01 |
    | 1 | DELETE | TEHSIL_TBL | | | | |
    | 2 | INDEX FULL SCAN| PK_TEH_ID | 205 | 1435 | 1 (0)| 00:00:01 |
    Please suggest why that query taking so long tome to execute.
    Please quantify "long time".
    >
    >
    Thanks in Advance...
    Asmit
    EXPLAIN PLAN shows a time of 1 SECOND!
    How much faster should it be?

  • Query taking more time

    I have nearly 2 crore (20 million) records at present in my table.
    I want to get the average of price from my table.
    I put the query like:
    select avg(sum(price)) from table group by product_id
    The query takes more than 5 mins to execute.
    Is there any other way I can simplify my query?

    Warren:
    Your first query gives:
    SQL> SELECT AVG(SUM(price)) sum_price
      2  FROM t;
    SELECT AVG(SUM(price)) sum_price
    ERROR at line 1:
    ORA-00978: nested group function without GROUP BY
    and your second gives:
    SQL> SELECT product_id, AVG(SUM(price))
      2  FROM t
      3  GROUP BY product_id;
    SELECT product_id, AVG(SUM(price))
    ERROR at line 1:
    ORA-00937: not a single-group group function
    Symon:
    What exactly are you trying to accomplish? Your query as posted will calculate the average of the sums of the prices for all product_id values. That is, it is equivalent to:
    SELECT AVG(sum_price)
    FROM (SELECT SUM(price) sum_price
          FROM t
          GROUP BY product_id)
    So given:
    SQL> SELECT * FROM t;
    PRODUCT_ID      PRICE
    PROD1               5
    PROD1               7
    PROD1              10
    PROD2               3
    PROD2               4
    PROD2               5
    The sum of the prices per product_id is:
    SQL> SELECT SUM(price) sum_price
      2  FROM t
      3  GROUP BY product_id;
    SUM_PRICE
            22
            12
    and the average of that is (22 + 12) / 2 = 17. Is that what you are looking for? If so, then the equivalent query I posted above is at least clearer, but may not be any faster. If this is not what you are looking for, then some sample data and expected results may help. Although, it appears that you need to full scan the table in either case, so that may be as good as it gets.
    John
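    John's arithmetic can be checked in code. A small self-contained Java sketch that mirrors AVG(SUM(price)) ... GROUP BY product_id on the sample data posted above:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AvgOfGroupSums {
    static double avgOfSums(List<Map.Entry<String, Integer>> rows) {
        // Group by product_id and sum prices (the inner SUM ... GROUP BY).
        Map<String, Integer> sums = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> r : rows) {
            sums.merge(r.getKey(), r.getValue(), Integer::sum);
        }
        // Average the group sums (the outer AVG).
        return sums.values().stream().mapToInt(Integer::intValue).average().orElse(0);
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> rows = List.of(
            Map.entry("PROD1", 5), Map.entry("PROD1", 7), Map.entry("PROD1", 10),
            Map.entry("PROD2", 3), Map.entry("PROD2", 4), Map.entry("PROD2", 5));
        System.out.println(avgOfSums(rows));   // (22 + 12) / 2 = 17.0
    }
}
```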

  • Update query is slow with merge replication

    Hello friend,
    I have a database with enabling merge replication.
    Then the problem is update query is taking more time.
    But when I disable the merge triggers then it'll update quickly.
    I really appreciate your
    quick response.
    Thanks.

    Hi Manjula,
    According to your description, the update query is slow after configuring merge replication. Here are some suggestions for troubleshooting this issue.
    1. Perform regular index maintenance (update statistics, re-index) on the following replication system tables:
        •MSmerge_contents
        •MSmerge_genhistory
        •MSmerge_tombstone
        •MSmerge_current_partition_mappings
        •MSmerge_past_partition_mappings
    2. Make sure that your tables involved in the query have suitable indexes. Also do the re-indexing and update the statistics for these tables. Additionally, you can use
    Database Engine Tuning Advisor to tune databases for better query performance.
    Here are some related articles for your reference.
    http://blogs.msdn.com/b/chrissk/archive/2010/02/01/sql-server-merge-replication-best-practices.aspx
    http://technet.microsoft.com/en-us/library/ms177500(v=sql.105).aspx
    Thanks,
    Lydia Zhang

  • Performance optimization : query taking 7mints

    Hi All ,
    Requirement: I need to improve the performance of a custom program (the program takes more than 7 minutes and then dumps). I checked the runtime analysis, and the queries marked below are taking the most time.
    Please let me know how to minimize the query time.
    TYPES: BEGIN OF lty_dberchz1,
               belnr    TYPE dberchz1-belnr,
               belzeile TYPE dberchz1-belzeile,
               belzart  TYPE dberchz1-belzart,
               buchrel  TYPE dberchz1-buchrel,
               tariftyp TYPE dberchz1-tariftyp,
               tarifnr  TYPE dberchz1-tarifnr,
               v_zahl1  TYPE dberchz1-v_zahl1,
               n_zahl1  TYPE dberchz1-n_zahl1,
               v_zahl3  TYPE dberchz1-v_zahl3,
               n_zahl3  TYPE dberchz1-n_zahl3,
               nettobtr TYPE dberchz3-nettobtr,
               twaers   TYPE dberchz3-twaers,
             END   OF lty_dberchz1.
      DATA: lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
            WITH NON-UNIQUE KEY belnr belzeile
            INITIAL SIZE 0 WITH HEADER LINE.
    DATA: lt_dberchz1a LIKE TABLE OF lt_dberchz1 WITH HEADER LINE.
    *** ***********************************Taking more time*************************************************
    *Individual line items
        SELECT dberchz1~belnr dberchz1~belzeile
               belzart buchrel tariftyp tarifnr
               v_zahl1 n_zahl1 v_zahl3 n_zahl3
               nettobtr twaers
          INTO TABLE lt_dberchz1
          FROM dberchz1 JOIN dberchz3
          ON dberchz1~belnr = dberchz3~belnr
          AND dberchz1~belzeile = dberchz3~belzeile
          WHERE buchrel  EQ 'X'.
        DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.     
        LOOP AT lt_dberchz1.
          READ TABLE lt_dberdlb BINARY SEARCH
          WITH KEY billdoc = lt_dberchz1-belnr.
          IF sy-subrc NE 0.
            DELETE lt_dberchz1.
          ENDIF.
        ENDLOOP.
        lt_dberchz1a[] = lt_dberchz1[].
        DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
                              OR belzart EQ 'ZUTAX2'
                              OR belzart EQ 'ZUTAX3'.
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    ***************************second query************************************
    *  SELECT opbel budat vkont partner sto_opbel
        INTO CORRESPONDING FIELDS OF TABLE lt_erdk
        FROM erdk
        WHERE budat IN r_budat
          AND druckdat   NE '00000000'
          AND stokz      EQ space
          AND intopbel   EQ space
          AND total_amnt GT 40000.
    **************************taking more time*********************************
      SORT lt_erdk BY opbel.
      IF lt_erdk[] IS NOT INITIAL.
        SELECT DISTINCT printdoc billdoc vertrag
          INTO CORRESPONDING FIELDS OF TABLE lt_dberdlb
          FROM dberdlb
    * begin of code change by vishal
          FOR ALL ENTRIES IN lt_erdk
          WHERE printdoc = lt_erdk-opbel.
        IF lt_dberdlb[] IS NOT INITIAL.
          SELECT belnr belzart ab bis aus01
                 v_zahl1 n_zahl1 v_zahl3 n_zahl3
            INTO CORRESPONDING FIELDS OF TABLE lt_dberchz1
            FROM dberchz1
            FOR ALL ENTRIES IN lt_dberdlb
            WHERE belnr   EQ lt_dberdlb-billdoc
              AND belzart IN ('ZUTAX1', 'ZUTAX2', 'ZUTAX3').
        ENDIF. "lt_dberdlb
       endif.
    Regards
    Rahul
    Edited by: Matt on Mar 17, 2009 4:17 PM - Added  tags and moved to correct forum

    Run an SQL trace and tell us where the time is spent.
    SELECT dberchz1~belnr dberchz1~belzeile
               belzart buchrel tariftyp tarifnr
               v_zahl1 n_zahl1 v_zahl3 n_zahl3
               nettobtr twaers
          INTO TABLE lt_dberchz1
          FROM dberchz1 JOIN dberchz3
          ON dberchz1~belnr = dberchz3~belnr
          AND dberchz1~belzeile = dberchz3~belzeile
          WHERE buchrel  EQ 'X'.
    I assume it is this SELECT, but without data it is hard to say more.
    How large are the two tables dberchz1 and dberchz3?
    What are the key fields?
    Is there an index on buchrel?
    Please use aliases: dberchz1 AS a INNER JOIN dberchz3 AS b.
    To which table does buchrel belong?
    I don't know your tables, but buchrel EQ 'X' seems not selective, so a lot of data might be selected.
    lt_dberchz1 TYPE SORTED TABLE OF lty_dberchz1
            WITH NON-UNIQUE KEY belnr belzeile
            INITIAL SIZE 0 WITH HEADER LINE.
        DELETE lt_dberchz1 WHERE belzart NOT IN r_belzart.     
        LOOP AT lt_dberchz1.
          READ TABLE lt_dberdlb BINARY SEARCH
          WITH KEY billdoc = lt_dberchz1-belnr.
          IF sy-subrc NE 0.
            DELETE lt_dberchz1.
          ENDIF.
        ENDLOOP.
        lt_dberchz1a[] = lt_dberchz1[].
        DELETE lt_dberchz1 WHERE belzart EQ 'ZUTAX1'
                              OR belzart EQ 'ZUTAX2'
                              OR belzart EQ 'ZUTAX3'.
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    This is really poor coding: there is a sorted table, nice, but a completely different key is needed and used, so the sorting is useless.
    Then there is a loop which is anyway full processing, so no sort is necessary.
    Where is the sort of lt_dberdlb if you use READ TABLE ... BINARY SEARCH on it?
    Then the tables are again processed completely ...
        DELETE lt_dberchz1a WHERE belzart NE 'ZUTAX1'
                              AND belzart NE 'ZUTAX2'
                              AND belzart NE 'ZUTAX3'.
    What is that ???? Are you sure that anything can survive this delete???
    Siegfried

  • Pro*C CONNECT or RELEASE taking 20+ seconds

    The basic problems are
    <li> the EXEC SQL ROLLBACK WORK RELEASE; statement takes a long time (I've seen 25 seconds)
    <li> the EXEC SQL CONNECT :usrname; statement also takes a long time (I've seen 8 seconds)
    What are the things I should check?
    Details:
    The database server is on Linux (Red Hat Enterprise Linux ES release 3 (Taroon Update 7)), while the run-time environment, Pro*C++ precompiler, C++ compiler, and client libraries are on Alpha (Tru64 v 5.1B-4).
    On the Alpha, the Oracle version is 9.2.0.8.0 (both SQL*Plus and 'proc'), and the C++ compiler is Compaq C++ V6.5-042 for HP Tru64 UNIX V5.1B (Rev. 2650). On the Linux, the database server version is 9.2.0.4.0
    There is very little activity on this database, or on the server (about 1% processor load). For the most recent problem with long disconnect times, the only other known activity is another Pro*C process waiting on a DBMS_LOCK (user lock) being held by this one. But because this one is taking so long to exit, the other Pro*C process times out failing to get the lock (after which it does an EXEC SQL ROLLBACK RELEASE). There are three other Pro*C processes, but they have exited prior to t=13 (see below).
    One observation is that when the second process times out and does the EXEC SQL ROLLBACK RELEASE; both processes then return from that statement, after which they exit.
    My environment variables, in the client environment, are
    NLS_LANG=american_america.US7ASCII
    ORACLE_BASE=/u01/app/oracle
    ORACLE_SID=LINUX
    DBA=/u01/app/oracle/admin
    TNS_ADMIN=/u01/app/oracle/product/9.2.0/db/network/admin
    ORACLE_HOME=/u01/app/oracle/product/9.2.0/db
    TWO_TASK=LINUX
    The code fragments involved are
    const time_t startTime = time(0);   // seconds since epoch - executed at file scope
                           // when both processes begin.
    #define COUTVAR(x)       if (DEBUGGING_STATUS) cout << x << ", t=" << (time(0) - m_startTime) << ": "
    #define CERRVAR(x)                             cerr << x << ", t=" << (time(0) - m_startTime) << ": "
    #define SQLINFOHERE     /* code to stash __FILE__ and __LINE__ into global vars, used if an error occurs */
    OracleConnection::OracleConnection(const char *processTag, time_t startTime, const Autotester &)
      : m_processTag(processTag), m_startTime(startTime)
        EXEC SQL BEGIN DECLARE SECTION;
          char myUsername[] = "AUTOTESTER/xxxx";
        EXEC SQL END DECLARE SECTION;
        COUTVAR(m_processTag) << "About to EXEC SQL CONNECT" << endl;
        SQLINFOHERE;
        EXEC SQL CONNECT :myUsername;
        COUTVAR(m_processTag) << "EXEC SQL CONNECT successful" << endl;
    OracleConnection::~OracleConnection()
      // Function Try Blocks do not work in the HP compiler (at runtime, get this message:
      // Internal error: could not find live exception.)
      // Therefore nest normal try blocks.
      try
        try
          COUTVAR(m_processTag) << "About to EXEC SQL ROLLBACK RELEASE" << endl;    // Output t=13
          SQLINFOHERE;
          EXEC SQL ROLLBACK WORK RELEASE;
          COUTVAR(m_processTag) << "EXEC SQL ROLLBACK RELEASE successful" << endl;  // Output t=38 (25s later)
        catch(...)
          CERRVAR(m_processTag) << "Error in EXEC SQL ROLLBACK RELEASE" << endl;
      catch(...)
        // Here we don't do ANYTHING, because we don't want to throw (another) exception
        // if it happens to be processing another exception.
    }
    Other things tried
    When I do this
    $ sqlplus autotester/xxxx < /dev/null
    it never takes more than 0.3 seconds to connect and disconnect. The environment variables are the same.
    The sqlnet.ora file has no mention of SQLNET.AUTHENTICATION_SERVICES, I have heard that setting it to (NTS) can slow things down, and someone else notes that you can generally comment it out. But we've never had it in our sqlnet.ora file.
    Using tnsping gives this:
    $ tnsping LINUX
    TNS Ping Utility for Compaq Tru64 UNIX: Version 9.2.0.8.0 - Production on 28-JUN-2012 12:02:16
    Copyright (c) 1997, 2006, Oracle Corporation.  All rights reserved.
    Used parameter files:
    /u01/app/oracle/product/9.2.0/db/network/admin/sqlnet.ora
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(Host = mylinuxhost.FICTIONAL.com)
      (Port = 1521)) (CONNECT_DATA = (SERVICE_NAME = unix.world)))
    OK (30 msec)
    What are the things I should check?
    Edited by: Logical_Star on 27-Jun-2012 20:36, to make code output narrower

    OK, I have converted the program to OCI (I had to convert the whole program, because Pro*C and OCI cannot be mixed) and found similar behaviour.
    The OCI version of the program is (sometimes) taking 28 seconds to disconnect, whereas the Pro*C version is now taking 40 seconds to disconnect. I note that the frequency is much higher in the Pro*C program.
    The OCI version has this code
    #define LEN_SQLGLM_MSG      512   /* Oracle manual: can be max of 512 chars */
    const time_t startTime = time(0);   // seconds since epoch - executed at file scope
                                      // when both processes begin.
    #define COUTVAR(x)       if (DEBUGGING_STATUS) cout << x << ", t=" << (time(0) - m_startTime) << ": "
    #define CERRVAR(x)                             cerr << x << ", t=" << (time(0) - m_startTime) << ": "
    #define CHECK_AND_THROW(r,m)   do {                                                    \
                         if(r) { OraText errbuf[100] = "Failed to get error"; int errcode; \
    OCIErrorGet((dvoid *)p_err, (ub4) 1, (text *) NULL, &errcode, errbuf, (ub4) sizeof(errbuf), OCI_HTYPE_ERROR); \
    string errMsg(m); errMsg += " ... "; errMsg += reinterpret_cast<const char*>(errbuf); \
    if (errMsg.size() > LEN_SQLGLM_MSG) errMsg.erase(LEN_SQLGLM_MSG);                     \
    throw ExDb(r, errMsg.c_str(), __LINE__, __FILE__);} } while(0)
    OracleConnection::OracleConnection(const char *processTag, time_t startTime, const Autotester &)
      : m_processTag(processTag), m_startTime(startTime), p_env(0), p_err(0), p_svc(0),
        p_sql(0), p_dfn(0), p_bnd(0)
      COUTVAR(m_processTag) << "About to connect" << endl;
      int rc = OCIInitialize((ub4) OCI_DEFAULT, (dvoid *)0,  /* Initialize OCI */
              (dvoid * (*)(dvoid *, size_t)) 0,
              (dvoid * (*)(dvoid *, dvoid *, size_t))0,
              (void (*)(dvoid *, dvoid *)) 0 );
      CHECK_AND_THROW(rc, "OCIInitialize error");
      /* Initialize evironment */
      rc = OCIEnvInit( (OCIEnv **) &p_env, OCI_DEFAULT, (size_t) 0, (dvoid **) 0 );
      CHECK_AND_THROW(rc, "OCIEnvInit error");
      /* Initialize handles */
      rc = OCIHandleAlloc( (dvoid *) p_env, (dvoid **) &p_err, OCI_HTYPE_ERROR,
              (size_t) 0, (dvoid **) 0);
      CHECK_AND_THROW(rc, "OCIHandleAlloc error 1");
      rc = OCIHandleAlloc( (dvoid *) p_env, (dvoid **) &p_svc, OCI_HTYPE_SVCCTX,
              (size_t) 0, (dvoid **) 0);
      CHECK_AND_THROW(rc, "OCIHandleAlloc error 2");
      /* Connect to database server */
      rc = OCILogon(p_env, p_err, &p_svc, "AUTOTESTER", 10, "xxxxx", 10, "LINUX", 5);
      CHECK_AND_THROW(rc, "OCILogon error");
      COUTVAR(m_processTag) << "Connect successful" << endl;
    OracleConnection::~OracleConnection()
      // Function Try Blocks do not work in the HP compiler (at runtime, get this message:
      // Internal error: could not find live exception.)
      // Therefore nest normal try blocks.
      try
        try
          COUTVAR(m_processTag) << "About to disconnect" << endl;     // Output t=13
          int rc = OCILogoff(p_svc, p_err);
          CHECK_AND_THROW(rc, "OCILogoff error");
          rc = OCIHandleFree((dvoid *) p_sql, OCI_HTYPE_STMT);
          CHECK_AND_THROW(rc, "OCIHandleFree error 1");
          //NOTE: This OCIHandleFree call should be done, in theory, but it was giving
          //      an unknown error for which OCIErrorGet was not retrieving the error.
          //NOTE rc = OCIHandleFree((dvoid *) p_svc, OCI_HTYPE_SVCCTX);
          //NOTE CHECK_AND_THROW(rc, "OCIHandleFree error 2");
          rc = OCIHandleFree((dvoid *) p_err, OCI_HTYPE_ERROR);
          CHECK_AND_THROW(rc, "OCIHandleFree error 3");
          COUTVAR(m_processTag) << "Disconnect successful" << endl;   // output t=41 (28s later)
        catch(ExDb &ex)
          CERRVAR(m_processTag) << "SQL Error while disconnecting" << endl;
          ex.print_error();
        catch(...)
          CERRVAR(m_processTag) << "Strange Error while disconnecting" << endl;
      catch(...)
        // Here we don't do ANYTHING, because we don't want to throw (another) exception
        // if it happens to be processing another exception.
    }
    Edited by: Logical_Star on 02-Jul-2012 00:53 (wrong TNSNAME)

  • Update query issue to update middle (n records) of the rows in a table

    Hi
    I have the below requirement, and I went through the approach below to fulfill it.
    Requirement:
    I need to pull 3 crore records through ODI. The source table does not have a primary key; it has 200 columns and 3 crore records (25 of those columns form a composite key).
    I need to populate those 3 crore records into the target Oracle DB, but when I tried to load all 3 crores at once I got an error, so I took an incremental approach: I need to update each batch of 1 crore records with flg1, flg2 and flg3 (flag values). For this I added one more column to the source table; using those flag values I can populate 1 crore records at a time into the target machine.
    Approch
    For the above requirement, I wrote the below query to update flg1 for the first crore of records: update tbl_name set rownum_id='flg1' where rownum<=10000000; and it updated successfully without any error. Then, to update flg2 for the second crore of records, I wrote the below update query; it ran for 20 min and returned *0 rows updated*. Please help if the below query is wrong.
    Query: update tbl_name set rownum_id='flg2' where rownum<=20000000 and rownum_id is NULL;
    Thanks in advance
    Regards,
    Phanikanth

    A couple of thoughts.
    1. This forum is in English and accessed by people in more than 100 countries. Use metric counts not crore. Translate to million, billions, trillions, etc.
    2. No database version ... makes helping you very hard.
    3. 200 columns with 25 in a composite key is a nightmare by almost any definition ... fix your design before going any further.
    4. What error message? Post the complete message not just your impression of it.
    5. Oracle tables are heap tables .. there is no such concept as the top, the bottom, or the middle.
    6. If you are trying to perform sampling ... use the SAMPLE clause (http://www.morganslibrary.org/reference/select.html#sssc).
    7. What is ODI? Do not expect your international audience to know what the acronym means.

  • Can not execute a simple update query with ms access

    Hi
    I am trying to execute a simple update query; this is the code I am running. I get no exceptions, but the db is not being updated.
    When I copy my query into access it works !
    The table / columns are spelled correctly and I am able to execute select queries.
    Can anyone figure out what is going on?
    Thanks shahar
    public static void main(String[] args) {
    MainManager mainManager = MainManager.getInstance();
    Log log = Log.getInstance();
    try {
    log.writeDebugMessage("befor executing query");
    //stmt.executeUpdate("update trainee set Trainee_Name = 'shahar&Rafi' where Trainee_Id = 1 ");
    //stmt = null ;
    PreparedStatement stmt1 = con.prepareStatement("UPDATE Trainee SET Trainee_Name =? WHERE Trainee_Id =?");
    String name = new String("Shahar&Rafi");
    int Id = 1 ;
    stmt1.setString(1,name);
    stmt1.setInt(2,Id);
    System.out.println(stmt1.toString());
    stmt1.execute();
    log.writeDebugMessage("After executing query");
    } catch (SQLException e) {
    System.err.println("SQLException: " + e.getMessage());
    }

    Hi All,
    got the problem solved at last.
    first, in the SQL string all the values must be within " ' "
    for example:
    INSERT INTO Trainee(Trainee_Id , Trainee_password , Trainee_Address, Trainee_Name , Trainee_Phone , Trainee_Gender , Trainee_SessionTime , Trainee_Purpose , Trainee_HealthStatus , Trainee_StartDate , Trainee_Birthday) VALUES (6,'333','hhhh','rafi',048231821,true,63,4,true, ('Feb 22, 2002'), ('Feb 22, 2002'))
    second and more important,
    a 'dummy' sql select query must be performed after an update query
    example for code:
    try{
    DB.MainManager.getInstance().ExecuteUpdateQuery(A);
    DB.MainManager.getInstance().getStatement().executeQuery("SELECT * FROM Trainee");
    where A is the update query.

  • Update query is waiting for async descriptor resize

    Hi All,
    One of our update queries used to complete in 1-2 mins. Suddenly today it started taking more than 3-4 hours, and it is still running.
    Below is the query, and I can see the explain plan. The query is waiting on the "asynch descriptor resize" wait event.
    UPDATE AP_INVOICE_DISTRIBUTIONS SET POSTED_FLAG = 'N' WHERE POSTED_FLAG = 'S' AND (ACCOUNTING_DATE >= :B2 OR :B2 IS NULL) AND (ACCOUNTING_DATE <= :B1 OR :B1 IS NULL)
    when i checked the P1text, P2text columns of v$session, it is showing outstanding #aio,current ai0 limit respectively.
    Explain plan
    PLAN_TABLE_OUTPUT
    | Id  | Operation                      | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT               |                              |       |       |   250 (100)|          |
    |   1 |  UPDATE                        | AP_INVOICE_DISTRIBUTIONS_ALL |       |       |            |          |
    |   2 |   NESTED LOOPS                 |                              |       |       |            |          |
    |   3 |    NESTED LOOPS                |                              |    39 | 12480 |   250   (0)| 05:14:02 |
    |*  4 |     TABLE ACCESS BY INDEX ROWID| AP_ACCOUNTING_EVENTS_ALL     |    39 |   624 |   145   (0)| 03:02:09 |
    |*  5 |      INDEX RANGE SCAN          | AP_ACCOUNTING_EVENTS_N2      |  3954 |       |    14   (0)| 00:17:36 |
    |*  6 |     INDEX RANGE SCAN           | AP_INVOICE_DISTRIBUTIONS_N18 |     4 |       |     2   (0)| 00:02:31 |
    PLAN_TABLE_OUTPUT
    |*  7 |    TABLE ACCESS BY INDEX ROWID | AP_INVOICE_DISTRIBUTIONS_ALL |     1 |   304 |     4   (0)| 00:05:02 |
    Please help me on this.
    Env Details --
    DB version -- 11.2.0.1
    OS - IBM AIX.
    Thanks in advance...

    This could be this bug
    Bug 9829397 - Excessive CPU and many "asynch descriptor resize" waits for SQL using Async IO (Doc ID 9829397.8)
