SQL statement length degrades performance

Hi all!!!
I wonder if someone has noticed this or knows the answer. When the length of an SQL statement is greater than roughly 2000 bytes, the time to process it grows dramatically. See examples and environment below.
No matter how complex the query is, the time to execute 'executeQuery' can be 10x slower than for small queries. There is no need to send complicated queries; I have even tested it using a simple 'SELECT SYSDATE FROM DUAL'. I'm unable to understand why the time to process this query can vary from 10ms to 130ms just by appending blank spaces to the query (e.g. 'SELECT SYSDATE FROM DUAL' followed by blank spaces).
Examples:
The times shown are taken using 'System.currentTimeMillis()' just before and after the call to 'executeQuery()' or 'execute(sql)':
'SELECT SYSDATE FROM DUAL' -> takes from 0ms to 30ms
'SELECT SYSDATE FROM DUAL' + 1000_blank_spaces (' ') -> takes from 0ms to 30ms (the same)
'SELECT SYSDATE FROM DUAL' + 2000_blank_spaces (' ') -> takes from 100ms to 130ms
I made a loop increasing the spaces one by one, up to 2500. I found that performance is always in the range 0-30ms up to a total length of 1970 characters. After that, all the queries take 100-130ms.
I also tested with very complex queries (involving several tables and indexes) like 'SELECT field_1, field_2 FROM table_1, table_2 WHERE very_complex_condition'. Such a query may take 10-30ms. However, if something is added to it (e.g. a 'field_3' in the SELECT, blank spaces, a new 'AND', etc.) that makes the string larger than 1970 characters, the query then takes 100-130ms.
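The probe loop described above can be sketched like this (a hypothetical harness with invented names; the actual executeQuery call needs a live java.sql.Statement and Connection, so it is left as a comment):

```java
public class PaddingProbe {
    // Append n blank spaces to the statement text, as in the experiment above.
    static String pad(String sql, int spaces) {
        StringBuilder sb = new StringBuilder(sql);
        for (int i = 0; i < spaces; i++) {
            sb.append(' ');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String base = "SELECT SYSDATE FROM DUAL";
        for (int n = 0; n <= 2500; n += 250) {
            String sql = pad(base, n);
            long t0 = System.currentTimeMillis();
            // rs = stmt.executeQuery(sql);  // requires an open java.sql.Statement
            long t1 = System.currentTimeMillis();
            System.out.println(sql.length() + " chars -> " + (t1 - t0) + " ms");
        }
    }
}
```

With a real Statement in place of the comment, this reproduces the measurement: timings stay flat until the total length crosses about 1970 characters.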
The same results are obtained using Statements or PreparedStatements.
I conclude that when the SQL string is close to 2KB, the JDBC Thin driver spends 100ms doing something unknown. This seems to be an issue in the Thin driver, since the same tests using OCI give homogeneous times (no matter the length), although OCI is slower than the Thin driver (70-80ms for the same 'SELECT SYSDATE FROM DUAL'). It does not seem to be related to the network, because some of the tests used the same machine as both server and client.
Does anyone have an explanation for this? All suggestions are welcome.
Here comes the technical info:
SERVER
- Oracle 8.1.6 on Solaris 8 sparc 64 bits (Sun ultra-10).
CLIENTs
- Windows NT 4 / Solaris 8 (the same box as the server)
- JDK 1.3.0 / JDK 1.3.1
- JDBC 8.1.6 / JDBC 8.1.7 (downloaded from the Oracle site)
Best regards
Roberto.

More on this.
After increasing the length of the SQL statement to 3600 bytes and beyond, the execution time goes back to 10-30ms. So performance is only bad in the range 2KB-3.5KB.
I'm really confused about this issue.

Similar Messages

  • Oracle 7 - Maximum SQL statement length

    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96536/ch44.htm#288033
    As above, the maximum SQL statement length is clearly defined in the "Reference" document of Oracle 8, 9, and 10.
    But I could not find it for Oracle 7.
    Can someone help me?

    This info is available in the Oracle 7 Server Reference. It can be found via http://otn.oracle.com/documentation. Look at the "Previously Released Oracle Documentation" to access it. It's a pdf. Here's a direct link: http://download-uk.oracle.com/docs/pdf/A32589_1.pdf. The details can be found in chapter 5.
    It comes down to this: 64k.
    MHE

  • From 10g, "SQL Statement Length" description disappeared....

    http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10755/limits003.htm#sthref3594
    From 10g, on the above 'Logical Database Limits' section of 'Reference' document, the description for 'SQL Statement Length' limitation (64K) has been dropped.
    Does this mean there is NO limitation in 10g, or is it just a mistake?

    Hi Tadaaki,
    Apologies for the delay in responding to you.
    Unfortunately I cannot answer this question. Although my group maintains the Documentation pages on OTN, we are not part of OTN, nor do we have input with regard to the content of documentation. We merely provide links to the documentation at: http://www.oracle.com/technology/documentation/index.html
    Please try the Members Feedback Forum at: Community Feedback (No Product Questions)
    Thanks and regards,
    Les

  • SQL statement for calculating performance targets

    Hi
    I have taken over the administration of a database which stores project goals and scores. I have to develop a way to calculate how well all projects meet these scores. The table concerned is called goal and looks like this:
    goal
    goal_name
    project_code
    current_value
    good_value
    bad_value
    This can be used for many different goals. For example, a project may set a goal of having no more than 5 bugs. I can also set a bad value, say 20, so that if any project has 20 or more bugs, triggers or alerts can be sent. The reason for putting 5 for good rather than 20 is that these scores are meant to be realistic. So I can enter into this table:
    project_code = foo
    goal_name = software bugs
    good_value = 5
    bad_value = 20
    However, some goals may have the values switched and be in a much higher range, or may even be a percentage. For example one for number of sales could be
    project_code = foo
    goal_name = software sales
    good_value = 200
    bad_value = 50
    or project delay
    project_code = foo
    goal_name = project delay
    good_value = 0%
    bad_value = 50%
    I am trying to develop a SQL statement so I can get a % score of how well a goal is performing, so I can see:
    What is the goal % for all foo goals?
    Or what is the goal % for goal 'software sales'?
    And, more importantly, how well are the goals doing globally?
    The requirement is to do this in a single SQL statement (well, one SQL statement per requirement I listed above), so for example the semantics are:
    SELECT average(goal_performance) WHERE .... project = foo .... or goal = software sales... etc
    I am having trouble doing this; I have been banging my head against my desk all day. The biggest thing throwing me off is that the good value can be higher or lower than the bad value, and I am having trouble visualizing how to put this conditional logic into SQL.
    One more thing: the percentage returned should never be less than 0% or more than 100%.
    If anyone has any ideas or pointers, please help me out,
    Thanks for your time,
    Message was edited by:
    user473327

    I am having trouble doing this, I have been banging my head against my desk all day. The biggest thing throwing me off is that the good value can be higher or lower than the bad value, and I am having trouble visualizing how to put this conditional statement in SQL
    I haven't looked at your requirements in detail because I don't have time for such cumbersome tasks. However, you could have two UNION'd SELECT statements, one which caters for good > bad and one which caters for good < bad. An alternative would be the use of DECODE or CASE statements in your SELECT, which are good for switching things around based on conditions.
    ;)
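    To make the CASE/DECODE suggestion concrete, here is one way to express the score (a hypothetical helper, not taken from the poster's schema): linearly interpolate current_value between bad_value and good_value and clamp the result to 0-100, which handles both orderings (good > bad and good < bad) with a single formula.

```java
public class GoalScore {
    // Percentage score for a goal: 100 at good_value, 0 at bad_value,
    // clamped to [0, 100]. The same formula works whether good > bad or
    // good < bad, because the sign of (good - bad) flips the direction
    // of the interpolation.
    static double score(double current, double good, double bad) {
        if (good == bad) {
            return current == good ? 100.0 : 0.0; // degenerate goal definition
        }
        double pct = (current - bad) / (good - bad) * 100.0;
        return Math.max(0.0, Math.min(100.0, pct));
    }

    public static void main(String[] args) {
        // Bug goal: lower is better (good = 5, bad = 20).
        System.out.println(score(10, 5, 20));
        // Sales goal: higher is better (good = 200, bad = 50).
        System.out.println(score(125, 200, 50));
    }
}
```

    In SQL the same expression can be written with a CASE (or GREATEST/LEAST for the clamping), which keeps everything in the single statement the poster requires.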

  • SQL statement is not performing

    Hi community,
    I've a problem with a SQL statement.
    First of all here's the statement and the explain plan for it:
    select PPRJPOI.BBASE , PPRJPOI.CONTROLLINGAREA, PPRJPOI.COSTOBJTYPE , PPRJPOI.COSTOBJMAINPATH , PPRJPOI.COSTOBJSUBPATH ,
    PPRJPOI.PPOSNUM , PPRJPOI.PPOSTXT , PPRJPOI.PPOSBTG , PPRJPOI.PDCWABL , PPRJPOI.PDCWNUM , PPRJPOI.PDCWBUD , PPRJPOI.PKTOORI ,
    PPRJPOI.REFKEY1 , PPRJ.COSTCENTER , PPRJ.PATHELEM1 , PPRJSUB.PTPRTXT , LSUPPLIER.AADDRLINE1
    from PPRJPOI
    inner join PPRJ on PPRJ.BBASE = PPRJPOI.BBASE and PPRJ.CONTROLLINGAREA = PPRJPOI.CONTROLLINGAREA
    and PPRJ.COSTOBJTYPE = PPRJPOI.COSTOBJTYPE and PPRJ.COSTOBJMAINPATH = PPRJPOI.COSTOBJMAINPATH
    and PPRJ.COORGPATHELEM1 = ? and PPRJ.COORGPATHELEM2 = ? and PPRJ.COORGPATHELEM3 = ?
    and PPRJ.COORGPATHELEM4 = ? and PPRJ.COORGPATHELEM5 = ?
    inner join PPRJSUB on PPRJSUB.BBASE = PPRJPOI.BBASE and PPRJSUB.CONTROLLINGAREA = PPRJPOI.CONTROLLINGAREA
    and PPRJSUB.COSTOBJTYPE = PPRJPOI.COSTOBJTYPE and PPRJSUB.COSTOBJMAINPATH = PPRJPOI.COSTOBJMAINPATH
    and PPRJSUB.COSTOBJSUBPATH = PPRJPOI.COSTOBJSUBPATH
    left outer join LSUPPLIER on LSUPPLIER.BBASE = PPRJPOI.BBASE and LSUPPLIER.LSUPPLIERNUM = PPRJPOI.PLFTNUM
    where PPRJPOI.BBASE = ? and PPRJPOI.CONTROLLINGAREA = ? and PPRJPOI.COSTOBJTYPE = ?
    and PPRJPOI.SUMSTS = ? and (PPRJPOI.MOVCOSTOBJMAINPATH is null or PPRJPOI.CLEAREDITEM = ?)
    and (PPRJPOI.CLEAREDITEM <> ? or PPRJPOI.CLEAREDITEM is null)
    and PPRJPOI.COYEARID = ? and (PPRJPOI.COPERIODNUM between ? and ? )
    and PPRJPOI.BTS_CREATE <= TO_TIMESTAMP('27.10.2008 17:00:00')
    and PPRJPOI.PATHELEM2 = ? and PPRJPOI.PATHELEM3 = ?
    order by PPRJPOI.BBASE, PPRJPOI.CONTROLLINGAREA, PPRJPOI.COSTOBJTYPE,
    PPRJPOI.COSTOBJMAINPATH, PPRJPOI.COSTOBJSUBPATH
    | Id  | Operation                        | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                 |              |     1 |   489 |    40   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                   |              |     1 |   489 |    40   (5)| 00:00:01 |
    |*  2 |   FILTER                         |              |       |       |            |          |
    |   3 |    NESTED LOOPS OUTER            |              |     1 |   489 |    39   (3)| 00:00:01 |
    |   4 |     NESTED LOOPS                 |              |     1 |   439 |    38   (3)| 00:00:01 |
    |*  5 |      HASH JOIN                   |              |     1 |   262 |    37   (3)| 00:00:01 |
    |   6 |       TABLE ACCESS BY INDEX ROWID| PPRJ         |     1 |    95 |     1   (0)| 00:00:01 |
    |*  7 |        INDEX RANGE SCAN          | PPRJ_ORA4    |     1 |       |     1   (0)| 00:00:01 |
    |*  8 |       TABLE ACCESS BY INDEX ROWID| PPRJPOI      |    41 |  6847 |    36   (3)| 00:00:01 |
    |*  9 |        INDEX SKIP SCAN           | PPRJPOI_ORA3 |    83 |       |    26   (0)| 00:00:01 |
    |  10 |      TABLE ACCESS BY INDEX ROWID | PPRJSUB      |     1 |   177 |     1   (0)| 00:00:01 |
    |* 11 |       INDEX UNIQUE SCAN          | PK_233       |     1 |       |     1   (0)| 00:00:01 |
    |  12 |     TABLE ACCESS BY INDEX ROWID  | LSUPPLIER    |     1 |    50 |     1   (0)| 00:00:01 |
    |* 13 |      INDEX UNIQUE SCAN           | PK_177597    |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - filter(:13<=:14)
       5 - access("PPRJ"."BBASE"="PPRJPOI"."BBASE" AND
                  "PPRJ"."CONTROLLINGAREA"="PPRJPOI"."CONTROLLINGAREA" AND
                  "PPRJ"."COSTOBJTYPE"="PPRJPOI"."COSTOBJTYPE" AND
                  "PPRJ"."COSTOBJMAINPATH"="PPRJPOI"."COSTOBJMAINPATH")
       7 - access("PPRJ"."BBASE"=:6 AND "PPRJ"."CONTROLLINGAREA"=:7 AND
                  "PPRJ"."COSTOBJTYPE"=:8 AND "PPRJ"."COORGPATHELEM1"=:1 AND "PPRJ"."COORGPATHELEM2"=:2
                  AND "PPRJ"."COORGPATHELEM3"=:3 AND "PPRJ"."COORGPATHELEM4"=:4 AND
                  "PPRJ"."COORGPATHELEM5"=:5)
       8 - filter("PPRJPOI"."SUMSTS"=:9 AND ("PPRJPOI"."MOVCOSTOBJMAINPATH" IS NULL OR
                  "PPRJPOI"."CLEAREDITEM"=:10) AND ("PPRJPOI"."CLEAREDITEM" IS NULL OR
                  "PPRJPOI"."CLEAREDITEM"<>:11) AND "PPRJPOI"."BTS_CREATE"<=TO_TIMESTAMP('27.10.2008
                  17:00:00'))
       9 - access("PPRJPOI"."BBASE"=:6 AND "PPRJPOI"."CONTROLLINGAREA"=:7 AND
                  "PPRJPOI"."COSTOBJTYPE"=:8 AND "PPRJPOI"."COYEARID"=:12 AND "PPRJPOI"."COPERIODNUM">=:13
                  AND "PPRJPOI"."PATHELEM2"=:15 AND "PPRJPOI"."PATHELEM3"=:16 AND
                  "PPRJPOI"."COPERIODNUM"<=:14)
           filter("PPRJPOI"."PATHELEM3"=:16 AND "PPRJPOI"."PATHELEM2"=:15)
      11 - access("PPRJSUB"."BBASE"=:6 AND "PPRJSUB"."CONTROLLINGAREA"=:7 AND
                  "PPRJSUB"."COSTOBJTYPE"=:8 AND "PPRJSUB"."COSTOBJMAINPATH"="PPRJPOI"."COSTOBJMAINPATH"
                  AND "PPRJSUB"."COSTOBJSUBPATH"="PPRJPOI"."COSTOBJSUBPATH")
       13 - access("LSUPPLIER"."BBASE"(+)=:6 AND "LSUPPLIER"."LSUPPLIERNUM"(+)="PPRJPOI"."PLFTNUM")
     Additional info:
    Tablesize:
    PPRJPOI - 44.500.000 rows
    PPRJ - 7.013 rows
    PPRJSUB - 1.150.000 rows
    LSUPPLIER - 115.000 rows
    Used indexes:
    PPRJ_ORA4: index on PPRJ(BBASE, CONTROLLINGAREA, COSTOBJTYPE, COORGPATHELEM1, COORGPATHELEM2, COORGPATHELEM3 ,COORGPATHELEM4 , COORGPATHELEM5 , COORGPATHELEM6, COORGPATHELEM7, COORGPATHELEM8);
    PPRJPOI_ORA3: index on PPRJPOI(BBASE, CONTROLLINGAREA, COSTOBJTYPE, COYEARID, COPERIODNUM, PATHELEM2, PATHELEM3, PATHELEM4, PATHELEM5, PATHELEM6, PATHELEM7, PATHELEM8);
    PK_233: index on PPRJSUB(BBASE, CONTROLLINGAREA, COSTOBJTYPE, COSTOBJMAINPATH, COSTOBJSUBPATH);
    PK_177597: index on LSUPPLIER(BBASE, LSUPPLIERNUM);
     If I execute this statement I receive a result set of 5800 rows, and it takes about 70 seconds. Executing the same statement on DB2 returns the same number of rows but takes only 15 seconds.
     I would really appreciate it if anybody could help me optimize this statement so that the execution time becomes comparable to DB2.
    Thanks in advance,
    Tobias Schmidt
    Edited by: tobiwan on Oct 31, 2008 1:01 PM

    tobiwan wrote:
    The statement we use is a prepared statement and the "?" are standing for the binding variables.
    I generated the explain plan by adding the prefix "explain plan for" to the statement and fetching the result by executing the statement "SELECT * FROM TABLE(dbms_xplan.display)". The plan table was created with the Oracle script ($ORACLE_HOME\RDBMS\ADMIN\utlxplan.sql).
    The cardinality estimates of the plan posted seem to be way off if you say that the statement returns 5,800 records, but since you're using bind variables, the optimizer in the case of an "EXPLAIN PLAN" just applies default selectivities, like 1% for an equal comparison, 5% for a range comparison, etc.
    Note that you're using (a lot of) bind variables and therefore the output of EXPLAIN PLAN is only of limited help, because it doesn't/can't use the "bind variable peeking" that happens when the statement is actually executed.
    So you need to find out the actual execution plan(s) used at run time. You can use the convenient DBMS_XPLAN.DISPLAY_CURSOR function in 10g to obtain that information. You just need to find out the SQL_ID of your statement if it is cached in the Shared Pool, e.g. by searching the V$SQL* views available, or you can check V$SESSION if you know that the statement is currently being executed.
    You should check whether you have histograms in place on the columns used with the bind variables; in a different thread I've already provided this useful blog note by the Pythian Group about this issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Once you've obtained the actual execution plan(s) (there could be multiple if you have histograms in place), post them here to find out whether the cardinality estimates are still way off, or what else could be the reason for the unexpectedly long execution time.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • UPDATE SQL statement has poor performance

    Hi All,
    We have set up a background process that runs regularly to "throttle" user-submitted Batch Requests into a Batch Processing System. The purpose of this "throttle" DB-level background process (submitted using DBMS_SCHEDULER) is to check for currently active requests and then, based on the prevailing system load, inject new Batch Requests accordingly.
    This background process is scheduled to run every minute.
    We find that the UPDATE statement below performs well even when the table being updated (FRM_BPF_REQUEST) has up to 1 million rows (the expected production volume); the UPDATE takes only a few seconds (< 10 secs) at most to execute.
    However, we find that when there is a burst of INSERTs happening to the same table (FRM_BPF_REQUEST) via another database session, the UPDATE statement suffers severe degradation. The same UPDATE which used to complete in a matter of a few seconds takes up to 40 minutes when heavy INSERTs are happening to the table. We are trying to understand why performance gets severely degraded when INSERTs are heavy on the table.
    Any thoughts or insights into the issue would be greatly appreciated.
    We are using Oracle DB 11.2.0.3.4  (on Linux)
    CREATE OR REPLACE PROCEDURE BPF_DISPATCH_REQUEST_SP(V_THROTTLE_SIZE NUMBER DEFAULT 600) AS
    --    Change History
    --001 -Auro    -10/09/2013  -Initial Version
    --    v_throttle_size    NUMBER DEFAULT 600;
          v_active_cnt         NUMBER DEFAULT 0;
          v_dispatched_cnt   NUMBER DEFAULT 0;
        v_start_time    TIMESTAMP := SYSTIMESTAMP;
        v_end_time    TIMESTAMP;
            v_subject_str   VARCHAR2(100) := '';
            v_db_name       VARCHAR2(20) := '';
      BEGIN
        -- Determine Throttle Size
        SELECT THROTTLE_SIZE
        INTO   v_throttle_size
        FROM   FRM_BPF_REQUEST_CONTROL;
        -- Determine BPF Active Request Count
        SELECT COUNT(*)
        INTO   v_active_cnt
        FROM   FRM_BPF_REQUEST
        WHERE  STATUS IN('rm_pending','rm_ready','processing','worker_ready','failed','dependency_failed','recover_ready');
        IF v_active_cnt < v_throttle_size THEN
            UPDATE FRM_BPF_REQUEST
            SET    STATUS='dispatched'
            WHERE  ID IN (
                    SELECT ID FROM (
                   SELECT ID
                   FROM   FRM_BPF_REQUEST
                   WHERE  STATUS='new'
                   ORDER BY ID
                     ) WHERE ROWNUM <= (v_throttle_size - v_active_cnt)
            );
            v_dispatched_cnt := SQL%ROWCOUNT;
            COMMIT;
        END IF;
         v_end_time := SYSTIMESTAMP;
        INSERT INTO FRM_BPF_REQUEST_DISPATCH_LOG
        VALUES (
            v_start_time,   
            v_active_cnt,
            v_dispatched_cnt,
            v_end_time,
            NULL
        );
        COMMIT;
        EXCEPTION
                  WHEN OTHERS THEN
                ROLLBACK;
             v_end_time := SYSTIMESTAMP;
            INSERT INTO FRM_BPF_REQUEST_DISPATCH_LOG
            VALUES (
                v_start_time,   
                v_active_cnt,
                v_dispatched_cnt,
                v_end_time,
                NULL
            );
            COMMIT;
                SELECT ORA_DATABASE_NAME
            INTO   v_db_name
            FROM   DUAL;
                   v_subject_str := v_db_name||' DB: Fatal Error in BPF Request Dispatch Process';
            -- Alert Support                   
                DBA_PLSQL.SEND_MAIL(P_RECIPIENTS     => '[email protected]',
                                        P_CC         => '[email protected]',
                                            P_BCC         => '[email protected]',
                                            P_SUBJECT         => v_subject_str,
                                            P_BODY         => SUBSTR(SQLERRM, 1, 250));
    END;
    show errors
    Thanks
    Auro
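    The dispatch arithmetic in the procedure above reduces to the following (a hypothetical helper with invented names, just to isolate the logic from the SQL):

```java
public class Throttle {
    // Number of 'new' requests to dispatch this cycle: fill the headroom
    // between the configured throttle size and the currently active count,
    // but never more than the number of pending 'new' requests.
    static int toDispatch(int throttleSize, int activeCount, int pendingNew) {
        int headroom = throttleSize - activeCount;
        if (headroom <= 0) {
            return 0; // system already at or over capacity
        }
        return Math.min(headroom, pendingNew);
    }
}
```

    For example, with a throttle size of 600 and 550 active requests, at most 50 of the pending 'new' rows get flipped to 'dispatched'.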

    What the heck is this:
      EXCEPTION
                  WHEN OTHERS THEN
                ROLLBACK;
             v_end_time := SYSTIMESTAMP;
            INSERT INTO FRM_BPF_REQUEST_DISPATCH_LOG
            VALUES (
                v_start_time,   
                v_active_cnt,
                v_dispatched_cnt,
                v_end_time,
                NULL
            );
            COMMIT;
                SELECT ORA_DATABASE_NAME
            INTO   v_db_name
            FROM   DUAL;
                   v_subject_str := v_db_name||' DB: Fatal Error in BPF Request Dispatch Process';
            -- Alert Support                   
                DBA_PLSQL.SEND_MAIL(P_RECIPIENTS     => '[email protected]',
                                        P_CC         => '[email protected]',
                                            P_BCC         => '[email protected]',
                                            P_SUBJECT         => v_subject_str,
                                            P_BODY         => SUBSTR(SQLERRM, 1, 250));
    Why are you programming for failure to succeed, willing to accept time-consuming rollbacks, committing afterward, fooling with transactions, swallowing/hiding all errors, all that 'nice and safely hidden' in the notorious WHEN OTHERS exception NOT followed by a RAISE?
    Only catch errors you expect.
    Programming to let a program fail is to fail.

  • Populating Unique data with a SQL Statement

    I have a table with millions of rows. Now I have to populate the data from that table into another table with the same structure, but with a primary key defined on two columns. In the process of populating, data loss must not take place.
    How can this be achieved with a single SQL statement without affecting performance?
    Thanks in advance ...

    If you are sure that there are no violations of the new primary key, then something like
    INSERT INTO new nologging
    SELECT * FROM old
    will work. If you think there may be violations of the new primary key, then disable the primary key before doing the insert. After the insert completes:
    ALTER TABLE new
    ENABLE CONSTRAINT pk
    EXCEPTIONS INTO exptable
    You will need to use utlexcpt.sql to create the exceptions table. The primary key will not enable, but the exceptions table will hold the rowids of the rows that violate the key. You can use these to fix them.

  • ORA-01555 caused by SQL statement below

    Dear all,
    How can I treat this error ?
    ORA-01555 caused by SQL statement below (SQL ID: 9kh4f608ty7un, Query Duration=88767 sec, SCN: 0x0000.db7b3cef): : Category 1
    Any ideas ? I did not touch the DB, so I am not sure why it came.
    Thanks in advance,
    Daniel.

    hi Daniel,
    You should mention the step-by-step solution (the SQL statements) you performed to resolve this issue. I am saying this for those new SDN users who have no S-user or access to SAP Notes.
    Thanks and Regards,
    majamil
    Thanks???

  • SQL Statement ignored performing List of Values query

    Hi, New user just learning the basics. I have created a simple table PERSON with columns, ID, firstname, lastname, phone, city, State_ID
    Then clicked create Lookup table - State_Lookup with columns State_ID and State_Name.
    I create a page, including all columns from PERSON. For State, the field is a select list that should do a lookup from the STATE_LOOKUP table. (I have entered 4 states in the table.)
    I am getting the following error however:
    Error: ORA-06550: line 1, column 14: PL/SQL: ORA-00904: "STATE_ID": invalid identifier ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored performing List of Values query: "select STATE_ID d, STATE_ID v from STATE_ID_LOOKUP order by 1".
    I have not entered any sql, just selected all of my options using defaults and dropdowns. What is causing the error and what do I need to change?
    Thanks

    Okay, learned something: The database link name used, must not contain a dash. The DB_DOMAIN is appended automatically when you create a DB link, so if IT contains a dash, the db link name does as well. Check DBA_DB_LINKS to make sure you don't hit this well-hidden feature.
    Regards
    Martin Klier
    http://www.usn-it.de

  • How to monitor worst performing sql statements

    Hi,
    I am new to oracle 9i release 2. I come from the Windows world, where we used sql server.
    When we performance tested our product, we always monitored the worst performing SQL statements using the SQL Profiler. At the end of a 24-hour test, the profiler lists the SQL statements with the longest execution times. What is the equivalent Oracle 9i tool that will allow me to monitor the worst performing SQL during a test that lasts between 10 and 24 hours?
    Thanks,
    Paul0al

    Besides statspack and OEM you have a few other options.
    If an SQL statement has been identified as performing poorly, or a job that you can extract the SQL from, then you have the option of explaining all the SQL statements and reviewing the plans for reasonableness. You can also trace actual execution of the task or of individual statements (alter session set sql_trace = true for a basic 10046 event).
    When the SQL has not been identified in advance you can query the shared pool SQL areas for SQL statements that have relatively high physical, logical, or combined IO counts. Then you can perform tuning activities for these statements.
    HTH -- Mark D Powell --

  • Merged Dimension Performance vs. Multiple SQL Statements via Contexts

    Hi there,
    If you have a Webi report and you select two measures, each from a different context, along with some dimensions, and it generates two separate SQL statements via a "join", does that join happen outside the scope of the Webi Processing Server?
    If it happens within the Webi Processing Server's memory, how does the processing compare, with respect to performance, to having two separate queries in your report and then merging the dimensions?
    Thanks,
    Allan

    you can use the code as per your requirement
    but you need to do some performance tuning
    http://biemond.blogspot.com/2010/08/things-you-need-to-do-for-owsm-11g.html

  • Strange Performance Problems with SQL Statements....

    Hi,
    I have noticed a strange performance collapse while changing simple things in SQL statements...
    I have built a simple statement:
    SELECT
    a.AAA,
    a.BBB,
    a.CCC,
    b.AAA
    from
    TABLE_A a,
    TABLE_B b,
    TABLE_C c
    where
    a.XXX=b.XXX
    and c.XXX = a.XXX
    and c.yyy = 'SOMETHING'
    It's very fast, even with complex XSL transformations...
    After putting a GROUP BY or DISTINCT into the statement (to suppress data-set "clones"), it takes around 100 times longer than before....
    I have tested the statement in SQL*Plus, where it was as fast as before, but in XML Publisher it takes much longer.....
    Has anyone seen this problem before?
    Greetings...

    PROBLEM SOLVED !
    It has to be 8.1.6 across the whole environment.

  • Performance - SQL Statements- Script needed

    Hi All
    I am working on Performance Tuning on Oracle 10g/ Solaris environment.
    I am looking for a shell script that gives the top 10 time-consuming SQL statements...... My client does not want me to use Statspack or OEM, for some reason which I don't know. I am wondering if any of you might help me out ....
    thanks in advance
    riah

    >> My client does not want me to use statspack
    Your client does not want you to use the scripts provided by Oracle that do exactly what you want, but will allow you to run scripts that come from some source unknown (to the client, at least)????

  • How do I use SQL statements to perform calculations with form fields????

    Please help!!! I don't know how to use a SQL statement within my APEX form......
    My form is below. The user will enter the values in the form and click Submit. Then we need to run a SQL SELECT statement with those values.
    Our form looks like this:
    Start_Date ____________
    Per_Period ____________
    Period ____________
    [Submit Button]
    The user will enter these 3 values in the form.
    This is an example of an user providing the values:
    Start_Date 03/14/08_______
    Per_Period $200.00________
    Period 4____________
    [Submit Button]
    Then they will click the Submit Button.
    The SQL statement (BELOW) returns output based on the users selections:
    START_DATE PER_PERIOD PERIOD
    14-MAR-2008 00:00 200 Week 1 of 4
    21-MAR-2008 00:00 200 Week 2 of 4
    28-MAR-2008 00:00 200 Week 3 of 4
    04-APR-2008 00:00 200 Week 4 of 4
    Total 800
    This is the full text of the SQL that makes the output above:
    with criteria as (select to_date('03/14/08', 'mm/dd/rr') as start_date,
    4 as periods,
    'Week' as period,
    200 per_period from dual),
    periods as (select 'Week' period, 7 days, 0 months from dual
    union all select 'BiWeek', 14, 0 from dual
    union all select 'Month', 0, 1 from dual
    union all select 'ByMonth', 0, 2 from dual
    union all select 'Quarter', 0, 3 from dual
    union all select 'Year', 0 , 12 from dual),
    t1 as (
    select add_months(start_date,months*(level-1))+days*(level-1) start_date,
    per_period,
    c.period||' '||level||' of '||c.periods period
    from criteria c join periods p on c.period = p.period
    connect by level <= periods)
    select case grouping(start_date)
    when 1 then 'Total'
    else to_char(start_date)
    end start_date,
    sum(per_period) per_period,
    period
    from t1
    group by rollup ((start_date, period))
    THANKS VERY MUCH!!!
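    For what it's worth, the date arithmetic that the CONNECT BY level generator performs can be checked outside the database (a hypothetical Java sketch using java.time; for the 'Week' period the query advances 7 days and 0 months per level):

```java
import java.time.LocalDate;

public class Schedule {
    // Start date of period number 'level' (1-based): add the configured
    // months and days (level - 1) times, mirroring the SQL expression
    // add_months(start_date, months*(level-1)) + days*(level-1).
    static LocalDate periodStart(LocalDate start, int days, int months, int level) {
        return start.plusMonths((long) months * (level - 1))
                    .plusDays((long) days * (level - 1));
    }

    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2008, 3, 14);
        for (int level = 1; level <= 4; level++) {
            // 'Week' period: days = 7, months = 0
            System.out.println("Week " + level + " of 4: " + periodStart(start, 7, 0, level));
        }
    }
}
```

    This reproduces the four week rows in the sample output (14-MAR-2008 through 04-APR-2008).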

    You're just doing a parameterized report, where the input fields are your parameters.
    Check out the Advanced Tutorial titled Parameterized Report here:
    http://download.oracle.com/docs/cd/E10513_01/doc/appdev.310/e10497/rprt_query.htm#BGBEEBJA
    Good luck,
    Stew

  • Performance of SQL-Statements in Reports

    Hi
    I have a very complex SQL-Statement in a Region-Report with Items in the where-clause:
    select ....
    where  idt_1 like :P1_IDT
    and    idt_2 like :P1_IDT2
    ...it generates 100 million records in the Temp tablespace and produces either a timeout or an error message that the Temp tablespace is not big enough.
    If I replace the Items with real values it runs in a few seconds in the SQL-Workshop.
    select ....
    where  idt_1 like 10
    and    idt_2 like 11
    ...If I use the Region-Type based "PL/SQL Function Body returning SQL Statement" and generate the Statment like this:
    v_statment := 'select ... where idt_1 like ' || :P1_IDT;
    return v_statment;
    ...it runs in a few seconds too.
    Any explanations?
    Regards, Juergen

    Jürgen,
    John's recommendation is sound. Your last two examples ultimately use literal values in your query statement (that is, the query optimizer can use these values to determine the optimal query plan). The query plans for the last two queries may be entirely different than what was generated for your first query.
    Additionally, if the selectivity of your first query shifted dramatically across subsequent executions, the query plan initially generated may not be suitable again.
    Examining the tkprof output should elucidate all of this.
    Joel
