A question about PreparedStatement and the cursor_sharing parameter

A question about PreparedStatement and the cursor_sharing parameter.
My understanding is that when the cursor_sharing parameter is set to SIMILAR or FORCE instead of
EXACT, statements that are identical apart from their literals or bind values are treated as the
same statement and share the same execution plan.
With EXACT, however, even when I use &dept in SQL*Plus, as in
select * from dept where deptno = &dept
each execution is treated as a different statement (the substitution variable is replaced by a
literal before the statement is parsed). So what happens to a PreparedStatement when
cursor_sharing is EXACT?
Every JDBC book says to use PreparedStatement, and Oracle tuning books say to set cursor_sharing
appropriately, but none of them discuss the two together.
Can a PreparedStatement still share cursors when cursor_sharing is EXACT? Please enlighten me.
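A minimal JDBC sketch of how this usually plays out: with a PreparedStatement the SQL text sent to
Oracle contains the ? placeholder and the driver binds the value, so repeated executions parse
identical text and can share a single cursor even with CURSOR_SHARING = EXACT; cursor_sharing only
matters for statements with hard-coded literals. The connection details and the DNAME column below
are illustrative, not taken from the original post.

import java.sql.*;

public class ExactSharingSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//host:1521/service", "scott", "tiger");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT * FROM dept WHERE deptno = ?")) {
            for (int deptno : new int[] {10, 20, 30}) {
                ps.setInt(1, deptno);              // real bind variable, not a literal
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("DNAME"));  // DNAME assumed for illustration
                    }
                }
            }
        }
    }
}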

Below are the results of a performance test I ran.
Using CURSOR_SHARING
===========================================
Explanation
1. Usage
1) Set in initSID.ora : CURSOR_SHARING = { FORCE | EXACT }
2) Scope : Dynamic ( ALTER SESSION, ALTER SYSTEM )
3) Default Value : EXACT
2. Description
The following two SQL statements cannot share a cursor in the SQL area:
SELECT ename, empno FROM emp WHERE deptno = 10;
SELECT ename, empno FROM emp WHERE deptno = 20;
-> Everything else in the two statements is identical, but instead of a bind variable they use
the constants 10 and 20.
To make the two statements share a cursor, the SQL can be rewritten as:
SELECT ename, empno FROM emp WHERE deptno = :department_no;
If rewriting the SQL is not feasible, CURSOR_SHARING can instead be set to FORCE in the init
parameter file or with an ALTER SYSTEM or ALTER SESSION command (a JDBC sketch of the
session-level change follows below).
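A minimal sketch of changing the parameter per session from JDBC when the literal-laden SQL itself
cannot be changed; the open Connection con is an assumption, and the value is restored afterwards.

try (Statement s = con.createStatement()) {
    s.execute("ALTER SESSION SET CURSOR_SHARING = FORCE");
}
// ... run the literal-only SQL here; Oracle replaces the literals with
// system-generated bind variables so the statements can share a cursor ...
try (Statement s = con.createStatement()) {
    s.execute("ALTER SESSION SET CURSOR_SHARING = EXACT");   // back to the default
}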
3. Considerations
Before setting CURSOR_SHARING to FORCE, the following two points should be considered:
1) Are there many statements in the SQL area that are identical except for their literal values?
2) Is response time poor because of a high library cache miss rate?
* Setting CURSOR_SHARING to FORCE is not recommended on DSS systems, because the results are hard
to predict. It is also not recommended for complex queries, or when query rewrite or stored
outlines are used.
4. Performance test
Environment : Windows NT 4.0 ( Pentium 233MHz 1 CPU, 128M )
Oracle 8.1.6
Scenario : insert 10,000 rows into a table
1) Without bind variables, CURSOR_SHARING = EXACT
-> Total time elapsed : 80,496 msec.
2) Without bind variables, CURSOR_SHARING = FORCE
-> Total time elapsed : 61,699 msec.
3) With bind variables
-> Total time elapsed : 21,561 msec.
These results show that when the same SQL statement is processed repeatedly, using bind variables
is by far the fastest option; even without bind variables, setting CURSOR_SHARING to FORCE gives
roughly a 23% improvement over leaving it at EXACT.
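A rough JDBC sketch of the comparison in section 4, measuring literal SQL built per row against a
single PreparedStatement with a bind variable; the PERF_TEST table and the open Connection con are
illustrative assumptions, not part of the original test.

long t0 = System.currentTimeMillis();
try (Statement st = con.createStatement()) {
    for (int i = 0; i < 10000; i++) {
        // a new literal each time: under EXACT every distinct text is hard parsed
        st.executeUpdate("INSERT INTO perf_test VALUES (" + i + ")");
    }
}
long literalMs = System.currentTimeMillis() - t0;

t0 = System.currentTimeMillis();
try (PreparedStatement ps = con.prepareStatement("INSERT INTO perf_test VALUES (?)")) {
    for (int i = 0; i < 10000; i++) {
        ps.setInt(1, i);                  // same text every time, one shared cursor
        ps.executeUpdate();
    }
}
long bindMs = System.currentTimeMillis() - t0;
System.out.println("literal=" + literalMs + " ms, bind=" + bindMs + " ms");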

Similar Messages

  • Please reply:how to avoid extra trailing spaces while using cursor sharing

    I am using cursor sharing with FORCE or SIMILAR.
    What is the solution to avoid the extra trailing spaces without any java code change?
    Do we have any option in Oracle to avoid the extra trailing spaces during query processing?
    I am using Oracle 10g.
    CURSOR SHARING is a feature in which multiple SQL statements that are the same share a cursor
    (in the library cache) for an Oracle session, i.e., the first three steps of SQL processing
    (hard parse, soft parse, optimization) are done only the first time that kind of statement is
    executed.
    There are two ways in which similar SQL statements with different condition values can be made to "SHARE" cursor during execution:
    1. Writing SQLs with Bind Variables: SQLs having no hard coded literals in them
    For e.g., the query below
    SELECT node.emp_name AS configid
    FROM emp node
    WHERE emp_no = :1
    AND dept_no =
    DECODE (SUBSTR (:2, 1, 3),
    :3, :4,
    (SELECT MAX (dept_no)
    FROM emp
    WHERE emp_no = :5 AND dept_no <= :6))
    AND node.dept_type = :7
    ORDER BY node.emp_name
    Here all the variables are dynamically bound during execution. Each ":X" represents a bind variable, and the actual values are bound to the SQL only at the fourth step of its execution.
    In applications: queries written with "?" as bind variables are converted into ":X" placeholders and are SQLs with bind variables.
    2. The CURSOR_SHARING parameter: Only Useful for SQL statements containing literals:
    For eg., the query below:
    SELECT node.emp_name AS configid
    FROM emp node
    WHERE emp_no = 'H200'
    AND dept_no =
    DECODE (SUBSTR (:1, 1, 3),
    'PLN', :2,
    (SELECT MAX (dept_no)
    FROM emp
    WHERE emp_no = :3 AND dept_no <= :4))
    AND node.dept_type = :5
    ORDER BY node.emp_name
    In the query above there are two hard-coded literals, 'H200' and 'PLN'. When the same SQL is executed with different values such as ('H2003', 'PLN'), Oracle will create a new cursor for the statement and the first three steps (hard and soft parse and optimization) need to be done again.
    This can be avoided by changing the CURSOR_SHARING parameter which can be set to any of three values:
    1. EXACT: Causes the mechanism not to be used, i.e. no cursor sharing for statements with different literals. This is the default value.
    2. FORCE: Causes unconditional sharing of SQL statements that only differ in literals.
    3. SIMILAR: Causes cursor sharing to take place when this is known not to have any impact on optimization.
    So, the FORCE and SIMILAR values of the parameter help with cursor sharing and improve the performance of SQLs that contain literals.
    But a problem arises when we use FORCE or SIMILAR instead of EXACT:
    alter session set cursor_sharing ='EXACT'
    select 1 from dual;
    '1'
    1
    alter session set cursor_sharing='FORCE'
    select 2 from dual;
    '2'
    2
    alter session set cursor_sharing='SIMILAR'
    select 3 from dual;
    '3'
    3
    So this gives extra trailing spaces when we retrieve the value from the java method, and any
    further java processing based on the hard-coded literal values will fail. Fixing this in the
    application would mean modifying the existing millions of lines of code.
    My question is: I have to use cursor sharing with FORCE or SIMILAR, so can the trimming be done
    at the Oracle query-processing level?
    Please help me with this.
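    Not a database-level fix, but for completeness, a minimal JDBC-side sketch of the symptom and of
    the usual client-side workaround of trimming at the point of retrieval; whether the padding
    actually appears depends on the Oracle version and driver, and the open Connection con is an
    assumption.

    try (Statement st = con.createStatement()) {
        st.execute("ALTER SESSION SET CURSOR_SHARING = FORCE");
        try (ResultSet rs = st.executeQuery("SELECT 'X' FROM dual")) {
            while (rs.next()) {
                String raw = rs.getString(1);                  // may come back blank-padded
                String trimmed = (raw == null) ? null : raw.trim();
                System.out.println("[" + raw + "] -> [" + trimmed + "]");
            }
        }
    }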
    Message was edited by:
    Leeladhar

    Please reply to this thread
    How to avoid extra trailing spaces using the cursor sharing options FORCE and SIMILAR

  • Not a group by column error message - cursor sharing

    Hi,
    Our database is Oracle 11g.
    We faced problems about some sql's. These sql's were running in Oracle 10g.
    Our sql statements are like the below query:
    SELECT COLUMN_A, COLUMN_B, COLUMN_C,
    TO_CHAR(OPERATION_DATE,'DD/MM/YYYYY'), CUSTOMER_NAME || '-' || CUSTOMER_SURNAME,
    DECODE (IS_MANAGER, 0, ' Manager', 'Employee'),
    SUM(TOTAL), SUM(AMOUNT), AVG(SALARY)
    FROM TABLES
    GROUP BY COLUMN_A,COLUMN_B,COLUMN_C,
    TO_CHAR(OPERATION_DATE,'DD/MM/YYYYY'), CUSTOMER_NAME || '-' || CUSTOMER_SURNAME,
    DECODE (IS_MANAGER, 0, ' Manager', 'Employee')
    ORDER BY COLUMN_A,COLUMN_B,COLUMN_C,
    TO_CHAR(OPERATION_DATE,'DD/MM/YYYYY'), CUSTOMER_NAME || '-' || CUSTOMER_SURNAME,
    DECODE (IS_MANAGER, 0, ' Manager', 'Employee')
    When we remove the ORDER BY, or when we disable cursor sharing, the problem goes away.
    I think the SQL looks correct. Why are we getting a "not a GROUP BY expression" error message?
    How can we fix this problem without changing the queries (we have other reports that give the same error message) or disabling cursor sharing?
    Thank you, Bye

    Dear,
    SELECT   column_a
            ,column_b
            ,column_c
            ,TO_CHAR (operation_date, 'DD/MM/YYYYY')
            ,customer_name || '-' || customer_surname
            ,DECODE (is_manager, 0, ' Manager', 'Employee')
            ,SUM (total)
            ,SUM (amount)
            ,AVG (salary)
        FROM TABLES
    GROUP BY column_a,
             column_b,
             column_c,
             TO_CHAR (operation_date, 'DD/MM/YYYYY'),
             customer_name || '-' || customer_surname,
             DECODE (is_manager, 0, ' Manager', 'Employee')
    ORDER BY column_a,
             column_b,
             column_c,
             TO_CHAR (operation_date, 'DD/MM/YYYYY'),
             customer_name || '-' || customer_surname,
             DECODE (is_manager, 0, ' Manager', 'Employee')
    Here you are grouping by
    DECODE (is_manager, 0, ' Manager', 'Employee')
    What does is_manager represent? Is it a function? It might be that you are hitting one of the GROUP BY restrictions, such as SELECT expressions not being allowed in the GROUP BY clause.
    Best regards
    Mohamed Houri

  • Adaptive Cursor Sharing and OLTP databases

    Hi all,
    I am reading a book about performance that affirms that, for OLTP environments, it is recommended to disable the Adaptive Cursor Sharing feature (setting the hidden parameter _OPTIMIZER_EXTENDED_CURSOR_SHARING_REL to NONE and CURSOR_SHARING to EXACT). The book recommends this to avoid the overhead related to the feature.
    I know that with this feature you can avoid the bind peeking issue, as it creates more than one execution plan for the same query. So, is it really a good practice to disable it?
    Thanks in advance.

    OK, thanks for pointing out.
    Getting back to your original question:
    So, is it really a good practice to disable it?
    No, it is not, especially not without approval of Oracle Support.
    Furthermore, I feel confirmed by that point from the book review by Charles Hooper as well as his additional points:
    The book states that in an OLTP type database, "we probably want to disable the Adaptive Cursor Sharing feature to eliminate the related overhead." The book then suggests changing the CURSOR_SHARING parameter to a value of EXACT, and the _OPTIMIZER_EXTENDED_CURSOR_SHARING_REL parameter to a value of NONE. First, the book should not suggest altering a hidden parameter without mentioning that hidden parameters should only be changed after consulting Oracle support. Second, it is not the CURSOR_SHARING parameter that should be set to a value of EXACT, but the _OPTIMIZER_ADAPTIVE_CURSOR_SHARING parameter that should be set to a value of FALSE (see Metalink (MOS) Doc ID 11657468.8). Third, the blanket statement that adaptive cursor sharing should be disabled in OLTP databases seems to be an incredibly silly suggestion for any Oracle Database version other than 11.1.0.6 (this version contained a bug that led to an impressive number of child cursors due to repeated executions of a SQL statement with different bind variable values). (page 327)"
    http://hoopercharles.wordpress.com/2012/07/23/book-review-oracle-database-11gr2-performance-tuning-cookbook-part-2/

  • Cursor sharing&bind variables

    hii,
    What is the advantage of the cursor sharing parameter?
    Is there any performance difference between using the cursor sharing parameter only (coding SQL queries with literals) and using bind variables only (without setting the cursor sharing parameter)?

    user4030266 wrote:
    I am trying to optimize reports.
    I think I have three option.
    First is using literals and trust to the cursor sharing parameter.
    Second one is using plsql functions which returns cursors.
    Third one is using plsql functions which returns tables by using pipleined functions like http://mennan.kagitkalem.com/PerformanceAnalyzesOfPIPELINEDFunctionsInOracle.aspx.
    How would any of these optimise reports?
    Using literals and not bind variables is only a performance issue when the application fires off 100s of SQLs per second - and each needs to be hard parsed. It is not a performance issue when the client is a reporting tool that happens not to use bind variables. Sure, it is desirable (most of the time) to have a client use bind variables. But what optimisation do you expect from a report query? It will definitely not be any faster - as the very same SQL will be executed.
    A PL/SQL function returning a ref cursor.. again, what optimisation do you expect? Does not matter how you pass a SQL to the Oracle SQL engine, ALL SQLs are parsed and stored as cursors. Every one. Now instead of the client (reporting tool) getting a cursor handle directly from its call to the SQL engine, you want it to call PL/SQL, and then give it a ref cursor handle.
    The only difference is basically who hands out the cursor handle. The SQL engine directly. Or your PL/SQL code indirectly? The SQL still remains the same. It will be as slow or as fast as before. Only now you're making things a tad more complex by introducing another layer (your PL/SQL code) in-between the client and the SQL engine.
    Yes, it can serve a good purpose at times by abstracting the actual SQL layer from the client. But do not expect any performance improvements. This does not make the SQL go any faster.
    Lastly, pipelined table functions. This is a great way to make the SQL go slower. How does a PL/SQL pipeline work? It is called via the SQL engine. It is PL/SQL code. This code in turn calls the SQL engine again. It copies data from the SQL engine into PL/SQL. Then it copies (or pipes) that same data back to the SQL engine.
    How can this be faster than the client simply getting the data directly from the SQL engine? Pipeline table functions is a tool to address a specific type of problem dealing with ETL (extraction, transformation and loading). With emphasis on the transformation part - as there is where you want to use PL/SQL to transform the SQL data into something different before passing (piping) it back to the SQL engine.
    It is not a performance tool that make SQL go faster. In fact, it is slower than simply staying within the SQL engine and not move SQL data to and from the SQL engine to PL/SQL engine and back.
    You want to improve or optimise actual reporting? Then what you need to look at is the stuff that is actually relevant. Not cursors. Not pipeline tables.
    And the stuff that is relevant are:
    - the design of the data model
    - the physical implementation of the data model
    - the design and logic of the SQL report
    And that's it. Get that right, and you have addressed a significant portion of the performance problem of SQL reporting.
    And be very wary when so-called "experts" mention using ref cursors and pipelined tables and the like to optimise performance... as these are not the tools one uses to address performance problems (they are tools dealing with usability, not speed).

  • Huge version_count for cursor sharing for store method

    Hi,
    I'm using Oracle BPM 10.3.2, which is connected to database (11.2.0.3.0).
    For a rather large table with 250 columns and 10,000 rows we are using the built-in load and store methods that are available for a BPMObject created for a database table.
    The built-in store method always updates the whole database object, no matter whether any field was changed, and all variables are bound, so shared cursors are used.
    Unfortunately, even though all updates are similar, the database creates a new child cursor because of a "bind mismatch" on the variables.
    The bind mismatch arises because the variable contents differ in length; a varchar2 bind, for example, can have lengths of 32, 128 or 2000 characters.
    Whenever the database sees a new combination of variable lengths for the update, a new version of the cursor is created.
    Can I somehow force the same cursor to be used for all updates (I've tried the cursor_sharing parameter in sys.v_$parameter, but it doesn't work)?
    Any other ideas?
    BR,
    Wujek

    It might be that your shared pool is fragmented, especially if your application is using literals and not bind variables. A high version count does not always mean a terrible thing, but most likely it does. You may also have bind mismatches (the most common reason) causing your high version count of SQLs. Read about bind mismatch and see if this could be the reason in your application. I suggest executing the command 'alter system flush shared_pool;' during idle time (at night when there is no activity) every day to clean out the SQLs that are considered candidates to be flushed from the shared pool. I am suggesting this because your application might be using literals. REMEMBER, this is NOT a fix; it is just a kind of aspirin until you fix the real problem in your application. You may benchmark and see how it performs. My last suggestion, which is the most important: please think about upgrading your database. This version of Oracle is an old one and Oracle itself no longer supports it well.

  • Many child cursors on an insert statement

    Oracle: 10.2.0.5
    Cursor_sharing = exact
    I understand how you get many child cursors with cursor sharing set to exact. However, this is an insert statement. It has a plan_hash_value = 0 and there are not any explain plans in v$sql_plan.
    They all have the parsing_schema_name = 'GWYDB'
    This is not an issue with multiple copies of the same table in different schemas since the statement has <owner>.<table name>
    this is a 1 line insert statement.
    insert into schema.mytablue values (:b1.:b2,:b3);

    Guess2 wrote:
    Oracle: 10.2.0.5
    Cursor_sharing = exact
    I understand how you get many child cursors with cursor sharing set to exact. However, this is an insert statement. It has a plan_hash_value = 0 and there are not any explain plans in v$sql_plan.
    They all have the parsing_schema_name = 'GWYDB'
    This is not an issue with multiple copies of the same table in different schemas since the statement has <owner>.<table name>
    this is a 1 line insert statement.
    insert into schema.mytablue values (:b1.:b2,:b3);
    An Oracle ERROR indicates something needs to change.
    post SQL & error code and message that needs to be fixed.

  • Too many open cursors

    Could someone help me understand this problem, and how to remedy it? We're getting warnings as the number of open cursors nears 1200. I've located the V$OPEN_CURSOR view, and after investigating it, this is what I think:
    Currently:
    SQL> select count(*)
    2 from v$open_cursor;
    COUNT(*)
    535
    1) I have one session open in the database, and 40 records in this view. Does that mean my cursors are still in the cursor cache?
    2) Many of these cursors are associated with our analysts, and it looks like they are likely queries TOAD runs in order to gather meta-data for the interface. Can I overcome this?
    3) I thought that the optimizer only opened a new cursor when a query that didn't match one in the cache was executed. When I run the following, I get 105 SQL statements with the same hash_value and sql_id, which together account for 314 of the 535 open cursors (60% of the open cursors):
    SQL> ed
    Wrote file afiedt.buf
    1 SELECT COUNT(*), SUM(cnt)
    2 FROM (SELECT hash_value,
    3 sql_id,
    4 COUNT(*) as cnt
    5 FROM v$open_cursor
    6 GROUP BY hash_value, sql_id
    7* HAVING COUNT(*) > 1)
    SQL> /
    COUNT(*) SUM(CNT)
    104 314
    4) Most of our connections in production will use Oracle Forms. Is there something we need to do in order to get Forms to use bind variables, or will it do so by default?
    Thanks for helping me out with this.
    -Chuck

    CURSOR_SHARING=EXACT
    OPEN_CURSORS=500
    CURSOR_SHARING
    From what I've read, cursor sharing is always in effect, although we have the most conservative method set. So I'm not sure how this affects things. Several identical queries are being submitted in several separate cursors.
    OPEN_CURSORS
    This value corresponds to the maximum number of cursors allowed for a single session. We're using shared servers, so I'm not exactly sure if this is still 'per session' or 'per shared server', but 500 should be more than enough.
    It sounds like you're suggesting that a warning is being triggered based upon our init params. If that's the case, then what are people seeing as a limit for cursors on a 2-CPU Linux box with 2G of memory?
    -Chuck
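    An editor's aside, not from the thread, but relevant to the JDBC question at the top of this page:
    in Java applications a steadily growing open-cursor count usually comes from Statement or
    ResultSet objects that are never closed. A hedged sketch of the usual safeguard
    (try-with-resources), assuming an open Connection con:

    try (PreparedStatement ps = con.prepareStatement("SELECT ename FROM emp WHERE deptno = ?")) {
        ps.setInt(1, 10);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }   // both the ResultSet and the PreparedStatement (and its cursor) are closed here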

  • Child cursors not getting purged 11.2.0.2

    select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL_ID                         = 7bz1uuvdrkdbm
    ADDRESS                        = 000000031CEBB878
    CHILD_ADDRESS                  = 000000031CEBB418
    CHILD_NUMBER                   = 0
    ROLL_INVALID_MISMATCH          = Y
    All child cursors have same ROLL_INVALID_MISMATCH   = Y
    I have shared_pool_size=2 GB.
    select sql_id,child_number CN,plan_hash_value phv,executions,elapsed_time/executions/1000000 "sec" ,IS_BIND_SENSITIVE SEN,is_bind_aware AWR,IS_SHAREABLE SHA,LAST_LOAD_TIME LOAD_T,last_active_time ACTIVE_T,substr(sql_fulltext,1,40) from v$sql where sql_id='&sql_id' order by last_active_time;
    Enter value for sql_id: 7bz1uuvdrkdbm
    SQL_ID                 CN        PHV EXECUTIONS        sec S A S LOAD_T                    ACTIVE_T                  SUBSTR(SQL_FULLTEXT,1,40)
    7bz1uuvdrkdbm           7   57216169     204160 .043308046 Y N Y 2012-10-19/21:41:52       20.Oct.12/21:03:35        select instrops0_.detail_id as detail41_
    7bz1uuvdrkdbm           0   57216169     185981  .04124503 Y N Y 2012-10-20/21:03:44       21.Oct.12/18:02:37        select instrops0_.detail_id as detail41_
    7bz1uuvdrkdbm           1   57216169     252623 .043459744 Y N Y 2012-10-21/18:02:38       22.Oct.12/21:32:07        select instrops0_.detail_id as detail41_
    7bz1uuvdrkdbm           2   57216169     240260 .043797356 Y N Y 2012-10-22/21:32:35       23.Oct.12/21:16:31        select instrops0_.detail_id as detail41_
    7bz1uuvdrkdbm           3   57216169       9331 .047150057 Y N Y 2012-10-23/21:16:40       23.Oct.12/23:33:15        select instrops0_.detail_id as detail41_
    7bz1uuvdrkdbm           4   57216169      38965 .045418917 Y N Y 2012-10-23/23:33:25       24.Oct.12/04:18:29        select instrops0_.detail_id as detail41_
    select sql_id,loads,loaded_versions,invalidations,executions from v$sql where sql_id='7bz1uuvdrkdbm' order by child_number;
    SQL_ID              LOADS LOADED_VERSIONS INVALIDATIONS EXECUTIONS
    7bz1uuvdrkdbm          21               1            20     185981
    7bz1uuvdrkdbm          17               1            16     252623
    7bz1uuvdrkdbm          14               1            13     240260
    7bz1uuvdrkdbm          22               1            21       9331
    7bz1uuvdrkdbm          11               1            10      45902
    7bz1uuvdrkdbm           9               1             8     204160
    6 rows selected.
    I understand that the child cursors are created because object statistics are updated by the nightly dbms_stats.gather_schema_stats (ROLL_INVALID_MISMATCH caused by DBMS_STATS.AUTO_INVALIDATE).
    My question is: why is Oracle not purging old child cursors that are no longer being used? (For example, child_number = 7 was last used on 20.Oct.12/21:03:35.)
    Is there any way to have them purged automatically after a certain threshold? Isn't that a misuse of shared pool memory? I believe rolling invalidation is meant to prevent hard parses (and latch contention issues) caused by cursor invalidation in a busy OLTP system. But if the OLTP system is active 24 hours a day, we can still observe latch contention after 5 hours (_optimizer_invalidation_period), right?
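    One manual option (an editor's hedged sketch, not something suggested in the replies below): on
    11.2 a specific parent cursor and all of its children can be purged with DBMS_SHARED_POOL.PURGE,
    passing the ADDRESS and HASH_VALUE of the statement and the flag 'C' for cursor. This is a
    one-off purge, not the automatic aging being asked about; the open Connection con is assumed.

    String addrAndHash = null;
    try (PreparedStatement ps = con.prepareStatement(
            "SELECT address || ',' || hash_value FROM v$sqlarea WHERE sql_id = ?")) {
        ps.setString(1, "7bz1uuvdrkdbm");
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) addrAndHash = rs.getString(1);
        }
    }
    if (addrAndHash != null) {
        try (CallableStatement cs = con.prepareCall("{call sys.dbms_shared_pool.purge(?, ?)}")) {
            cs.setString(1, addrAndHash);
            cs.setString(2, "C");          // 'C' = purge a cursor
            cs.execute();
        }
    }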

    Hi,
    shared pool housekeeping is not as simple as you imagine it to be. It's not like at any given moment of time there is only one "correct" child cursor, the rest being subject to purging; it's much more complex, and the exact housekeeping algorithms are not accessible to us users. Plus, you only have several child cursors; I once had over a hundred and raised an SR to that effect -- the customer support representative said that cursor sharing mechanisms in Oracle aren't perfect and unless one has thousands of child cursors one shouldn't be worried.
    Best regards,
    Nikolay

  • Shared pool fragmentation

    Find below a modified version of a script retrieved from Metalink 146599.1 to check for shared pool fragmentation
    I am not sure if it is possible, but is there any way to incorporate into this script a query that will show the effectiveness of using an ALTER SYSTEM FLUSH SHARED_POOL command .
    select bucket, freespace,
    ROUND(ratio_to_report(freespace) over () * 100, 5) AS "Percentage"
    from
    (select '0 (<140)' BUCKET, sum(KSMCHSIZ) freespace
    from x$ksmsp
    where KSMCHSIZ<140
    and KSMCHCLS='free'
    UNION ALL
    select '1 (140-267)' BUCKET, sum(KSMCHSIZ) freespace
    from x$ksmsp
    where KSMCHSIZ between 140 and 267
    and KSMCHCLS='free'
    UNION ALL
    select '2 (268-523)' BUCKET, sum(KSMCHSIZ) freespace
    from x$ksmsp
    where KSMCHSIZ between 268 and 523
    and KSMCHCLS='free'
    UNION ALL
    select '3-5 (524-4107)' BUCKET, sum(KSMCHSIZ) freespace
    from x$ksmsp
    where KSMCHSIZ between 524 and 4107
    and KSMCHCLS='free'
    UNION ALL
    select '6+ (4108+)' BUCKET, sum(KSMCHSIZ) freespace
    from x$ksmsp
    where KSMCHSIZ >= 4108
    and KSMCHCLS='free');

    Running this SQL, flushing the shared pool, and then re-running it should show the difference before and after - or does this not suffice?
    Flushing the shared pool is also not really addressing the root cause - which most often is non-sharable SQL.
    Thus I'm not exactly sure what you're trying to achieve here. The SQL identifies the symptoms - flushing the shared pool treats those symptoms. For a while.
    Surely you should rather be looking at the shared pool itself to determine what the problem is and try and fix that instead? A db logon trigger for a poorly written app can for example force cursor sharing for all sessions created by that app.

  • Situations when child cursors are created.....

    Hi ,
    I'd like to ask when child cursors are created. Is it when an exact or similar text string for a SQL statement is found in the SQL AREA (library cache of the shared pool) but, for one or more of the reasons described in v$sql_shared_cursor, the same SQL area cannot be reused - or can it happen for other reasons as well?
    Does the value of the CURSOR_SHARING parameter affect this?
    Many thanks,
    Simon
    Message was edited by:
    sgalaxy

    Hi Simon,
    The purpose of child cursors is to allow Oracle to save some memory. If you have a SQL that is an exact literal match, but can't be shared for some other reason (i.e. bind variable mismatch, optimizer environment mismatch, etc.), a new child is created. This allows one literal SQL statement to have two separate execution plans, without wasting space in the library cache by duplicating SQL that is identical. When that happens, V$SQL_SHARED_CURSOR should tell you the reason for the extra child. (I say "should" because there have been numerous bugs filed on various versions where child cursors are created and no reason is populated in V$SQL_SHARED_CURSOR.)
    Now, how does CURSOR_SHARING fit into the picture? Well, if you have a well-designed application that uses bind variables to allow for sharable SQL, then CURSOR_SHARING is not required and should be set to exact. In cases where bind variables are not used, and the shared pool is getting slammed w/ lots of unique literal SQL, then setting CURSOR_SHARING to force or similar will replace literals with system generated bind variables, and allow for sharing of the resultant SQL.
    Hope that helps,
    -Mark
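    A small JDBC sketch of the check described above: list the child cursors for one SQL_ID together
    with a couple of the V$SQL_SHARED_CURSOR reason flags. The sql_id value and the two flag columns
    (BIND_MISMATCH, OPTIMIZER_MODE_MISMATCH) are just examples, the view has many more such columns,
    and an open Connection con is assumed.

    try (PreparedStatement ps = con.prepareStatement(
            "SELECT child_number, bind_mismatch, optimizer_mode_mismatch " +
            "FROM v$sql_shared_cursor WHERE sql_id = ? ORDER BY child_number")) {
        ps.setString(1, "7bz1uuvdrkdbm");
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println("child " + rs.getInt(1)
                        + ": bind_mismatch=" + rs.getString(2)
                        + " optimizer_mode_mismatch=" + rs.getString(3));
            }
        }
    }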

  • SQL Version Count in AWR and Child Cursor

    Hi,
    In 11.1.0.7, what is the SQL Version Count in the AWR report? What does it mean, and how does Oracle arrive at this version count? How do we define a child cursor?
    Thanks

    Hi,
    when you issue a SQL statement, the database searches the library cache to find a cursor with matching SQL text. It can then happen that even though the text matches, there are some other differences that prevent you from using the existing cursor (e.g. different optimizer settings, different NLS settings, different permissions etc.). In such cases, a new child cursor is created. So basically child cursors are different versions of the same SQL statement.
    If you have SQL statements with thousands of versions, this can mean a problem for your shared pool (child cursors taking up lots of space and causing fragmentation), as well as a potential for performance problems due to plan instability (if the same SQL text is parsed to a new plan every time, sooner or later it will be a bad plan). That's why the AWR report has this list.
    According to Oracle support, up to a couple of hundred versions doesn't indicate a problem (the cursor sharing mechanism isn't perfect), but when you have thousands or tens of thousands of versions, you should check your cursor sharing settings (first of all, the CURSOR_SHARING parameter).
    Best regards,
    Nikolay

  • Prepared Statement with SQL 'IN' Clause

    Hi,
    I am trying to write a JDBC SQL call to a database using a prepared statement, the call looks something like:
    select *
    from table
    where field in (?, ? ,?)
    The thing is that I don't know how many 'IN' parameters are needed until runtime (they come from a List). Is there an easy way of dealing with this? I haven't been able to find any information on this problem anywhere.
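    One common approach, sketched here with placeholder table and column names, an assumed open
    Connection con, and the usual java.sql/java.util imports: build the placeholder list at run time
    from the size of the List and bind each element. Note that each distinct list size produces a
    distinct SQL text and therefore a distinct cursor; the temporary-table technique discussed in the
    reply below avoids that.

    List<Integer> ids = Arrays.asList(10, 20, 30);
    StringBuilder sql = new StringBuilder("SELECT * FROM my_table WHERE my_field IN (");
    for (int i = 0; i < ids.size(); i++) {
        sql.append(i == 0 ? "?" : ", ?");
    }
    sql.append(")");
    try (PreparedStatement ps = con.prepareStatement(sql.toString())) {
        for (int i = 0; i < ids.size(); i++) {
            ps.setInt(i + 1, ids.get(i));
        }
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process the row
            }
        }
    }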

    >
    Hmmm...more expensive than say doing a query on 2 billion rows with no index?
    More expensive than doing a cross server join?
    More expensive than doing a restore?
    I knew that someone would point this out. :)
    I just tried to exaggerate the importance of cursor sharing. This is one of the most important topics in the DBMS world, but it is quite often ignored by the Java world. I hope that you understand my good intention.
    >
    2. Insert data corresponding to the bind variables into "T". Interesting idea. Please provide the algorithm for that. The only ones I can come up with are:
    1. Involved creating a "dynamic" SQL for the insert
    2. Doing multiple cross network inserts.
    The first of course is exactly what you said your solution prevented. The second will be more expensive than sending a single dynamically created select.
    Hopefully, this is not just an "interesting" idea, but a very common technique in the DBMS world - actually one of several common techniques. There are a couple of ways to handle this kind of problem (a variable number of bind variables in an "IN" clause).
    What I described was the simplest one. It looks like this:
    (based on Oracle)
    SQL> create global temporary table bind_temp(value int);
    PreparedStatement stmt = con.prepareStatement("INSERT INTO bind_temp VALUES (?)");
    for (int aValue : values) {      // values: the runtime list of IN-list elements
         stmt.setInt(1, aValue);
         stmt.addBatch();
    }
    stmt.executeBatch();
    Statement stmt2 = con.createStatement();
    ResultSet rs = stmt2.executeQuery(
         "SELECT * FROM target_table WHERE id IN (SELECT value FROM bind_temp)");
    ...
    Doesn't it look pretty? Pretty for both Java developers and DBAs.
    By virtue of the batch processing mechanism, the total number of DBMS calls is just two, and you need just 2 completely sharable SQL statements.
    (Hopefully you understand that an Oracle global temporary table is session scoped, so we don't need the rows to be stored permanently.)
    Above pattern is quite beneficial than these pattern of queries.
    SELECT * FROM target_table WHERE id IN (?)
    SELECT * FROM target_table WHERE id IN (?,?)
    SELECT * FROM target_table WHERE id IN (?,?,?)
    SELECT * FROM target_table WHERE id IN (?,?,?,?,.......,?)
    If you have large quantity of above patterns of queries, you should note that there are another bunch of better techniques. I noted just one of them.
    Hope this clarifies my point.

  • Force statement to use a given rule or execution plan

    Hi!
    We have a statement that in our production system takes 6-7 seconds to complete. The statement comes from our enterprise application's core code and we are not able to change the statement.
    When using a RULE-hint (SELECT /*+RULE*/ 0 pay_rec...........) for this statement, the execution time is down to 500 milliseconds.
    My question is: is there any way to pin an execution plan to a given statement? I have started reading about outlines, which seems promising. However, the statement is not using bind variables, and since this is core code in an enterprise application I cannot change that either. Is it possible to use outlines with such a statement?
    Additional information:
    When I remove all statistics for the involved tables, the query blows away in 500 ms.
    The table tran_info_types has 61 rows and is a stable table with few updates
    The table ab_tran_info has 1 717 439 records and is 62 MB in size.
    The table query_result has 777 015 records and is 216 MB in size. This table is constantly updated/inserted/deleted.
    The query below return 0 records as there is no hits in the table query_result.
    This is the statement:
    SELECT  /*+ALL_ROWS*/
           0 pay_rec, abi.tran_num, abi.type_id, abi.VALUE
      FROM ab_tran_info abi,
           tran_info_types ti,
           query_result qr1,
           query_result qr2
    WHERE abi.tran_num = qr1.query_result
       AND abi.type_id = qr2.query_result
       AND abi.type_id = ti.type_id
       AND ti.ins_or_tran = 0
       AND qr1.unique_id = 5334549
       AND qr2.unique_id = 5334550
    UNION ALL
    SELECT 1 pay_rec, abi.tran_num, abi.type_id, abi.VALUE
      FROM ab_tran_info abi,
           tran_info_types ti,
           query_result qr1,
           query_result qr2
    WHERE abi.tran_num = qr1.query_result
       AND abi.type_id = qr2.query_result
       AND abi.type_id = ti.type_id
       AND ti.ins_or_tran = 0
       AND qr1.unique_id = 5334551
       AND qr2.unique_id = 5334552;
    Here is the explain plan with statistics:
    Plan
    SELECT STATEMENT  HINT: ALL_ROWS  Cost: 900  Bytes: 82  Cardinality: 2
         15 UNION-ALL                      
              7 NESTED LOOPS  Cost: 450  Bytes: 41  Cardinality: 1                 
                   5 NESTED LOOPS  Cost: 449  Bytes: 1,787,940  Cardinality: 59,598            
                        3 NESTED LOOPS  Cost: 448  Bytes: 19,514,824  Cardinality: 1,027,096       
                             1 INDEX RANGE SCAN UNIQUE TRADEDB.TIT_DANIEL_2 Search Columns: 1  Cost: 1  Bytes: 155  Cardinality: 31 
                             2 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_7 Search Columns: 1  Cost: 48  Bytes: 471,450  Cardinality: 33,675 
                        4 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 11  Cardinality: 1       
                   6 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 11  Cardinality: 1            
              14 NESTED LOOPS  Cost: 450  Bytes: 41  Cardinality: 1                 
                   12 NESTED LOOPS  Cost: 449  Bytes: 1,787,940  Cardinality: 59,598            
                        10 NESTED LOOPS  Cost: 448  Bytes: 19,514,824  Cardinality: 1,027,096       
                             8 INDEX RANGE SCAN UNIQUE TRADEDB.TIT_DANIEL_2 Search Columns: 1  Cost: 1  Bytes: 155  Cardinality: 31 
                             9 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_7 Search Columns: 1  Cost: 48  Bytes: 471,450  Cardinality: 33,675 
                        11 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 11  Cardinality: 1       
                    13 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 11  Cardinality: 1
    Here is the execution plan when I have removed all statistics (exec DBMS_STATS.DELETE_TABLE_STATS(.........,..........); )
    Plan
    SELECT STATEMENT  HINT: ALL_ROWS  Cost: 12  Bytes: 3,728  Cardinality: 16
         15 UNION-ALL                      
              7 NESTED LOOPS  Cost: 6  Bytes: 1,864  Cardinality: 8                 
                   5 NESTED LOOPS  Cost: 6  Bytes: 45,540  Cardinality: 220            
                        3 NESTED LOOPS  Cost: 6  Bytes: 1,145,187  Cardinality: 6,327       
                             1 TABLE ACCESS FULL TRADEDB.TRAN_INFO_TYPES Cost: 2  Bytes: 104  Cardinality: 4 
                             2 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_6 Search Columns: 1  Cost: 1  Bytes: 239,785  Cardinality: 1,547 
                        4 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 26  Cardinality: 1       
                   6 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 26  Cardinality: 1            
              14 NESTED LOOPS  Cost: 6  Bytes: 1,864  Cardinality: 8                 
                   12 NESTED LOOPS  Cost: 6  Bytes: 45,540  Cardinality: 220            
                        10 NESTED LOOPS  Cost: 6  Bytes: 1,145,187  Cardinality: 6,327       
                             8 TABLE ACCESS FULL TRADEDB.TRAN_INFO_TYPES Cost: 2  Bytes: 104  Cardinality: 4 
                             9 INDEX RANGE SCAN UNIQUE TRADEDB.ATI_DANIEL_6 Search Columns: 1  Cost: 1  Bytes: 239,785  Cardinality: 1,547 
                        11 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 26  Cardinality: 1       
                    13 INDEX UNIQUE SCAN UNIQUE TRADEDB.QUERY_RESULT_INDEX Search Columns: 2  Bytes: 26  Cardinality: 1
    Our Oracle 9.2 database is set up with ALL_ROWS.
    Outlines: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/outlines.htm#13091
    Cursor sharing: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3696883368520

    Hi!
    We are on Oracle 9iR2, running on 64-bit Linux.
    We are going to upgrade to Oracle 10gR2 in some months. Oracle 11g is not an option for us as our application is not certified by our vendor to run on that version.
    However, our performance problems are urgent so we are looking for a solution before we upgrade as we are not able to upgrade before we have done extensive testing which takes 2-3 months.
    We have more problem sql's than the one shown in this post. I am using the above SQL as a sample as I think we can solve many other slow running SQL's if we solve this one.
    Is the SQL Plan management an option on Oracle 9i and/or Oracle 10g?

  • Problems with explain plan and statement

    Hi community,
    I have migrated a j2ee application from DB2 to Oracle.
    First some facts of our application and database instance:
    We are using oracle version 10.2.0.3 and driver version 10.2.0.3. It runs with charset Unicode 3.0 UTF-8.
    Our application is using Tomcat as web container and jboss as application server. We are only using prepared statements. So if I talk about statements I always mean prepared statements. Also our application is setting the defaultNChar property to true because every char and varchar field has been created as an nchar and nvarchar.
    We have some jsp sites that contain lists with search forms. Every time I enter a value into the form that returns a filled resultset, the lists perform great. But every time I enter a value that returns an empty resultset, the lists are 100 times slower. The jsp sites run in the tomcat environment and submit their statements directly to the database. The connections are pooled by dbcp. So what can cause this behaviour?
    To analyze this problem I started logging all statements and the filled-in search field values and combinations that are executed by the lists described above. I also developed a standalone helper tool that reads the logged statements, executes them against the database and generates an explain plan for every statement. But now a strange situation appears. Every statement that performs really fast within our application is executed extremely slowly by the helper tool. So I edited some jsp pages within our application to force an explain plan from there (tomcat env). When I execute the same statement with exactly the same code, I get two completely different explain plans.
    First the statement itself:
    select LINVIN.BBASE , INVINNUM , INVINNUMALT , LINVIN.LSUPPLIERNUM , LSUPPLIERNUMEXT , LINVIN.COMPANYCODE , ACCOUNT , INVINTXT , INVINSTS , INVINTYP , INVINDAT , RECEIPTDAT , POSTED , POSTINGDATE , CHECKCOSTCENTER , WORKFLOWIDEXT , INVINREFERENCE , RESPONSIBLEPERS , INVINSUM_V , INVINSUMGROSS_V , VOUCHERNUM , HASPOSITIONS , PROCESSINSTANCEID , FCURISO_V , LSUPPLIER.AADDRLINE1 from LINVIN, LSUPPLIER where LINVIN.BBASE = LSUPPLIER.BBASE and LINVIN.LSUPPLIERNUM = LSUPPLIER.LSUPPLIERNUM and LINVIN.BBASE = ? order by LINVIN.BBASE, INVINDAT DESC
    Now the explain plan from our application:
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 1 | NESTED LOOPS | | 101 | 28583 | 55 (0)| 00:00:01 |
    | 2 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| 25 (0)| 00:00:01 |
    |* 3 | INDEX RANGE SCAN | LINV_INVDAT | 101 | | 1 (0)| 00:00:01 |
    | 4 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 1 | 148 | 1 (0)| 00:00:01 |
    |* 5 | INDEX UNIQUE SCAN | PK_177597 | 1 | | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("LINVIN"."BBASE"=:1)
    filter("LINVIN"."BBASE"=:1)
    5 - access("LSUPPLIER"."BBASE"=:1 AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    Now the one from the standalone tool:
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 93773 | 25M| | 12898 (1)| 00:02:35 |
    | 1 | SORT ORDER BY | | 93773 | 25M| 61M| 12898 (1)| 00:02:35 |
    |* 2 | HASH JOIN | | 93773 | 25M| 2592K| 7185 (1)| 00:01:27 |
    | 3 | TABLE ACCESS BY INDEX ROWID| LSUPPLIER | 16540 | 2390K| | 332 (0)| 00:00:04 |
    |* 4 | INDEX RANGE SCAN | LSUPPLIER_HAS_BASE_FK | 16540 | | | 11 (0)| 00:00:01 |
    | 5 | TABLE ACCESS BY INDEX ROWID| LINVIN | 93709 | 12M| | 6073 (1)| 00:01:13 |
    |* 6 | INDEX RANGE SCAN | LINVOICE_BMDT_FK | 93709 | | | 84 (2)| 00:00:02 |
    Predicate Information (identified by operation id):
    2 - access("LINVIN"."BBASE"="LSUPPLIER"."BBASE" AND "LINVIN"."LSUPPLIERNUM"="LSUPPLIER"."LSUPPLIERNUM")
    4 - access("LSUPPLIER"."BBASE"=:1)
    6 - access("LINVIN"."BBASE"=:1)
    The size of the tables are: LINVIN - 383.692 Rows, LSUPPLIER - 115.782 Rows
    As you can see, the one executed from our application is much faster than the one from the helper tool. So why does Oracle pick a completely different explain plan for the same statement? And why is a hash join much slower than a nested loop? Because, if I'm right, a nested loop should only be used when the tables are pretty small.
    I also tried to play with some parameters:
    I set optimizer_index_caching to 100 and optimizer_index_cost_adj to 30. I also changed optimizer_mode to FIRST_ROWS_100.
    I would really appreciate it if somebody could help me with this issue, because I'm really getting more and more distressed...
    Thanks in advance,
    Tobias
    Edited by: tobiwan on Sep 4, 2008 12:07 AM

    tobiwan wrote:
    Hi again,
    Here is the answer:
    The problem, and the reason I got two different explain plans, was that the external tool uses the NLS session parameters coming from the OS, which in my case are "de/DE".
    Within our application these parameters are changed to "en/US"! So if, in my external tool, I call the java function Locale.setDefault(new Locale("en","US")) before connecting to the database, the explain plans are finally equal.
    That might explain why you got two different execution plans: one plan was obviously able to avoid a SORT ORDER BY operation, whereas the second plan required a SORT ORDER BY operation, obviously because of the different NLS_SORT settings. An index by default uses the NLS_SORT = 'binary' order, whereas ORDER BY obeys the NLS_SORT setting, which was probably set to 'GERMAN' in your "external tool" case. You can check the NLS_SESSION_PARAMETERS view to see your current NLS_SORT setting.
    For more information regarding this issue, see my blog note I've written about this some time ago:
    http://oracle-randolf.blogspot.com/2008/09/getting-first-rows-of-large-sorted.html
    Now let me make a guess why you observe the behaviour that it takes so long if your result set is empty:
    The plan avoiding the SORT ORDER BY is able to return the first rows of the result set very quickly, but could take quite a while until all rows are processed, since it potentially requires a lot of iterations of the loop until everything has been processed. Your front end probably only displays the first n rows of the result set by default and therefore works fine with this execution plan.
    Now if the result set is empty, depending on your data, indexes and search criteria, Oracle has to work through all the data using the inefficient NESTED LOOP approach only to find out that no data has been found, and since your application attempts to fetch the first n records, but no records will be found, it has to wait until all data has been processed.
    You can try to reproduce this by deliberately fetching all records of a query that returns data and that uses the NESTED LOOP approach... It probably takes as long as in the case when no records are found.
    Note that you seem to use bind variables and 10g, therefore you might be interested that due to the "bind variable peeking" functionality you might potentially end up with "unstable" plans depending on the values "peeked" when the statement is parsed.
    For more information, see this comprehensive description of the issue:
    http://www.pythian.com/blogs/867/stabilize-oracle-10gs-bind-peeking-behaviour-by-cutting-histograms
    Note that this changes in 11g with the introduction of the "Adaptive Cursor Sharing".
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
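    A small JDBC sketch of the NLS point made above: check the session's NLS_SORT and, if needed, pin
    it to BINARY so that an index (which is ordered binary by default) can satisfy the ORDER BY
    without an extra SORT ORDER BY step. The open Connection con is assumed, and whether forcing
    BINARY sort is acceptable depends on the application's language requirements.

    try (Statement st = con.createStatement()) {
        try (ResultSet rs = st.executeQuery(
                "SELECT value FROM nls_session_parameters WHERE parameter = 'NLS_SORT'")) {
            if (rs.next()) {
                System.out.println("NLS_SORT = " + rs.getString(1));
            }
        }
        st.execute("ALTER SESSION SET NLS_SORT = BINARY");   // match the index's binary order
    }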
