Clients - Parse, Execute, Fetch - Starttime

Hello guys,
I had a look at some PHP scripts that connect to an Oracle database; they look like the following example.
-> $db = @ora_logon("scott@testdb29","tiger");
OK, here a connection to Oracle is made.
-> $curs = ora_open($db);
OK, a cursor is opened.
-> $sql = "SELECT * FROM dept"; ora_parse($curs,$sql);
OK, the SQL statement is parsed using the previously opened cursor.
-> ora_exec($curs);
The statement is executed.
-> while (ora_fetch_into($curs, $results)) { echo $results[0]; }
The result rows of the SELECT are fetched.
In which step does Oracle start to read the data (logical I/O or physical I/O)?
Does it start at the execute, or does it start at the fetch?
If it starts at the execute: what happens if I don't fetch the data? For example, if Oracle had to sort the data in the PGA...
If it doesn't start at the execute: what is done during the execute call?
Are these steps the same in every client (for example SQL*Plus), or can they vary?
Thanks and Regards
Stefan

Hello Khurram,
OK, no problem... but the PHP documentation does not contain this information :-(
I think this is a general question... I found a German PDF that describes the SQL phases of 8i.
I am translating some parts of it:
http://members.aon.at/hermann.zauner/Oracle8i.pdf
Point 2.1 SQL
-> 2.1.2.1 The Cursor
This point is clear
-> 2.1.2.2 The Open-Phase
Also clear
-> 2.1.2.3 The Parse-Phase
Also clear
-> 2.1.2.4 The Execute-Phase
An SCN is noted in the cursor. This guarantees that a SQL statement is always consistent, meaning that from the statement's point of view the data has not changed since the start of the command.
In the case of a SELECT statement with a sort or a GROUP BY, the data is formatted and placed in a temporary area (=> I think the author of the PDF means the sort area or the temp tablespace).
-> 2.1.2.5 The Fetch-Phase
This phase is only performed for a SELECT statement. In the fetch phase the data is transferred to the client. The data is read from data/index blocks or from the temporary area that was prepared in the execute phase.
-> 2.1.2.6 The Close-Phase
Also clear
OK, but the PDF description is a little imprecise regarding the temporary areas and the fetched data.
Please correct me if I misunderstand the behaviour of the SQL execution phases for SELECTs:
- The execute phase can include some data reads (or writes) in cases where the data must be transformed (I mean sorts, groups, sums, etc.).
- A plain "select * from EMP" causes Oracle to read the data (from the data files into the buffer cache, if the blocks are not already there) in the fetch phase (see the sketch below).
Maybe one of you has more detailed information, but this was the only thing I could find on that topic...
Maybe J. Lewis knows the details :-)
Regards
Stefan

Similar Messages

  • Database parse execute and fetch shows 9 counts.

    For one of the SQLs, the parse, execute and fetch counts are 9 each; the fetch shows query = 21 and rows = 2.
    I also observe the following for the other SQL statements.
    These stats were collected with a level 12 trace and then running tkprof. Please let me know a few relevant links / notes through which I can dig further into this.
    call     count       cpu    elapsed       disk      query    current        rows
    Parse       10      0.01       0.00          0          0          0           0
    Execute     33      0.01       0.00          0          0          0           0
    Fetch       33      0.00       0.00          0        199          0          33
    total       76      0.02       0.01          0        199          0          33
    Regards.

    Exactly, you have understood it right. (Sorry, I was too involved with this tkprof when I initially posted the thread.)
    I have a couple of SQLs in my tkprof (level 12 trace) which show similar results, and it looks like they are doing a lot of work and fetching very few rows.
    Background
    1. A program was taking less than a minute to complete and now it is taking approximately 30 mins (the data being fetched is all the same).
    2. No changes have been applied to the database since the last run.
    3. DB - 10.2.0.3.0
    I would like to know ways to dig further into it.

  • Prepared=True not working. Parse:Execute ratio is one in tkprof report

    Hello,
    DB Version 9.2.0
    OS NT
    Provider: OraOLEDB 9.2.0.1.0
    I have a small .NET application. I use bind variables all over my application, but my parse:execute ratio is 1 for some of the SELECT statements. There are many soft parses in my application. I have also set session_cached_cursors.
    When using the ADODB command object I set the "Prepared" property to true, but even so there are many parses.
    Here is the simple block of code:
    strCmd = "SELECT DISTINCT TO_CHAR(exp_date,'Month') Months, " & _
             "TO_CHAR(exp_date,'MM') MM FROM expenses ORDER BY MM"
    cmd1 = New ADODB.Command()
    cmd1.ActiveConnection = cConn
    cmd1.CommandText = strCmd
    cmd2 = New ADODB.Command()
    cmd2.ActiveConnection = cConn
    cmd2.CommandText = strCmd
    cmd2.Prepared = True
    For intLoop = 1 To 4
        cmd1.Execute()
    Next intLoop
    For intLoop = 1 To 4
        cmd2.Execute()
    Next intLoop
    Here is the tkprof trace output:
    SELECT DISTINCT TO_CHAR(exp_date,'Month') Months, TO_CHAR(exp_date,'MM') MM
    FROM
    expenses ORDER BY MM
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        8      0.00       0.00          0          0          0           0
    Execute      8      0.04       0.06          0          0          0           0
    Fetch        8      0.01       0.01          0         56          0         226
    total       24      0.06       0.07          0         56          0         226
    Misses in library cache during parse: 1
    Optimizer goal: CHOOSE
    Parsing user id: 62
    Rows Row Source Operation
    23 SORT ORDER BY
    23 TABLE ACCESS FULL EXPENSES
    Logically it should be one parse and 8 executions. Can anybody please suggest how I can minimise the number of parses?
    Thanks in advance
    Sameer
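    As a cross-check (sketch only, independent of ADO), you could compare the parse and execute counts Oracle has recorded for the cached statement in V$SQLAREA:
    -- the LIKE filter below is just a placeholder matching the statement above
    SELECT sql_text, parse_calls, executions
    FROM   v$sqlarea
    WHERE  sql_text LIKE 'SELECT DISTINCT TO_CHAR(exp_date%';
    If parse_calls keeps pace with executions even with Prepared = True, the parse calls are being issued by the client driver for every execute.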

    "db file parallel read" is likely to be associated with something like index prefetching.
    See:
    http://www.freelists.org/post/oracle-l/RE-Calculating-LIOs,11
    http://aprakash.wordpress.com/2012/05/29/index-range-scan-and-db-file-scattered-read-as-session-wait-event/
    http://jonathanlewis.wordpress.com/2006/12/15/index-operations/
    Tune the SQL.
    Review the execution plan.
    Check whether the statistics are accurate.
    Review whether the index hint (and others that we can't see) is appropriate.
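    For the "review the execution plan" step, a minimal 9i-compatible sketch using EXPLAIN PLAN and DBMS_XPLAN.DISPLAY (the table, column and index names below are placeholders):
    EXPLAIN PLAN FOR
      SELECT /*+ index(t your_index) */ * FROM your_table t WHERE your_column = :b1;  -- placeholder names
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);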

  • SharePoint 2013 Client Object Model fetch image rendition info

    Hi,
    Below is the scenario I am trying with SharePoint 2013 managed client object model.
    FrontEnd: ASP.Net web application
    BackEnd: SharePoint 2013.
    I have a publishing image with rendition applied and I am able to fetch the image URL like "/{site}/{lib}/{imagename}?RenditionId={id}".
    I am able to download the file "/{site}/{lib}/{imagename}" using File.OpenBinaryDirect() 
    This gives the original file without any renditions applied. If I use the same method with the URL "/{site}/{lib}/{imagename}?RenditionId={id}", it gives me a 400 Bad Request error.
    I have also tried the file.OpenBinaryStream() method with the URL "/{site}/{lib}/{imagename}?RenditionId={id}", which still gives the original file without the rendition.
    Please let me know how to fetch the following info using the rendition id:
    RenditionVersion;
    SourceImageWidth;
    SourceImageHeight;
    CropStartX;
    CropStartY;
    CropWidth;
    CropHeight;
    Alternatively, let me know if I could download the file with the rendition applied, without requiring the above attributes.
    Note: I am looking for solutions in client-side programming (client object model / web services (REST)). The server object model has the classes "ImageRenditionCollection" and "ImageRendition", which can provide the above image info.
    Thanks,
    Srikanth

    To enable FQL, you have to copy the default result source and modify the Query Transformation string {?{searchTerms} -ContentClass=urn:content-class:SPSPeople}, at one of these
    levels -- Search Service Application (SSA), Site Collection, or Site -- and in one of the following ways:
    Remove the KQL filter, -ContentClass:urn:content-class:SPSPeople, from the Query Transformation. The resulting Query Transformation string will be: {?{searchTerms}}
    Replace the Query Transformation string with an FQL equivalent, such as {?andnot({searchTerms},filter(contentclass:"urn:content-class:SPSPeople*"))}.
    Source :http://msdn.microsoft.com/en-us/library/office/jj163973.aspx
    Bala

  • Lync client can't fetch the address book files

    Hi,
    We have 3 FE Lync 2010 servers. All the clients are giving an error "cannot synchronize with the corporate address book".
    I can access the internal web services URL and download the AB file. I have also made an entry for the Lync internal web services in the hosts file and signed in to Lync; then the client was able
    to download the address book files successfully. Any ideas to resolve this?
    Thank you !

    Force a contact list update to make sure that your information is synchronized. To do this, follow these steps:
    Locate one of the following folders, depending on your operating system:
    For Lync 2013:
    Windows 7 and Windows 8: %localappdata%\Microsoft\Office\15.0\Lync\sip_<sign-in name>
    For Lync 2010:
    Windows 8, Windows 7, or Windows Vista: %localappdata%\Microsoft\Communicator\sip_<sign-in name>
    Windows XP: %userprofile%\Local Settings\Application Data\Microsoft\Communicator\sip_<sign-in name>
    Delete the following files:
    Galcontacts.db
    galcontacts.db.idx
    CoreContact.cache
    ABS_<sign-in name>.cache
    Mfugroup.cache
    PersonalLISDB.cache
    PresencePhoto.cache
    Restart Lync, and then wait for 30 minutes for resynchronization to finish
    Mai Ali

  • System.AccessViolationException from Oracle.DataAccess.Client.OpsSql.Execut

    We support a VB.NET application which uses Oracle Data Access Components (ODAC) 10.2.0.2.21 to access an Oracle 11g database. The database is on another server, so from time to time application loses its database connection because of comms failures. To cater for this eventuality it checks for a valid connection before every database call and reopens the database if it can’t find one. This works, but subsequent database calls fail and return a System.AccessViolationException.
    Here is an extract from the trace.
    2010-09-08 17:08:42,284 ERROR - Database connection lost. Oracle.DataAccess.Client.OracleException ORA-03135: connection lost contact
    at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure)
    at Oracle.DataAccess.Client.OracleException.HandleError(Int32 errCode, OracleConnection conn, String procedure, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src)
    at Oracle.DataAccess.Client.OracleCommand.ExecuteNonQuery()
    2010-09-08 17:08:42,377 [5] ERROR xxxxxxxx [(null)] - Database connection established.
    2010-09-08 17:08:42,440 [5] ERROR xxxxxxxx [(null)] - ERROR - error in xxxxxxxx. System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
    at Oracle.DataAccess.Client.OpsSql.ExecuteNonQuery(IntPtr opsConCtx, IntPtr& opsErrCtx, IntPtr& opsSqlCtx, IntPtr& opsDacCtx, IntPtr opsSubscrCtx, Int32& isSubscrRegistered, OpoSqlValCtx*& pOpoSqlValCtx, String pCommandText, IntPtr& pUTF8CommandText, IntPtr[] pOpoPrmValCtx, String[] ppOpoPrmRefCtx, OpoMetValCtx*& pOpoMetValCtx, Int32 prmCnt)
    at Oracle.DataAccess.Client.OracleCommand.ExecuteNonQuery()
    Has anyone come across this before? Any help will be gratefully received

    Try posting this to the ODP.NET forum.
    You might need to flush out the connection pool, but they can help you with that.

  • Code for finding CPU utilisation for executing query

    Hi, I need code for finding the CPU utilisation for executing a particular query.

    Use session tracing; then in the trace file you can find the CPU utilisation for a particular statement in each phase (parse, execute, fetch) and overall.
    Or you can use dbms_utility.get_cpu_time (if your database is 10g) in PL/SQL:
    declare
      cpt1    pls_integer;
      cpt2    pls_integer;
      cputime pls_integer;
    begin
      cpt1 := sys.dbms_utility.get_cpu_time;
      <some code here>
      cpt2 := sys.dbms_utility.get_cpu_time;
      cputime := cpt2 - cpt1;
    end;
    good luck
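    Note that dbms_utility.get_cpu_time returns hundredths of a second, so divide the difference by 100 to get seconds. If the statement is still in the shared pool, another rough option (sketch only) is to read the cumulative figures from V$SQL, where cpu_time and elapsed_time are reported in microseconds:
    SELECT sql_id, executions,
           cpu_time / 1000000     AS cpu_seconds,
           elapsed_time / 1000000 AS elapsed_seconds
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT %your query text here%';   -- placeholder filter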

  • Tkprof - High query during parse

    Hi
    The following are the Parse/Execute/Fetch statistics and timings for the same query (one with index, one without index)
    Can you please help explain what makes the query column so high during the parse phase? Thanks.
    With index
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.16       0.16         17       1628          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          2          0           0
    total        3      0.16       0.16         17       1630          0           0
    without index
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.01          0        256          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          7          0           0
    total        3      0.01       0.01          0        263          0           0

    Kok Aik wrote:
    Hi
    The following are the Parse/Execute/Fetch statistics and timings for the same query (one with index, one without index)
    Can you please help explain what makes the query column so high during the parse phase? Thanks.
    With index
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.16       0.16         17       1628          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          2          0           0
    total        3      0.16       0.16         17       1630          0           0
    without index
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.01          0        256          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      0.00       0.00          0          7          0           0
    total        3      0.01       0.01          0        263          0           0
    I believe you ran the query for the first time in the 1st part you have shown. There are disk I/Os in the 1st part but none in the 2nd; that's why the first part shows more I/O. In the 2nd case, the result is probably already cached and the I/Os are much lower.
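    A quick way to see this caching effect (sketch only, assuming AUTOTRACE is configured and using the dept table from the first post) is to run the same statement twice and compare the statistics:
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT /* cache_demo */ * FROM dept;   -- first run: some physical reads expected
    SELECT /* cache_demo */ * FROM dept;   -- second run: mostly buffer gets, physical reads near zero
    SET AUTOTRACE OFF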
    Just my 2 cents.
    HTH
    Aman....

  • Dbms_lob , where did my time go ?

    Hi all
    After using 10046 to identify the SQL that is causing the slowness in a program ("fewer commits cause my program to go slower"), I realised that I am missing something.
    There was a lot of time missing in the tkprof file, and no SQL or wait event accounted for the missing time, so I put the following test case together in an attempt to understand where the time is going.
    Version of test database : 11.1.0.6.0
    Name of test database: stdby ( :-) used my standby database)
    Database non-default values
    #     Parameter     Value1
    1:     audit_file_dest     /u01/app/oracle/admin/stdby/adump
    2:     audit_trail     DB
    3:     compatible     11.1.0.0.0
    4:     control_files     /u01/app/oracle/oradata/stdby/control01.ctl
    5:     control_files     /u01/app/oracle/oradata/stdby/control02.ctl
    6:     control_files     /u01/app/oracle/oradata/stdby/control03.ctl
    7:     db_block_size     8192
    8:     db_domain     
    9:     db_name     stdby
    10:     db_recovery_file_dest     /u01/app/oracle/flash_recovery_area
    11:     db_recovery_file_dest_size     2147483648
    12:     diagnostic_dest     /u01/app/oracle
    13:     dispatchers     (PROTOCOL=TCP) (SERVICE=stdbyXDB)
    14:     memory_target     314572800
    15:     open_cursors     300
    16:     processes     150
    17:     remote_login_passwordfile     EXCLUSIVE
    18:     undo_tablespace     UNDOTBS1
    More accurately, I used an existing example from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4084920819312
    I hope Tom does not mind.
    create table t ( x clob );
    create or replace procedure p( p_open_close in boolean default false,
                                     p_iters in number default 100 )
      as
          l_clob clob;
      begin
          insert into t (x) values ( empty_clob() )
          returning x into l_clob;
          if ( p_open_close )
          then
              dbms_lob.open( l_clob, dbms_lob.lob_readwrite );
          end if;
          for i in 1 .. p_iters
          loop
              dbms_lob.WriteAppend( l_clob, 5, 'abcde' );
          end loop;
          if ( p_open_close )
          then
              dbms_lob.close( l_clob );
          end if;
          commit;
      end;
    I did the tracing and ran the package with this:
    alter session set timed_statistics = true;
    alter session set max_dump_file_size = unlimited;
    alter session set tracefile_identifier = 'test_clob_commit';
    alter session set events '10046 trace name context forever, level 12';
    exec p(TRUE,20000);
    exit
    Did the tkprof of the 10046 trace file with:
    tkprof stdby_ora_3656_test_clob_commit.trc stdby_ora_3656_test_clob_commit.trc.tkp sort=(prsela,exeela,fchela) aggregate=yes waits=yes sys=yes
    With output of:
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.02       0.02          0          0          0           0
    Execute      1     46.89     147.81      38915     235267     492471           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     46.92     147.83      38915     235267     492471           1
    Misses in library cache during parse: 1
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      SQL*Net message to client                       2        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      latch: shared pool                             24        0.05          0.07
      latch: row cache objects                        2        0.00          0.00
      log file sync                                   1        0.01          0.01
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse      117      0.11       0.10          0          0          2           0
    Execute    426      0.37       0.40          6          4          9           2
    Fetch      645      0.17       0.51         63       1507          0        1952
    total     1188      0.65       1.03         69       1511         11        1954
    Misses in library cache during parse: 22
    Misses in library cache during execute: 22
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                     19778        1.12         30.31
      direct path write                           19209        0.00          0.44
      direct path read                            19206        0.00          0.37
      log file switch completion                      8        0.20          0.70
      latch: cache buffers lru chain                  5        0.01          0.02
        3  user  SQL statements in session.
      424  internal SQL statements in session.
      427  SQL statements in session.
    And it's here where the time is being lost. The call to p(TRUE,20000) takes 147.83 sec, which is correct, but what is making up this time?
    From the sorted trace file:
    SQL ID : catnjk0zv6jz1
    BEGIN p(TRUE,20000); END;
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.02       0.02          0          0          0           0
    Execute      1     46.89     147.81      38915     235267     492471           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     46.92     147.83      38915     235267     492471           1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 81
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      latch: shared pool                             24        0.05          0.07
      latch: row cache objects                        2        0.00          0.00
      log file sync                                   1        0.01          0.01
      SQL*Net message to client                       1        0.00          0.00
      SQL*Net message from client                     1        0.00          0.00
    SQL ID : db78fxqxwxt7r
    select /*+ rule */ bucket, endpoint, col#, epvalue
    from
    histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by bucket
    Interesting, Oracle is still using the RULE hint in 11g?
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        3      0.00       0.00          0          0          0           0
    Execute     98      0.05       0.05          0          0          0           0
    Fetch       98      0.04       0.17         28        294          0        1538
    total      199      0.10       0.22         28        294          0        1538
    Misses in library cache during parse: 0
    Optimizer mode: RULE
    Parsing user id: SYS   (recursive depth: 3)
    Rows     Row Source Operation
         20  SORT ORDER BY (cr=3 pr=1 pw=1 time=8 us cost=0 size=0 card=0)
         20   TABLE ACCESS CLUSTER HISTGRM$ (cr=3 pr=1 pw=1 time=11 us)
          1    INDEX UNIQUE SCAN I_OBJ#_INTCOL# (cr=2 pr=0 pw=0 time=0 us)(object id 408)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                        28        0.02          0.12
    SQL ID : 5n1fs4m2n2y0r
    select pos#,intcol#,col#,spare1,bo#,spare2,spare3
    from
    icol$ where obj#=:1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute     19      0.03       0.03          0          0          0           0
    Fetch       60      0.00       0.04          1        120          0          41
    total       81      0.04       0.08          1        120          0          41
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: CHOOSE
    Parsing user id: SYS   (recursive depth: 2)
    Rows     Row Source Operation
          1  TABLE ACCESS BY INDEX ROWID ICOL$ (cr=4 pr=0 pw=0 time=0 us cost=2 size=54 card=2)
          1   INDEX RANGE SCAN I_ICOL1 (cr=3 pr=0 pw=0 time=0 us cost=1 size=0 card=2)(object id 42)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file sequential read                         1        0.04          0.04
    None of the parse, execute, fetch and wait times add up to the 147.83 seconds.
    So I turned to Oracle's trcanlzr.sql that Carlos Sierra wrote and parsed the same trace file to find the offending SQL.
    And it starts getting interesting.
    Trace Analyzer 11.2.6.2 Report: trcanlzr_75835.html
    stdby_ora_3656_test_clob_commit.trc (6970486 bytes)
    Total Trace Response Time: 148.901 secs.
    2009-MAY-03 20:03:51.771 (start of first db call in trace).
    2009-MAY-03 20:06:20.672 (end of last db call in trace).
    RESPONSE TIME SUMMARY
    ~~~~~~~~~~~~~~~~~~~~~
                                              pct of                  pct of                  pct of
                                    Time       total        Time       total        Time       total
    Response Time Component    (in secs)   resp time   (in secs)   resp time   (in secs)   resp time
                        CPU:      47.579       32.0%
              Non-idle Wait:       0.467        0.3%
         ET Unaccounted-for:     100.825       67.7%
           Total Elapsed(1):                             148.871      100.0%
                  Idle Wait:                               0.001        0.0%
         RT Unaccounted-for:                               0.029        0.0%
          Total Response(2):                                                     148.901      100.0%
    (1) Total Elapsed = "CPU" + "Non-Idle Wait" + "ET Unaccounted-for".
    (2) Total Response = "Total Elapsed Time" + "Idle Wait" + "RT Unaccounted-for".
    Total Accounted-for = "CPU" + "Non-Idle Wait" + "Idle Wait" = 148.872 secs.
    Total Unaccounted-for = "ET Unaccounted-for" + "RT Unaccounted-for" = 100.854 secs.
    100.825 seconds. Wow, that is a lot: 67.7% of the time is not accounted for!
    I even used TVD$XTAT (the Trivadis eXtended Tracefile Analysis Tool) with the same conclusion.
    Looking at the raw trace file I see a lot of lines like this:
    WAIT #7: nam='direct path read' ela= 11 file number=4 first dba=355935 block cnt=1 obj#=71067 tim=1241337833498756
    WAIT #7: nam='direct path write' ela= 12 file number=4 first dba=355936 block cnt=1 obj#=71067 tim=1241337833499153
    WAIT #7: nam='db file sequential read' ela= 1095 file#=4 block#=399 blocks=1 obj#=71067 tim=1241337833501366
    What is even more interesting is that the SQL for "PARSING IN CURSOR #7" is not in the trace file!
    The question is: where is the time going, or is the 10046 trace file simply not recording the detail? How do I fix this, without speculating, if I do not know where the problem is?
    I thought of doing a strace on the process. Where else can I look for my 100 seconds?
    Please point me in a direction where I can look for my 100.825 seconds, as this is a test case for a production system that is losing the same amount of time, but with a lot more SQL around its dbms_lob.writeappend.

    user5174849 wrote:
    After using 10046 to identify the SQL that is causing the slowness in a program ("fewer commits cause my program to go slower"), I realised that I am missing something.
    There was a lot of time missing in the tkprof file, and no SQL or wait event accounted for the missing time, so I put the following test case together in an attempt to understand where the time is going.
    Version of test database: 11.1.0.6.0
    What is even more interesting is that the SQL for "PARSING IN CURSOR #7" is not in the trace file!
    The question is: where is the time going, or is the 10046 trace file simply not recording the detail? How do I fix this, without speculating, if I do not know where the problem is?
    I thought of doing a strace on the process. Where else can I look for my 100 seconds?
    Please point me in a direction where I can look for my 100.825 seconds, as this is a test case for a production system that is losing the same amount of time, but with a lot more SQL around its dbms_lob.writeappend.
    I guess that the separate cursor that is opened for the LOB operation is where the time is spent, and unfortunately this part is not very well exposed via the usual interfaces (V$SQL, the 10046 trace file, etc.).
    You might want to read this post where Kerry identifies the offending SQL via V$OPEN_CURSOR: http://kerryosborne.oracle-guy.com/2009/04/hidden-sql-why-cant-i-find-my-sql-text/
    The waits of this cursor #7 are quite likely rather relevant since they probably show you what the LOB operation is waiting for.
    The LOB is created with the default NOCACHE attribute therefore it's read and written using direct path operations.
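    The V$OPEN_CURSOR approach from Kerry's post boils down to something like this (sketch; :traced_sid is a placeholder for the SID of the traced session):
    SELECT sid, sql_id, sql_text
    FROM   v$open_cursor
    WHERE  sid = :traced_sid;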
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Trace file generation using oradebug

    DB version : 10.2.0.4.0
    OS : Solaris 5.10
    I am trying to trace a session started by a Java application.
    So I've logged in as SYS, got the pid and spid from v$process, and then started the debug:
    SYS@MN_PROD>oradebug setospid 2523
    SYS@MN_PROD>oradebug event 10046 trace name context forever, level 32
    and closed it using
    oradebug event 10046 trace name context off
    When I type
    ORADEBUG TRACEFILE_NAME
    it shows a filename in the bdump directory, but I can't find this file in bdump or udump.
    Is there any init.ora setting I need to change to get this file generated?

    Decimal Binary Description
    1       0001   Emit statistics for parse, execute, fetch, commit, and rollback database calls (standard sql_trace)
    2       0010   Unknown
    4       0100   Emit values for SQL bind variables (also called “placeholders”)
    8       1000   Emit statistics for Oracle kernel internal function calls (also called “wait events”) listed in v$event_name
    These levels can be combined as if by a bitwise or function to produce combinations of data in an Oracle trace file.
    A value of 15 is just a combination of all four preceding values.
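    Continuing the oradebug example above, a level that combines bind values and waits would look like this (sketch only; 12 = 4 + 8):
    oradebug setospid 2523
    oradebug event 10046 trace name context forever, level 12
    REM ... let the application session run for a while ...
    oradebug event 10046 trace name context off
    oradebug tracefile_name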

  • Rule Based Optimization

    Hi,
    Rule-based optimization is a deprecated feature in Oracle 10g. We are in the process of migrating from Oracle 9i to 10g. I had never heard of rule-based optimization before. I have googled it, but got confused by the results.
    Can anybody shed some light on the points below?
    Is this optimization done by Oracle, or do we as developers need to take care of the rules while writing SQL statements?
    There is another thing called cost-based optimization...
    Who instructs Oracle whether to use rule-based or cost-based optimization?
    Thanks & Regards,
    user569598

    I hope the following explanation is helpful.
    Whenever a statement is fired, Oracle goes through the following stages:
    Parse -> Execute -> Fetch (fetch only for SELECT statements).
    During the parse, Oracle first performs syntactic checking (SELECT, FROM, WHERE, ORDER BY, GROUP BY, etc.) and then semantic checking (column names, table names, user permissions on the objects, etc.). Once these two stages pass, it has to decide whether to do a soft parse or a hard parse. If a matching cursor (statement) doesn't exist in the shared pool, Oracle does a hard parse, which is where the optimizer comes into the picture to generate the query plan.
    Oracle then has to decide between the RBO and the CBO. This depends on the OPTIMIZER_MODE parameter value: if the RULE hint is used, the RBO is used, and if there are no statistics for the tables involved in the query, Oracle may fall back to the RBO (conditions apply). If statistics are available, or dynamic sampling is used, Oracle uses the CBO to prepare the optimal execution plan.
    The RBO simply relies on a fixed set of rules, whereas the CBO relies on statistical information.
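    To make that concrete, a small sketch (EMP is just an example table; the RULE hint and the RBO are deprecated in 10g and shown only for illustration):
    ALTER SESSION SET optimizer_mode = ALL_ROWS;          -- CBO for the whole session
    SELECT /*+ RULE */ * FROM emp;                        -- forces the RBO for this one statement
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP');      -- gives the CBO statistics to cost with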
    Jaffar

  • Trace Info

    When we trace a session we generally see the output in a tabular form for the PARSE, EXECUTE and FETCH phases, with columns like count, disk, elapsed time, query and current. According to the docs:
    QUERY: Total number of buffers retrieved in consistent mode for each parse, execute or fetch phase.
    CURRENT: Total number of buffers retrieved in current mode for each parse, execute or fetch phase.
    Could you please elaborate on what a buffer retrieved in consistent mode and in current mode means?

    " Current mode
    A current mode get, also called a db block get, is a retrieval of a block as it currently appears in the buffer cache. For example, if an uncommitted transaction has updated two rows in a block, then a current mode get retrieves the block with these uncommitted rows. The database uses db block gets most frequently during modification statements, which must update only the current version of the block.
    Consistent mode
    A consistent read get is a retrieval of a read-consistent version of a block. This retrieval may use undo data. For example, if an uncommitted transaction has updated two rows in a block, and if a query in a separate session requests the block, then the database uses undo data to create a read-consistent version of this block (called a consistent read clone) that does not include the uncommitted updates. Typically, a query retrieves blocks in consistent mode."
    http://docs.oracle.com/cd/E11882_01/server.112/e25789/memory.htm#CNCPT89169
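    You can watch the two counters for your own session while running a statement (sketch; the statistic names are the ones listed in V$STATNAME):
    SELECT n.name, s.value
    FROM   v$mystat s
    JOIN   v$statname n ON n.statistic# = s.statistic#
    WHERE  n.name IN ('db block gets', 'consistent gets');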

  • Session Monitor

    Hi,
    We have an Oracle 11gR2 + RAC + ASM + Exadata + Linux data warehouse environment. I need to monitor some user sessions. I know that by using DBMS_MONITOR we can trace a session, but I do not have much idea what we need to do after executing dbms_monitor.session_trace_enable(sid, serial#, true), and where we can find the trace records (in udump, or do we need to run another query?).
    Please advise.
    Regards,

    Decimal Binary Description
    1       0001   Emit statistics for parse, execute, fetch, commit, and rollback database calls (standard sql_trace)
    2       0010   Unknown
    4       0100   Emit values for SQL bind variables (also called “placeholders”)
    8       1000   Emit statistics for Oracle kernel internal function calls (also called “wait events”) listed in v$event_name
    These levels can be combined as if by a bitwise or function to produce combinations of data in an Oracle trace file.
    A value of 15 is just a combination of all four preceding values.
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';
    As shown above, "level 4" traces only bind variables.
    Additional details are available with "level 12".
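    Putting the DBMS_MONITOR route from the question together (sketch; the sid/serial# values are placeholders taken from V$SESSION, and V$DIAG_INFO shows the 11g trace directory):
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
    REM ... let the session run its workload ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
    SELECT value FROM v$diag_info WHERE name = 'Diag Trace';   -- directory that holds the trace files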

  • SQLJ and reparsing?

    My trace outputs show that when using SQLJ, all statements seem to be
    reparsed whenever they are used (identical parse/execute/fetch counts
    from tkprof). Is there any way in SQLJ to do a sort of PREPARE for e.g. a
    SELECT or UPDATE and then just re-execute it, avoiding the re-parsing
    overhead?
    Thanks,
    Erwin

    Oracle Product Development Team wrote:
    : Unfortunately NO in 8i
    : But the good news is that we are working on
    : Statement Caching and this feature should be available
    : in 8.1.6
    : Erwin Heute (guest) wrote:
    : : My trace outputs show that when using SQLJ, all statements seem
    : : to be reparsed whenever they are used (identical
    : : parse/execute/fetch counts from tkprof). Is there any way in
    : : SQLJ to do a sort of PREPARE for e.g. a SELECT or UPDATE and
    : : then just re-execute it, avoiding the re-parsing overhead?
    : : Thanks,
    : : Erwin
    : Oracle Technology Network
    : http://technet.oracle.com
    Thanks for the info. I'll be looking forward to it.
    Regards,
    Erwin

  • The meaning of "Execute to Parse"

    A user's question:
    What is the significance of the "Execute to Parse" metric in an AWR report?

    The Execute to Parse metric reflects the execute-to-parse ratio.
    Its formula is 1 - (parse/execute); the target is 100%, i.e. as close as possible to executing only, without parsing.
    In Oracle, parsing is normally the prerequisite for execution, but with cursor sharing a statement can be parsed once and executed many times. There are several possible scenarios:
    1. Hard coding => hard-coded SQL is hard parsed once and executed once, so in theory the execute-to-parse ratio is 1:1 and Execute to Parse = 0, which is very poor; the soft parse ratio is also 0%.
    2. Bind variables but still soft parsing => soft parsed once, executed once. This is better than the previous case, but the execute-to-parse ratio (parse here includes both soft and hard parses) is still 1:1, so in theory Execute to Parse = 0, which is still very poor, although the soft parse ratio may be very high.
    3. Using techniques such as static SQL, dynamic binding, session_cached_cursor and open cursors to parse once and execute many times: the execute-to-parse ratio is N:1, so Execute to Parse = 1 - (1/N). The more executions, the closer Execute to Parse gets to 100%. This is what we like to see in an OLTP environment!
    In plain terms, soft parse% reflects the soft parse rate, but a soft parse is still a relatively expensive operation in Oracle. What we want is to parse once and execute N times; if every execution needs a soft parse, then even though soft parse% = 100%, parse time may still be the biggest consumer of DB TIME.
    Execute to Parse reflects the execute-to-parse ratio. If both Execute to Parse and soft parse% are very low, that indicates bind variables are not being used. If soft parse% is close to 99% but Execute to Parse is below 90%, the execute-to-parse ratio is low and you need techniques such as static SQL, dynamic binding, session_cached_cursor and open cursors to reduce soft parsing.
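    For reference, the same ratio can be computed instance-wide from V$SYSSTAT (sketch; this mirrors the 1 - parse/execute formula above):
    SELECT ROUND(100 * (1 - p.value / e.value), 2) AS execute_to_parse_pct
    FROM   (SELECT value FROM v$sysstat WHERE name = 'parse count (total)') p,
           (SELECT value FROM v$sysstat WHERE name = 'execute count') e;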
