A question about trace and tkprof in Oracle 9i.

When I examine an Oracle 9i trace with tkprof, is there a way to have elapsed times shown in the Row Source Operation section, as below? Even with trace level 12, the output only goes as far as "10 INDEX RANGE SCAN XSOFT_TEMP_N1".
Rows Row Source Operation
10 INDEX RANGE SCAN XSOFT_TEMP_N1 (cr=1277 pr=1276 pw=0 time=242396 us)(object id 844722)
Any answers would be appreciated.

That level of detail (row source timing) is only available from 10g onward.
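For reference, a minimal sketch of how the level-12 trace plus tkprof step usually looks in 10g (the trace file name below is a placeholder):

```sql
-- Enable extended SQL trace (level 12 = binds + waits) for the current session
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the statement under investigation ...

ALTER SESSION SET EVENTS '10046 trace name context off';

-- Then, at the OS prompt (trace file name is a placeholder):
--   tkprof ora_12345.trc report.txt sort=fchela
```

In 10g the row source lines of the tkprof report then carry the cr/pr/pw/time figures shown above.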

Similar Messages

  • SQL TRACE/TKPROF VS SQL ACCESS ADVISOR

Hi All,
Can anyone please tell me what the exact difference is between SQL TRACE/TKPROF and the SQL Access Advisor in Oracle 10g (ideally with some examples)?
And also, why should I go for the SQL Access Advisor (since I have used the former all these days)? :)
Why can't I still use SQL TRACE/TKPROF?
If anyone can shed some light on this, thanks in advance.
Regards,
Marlon.

Better go through the link below:
    http://www.remote-dba.net/oracle10g_tuning/t_sqlaccess_advisor.htm_
    -Ek

  • Sql trace - tkprof

Hi..
My requirement is to
run a SQL trace with binds, replicate the issue in the SQL Report Wizard, and then upload both the raw trace and the tkprof'd output.
Can anyone give me a detailed explanation of the above?
    Rgds
    Geeta Mutyaboyina

You might find this link useful: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
    Regards,
    Rob.

  • Analysis of a trace - tkprof

Hello, I would like to find the root cause of a very slow execution of a package in the database. To that end I traced the session on the last run, which took 77 minutes to complete.
Below is the information after processing the trace with tkprof: the most expensive operations first, then the overall totals.
    Database Version: 10.2.0.4.0
    Standard Edition - RAC - ASM
    total RAM        16G
    sga_target      1504M
    db_cache_size 0
    owner            XAJTDB
    --Package/procedure executed (from a job)
    BEGIN HISR_FUTURE.p_hisr_future_all; END;--file  xa212_j000_14811348.trc
    TKPROF: Release 10.2.0.4.0 - Production on Fri Feb 1 15:23:26 2013
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: xa212_j000_14811348.trc
    Sort options: fchela 
SELECT MAX(UTCTIME)
FROM CONECT_01
WHERE POINTNUMBER=:device_1

call     count      cpu    elapsed      disk    query  current    rows
Parse     1776     0.05       0.03         0        0        0       0
Execute   1776     0.04       0.11         0        0        0       0
Fetch     3553    51.58    2256.05    405620   552615        0    1776
total     7105    51.67    2256.21    405620   552615        0    1776
    Misses in library cache during parse: 0
    Misses in library cache during execute: 1
    Parsing user id: 57  (XAJTDB)   (recursive depth: 2)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          0   SORT (AGGREGATE)
          0    FIRST ROW
          0     INDEX   MODE: ANALYZED (FULL SCAN (MIN/MAX)) OF
                    'CONECT_01_PK' (INDEX (UNIQUE))
SELECT MAX(UTCTIME)
FROM STATUS_01
WHERE POINTNUMBER=:device_1

call     count      cpu    elapsed      disk    query  current    rows
Parse     8206     0.20       0.15         0        0        0       0
Execute   8206     0.25       0.55         0        0        0       0
Fetch    16412    23.09      38.80        39   869129        0    8206
total    32824    23.54      39.50        39   869129        0    8206
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 57  (XAJTDB)   (recursive depth: 2)
    Rows     Row Source Operation
          1  SORT AGGREGATE (cr=35 pr=33 pw=0 time=209038 us)
          1   FIRST ROW  (cr=35 pr=33 pw=0 time=209020 us)
          1    INDEX FULL SCAN (MIN/MAX) STATUS_01_PK (cr=35 pr=33 pw=0 time=209019 us)(object id 79195)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
          1    FIRST ROW
          1     INDEX   MODE: ANALYZED (FULL SCAN (MIN/MAX)) OF
                    'STATUS_01_PK' (INDEX (UNIQUE))
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count      cpu    elapsed      disk    query  current    rows
Parse        0     0.00       0.00         0        0        0       0
Execute      0     0.00       0.00         0        0        0       0
Fetch        0     0.00       0.00         0        0        0       0
total        0     0.00       0.00         0        0        0       0
    Misses in library cache during parse: 0
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call      count      cpu    elapsed      disk      query    current      rows
Parse     97568     4.17       4.94         0         38          2         0
Execute  111405   146.92    2382.98    465031    1337316    1715361     85314
Fetch     46033    75.20    2297.66    406057    1461528          0     36908
total    255006   226.29    4685.59    871088    2798882    1715363    122222
    Misses in library cache during parse: 8451
    Misses in library cache during execute: 153
    94798  user  SQL statements in session.
    4557  internal SQL statements in session.
    99355  SQL statements in session.
       48  statements EXPLAINed in this session.
    Trace file: xa212_j000_14811348.trc
    Trace file compatibility: 10.01.00
    Sort options: fchela 
           1  session in tracefile.
       94798  user  SQL statements in trace file.
        4557  internal SQL statements in trace file.
       99355  SQL statements in trace file.
        8398  unique SQL statements in trace file.
          48  SQL statements EXPLAINed using schema:
               XAJTDB.prof$plan_table
                 Default table was used.
                 Table was created.
                 Table was dropped.
      707680  lines in trace file.
  2502  elapsed seconds in trace file.
I think the memory settings are correct.
Can you give me any ideas on what these overall results indicate?
The tables involved hold a large volume of data (up to 15 million rows, depending on the table). They are not partitioned (remember, this is Standard Edition); is this one of the reasons the job runs so slowly?
Please ask me for more information if necessary.
    Thanks!
    Edited by: user12086565 on 26/03/2013 07:38
    Edited by: user12086565 on 26/03/2013 09:31

    Hi, here the output of the information request:
PROMPT ALTER TABLE xajtdb.status_01 ADD CONSTRAINT status_01_pk PRIMARY KEY
ALTER TABLE xajtdb.status_01
  ADD CONSTRAINT status_01_pk PRIMARY KEY (
    utctime,
    pointnumber
  )
  USING INDEX
    TABLESPACE xa_hisr_hist_data_ts
    PCTFREE   10
    INITRANS   2
    MAXTRANS 255
    STORAGE (
      INITIAL 1310720 K
      NEXT          0 K
      MINEXTENTS    1
      MAXEXTENTS    UNLIMITED
      PCTINCREASE   0
      FREELISTS     1
      FREELIST GROUPS 1
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS;
PROMPT ALTER TABLE xajtdb.conect_01 ADD CONSTRAINT conect_01_pk PRIMARY KEY
ALTER TABLE xajtdb.conect_01
  ADD CONSTRAINT conect_01_pk PRIMARY KEY (
    utctime,
    pointnumber
  )
  USING INDEX
    TABLESPACE xa_hisr_hist_data_ts
    PCTFREE   10
    INITRANS   2
    MAXTRANS 255
    STORAGE (
      INITIAL  647168 K
      NEXT          0 K
      MINEXTENTS    1
      MAXEXTENTS    UNLIMITED
      PCTINCREASE   0
      FREELISTS     1
      FREELIST GROUPS 1
      BUFFER_POOL DEFAULT
    )
    LOGGING
    NOCOMPRESS;
    select
    table_name,num_rows,
    blocks,avg_space,
    chain_cnt,avg_row_len,
    sample_Size
    from dba_tables
    where table_name = 'CONECT_01';
    TABLE_NAME NUM_ROWS BLOCKS AVG_SPACE CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE
    CONECT_01  16195993  92861      1738         0          24        5484
    ===============================================================================
    select
    table_name,num_rows,
    blocks,avg_space,
    chain_cnt,avg_row_len,
    sample_Size
    from dba_tables
    where table_name = 'STATUS_01';
    TABLE_NAME NUM_ROWS BLOCKS AVG_SPACE CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE
    STATUS_01  46835851 262443       948         0          23        5563
    ===============================================================================
    select
    index_name,num_rows,
    leaf_blocks,sample_Size
    from dba_indexes
    where table_name = 'CONECT_01';
    INDEX_NAME        NUM_ROWS LEAF_BLOCKS SAMPLE_SIZE
    CONECT_01_PK      15883091       84054      199734
    CONECT_01_I_POINT 16110657       85489      217286
    ===============================================================================
    select
    index_name,num_rows,
    leaf_blocks,sample_Size
    from dba_indexes
    where table_name = 'STATUS_01';
    INDEX_NAME        NUM_ROWS LEAF_BLOCKS SAMPLE_SIZE
    I_STATUS_01_POINT 45950607      235104      210693
    STATUS_01_PK      46651458      242459      204916
    ===============================================================================
    select OWNER,TABLE_NAME,NUM_DISTINCT,LAST_ANALYZED
    from ALL_TAB_COLUMNS
where TABLE_NAME = 'CONECT_01';
    OWNER  TABLE_NAME NUM_DISTINCT LAST_ANALYZED
    XAJTDB CONECT_01          5646 01/04/2013 22:19:12
    XAJTDB CONECT_01          2777 01/04/2013 22:19:12
    XAJTDB CONECT_01           695 01/04/2013 22:19:12
    XAJTDB CONECT_01             5 01/04/2013 22:19:12
    ===============================================================================
    select OWNER,TABLE_NAME,NUM_DISTINCT,LAST_ANALYZED
    from ALL_TAB_COLUMNS
    where TABLE_NAME = 'STATUS_01';
    OWNER  TABLE_NAME NUM_DISTINCT LAST_ANALYZED
    XAJTDB STATUS_01          5626 01/04/2013 22:21:17
    XAJTDB STATUS_01          8342 01/04/2013 22:21:17
    XAJTDB STATUS_01            27 01/04/2013 22:21:17
    XAJTDB STATUS_01             2 01/04/2013 22:21:17
===============================================================================
Thanks!

  • SQL Trace(tkprof) option in TOAD

I am getting an "insufficient privileges" error for the same.
Please help me.

There is a hidden parameter called _trace_files_public that you can normally convince the DBAs to set for development instances; it sets the umask for trace files so they are globally readable. Then you can either:
(1) get access to the server machine at the OS level, copy your trace files across, and process them locally;
(2) telnet/ssh on to the server and process them there; or
(3) search this forum for postings on how to use external tables/utl_file to open the remote .trc file on the server and transfer it to your client.
If you can't get _trace_files_public set (search asktom.oracle.com for some reinforcements if you need backup to get permissions), then you'll need to ask the DBAs regularly for read permission on specific trace files. In general experience, you won't need to ask for more than about 3 trace file permission changes before they give you access, unless it is a locked-down prod box.
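As a sketch, assuming the DBA agrees and the instance uses an spfile, setting that hidden (unsupported) parameter could look like this:

```sql
-- Run as SYSDBA; hidden parameter names must be double-quoted.
-- Takes effect at the next instance restart.
ALTER SYSTEM SET "_trace_files_public" = TRUE SCOPE = SPFILE;
```

Being a hidden parameter, it is unsupported, so most DBAs will only consider it on development instances.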

  • Trace application activity in Database side

Hi all,
We are using Oracle Database 11g (Release 11.1.0.6.0 - 64bit Production) and a billing system as the client application. At every month end we run the bill calculation from the front end of our billing system.
What I want is to capture, step by step, each transaction executed by the billing application when we launch the bill calculation and bill generation tasks from the application.
Does anyone know how to proceed with that?
I want to know the tables, procedures, and functions used by the billing system engine, for tuning purposes.
The OS is Red Hat Linux 5.
Thank you.
Lucienot.

    Use TRACE/TKPROF (with wait events) to trace the session that runs the bill calculation.
    You might need some help from your DBA.
    See:
    http://www.oracle-base.com/articles/10g/sql-trace-10046-trcsess-and-tkprof-10g.php
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/sqltrace.htm#PFGRF01010
    http://docs.oracle.com/cd/E11882_01/server.112/e16638/sqltrace.htm#PFGRF01020
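One way to trace that session, sketched here with hypothetical SID/serial# values taken from V$SESSION, is DBMS_MONITOR (available from 10g onward, so it applies to an 11.1 database):

```sql
-- Find the billing session first, e.g.:
--   SELECT sid, serial# FROM v$session WHERE module LIKE '...';

-- Enable extended trace (waits + binds) for that session
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => 123,      -- hypothetical SID
    serial_num => 456,      -- hypothetical serial#
    waits      => TRUE,
    binds      => TRUE);
END;
/

-- ... let the bill calculation run ...

BEGIN
  DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);
END;
/
```

The resulting trace file in user_dump_dest can then be processed with tkprof as usual.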

ESYU: R12 - How to generate Trace/Debug files in R12

Purpose
Version: 12.0
Information in this document applies to any platform.
This note describes how to generate Trace and Debug files in R12.
Solution
1. Navigate Responsibility: System Administrator> Profile> System> Query
User: enter the name of the user for whom the trace will be generated
Profile: Initialization SQL Statement - Custom
2. Enter the following in the User column:
begin fnd_ctl.fnd_sess_ctl('','','TRUE','TRUE','LOG','ALTER SESSION SET EVENTS='||''''||'10046 TRACE NAME CONTEXT FOREVER,LEVEL 12'||''''); end;
3. Save the entry.
To generate FND debug messages:
4. Navigate Responsibility: System Administrator> Profile> System> Query
User: enter the name of the user for whom debugging will be enabled
Profile: FND:%Debug%
5. Set the profile option values below at user level:
    FND: Debug Log Enabled Yes
    FND: Debug Log Filename <empty>
    FND: Debug Log Level STATEMENT
    FND: Debug Log Mode Asynchronous with Cross-Tier Sequencing
    FND: Debug Log Module %
6. Save the entries.
    Example:
    7. Navigate: Payables Responsibility> Other> Request> Run> Select and Submit the Report
(run a specific report)
8. Disable Trace and FND Debug messages.
9. Run the query below to find the FND Debug messages:
select log_sequence, timestamp, module, message_text
from fnd_log_messages fnd
where trunc(timestamp) = trunc(sysdate)
and (module like '%xla.%' or module like '%ap.%')
order by timestamp;
    SELECT log_sequence, message_text,substr(module,1,100)
    FROM fnd_log_messages msg
    , fnd_log_transaction_context tcon
    WHERE msg.TRANSACTION_CONTEXT_ID = tcon.TRANSACTION_CONTEXT_ID
    AND tcon.TRANSACTION_ID= /*Give the request id of accounting program*/
    ORDER BY LOG_SEQUENCE desc
Use the profile options below to enable debugging:
    FND: Debug Log Enabled : Yes
    FND: Debug Log Level : Statement
Use the query below to retrieve the debug messages:
    SELECT substr(module,1,70), MESSAGE_TEXT, timestamp, log_sequence
    FROM fnd_log_messages msg, fnd_log_transaction_context tcon
    WHERE msg.TRANSACTION_CONTEXT_ID = tcon.TRANSACTION_CONTEXT_ID
    AND tcon.TRANSACTION_ID = <your child request ID>
    ORDER BY LOG_SEQUENCE
10. Use the SQL below to find where the trace file is written, then upload the raw trace and the tkprof'd trace file:
    select value
    from v$parameter
    where name = 'user_dump_dest';
    Reference
    Note 458371.1

  • SELECT query taking long time

Hi All,
I am trying to run a SELECT statement that joins 6 tables. The query generally takes 25-30 minutes to produce output.
Today it has been running for more than 2 hours. I have checked that there are no locks on those tables and that no other process is using them.
What else should I check to figure out why my SELECT statement is taking so long?
Any help will be much appreciated.
Thanks!

Please let me know if you still want me to provide all the information mentioned in the link.
Yes, please.
Before you can even start optimizing, it should be clear which parts of the query are running slow.
The link contains the steps to take to identify the things that make the query run slow.
Ideally you post a trace/tkprof report with wait events: it will show what the time is being spent on, give an execution plan, and give the database version, all at once.
Today it is running from more than 2 hours. I have checked there are no locks on those tables and no other process is using them.
Well, something must have changed, and you must identify what exactly; it's a broad range you have to check:
- it could be outdated table statistics
- it could be data growth or skew that makes the Optimizer choose a wrong plan all of a sudden
- it could be a table that got modified with some bad index
- it could be ...
So, by posting the information in the link, you'll leave less room for guesses from us and get an explanation that makes sense faster; or, while following the steps in the link, you'll arrive at the explanation yourself.

  • Best practice for index creation

Hello,
I am working on Oracle 10g and AIX.
I have one table with 9 columns.
The SQL queries on this table are such that, of the 9 columns, 5 always appear in the WHERE clause.
So we have a concatenated index on these 5 columns.
The other 4 columns can appear in the WHERE clause in any order and number: maybe only 2 of them, or 3,
or none at all.
Is it better to create an index on all 4 columns, and should it be a concatenated index or individual indexes on each of the 4 columns?
I do not have all the SQL statements, because per the developers there are 10 modules accessing this table and each may have 100 SQL statements against it.
Any idea what I can do in this scenario: create a concatenated index on the 4 columns, individual indexes, or no index at all?

You're coming at it wrong: you could do more harm than good taking that approach. You need to isolate the individual SQL statements hitting that table. SQL trace, tkprof, Statspack, and AWR are your friends here.
Once you identify the queries, get yourself a dev version of the table and start playing with the indexes; get tkprof output and explain plans for the queries. In general, only the predicates in the SQL are candidates for index usage.
If you have 5 columns and any of them could be used individually or in combination, maybe start by creating 5 different indexes and then try the combo indexes. But only after you tkprof / explain plan them before and after; you're just guessing otherwise.
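A sketch of that "dev copy plus explain plan" loop, with hypothetical table and column names:

```sql
-- Hypothetical dev copy of the table
CREATE TABLE t_dev AS SELECT * FROM t;

-- Try individual indexes on the optional predicate columns first
CREATE INDEX t_dev_c6_ix ON t_dev (c6);
CREATE INDEX t_dev_c7_ix ON t_dev (c7);

-- Check what the optimizer does for each captured query
EXPLAIN PLAN FOR
  SELECT * FROM t_dev WHERE c1 = :b1 AND c6 = :b6;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Repeat with concatenated-index variants and compare the plans (and tkprof numbers) before deciding.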

  • For loop taking more than 24 hours to complete

    Hi Frens,
This for loop took close to 24 hours to complete. I can't seem to detect the problem. Can someone advise?
vpath := CS_UTILS.GET_UTL_PATH;
For Rec in Lot_Ship_Assy_Filename Loop
  v_sysdate := to_char(sysdate-1,'YYYYMMDD');
  Filename  := Rec.shipfrom||'_'||Rec.shipto||'_'||substr(v_sysdate,1,8)||'_'||Rec.invoiceno||'.txt';
  edi_hat_data := utl_file.fopen(vpath,filename,'W');
  var1 := to_char(sysdate,'YYYYMMDD HH24MISS');
  dbms_output.put_line('Start '||var1);
  Hdr_Sql := 'H'||to_char(sysdate-1,'YYMMDD');
  utl_file.put_line(edi_hat_data,Hdr_sql);
  nRecord := '0';
  For Lotcur in Lot_Ship_Assy (Rec.InvoiceNo) Loop
    vLotCode := ' ';
    If instr(vLotCode,'RR') > 0 or instr(vLotCode,'CR') > 0 or
       instr(vLotCode,'RT') > 0 or instr(vLotCode,'RE') > 0 or
       instr(vLotCode,'CE') > 0 or instr(vLotCode,'RW') > 0 Then
      vLotCode := 'REWORK';
    Elsif (instr(vLotCode,'E') > 0 and instr(vLotCode,'IE') = 0 and instr(vLotCode,'PPE') = 0) or instr(vLotCode,'Q') > 0 Then
      vLotCode := 'ENGINEERING';
    Else
      vLotCode := 'PRODUCTION';
    End if;
    Unix_Sql := rpad(LotCur.Record_Id,1,' ')||rpad(Lotcur.InvoiceNo,20,' ')||rpad(Lotcur.MAWB,35,' ')||rpad(NVL(Lotcur.HAWB,' '),35,' ')||
                rpad(Lotcur.Ship_Out_Date,6,' ')||rpad(Lotcur.Ship_Out_Time,6,' ')||
                rpad(Lotcur.From_Org_Code,3,' ')||
                rpad(Lotcur.To_Org_Code,3,' ')||
                rpad(Lotcur.Intersil_Lot_No,20,' ')||rpad(Lotcur.Dept,8,' ')||rpad(Lotcur.WaferLotNumber,20,' ')||
                rpad(Lotcur.Part_Name,25,' ')||
                rpad(Lotcur.PD_Part_Name,25,' ')||lpad(Lotcur.Die_Qty,10,'0')||
                rpad(Lotcur.Wafer_Quantity,2,'0')||
                rpad(NVL(Lotcur.Tracecode,' '),10,' ')||rpad(NVL(Lotcur.Datecode,' '),4,' ')||rpad(vLotCode,20,' ')||
                rpad(' ',30,' ');
    utl_file.put_line(edi_hat_data,Unix_sql);
    nRecord := nRecord + 1;
  End Loop;
  Hdr_Sql := 'T'||lpad(nRecord,5,'0');
  utl_file.put_line(edi_hat_data,Hdr_sql);
  v_msg := 'Close Unix File';
  utl_file.fclose(edi_hat_data);
End loop;

Can someone help to advice?
You or your DBA should trace/tkprof (with wait events) that code.
The reason I have two loops is because I need to generate multiple files with different invoices. This does not work with one loop.
I beg to differ on that: it can probably be done in a single loop (but that is hard to prove without seeing the cursor queries).
The question is what difference in time it will make. We don't have your cursors (or machines, or database version, or data, or...).
Therefore you need to test your queries and check the explain plans, or better: have a run of that code traced and check the outcome.

  • How to include OSTC.rate in this query

How do I include OSTC.rate in this query? I'm using it for a Crystal report.
    DECLARE @sVAT NVARCHAR(max) 
    DECLARE @sCess NVARCHAR(max) 
    DECLARE @sCST NVARCHAR(max) 
    DECLARE @nDocentry  INT 
    SET @sVAT='1'  SET @sCess='7' SET @sCST ='4'
    SELECT DocEntry
      ,DocDate,VehicleNo,Driver,NumAtCard
      ,Building,Block,Street,City,District,State,Country
      ,Series,DocNum
      ,BTinNo,BCstNo,BCeRegNo,BPanNo,BCeRange,BCeComRate,BCeDivision,BEccNo
      ,Type
      ,CardName,[Delivery Addr]
      ,ECCNo,CERange,CERegNo,CEDivis,CEComRate
      ,PAN,CST,STN
      ,[Deliver At]
      ,LineNum
      ,Dscription,HSNumber,Quantity,Rate,LineTotal,Discount,Vat [VAT],Cess [Cess],Total,GTotal
      ,TotalExpns
      ,MfgName
      ,MFGBuilding,MFGBlock,MFGStreet,MFGDistrict,MFGCity
      ,MCERegNo,MCERange,MCEDivis,MCEComRate,MPAN,MCST,MSTN
      ,SupName
      ,SUPBuilding,SUPBlock,SUPStreet,SUPDistrict,SUPCity
      ,SCERegNo,SCERange,SCEDivis,SCEComRate,SPAN,SCST,SSTN
      , (select substring((select upper(name)+',' from OUBR where isnull(U_SeriesGrp,'')<>'' order by Code FOR XML PATH ('')),1,LEN((select upper(name)+',' from OUBR order by Code FOR XML PATH ('')))-1))[Branches]
FROM
(
SELECT
      /*OBTN.DistNumber*/
      INV1.DocEntry
      ,OINV.DocDate,OINV.U_VehicleNo VehicleNo,OINV.U_Driver Driver,OINV.NumAtCard
      ,CAST(OLCT.Building AS VARCHAR(255))Building,OLCT.Block,OLCT.Street,OLCT.City,OLCT.County+' - '+OLCT.ZipCode[District],OCST.Name State,OCRY.Name Country
      , NNM1.SeriesName [Series], OINV.DocNum
      ,OLCT.TinNo [BTinNo],OLCT.CstNo [BCstNo],OLCT.CeRegNo [BCeRegNo],OLCT.PanNo [BPanNo],OLCT.CeRange [BCeRange]
      ,OLCT.CeComRate [BCeComRate],OLCT.CeDivision [BCeDivision],OLCT.EccNo [BEccNo]
      ,OBTN.U_InvType Type
      ,OINV.CardName,OINV.Address2[Delivery Addr]
      ,CE_CRD7.ECCNo,CE_CRD7.CERegNo,CE_CRD7.CERange,CE_CRD7.CEDivis,CE_CRD7.CEComRate
      ,CRD7.TaxId0[PAN],CRD7.TaxId1[CST],CRD7.TaxId11 [STN]
      ,OINV.U_Address [Deliver At]
      ,INV1.LineNum,INV1.ItemCode
      ,INV1.Dscription,OITM.SWW [HSNumber]
      ,INV1.Quantity,INV1.PriceBefDi Price,INV1.LineTotal,INV1.Price Rate,(INV1.PriceBefDi-INV1.Price)*INV1.Quantity Discount
      ,INV4.Vat
      ,INV4.Cess
      ,INV1.LineTotal+INV1.VatSum Total 
      ,OINV.DocTotal GTotal
      ,OINV.TotalExpns
      ,OBTN.U_MfgName MfgName
      ,convert(nvarchar(250),MFG_CRD1.Building) MFGBuilding,MFG_CRD1.Block MFGBlock,MFG_CRD1.Street MFGStreet,MFG_CRD1.City MFGCity,MFG_CRD1.ZipCode[MFGDistrict]
      ,OBTN.U_MCERegNo MCERegNo,OBTN.U_MCERange MCERange,OBTN.U_MCEDivis MCEDivis,OBTN.U_MCEComRate MCEComRate
      ,OBTN.U_MPAN MPAN,OBTN.U_MCST MCST,OBTN.U_MSTN MSTN
      ,OBTN.U_SupName SupName
      ,convert(nvarchar(250),SUP_CRD1.Building) SUPBuilding,SUP_CRD1.Block SUPBlock,SUP_CRD1.Street SUPStreet,SUP_CRD1.City SUPCity,SUP_CRD1.ZipCode[SUPDistrict]
      ,OBTN.U_SCERegNo SCERegNo,OBTN.U_SCERange SCERange,OBTN.U_SCEDivis SCEDivis,OBTN.U_SCEComRate SCEComRate
      ,OBTN.U_SPAN SPAN,OBTN.U_SCST SCST,OBTN.U_SSTN SSTN
    FROM
      OINV
      INNER JOIN INV1 ON OINV.DocEntry=INV1.DocEntry
      INNER JOIN OITM ON INV1.ItemCode=OITM.ItemCode
  INNER JOIN
  (
  select
      INV4.DocEntry,INV4.LineNum
      ,CASE WHEN INV4.staType IN (@sVAT,@sCST) THEN sum(INV4.TaxSum) ELSE 0 END Vat
      ,CASE WHEN INV4.staType=@sCess THEN sum(INV4.TaxSum) ELSE 0 END Cess
      from
      INV4
      where
      INV4.DocEntry={?DocKey@} and INV4.RelateType=1
      group by INV4.DocEntry,INV4.LineNum,INV4.staType
      )INV4 ON INV1.DocEntry=INV4.DocEntry AND INV1.LineNum=INV4.LineNum
      INNER JOIN OLCT ON INV1.LocCode=OLCT.Code
      INNER JOIN OCST ON OLCT.State=OCST.Code
      INNER JOIN OCRY ON OLCT.Country=OCRY.Code and OCST.Country=OCRY.Code
      INNER JOIN INV12 ON OINV.DocEntry=INV12.DocEntry
      INNER JOIN OITL ON INV1.BaseType=OITL.ApplyType AND INV1.BaseEntry=OITL.ApplyEntry AND INV1.BaseLine=OITL.ApplyLine
      INNER JOIN ITL1 ON OITL.LogEntry=ITL1.LogEntry
      INNER JOIN OBTN ON ITL1.MdAbsEntry=OBTN.AbsEntry and ITL1.SysNumber=OBTN.SysNumber and ITL1.ItemCode=OBTN.ItemCode
      LEFT JOIN OCRD MFG_OCRD ON MFG_OCRD.CardCode=OBTN.U_MfgCode
      LEFT JOIN CRD1 MFG_CRD1 ON MFG_OCRD.CardCode=MFG_CRD1.CardCode AND MFG_OCRD.BillToDef=MFG_CRD1.Address and MFG_CRD1.AdresType='B'
      LEFT JOIN OCRD SUP_OCRD ON SUP_OCRD.CardCode=OBTN.U_SupCode
      LEFT JOIN CRD1 SUP_CRD1 ON SUP_OCRD.CardCode=SUP_CRD1.CardCode AND SUP_OCRD.BillToDef=SUP_CRD1.Address and SUP_CRD1.AdresType='B'
      LEFT JOIN NNM1 ON OINV.Series=NNM1.Series
      LEFT JOIN CRD7 ON OINV.CardCode=CRD7.CardCode AND CRD7.Address='' AND CRD7.AddrType='S' --Tax Details
      LEFT JOIN CRD7 CE_CRD7 ON OINV.CardCode=CE_CRD7.CardCode AND OINV.ShipToCode=CE_CRD7.Address AND CE_CRD7.AddrType='S' -- Central Excise Details
      WHERE
      INV1.DocEntry={?DocKey@}
    )INVOICE
    GROUP BY
      DocEntry
      ,DocDate,VehicleNo,Driver,NumAtCard
      ,Building,Block,Street,City,District,State,Country
      ,Series,DocNum
      ,BTinNo,BCstNo,BCeRegNo,BPanNo,BCeRange,BCeComRate,BCeDivision,BEccNo
      ,Type
      ,CardName,[Delivery Addr]
      ,ECCNo,CERange,CERegNo,CEDivis,CEComRate
      ,PAN,CST,STN
      ,[Deliver At]
      ,LineNum
      ,Dscription,HSNumber,Quantity,Rate,LineTotal,Discount,Vat,Cess,Total,GTotal
      ,TotalExpns
      ,MfgName
      ,MFGBuilding,MFGBlock,MFGStreet,MFGDistrict,MFGCity
      ,MCERegNo,MCERange,MCEDivis,MCEComRate,MPAN,MCST,MSTN
      ,SupName
      ,SUPBuilding,SUPBlock,SUPStreet,SUPDistrict,SUPCity
      ,SCERegNo,SCERange,SCEDivis,SCEComRate,SPAN,SCST,SSTN

You're double posting ( how to change join condition in this query ); stop doing that, since you'll only be distracting and diverting by doing so.
Take the time to read the SQL and PL/SQL FAQ @ https://forums.oracle.com/forums/ann.jspa?annID=1535, since you're not even mentioning a database version, while 9i != 10g != 11g...
Have your DBA trace/tkprof the query, and so on, if you cannot do that yourself.
And then provide the feedback the volunteers, including Ace (Directors), need (and you were very lucky, from that point of view, I think, from looking at both your posts ;) )

  • Too many Oracle Locks

Hi Guys,
We are facing a situation where our Prod environment is getting many, many Oracle locks.
Users are not able to perform their work smoothly, and this hits hard during the month ends.
DB Version = 10.2.0.4
Apps Version = 11.5.10.2
I have performed some investigation and found the below:
- The users perform certain operations, and it is during these operations that we start getting Oracle locks frequently.
- This issue cannot be recreated in any lower instance.
- The sessions that are blocking are INACTIVE sessions with module 'ARXCWMAI'.
- The SQL that is run is also a select query (does not look like a problem area).
- Trace/tkprof do not have enough information.
- These sessions also tend to lock the table AR.AR_PAYMENT_SCHEDULES_ALL (found from v$locked_object).
- These blocking sessions are killed, and that's the temporary solution.
Any ideas or suggestions would be greatly appreciated!
Thanks,
Trith

    Trith wrote:
    Pierre, How do I go forward after taking the backups of session,sql and lock tables.
The first goal should be to know what kind of lock type is blocking in your case and what the related database object is.
To do this you need to understand how Oracle locking works, how different locks are represented in V$LOCK, and how to check the blocking/blocked sessions in V$LOCK. The first step is to read the relevant Concepts Guide section and try the examples, checking at the same time what exactly is in V$LOCK: http://docs.oracle.com/cd/E11882_01/server.112/e25789/consist.htm#i5704. Especially, you need to understand the TM and TX lock types and check what kind of blocking locks you have in your case.
Other very good documentation about Oracle locking can be found in Tom Kyte's Expert Oracle Database Architecture and Jonathan Lewis's Oracle Core books. If you have access to My Oracle Support there are also some good notes on Oracle locking.
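A minimal query along those lines (run as a privileged user) to pair blockers with waiters in V$LOCK:

```sql
-- Sessions holding a lock someone else is waiting for (BLOCK = 1),
-- together with the sessions requesting it (REQUEST > 0);
-- matching ID1/ID2 pairs identify the blocker and the waiter,
-- and TYPE shows whether it is a TM or TX lock.
SELECT sid, type, id1, id2, lmode, request, block
FROM   v$lock
WHERE  block = 1 OR request > 0
ORDER  BY id1, id2;
```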
    Edited by: P. Forstmann on 14 mars 2012 13:52
    Edited by: P. Forstmann on 14 mars 2012 14:04

  • Listener  errors in logs  TNS-001184 TNS-12502

    Oracle Enterprise Edition 10.2.0.3
    HP-UX
    2 Node RAC
    Hi
I have seen repetitive errors in the listener log file of one of our databases.
The errors look like this:
    07-DEC-2010 12:35:27 * service_register * EXS1 * 1184
    TNS-01184: Listener rejected registration or update of service handler "DEDICATED"
    TNS-01185: Registration attempted from a remote node
    08-DEC-2010 12:35:27 * service_died * EXS1 * 12537
    08-DEC-2010 12:35:27 * service_update * EXS1 * 0
    There are also errors like the following,
    TNS-12560: TNS:protocol adapter error
    TNS-00530: Protocol adapter error
    TNS-12502: TNS:listener received no CONNECT_DATA from client
I referred to Oracle Metalink note 275058.1, which says this happens because LOCAL_LISTENER and REMOTE_LISTENER are not configured properly. But these errors were frequent (every minute) yesterday and appeared only once today.
Our application team complains about performance problems in various procedures at various times. Can this be the cause of such performance degradation?

The first error typically appears when listener.ora is copied/pasted to another node (e.g., a test environment) without modification; when you then start the listener, it connects and registers to the original node.
If your application team sees poor performance, you may need to trace (tkprof or SQL*Net trace) to pin down the problem.
Good luck

  • Wait events 'direct path write'  and 'direct path read'

    Hi,
We have a query which is taking more than 2 minutes on a 9.2.0.7 database. We took a trace/tkprof of the query and identified many 'direct path write' and 'direct path read' wait events in the trace file.
    WAIT #3: nam='direct path write' ela= 5 p1=201 p2=70710 p3=15
    WAIT #3: nam='direct path read' ela= 170 p1=201 p2=71719 p3=15
In the above, "p1=201" is a file id, but we could not find any data file, temp file, or control file with id 201.
Can you please let us know what "p1=201" means here, and how to identify the file that is causing the issue?
    Thanks
    Sravan

What does:
show parameter db_files
return? My guess is that it returns 200.
The 'direct path read' and 'direct path write' events here are reads and writes to the TEMP tablespace. In those wait events, the file# is reported as db_files + temp file id. So, 201 means temp file #1.
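That arithmetic can be turned into a lookup; a sketch, assuming db_files really is 200 on this instance:

```sql
-- temp file number = p1 - db_files  (201 - 200 = 1)
SELECT f.file#, f.name
FROM   v$tempfile f
WHERE  f.file# = 201 - (SELECT TO_NUMBER(value)
                        FROM   v$parameter
                        WHERE  name = 'db_files');
```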
    Now, as to your actual performance problem.
    Without seeing the SQL and the corresponding execution plan, it's impossible to be sure. However, the most common causes of temp writes are sort operations and group by operations.
    If you decide to post your SQL and execution plan, please be sure to make it readable by formatting it. Information on how to do so can be found here.
    Hope that helps,
    -Mark
    Edited by: mbobak on May 1, 2011 1:50 AM

  • Nested Tables and Full Table Scans

    Hello,
I am hoping someone can help me, as I am truly scratching my head.
I have recently been introduced to nested tables, due to the fact that we have a poorly running query in production. What I have discovered is that when a table is created with a column that is a nested table, a unique index is automatically created on that column.
When I do an explain plan on table A, it states that a full scan is being done on table A and on the nested table B. I can add an index to the offending columns to remove the full scan on table A, but the explain plan still shows a full scan on the nested table B. Bear in mind that the column with the nested table has a cardinality of 27.
What can I do? As I stated, there is an index on this nested table column, but clearly it is being ignored. The query bombed out after 4 hours, and when I ran a query to see what the record count was, it was only 2046.
Any suggestions would be greatly appreciated.
Edited by: user11887286 on Sep 10, 2009 1:05 PM

    Hi and welcome to the forum.
    Since your question is in fact a tuning request, you need to provide us some more insights.
    See:
    [How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
    and also
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
In short:
- database version
- the actual queries you're executing
- the execution plans (explain plans)
- trace/tkprof output if available (or ask your DBA for it)
- a small but concise test case would be ideal (create table + insert statements)
