Query takes a long time to execute.

Hi All,
I have a query that joins 4 tables.
The query takes 12 minutes to execute.
The SGA size is 50 MB.
Please tell me what to do now.
Thanks in advance.
Prathamesh.

Prathamesh,
Please go through the Performance Tuning Guide to isolate your problem:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm
Try to find out:
Is it a new query, or did it return results quickly before?
Check the explain plan to find out what exactly is going on behind the query.
If the SGA is not properly sized it will affect all queries, not just this one.
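For example, an explain plan can be captured from SQL*Plus like this (a minimal sketch; my_table and the predicate are placeholders for your actual four-table join):

EXPLAIN PLAN FOR
  SELECT *           -- my_table and id are placeholders for your
    FROM my_table    -- own four-table join and its predicates
   WHERE id = 1;

-- Show the plan that was just captured in PLAN_TABLE.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);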
Regards,
Sayan

Similar Messages

  • Query takes a long time to execute.

    Hi All,
    I have one query that takes 5 minutes to execute.
    The query depends on four tables.
    The tables IBS_WORK_BANKDATA and IBS_ORG_BANKDATA contain 25 lakh (2.5 million) records each. IBS_CURRENCYMASTER has 250 records and IBS_CURRENCYEXCHANGERATE has 50 records.
    The output of the query contains 3500 records.
    The Oracle version is 9.0.1.1 and the OS is Windows 2003 Server.
    Query:
    select distinct trim(wrk.bd_alcd) as ALCD, wrk.bd_typecd as TypeCD, wrk.bd_forcd as FORCD, wrk.bd_curcd as CURCD,
    wrk.bd_councd as COUNCD, wrk.bd_sectcd as SECCD,
    wrk.bd_matcd as MATCD, wrk.bd_c_u_cd as C_U_CD, wrk.bd_s_u_cd as S_U_CD,
    0 as Org_FCBal,0 as ORG_Bal,case when wrk.bd_type='O' then wrk.bd_fc_bal else 0 end as Main_FCBal,
    case when wrk.bd_type='O' then (wrk.bd_fc_bal * nvl(exchg.cer_exchangerate, 1)) else 0 end as main_Bal,
    wrk.bd_rs_int,wrk.bd_rs_bal,wrk.bd_fc_int,wrk.bd_fc_bal,
    ' ' as TrackChangs
    from ibs_work_bankdata wrk inner join ibs_org_bankdata org ON org.bd_yrqtr = wrk.bd_yrqtr and org.bd_bkcode=wrk.bd_bkcode and org.bd_forcd = wrk.bd_forcd
    and wrk.BD_YRQTR=20044 and wrk.BD_BKCODE ='000'
    and wrk.BD_ALCD = '51' and wrk.BD_FORCD ='IN' and wrk.BD_TYPECD = '11'
    left join ibs_currencymaster curmst on curmst.cur_code = wrk.bd_curcd
    left join ibs_currencyexchangerate exchg on exchg.cer_currencyid = curmst.cur_id
    and exchg.cer_yearqtr = 20051 and exchg.CER_ACTIVE=1
    Explain Plan:
    SELECT STATEMENT, GOAL = CHOOSE               Cost=26     Cardinality=1     Bytes=157
    SORT UNIQUE               Cost=26     Cardinality=1     Bytes=157
    TABLE ACCESS BY INDEX ROWID     Object owner=RBI     Object name=IBS_ORG_BANKDATA     Cost=2     Cardinality=204     Bytes=2856
    NESTED LOOPS               Cost=26     Cardinality=1     Bytes=157
    NESTED LOOPS OUTER               Cost=24     Cardinality=1     Bytes=143
    NESTED LOOPS OUTER               Cost=23     Cardinality=1     Bytes=93
    TABLE ACCESS BY INDEX ROWID     Object owner=RBI     Object name=IBS_WORK_BANKDATA     Cost=22     Cardinality=1     Bytes=52
    INDEX SKIP SCAN     Object owner=RBI     Object name=IBS_WORK_BANKDATA_IDX     Cost=7     Cardinality=1     
    TABLE ACCESS FULL     Object owner=RBI     Object name=IBS_CURRENCYMASTER     Cost=1     Cardinality=178     Bytes=7298
    TABLE ACCESS FULL     Object owner=RBI     Object name=IBS_CURRENCYEXCHANGERATE     Cost=1     Cardinality=19     Bytes=950
    INDEX RANGE SCAN     Object owner=RBI     Object name=IBS_ORG_BANKDATA_IDX     Cost=1     Cardinality=204     
    Please help me.
    Thanks in advance,
    Prathamesh.

    Hi Prathamesh,
    Check whether the tables accessed by the query are recently analyzed.
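    For example, you can check and refresh the statistics like this (a minimal sketch; RBI is taken from the object owner shown in the plan above, and the DBMS_STATS parameters are left mostly at their defaults):

    -- When were the four tables last analyzed?
    select table_name, last_analyzed, num_rows
      from dba_tables
     where owner = 'RBI'
       and table_name in ('IBS_WORK_BANKDATA', 'IBS_ORG_BANKDATA',
                          'IBS_CURRENCYMASTER', 'IBS_CURRENCYEXCHANGERATE');

    -- Gather fresh optimizer statistics (repeat per table, or use gather_schema_stats).
    begin
      dbms_stats.gather_table_stats(ownname => 'RBI',
                                    tabname => 'IBS_WORK_BANKDATA',
                                    cascade => true);
    end;
    /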
    Thanks,
    Sathis.

  • Query takes a long time to execute in exceptional cases

    Hi Tuning Experts,
    My query runs on the prod DB daily at a fixed scheduled time. Out of seven days, on 1-2 exceptional days it takes 7-10 hours, while on the rest of the days it takes 10-15 minutes.
    I want to do an RCA for it, so please help me decide how to proceed: should I go with
    AUTOTRACE, TKPROF, EXPLAIN PLAN or STATSPACK? Which is the best method to find out the real culprit?
    Regards
    Asif

    To answer your question...
    Running AUTOTRACE and EXPLAIN PLAN against the query in your development environment won't help diagnose the abnormal executions in production. Statspack is a global thing. It's a good thing to run but probably of insufficient granularity here.
    TKPROF is an analysis tool. You need something to analyze. Read this article on Interpreting Wait Events to find out how to set the 10046 event to gather evidence.
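    In practice the 10046 trace can be enabled in the session that runs the job and the resulting file formatted with TKPROF, roughly like this (a sketch; the trace file name below is a placeholder for the file you will find in user_dump_dest):

    -- Tag the trace file so it is easy to find, then enable binds + waits.
    alter session set tracefile_identifier = 'slow_job';
    alter session set events '10046 trace name context forever, level 12';

    -- ... run the scheduled query here ...

    alter session set events '10046 trace name context off';

    Then, on the server:

    tkprof prod_ora_12345_slow_job.trc slow_job.txt sys=no sort=exeela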
    Cheers, APC

  • Stopping a query that takes too long to execute at runtime in Oracle Forms

    Hi,
    In the present application, one of the Oracle Forms screens takes a long time to execute a query. The user wants an option to stop the query in between and browse the result (whatever has been fetched before stopping the query).
    We have tried three approaches:
    1. Set max fetch records at form and block level.
    2. Set max fetch time at form and block level.
    The above two methods did not provide an appropriate solution for us.
    3. The third approach we applied was setting the interaction mode to "NON BLOCKING" at the form level.
    This seemed to work: while the query took a long time to execute, Oracle App Server prompted a message to press Esc to cancel the query and displayed the results fetched up to that point.
    But the drawback is that pressing Esc kills the session itself, which causes the entire application to collapse.
    Please suggest whether there is an alternative approach for this or how to overcome this particular scenario.
    This kind of facility is already present in TOAD and PL/SQL Developer, where we can stop an executing query and browse the results fetched up to that point. If a similar facility is available in Oracle Forms, please suggest.
    Thanks and Regards,
    Suraj
    Edited by: user10673131 on Jun 25, 2009 4:55 AM

    Hello Friend,
    Your query will definitely take more time, or even fail, in PROD because of the way it is written. Here are a few observations that may help:
    1. XLA_AR_INV_AEL_SL_V XLA_AEL_SL_V: never use a view inside such a long query, because a view is just a window onto the records,
    and when it is joined to other tables, all the tables used to create the view also become part of the join condition.
    First of all, please check whether you really need this view. I guess you are using it to check whether the records have been created as journal entries or not?
    Please check the possibility of finding that through other AR tables.
    2. Remove the _ALL tables and instead use the corresponding org-specific views (if you are on 11i) or the synonyms (in R12).
    For example: for ra_cust_trx_types_all use ra_cust_trx_types.
    This will ensure that the query executes only for those ORG_IDs which are assigned to that responsibility.
    3. Check with the DBA whether GATHER SCHEMA STATS has been run at least for the ONT and RA tables (see the sketch after this list).
    You can also check the same using
    SELECT LAST_ANALYZED FROM ALL_TABLES WHERE TABLE_NAME = 'RA_CUSTOMER_TRX_ALL';
    If the tables are not analyzed, the CBO will not be able to tune your query.
    4. Try to remove the DISTINCT keyword. This is the MAJOR reason for this problem.
    5. If it is a report, try to separate the logic into separate queries (using a procedure), populate the whole data set into a custom table, and use that custom table for generating the report.
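    As a reference for point 3, schema statistics can be gathered roughly like this (a sketch only; the schema name and sampling percentage are assumptions, and in an EBS environment the supplied FND_STATS routines or the Gather Schema Statistics concurrent program are normally used instead):

    begin
      -- Gather statistics for the AR schema; repeat for ONT as needed.
      dbms_stats.gather_schema_stats(ownname          => 'AR',
                                     estimate_percent => 10,
                                     cascade          => true);
    end;
    /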
    Thanks,
    Neeraj Shrivastava
    [email protected]
    Edited by: user9352949 on Oct 1, 2010 8:02 PM
    Edited by: user9352949 on Oct 1, 2010 8:03 PM

  • Query is taking a long time to execute after migrating to 10g R2

    Hi
    We recently migrated the database from 9i to 10gR2 (10.2.0.2.0). This query was running in an acceptable time before the upgrade in 9i. Now it is taking a very long time to execute. Can you please let me know what I should do to improve the performance? We are running stats every day.
    Thanks for your help,
    Shree
    ======================================================================================
    SELECT cr.cash_receipt_id
    ,cr.pay_from_customer
    ,cr.receipt_number
    ,cr.receipt_date
    ,cr.amount
    ,cust.account_number
    ,crh.gl_date
    ,cr.set_of_books_id
    ,sum(ra.amount_applied) amount_applied
    FROM AR_CASH_RECEIPTS_ALL cr
    ,AR_RECEIVABLE_APPLICATIONS_ALL ra
    ,hz_cust_accounts cust
    ,AR_CASH_RECEIPT_HISTORY_ALL crh
    ,GL_PERIOD_STATUSES gps
    ,FND_APPLICATION app
    WHERE cr.cash_receipt_id = ra.cash_receipt_id
    AND ra.status = 'UNAPP'
    AND cr.status <> 'REV'
    AND cust.cust_account_id = cr.pay_from_customer
    AND substr(cust.account_number,1,2) <> 'SI' -- Don't allocate Unapplied receipts FOR SI customers
    AND crh.cash_receipt_id = cr.cash_receipt_id
    AND app.application_id = gps.application_id
    AND app.application_short_name = 'AR'
    AND gps.period_name = 'May-07'
    AND crh.gl_date <= gps.end_date
    AND cr.receipt_number not like 'WH%'
    -- AND cust.customer_number = '0000079260001'
    GROUP BY cr.cash_receipt_id
    ,cr.pay_from_customer
    ,cr.receipt_number
    ,cr.receipt_date
    ,cr.amount
    ,cust.account_number
    ,crh.gl_date
    ,cr.set_of_books_id
    HAVING sum(ra.amount_applied) > 0;
    =========================================================================================
    Here is the explain plan in 10g r2 (10.2.0.2.0)
    PLAN_TABLE_OUTPUT
    Plan hash value: 2617075047
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 92340 | 10M| | 513K (1)|
    |* 1 | FILTER | | | | | |
    | 2 | HASH GROUP BY | | 92340 | 10M| 35M| 513K (1)|
    | 3 | TABLE ACCESS BY INDEX ROWID | AR_RECEIVABLE_APPLICATIONS_ALL | 2 | 34 |
    | 4 | NESTED LOOPS | | 184K| 21M| | 510K (1)|
    |* 5 | HASH JOIN | | 99281 | 9M| 3296K| 176K (1)|
    |* 6 | TABLE ACCESS FULL | HZ_CUST_ACCOUNTS | 112K| 1976K| | 22563 (1)|
    |* 7 | HASH JOIN | | 412K| 33M| 25M| 151K (1)|
    | 8 | TABLE ACCESS BY INDEX ROWID | AR_CASH_RECEIPT_HISTORY_ALL | 332K| 4546K|
    | 9 | NESTED LOOPS | | 498K| 19M| | 26891 (1)|
    | 10 | NESTED LOOPS | | 2 | 54 | | 4 (0)|
    | 11 | TABLE ACCESS BY INDEX ROWID| FND_APPLICATION | 1 | 8 | | 1 (0)|
    |* 12 | INDEX UNIQUE SCAN | FND_APPLICATION_U3 | 1 | | | 0 (0)|
    | 13 | TABLE ACCESS BY INDEX ROWID| GL_PERIOD_STATUSES | 2 | 38 | | 3 (0)
    |* 14 | INDEX RANGE SCAN | GL_PERIOD_STATUSES_U1 | 1 | | | 2 (0)|
    |* 15 | INDEX RANGE SCAN | AR_CASH_RECEIPT_HISTORY_N2 | 332K| | | 1011 (1)
    PLAN_TABLE_OUTPUT
    |* 16 | TABLE ACCESS FULL | AR_CASH_RECEIPTS_ALL | 5492K| 235M| | 108K
    |* 17 | INDEX RANGE SCAN | AR_RECEIVABLE_APPLICATIONS_N1 | 4 | | | 2
    Predicate Information (identified by operation id):
    1 - filter(SUM("RA"."AMOUNT_APPLIED")>0)
    5 - access("CUST"."CUST_ACCOUNT_ID"="CR"."PAY_FROM_CUSTOMER")
    6 - filter(SUBSTR("CUST"."ACCOUNT_NUMBER",1,2)<>'SI')
    7 - access("CRH"."CASH_RECEIPT_ID"="CR"."CASH_RECEIPT_ID")
    12 - access("APP"."APPLICATION_SHORT_NAME"='AR')
    14 - access("APP"."APPLICATION_ID"="GPS"."APPLICATION_ID" AND "GPS"."PERIOD_NAME"='May-07')
    filter("GPS"."PERIOD_NAME"='May-07')
    15 - access("CRH"."GL_DATE"<="GPS"."END_DATE")
    16 - filter("CR"."STATUS"<>'REV' AND "CR"."RECEIPT_NUMBER" NOT LIKE 'WH%')
    17 - access("CR"."CASH_RECEIPT_ID"="RA"."CASH_RECEIPT_ID" AND "RA"."STATUS"='UNAPP')
    filter("RA"."CASH_RECEIPT_ID" IS NOT NULL)
    Here is the explain plan in 9i
    Execution Plan
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=445977 Card=78530 Bytes=9423600)
    1 0   FILTER
    2 1     SORT (GROUP BY) (Cost=445977 Card=78530 Bytes=9423600)
    3 2       HASH JOIN (Cost=443717 Card=157060 Bytes=18847200)
    4 3         HASH JOIN (Cost=99563 Card=94747 Bytes=9758941)
    5 4           TABLE ACCESS (FULL) OF 'HZ_CUST_ACCOUNTS' (Cost=12286 Card=110061 Bytes=1981098)
    6 4           HASH JOIN (Cost=86232 Card=674761 Bytes=57354685)
    7 6             TABLE ACCESS (BY INDEX ROWID) OF 'AR_CASH_RECEIPT_HISTORY_ALL' (Cost=17532 Card=542304 Bytes=7592256)
    8 7               NESTED LOOPS (Cost=17536 Card=809791 Bytes=33201431)
    9 8                 NESTED LOOPS (Cost=4 Card=1 Bytes=27)
    10 9                  TABLE ACCESS (BY INDEX ROWID) OF 'FND_APPLICATION' (Cost=1 Card=1 Bytes=8)
    11 10                   INDEX (UNIQUE SCAN) OF 'FND_APPLICATION_U3' (UNIQUE)
    12 9                  TABLE ACCESS (BY INDEX ROWID) OF 'GL_PERIOD_STATUSES' (Cost=3 Card=1 Bytes=19)
    13 12                   INDEX (RANGE SCAN) OF 'GL_PERIOD_STATUSES_U1' (UNIQUE) (Cost=2 Card=1)
    14 8                 INDEX (RANGE SCAN) OF 'AR_CASH_RECEIPT_HISTORY_N2' (NON-UNIQUE) (Cost=1740 Card=542304)
    15 6             TABLE ACCESS (FULL) OF 'AR_CASH_RECEIPTS_ALL' (Cost=60412 Card=8969141 Bytes=394642204)
    16 3         TABLE ACCESS (FULL) OF 'AR_RECEIVABLE_APPLICATIONS_ALL' (Cost=337109 Card=15613237 Bytes=265425029)

    Hi,
    The plan between 9i and 10g is pretty much the same, but the amount of data fetched has considerably increased. I guess the query was performing slowly even in 9i.
    AR_CASH_RECEIPT_HISTORY_ALL now shows 332,000 rows in the 10g plan, whereas the 9i plan showed 17,532.
    AR_CASH_RECEIPT_HISTORY_N2 now shows 332,000 in 10g, whereas in 9i it showed 1,740.
    Try creating some indexes on
    AR_CASH_RECEIPTS_ALL
    hz_cust_accounts

  • How to know if executing a query takes a long time

    Hi,
    I have a question about how to figure out whether the execution of a query will take a long time. I am building a web application in Java. The back-end database is Oracle. If a query is too large, I want to show the user an error message so that the user can make a more specific query. But how can I tell whether the query execution will take a long time? Thanks.

    The following link may be of help.
    http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96536/ch3175.htm#1123208

  • Taking a long time to execute views

    Hi All,
    my query is taking a long time to execute (I am using standard views in my query)
    XLA_INV_AEL_GL_V, XLA_WIP_AEL_GL_V -- these standard views themselves take a long time to execute, but I need the info from these views
    WHERE gjh.je_batch_id = gjb.je_batch_id AND
    gjh.je_header_id = gjl.je_header_id AND
    gjh.je_header_id = xlawip.je_header_id AND
    gjl.je_header_id = xlawip.je_header_id AND
    gjl.je_line_num = xlawip.je_line_num AND
    gcc.code_combination_id = gjl.code_combination_id AND
    gjl.code_combination_id = xlawip.code_combination_id AND
    gjb.set_of_books_id = xlawip.set_of_books_id AND
    gjh.je_source = 'Inventory' AND
    gjh.je_category = 'WIP' AND
    gp.period_set_name = 'Accounting' AND
    gp.period_name = gjl.period_name AND
    gp.period_name = gjh.period_name AND
    gp.start_date +1 between to_date(startdate,'DD-MON-YY') AND
    to_date(enddate,'DD-MON-YY') AND
    gjh.status =nvl(lstatus,gjh.status)
    Could anyone help me make it execute faster?
    Thanks
    Madhu

    [url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]When your query takes too long...

  • Query taking a long time

    Hi
    I have a query which is a 3-table join but takes a long time to execute. I checked the plan table; it shows that one of the tables gets FULL ACCESS.
    I have 2 clarifications:
    1. Will the status check for NULL prevent the use of an index?
    2. Are CASE statements recommended in queries?
    Query
    Select .........
    FROM CLIENT LEFT OUTER JOIN INTERNET_LOGIN ON INTERNET_LOGIN.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID,
    POLI_MOT
    WHERE
    POLI_MOT.NUM_CLIENT_ID=CLIENT.NUM_CLIENT_ID
    AND
    (POLI_MOT.CHR_CANCEL_STATUS='N'
    OR
    POLI_MOT.CHR_CANCEL_STATUS IS NULL)
    AND
    CLIENT.NUM_CONTACT_TYPE_ID IN (1,3)
    AND
    (NVL(POLI_MOT.VCH_NEW_IC_NO,'A') =
    CASE WHEN (NVL(null,NULL) IS NULL) THEN
    NVL(POLI_MOT.VCH_NEW_IC_NO,'A')
    ELSE
    NVL(null,NULL)
    END
    OR
    POLI_MOT.VCH_OLD_IC_NO =
    CASE WHEN nvl(null,null) IS NULL THEN
    POLI_MOT.VCH_OLD_IC_NO
    ELSE
    NVL(null,NULL)
    END )
    AND POLI_MOT.VCH_POLICY_NO =
    CASE WHEN UPPER(nvl(NULL,null)) IS NULL THEN
    POLI_MOT.VCH_POLICY_NO
    ELSE
    NVL(NULL,NULL)
    END
    AND POLI_MOT.VCH_VEHICLE_NO =
    CASE WHEN UPPER(NVL('123',NULL)) IS NULL THEN
    POLI_MOT.VCH_VEHICLE_NO
    ELSE
    NVL('123',NULL)
    END

    Hi,
    There is nothing wrong in having a full table access. When you do the explain plan, please check which table costs you the most and try to work on that table.
    To tune the performance of your query you can try either indexing or parallel access.
    The syntax for the parallel hint is
    /*+ PARALLEL(TBL_NM, 100) */ where the second argument is the degree of parallelism.
    For the index hint, use the name of the index on the table you want the query to use.
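    As an illustration, hints go immediately after the SELECT keyword (a sketch based on the tables in your query; the index name used here is hypothetical, and the degree of parallelism is just an example):

    select /*+ parallel(pm, 4) index(c client_contact_type_idx) */
           c.num_client_id, pm.vch_policy_no
      from client c, poli_mot pm
     where pm.num_client_id = c.num_client_id
       and c.num_contact_type_id in (1, 3);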
    regards
    Bharath

  • Rank Function taking a long time to execute in SAP HANA

    Hi All,
    I have a couple of reports with the rank function which are timing out or taking a really long time to execute. Is there any way to get the result in less time when rank functions are involved?
    The following is a sample of how the query looks:
    SQL 1:
    select      a.column1,
                    b.column1,
                    rank () over(partition by a.column1 order by sum(b.column2) asc)
    from         "_SYS_BIC"."Analyticview1"         b
                    join          "Table1"            a
                      on          (a.column2 = b.column3)
    group by  a.column1,
    b.column1;
    SQL 2:
    select    a.column1,
                    b.column1,
                    rank () over( order by min(b.column1) asc) WJXBFS1
    from         "_SYS_BIC"."Analytic view2"         b
                    cross join                "Table 2"               a
    where      (a.column2  like '%a%'
    and b.column1  between 100 and 200)
    group by  a.column1,
                    b.column1
    When I visualize the execution plan, the rank function is the one taking up the longer time frame. So I executed the same SQL without the rank(), partition, or order by (only with sum() in SQL 1 and min() in SQL 2), and even that took around an hour to get the result.
    1. Does anyone have any idea how to make these queries execute faster?
    2. Does the latency have anything to do with the rank function, or could it be the size of the result set?
    3. Is there any workaround to implement these rank functions/partitions inside the analytic view itself? If yes, will this make it return the result faster?
    Thank you for your help!!
    -Gayathri

    Krishna,
    I tried both of them, graphical and CE function.
    Both also take a long time to execute.
    The graphical view gave me the following error after 2 hours and 36 minutes:
    Could not execute 'SELECT ORDER_ID,ITEM_ID,RANK from "_SYS_BIC"."EMMAPERF/ORDER_FACT_HANA_CV" group by ...' in 2:36:23.411 hours .
    SAP DBTech JDBC: [2048]: column store error: search table error:  [2620] executor: plan operation failed
    CE function - I aborted it after 40 minutes.
    Do you know the syntax to declare a local variable for use in a CE function?

  • Taking a long time to execute

    hi all,
    The table has more than 1.6 million records. The following query takes a long time to execute:
    select DISTINCT
    invoice_id,
    invoice_number,
    invoice_dis_id,
    dis_line,
    batch_id,
    invoice_date,
    cancelled_date,
    accounting_date,
    invoice_desc,
    dist_desc,
    invoice_id || '!' || invoice_dis_id || '!' || batch_id as unique_string
    FROM test.ORA_test_INVOICE_T
    I did the following workarounds to try to improve performance; it now retrieves 30 rows per second:
    analyzed the table, indexes and schema;
    tried to use hints.
    Can someone propose a solution to improve the performance? I am using Oracle 11.2.
    Thanks,
    krish

    As fifranken pointed out, you have no WHERE clause and therefore are probably doing a full table scan to get all of the rows.
    You are selecting too many columns to make it practical to create an index that satisfies the query instead of the table (the idea being that more index "rows" can be read per block than table rows, requiring fewer read operations), unless you have a LOT of columns in the table.
    If you have the parallel query option licence you can try PQO, but I'm doubtful it will improve performance for 1.6M rows on 11g; if you have the licence you can try it. You did say "more than 1.6M rows", which could be anything. 1.6M rows is a lot of data, but 11g should handle it well. DB_FILE_MULTIBLOCK_READ_COUNT will affect how efficient full table scans are, and tablespaces can be configured with larger block sizes to make full table scans more efficient.
    How long is your query taking? Are you concerned about the actual execution time or the time it takes to display the results on screen? Is the time being spent in SQL*Plus or PL/SQL, a GUI tool like SQL Developer, or something else?
    You will need to get execution statistics (AUTOTRACE in SQL*Plus and SQL Developer is an easy way to do this) to see where the slowness is being caused. An execution plan to confirm query execution will also be helpful.
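    For reference, AUTOTRACE can be switched on in SQL*Plus like this before running the statement (a minimal sketch; it requires the PLUSTRACE role or equivalent privileges on the V$ views):

    -- Show the plan and run-time statistics without printing all 1.6M rows.
    SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
    SET TIMING ON
    SELECT DISTINCT invoice_id, invoice_number, invoice_dis_id, batch_id
      FROM test.ORA_test_INVOICE_T;
    SET AUTOTRACE OFF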
    Edited by: riedelme on Dec 28, 2009 7:12 AM

  • select count(*) statement taking a long time to execute

    Hi all,
    My table has 40 columns and it doesn't have a primary key column. It contains more than 5M records, and it is taking a long time to execute simple SQL statements.
    For example, select count(*) takes 1 min 30 sec, but if I use select count(index_column) it finishes within 3 s. I did the following workarounds:
    analyzed the table;
    created the required indexes.
    I am still getting the same performance issue. Please help me to solve this.
    Thanks

    BlueDiamond wrote:
    COUNT(*) counts the number of rows produced by the query, whereas COUNT(1) counts the number of 1 values.
    Would you care to show details that prove that?
    In fact, if you use count(1) the optimizer actually rewrites it internally as count(*).
    Count(*) and count(1) have identical execution plans.
    Re: Count(*)/Count(1)
    http://asktom.oracle.com/pls/asktom/f?p=100:11:6346014113972638::::P11_QUESTION_ID:1156159920245
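    You can check this yourself by comparing the two plans (a quick sketch; DUAL keeps the example self-contained, but any table shows the same rewrite):

    explain plan for select count(*) from dual;
    select * from table(dbms_xplan.display);

    explain plan for select count(1) from dual;
    select * from table(dbms_xplan.display);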

  • Why does the update query take a long time?

    Hello everyone;
    My update query takes a long time. The emp table (self testing) has just 2 records.
    When I issue an update query, it takes a long time:
    SQL> select  *  from  emp;
      EID  ENAME  EQUAL  ESALARY  ECITY      EPERK  ECONTACT_NO
        2  rose   mca      22000  calacutta         9999999999
        1  sona   msc      17280  pune              9999999999
    Elapsed: 00:00:00.05
    SQL> update emp set esalary=12000 where eid='1';
    update emp set esalary=12000 where eid='1'
    * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:01:11.72
    SQL> update emp set esalary=15000;
    update emp set esalary=15000
      * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:02:22.27

    Hi BCV;
    Thanks for your reply, but it doesn't provide the output; please see this:
    SQL> update emp set esalary=15000;
    ........... Lock already occurred.
    >> trying to trace >>
    SQL> select HOLDING_SESSION from dba_blockers;
    HOLDING_SESSION
                144
    SQL> select sid , username, event from v$session where username='HR';
    SID USERNAME     EVENT
       144   HR    SQL*Net message from client
       151   HR    enq: TX - row lock contention
       159   HR    SQL*Net message from client
    >> It doesn't provide clear output about the transaction lock >>
    SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
      2  BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
      3  request
      4  FROM v$lock, v$session
      5  WHERE v$lock.TYPE = 'TX'
      6  AND v$lock.SID = v$session.SID
      7  AND v$session.username = USER;
      no rows selected
    SQL> select MACHINE from v$session where sid = :sid;
    SP2-0552: Bind variable "SID" not declared.
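    Since dba_blockers shows session 144 holding the lock, one way forward is to look at that session and, if it is genuinely abandoned, have a DBA clear it (a sketch; the serial# in the kill command is a placeholder for the value returned by the first query, and the blocking_session column is available from 10g onwards):

    -- Who is blocking whom?
    select sid, serial#, username, status, blocking_session
      from v$session
     where sid in (144, 151);

    -- Only if the holder is genuinely abandoned:
    -- 12345 is a placeholder for the serial# returned above.
    alter system kill session '144,12345';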

  • Select query running for a long time

    Hi,
    DB version: 10g
    Platform: SunOS
    My select SQL query has been running for a long time (more than 20 hrs) and is still running.
    Is there any way to find the approximate completion time of the SQL query (the time remaining)?
    Also, is there any possibility of increasing the speed of the SQL query that is already running, for example by adding hints?
    Please help me with this.
    Thanks

    Hi Sathish, thanks for your reply.
    I have already checked V$SESSION_LONGOPS, but it's showing TIME_REMAINING --> 0:
    select TOTALWORK,SOFAR,START_TIME,TIME_REMAINING from V$SESSION_LONGOPS where SID='10'
    TOTALWORK      SOFAR START_TIME      TIME_REMAINING
      1099759    1099759 27-JAN-11                    0
    Any idea?
    Thanks.
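    A slightly fuller query against the same view can help: finished phases have SOFAR = TOTALWORK, so filter them out and look only at what is still moving (a sketch, assuming SID 10 is still the session running the query):

    select opname, target, sofar, totalwork,
           round(sofar / totalwork * 100, 2) pct_done,
           elapsed_seconds, time_remaining
      from v$session_longops
     where sid = 10
       and totalwork > 0
       and sofar < totalwork;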

  • Query taking more than 24 hours to extract data

    Hi ,
    The query is taking a long time (more than 24 hours) to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please suggest.
    SQL> explain plan for
    select a.account_id, round(a.account_balance,2) account_balance,
    nvl(ah.invoice_id,ah.adjustment_id) transaction_id,
    to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
    to_char(nvl(i.payment_due_date,
    to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
    ah.current_balance-ah.previous_balance amount,
    decode(ah.invoice_id,null,'A','I') transaction_type
    from account a, account_history ah, invoice i
    where a.account_id=ah.account_id
    and a.account_type_id=1000002
    and round(a.account_balance,2) > 0
    and (ah.invoice_id is not null or ah.adjustment_id is not null)
    and ah.CURRENT_BALANCE > ah.previous_balance
    and ah.invoice_id=i.invoice_id(+)
    AND a.account_balance > 0
    order by a.account_id,ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
    2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
    order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so that we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

  • Is an index range scan the reason for the query running a long time?

    I would like to know whether an index range scan is the reason for the query running a long time. Below is the explain plan. If so, how can I optimise it? Please help.
    Operation     Object     COST     CARDINALITY     BYTES
    SELECT STATEMENT ()          413     1000     265000
    COUNT (STOPKEY)                    
    FILTER ()                    
    TABLE ACCESS (BY INDEX ROWID)     ORDERS     413     58720     15560800
    INDEX (RANGE SCAN)     IDX_SERV_PROV_ID     13     411709     
    TABLE ACCESS (BY INDEX ROWID)     ADDRESSES     2     1     14
    INDEX (UNIQUE SCAN)     SYS_C004605     1     1     
    TABLE ACCESS (BY INDEX ROWID)     ADDRESSES     2     1     14
    INDEX (UNIQUE SCAN)     SYS_C004605     1     1     
    TABLE ACCESS (BY INDEX ROWID)     ADDRESSES     2     1     14
    INDEX (UNIQUE SCAN)     SYS_C004605     1     1

    The index range scan means that the optimiser has determined that it is better to read the index rather than perform a full table scan. So, in answer to your question: quite possibly, but the alternative might take even longer!
    The best thing to do is to review your query and check that you need every table included in the query and that you are accessing the tables via the best route. For example, if you can access a table via a primary key index, that would be better than using a non-unique index. But the best way of reducing the time the query takes to run is to give it fewer tables (and indexes) to read.
    John Seaman
    http://www.asktheoracle.net
