Query taking much time in Oracle 9i

Hi,
**How can we tune a SQL query in Oracle 9i?**
The select query takes more than 1 hour and 30 minutes to return its result.
Because of this, we created a materialized view on the select query and also submitted a job in dba_jobs to refresh the materialized view daily.
When we retrieve the data from the materialized view, we get the result very quickly.
But the job we scheduled in dba_jobs takes as long to complete as the original query did.
Since the job takes so much time in the test database, we feel it may cause load if we move the same scripts to the production environment.
Please suggest how to resolve the issue and also how to tune the SQL.
With regards,
Srinivas
Edited by: Srinivas.. on Dec 17, 2009 6:29 AM

Hi Srinivas,
Please follow this search and see if it is helpful.
Regards,
Helios
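A note on why the refresh job takes as long as the query itself: a complete refresh re-executes the materialized view's defining query every time it runs. If the view qualifies, a fast refresh driven by materialized view logs applies only the changed rows instead. A minimal sketch, assuming a hypothetical base table orders with primary key order_id (fast refresh has a number of restrictions, so check your view against them):

-- The MV log records changed rows so the view can be refreshed
-- incrementally instead of by re-running the defining query.
-- Table and column names here are hypothetical.
CREATE MATERIALIZED VIEW LOG ON orders;   -- defaults to WITH PRIMARY KEY

CREATE MATERIALIZED VIEW mv_orders
  REFRESH FAST ON DEMAND
AS
SELECT order_id, customer_id, amount
  FROM orders;

-- 'F' requests a fast (incremental) refresh:
BEGIN
  DBMS_MVIEW.REFRESH('MV_ORDERS', 'F');
END;
/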

Similar Messages

  • Query taking much time.

    Hi All,
    I have one query which is taking much time in the dev environment, where the data size is very small, and I am planning to run this query against the production database, where the database is huge. Please let me know how I can optimize this query.
    select count(*) from (
    select /*+ full(tls) full(tlo) parallel(tls, 2) parallel(tlo, 2) */
           tls.siebel_ba, tls.msisdn
      from TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
     where tls.siebel_ba = tlo.siebel_ba (+)
       and tls.msisdn = tlo.msisdn (+)
       and tlo.siebel_ba is null
       and tlo.msisdn is null
    union
    select /*+ full(tls) full(tlo) parallel(tls, 2) parallel(tlo, 2) */
           tlo.siebel_ba, tlo.msisdn
      from TDB_LIBREP_SIEBEL tls, TDB_LIBREP_ONDB tlo
     where tls.siebel_ba (+) = tlo.siebel_ba
       and tls.msisdn (+) = tlo.msisdn
       and tls.siebel_ba is null
       and tls.msisdn is null
    );
    The explain plan of the above query is:
    | Id  | Operation                |  Name              | Rows  | Bytes | Cost  |  TQ    |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT         |                    |     1 |       |    14 |        |      |            |
    |   1 |  SORT AGGREGATE          |                    |     1 |       |       |        |      |            |
    |   2 |   SORT AGGREGATE         |                    |     1 |       |       | 41,04  | P->S | QC (RAND)  |
    |   3 |    VIEW                  |                    |   164 |       |    14 | 41,04  | PCWP |            |
    |   4 |     SORT UNIQUE          |                    |   164 | 14104 |    14 | 41,04  | PCWP |            |
    |   5 |      UNION-ALL           |                    |       |       |       | 41,03  | P->P | HASH       |
    |*  6 |       FILTER             |                    |       |       |       | 41,03  | PCWC |            |
    |*  7 |        HASH JOIN OUTER   |                    |       |       |       | 41,03  | PCWP |            |
    |   8 |         TABLE ACCESS FULL| TDB_LIBREP_SIEBEL  |    82 |  3526 |     1 | 41,03  | PCWP |            |
    |   9 |         TABLE ACCESS FULL| TDB_LIBREP_ONDB    |    82 |  3526 |     2 | 41,00  | S->P | BROADCAST  |
    |* 10 |       FILTER             |                    |       |       |       | 41,03  | PCWC |            |
    |* 11 |        HASH JOIN OUTER   |                    |       |       |       | 41,03  | PCWP |            |
    |  12 |         TABLE ACCESS FULL| TDB_LIBREP_ONDB    |    82 |  3526 |     2 | 41,01  | S->P | HASH       |
    |  13 |         TABLE ACCESS FULL| TDB_LIBREP_SIEBEL  |    82 |  3526 |     1 | 41,02  | P->P | HASH       |
    Predicate Information (identified by operation id):
    6 - filter("TLO"."SIEBEL_BA" IS NULL AND "TLO"."MSISDN" IS NULL)
    7 - access("TLS"."SIEBEL_BA"="TLO"."SIEBEL_BA"(+) AND "TLS"."MSISDN"="TLO"."MSISDN"(+))
    10 - filter("TLS"."SIEBEL_BA" IS NULL AND "TLS"."MSISDN" IS NULL)
    11 - access("TLS"."SIEBEL_BA"(+)="TLO"."SIEBEL_BA" AND "TLS"."MSISDN"(+)="TLO"."MSISDN")

    I dunno, it looks like you are getting all the rows that have no match with an outer join, so won't that decide to full scan anyway? Plus the union means it will do the work twice and then a distinct to get rid of duplicates - see how it does a UNION-ALL and then a SORT UNIQUE. I have the feeling there might be a cleverer way to do what you want, so maybe you should state exactly what you want in English.
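    One possible rewrite (a sketch only): the UNION of the two anti-joins is the symmetric difference of the two tables on (siebel_ba, msisdn), which a single FULL OUTER JOIN can compute in one pass. This assumes siebel_ba is never null in either base table; note also that UNION removes duplicates within each branch, which this version does not:
    -- Rows present in only one of the two tables; verify the plan and the
    -- results against the original query on your own data.
    select nvl(tls.siebel_ba, tlo.siebel_ba) siebel_ba,
           nvl(tls.msisdn, tlo.msisdn) msisdn
      from TDB_LIBREP_SIEBEL tls
      full outer join TDB_LIBREP_ONDB tlo
        on tls.siebel_ba = tlo.siebel_ba
       and tls.msisdn = tlo.msisdn
     where tls.siebel_ba is null
        or tlo.siebel_ba is null;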

  • Select query taking much time

    Dear all,
    I am fetching data from pool table A005. The select query is mentioned below.
    select * from a005 into table i_a005 for all entries in it_table
                 where kappl = 'V'
                 and   kschl in s_kschl
                 and   vkorg in s_vkorg
                 and   vtweg in s_vtgew
                 and   matnr in s_matnr
                 and   knumh = it_table-knumh.
    Here every field is a primary key field except KNUMH, which is compared against table it_table. Because of this field the query is taking too much time, as KNUMH is not a primary key field. And A005 is a pool table, so I cannot create an index for it. If there is an alternate solution, please let me know.
    Thank You ,
    Also, in the technical settings the table is marked as fully buffered and the size category is 0, but there are around 9,000,000 records. Is that the issue, or what? Can somebody give a genuine reason, or an improvement to my select query?
    Edited by: TVC6784 on Jun 30, 2011 3:31 PM

    TVC6784 wrote:
    Hi Yuri,
    >
    > Thanks for your reply... I will check as per your comment...
    > But if I remove the field KNUMH from the selection condition, and also the FOR ALL ENTRIES IN it_itab, then the data is fetched very fast, as KNUMH is not a primary key field.
    > The example is below:
    >
    > select * from a005 into table i_a005
    > where kappl = 'V'
    > and kschl in s_kschl
    > and vkorg in s_vkorg
    > and vtweg in s_vtgew
    > and matnr in s_matnr.
    >
    > Can you comment anything about it?
    >
    > And can you please say how I can check its size, as you mention it is 2-3 MB more?
    >
    > Edited by: TVC6784 on Jun 30, 2011 7:37 PM
    I cannot see the trace and other information about the table, so I cannot judge why the select without KNUMH is faster.
    Basically, if the table is buffered and its contents are in the SAP application server memory, the access should be really fast. It does not really matter whether it is with KNUMH or without.
    I would really like to see at least an ST05 trace of the report that is doing this select. That would clarify many things.
    You can check the size by multiplying the number of entries in the A005 table by 138 bytes. This is (in my test system) the ABAP width of the structure.
    If you have 9,000,000 records in A005, that works out to 9,000,000 × 138 bytes ≈ 1.24 GB in the buffer (which is a clear sign to unbuffer).

  • ABAP QUERY taking much time after ERP Upgrade from 4.6 to 6.0

    Hi All,
    I have an ABAP Query which uses the InfoSet INVOICE_INBOUND and the user group InvoiceVerif. The InfoSet uses the tables RBKP and RSEG, connected by a JOIN on the BELNR and GJAHR fields.
    The query was working fine in version 4.6C. Now the system has been upgraded to version 6.0.
    Now it takes so much time that the processing never completes. Do we have to make any changes to existing queries for an upgrade?
    Thanks a lot in advance.
    Gautham.

    Did you regenerate the query, the InfoSet and the program before transporting them to ECC 6.0?

  • Query taking much time to execute

    The following query takes more than 4 hours to execute.
    select l_extendedprice, count(l_extendedprice) from dbo.lineitem group by l_extendedprice
    Cardinality of the table: 6,001,215 (> 6 million)
    There is an index on l_extendedprice.
    ReadAheadLobThreshold = 500
    Database version 7.7.06.09
    I need to optimize this query. Kindly suggest a way out.
    Thanks
    Priyank

    Data Cache: 80296 KB
    OK, that's 8 gigs for the cache.
    The index takes 16,335 pages × 8 KB = 130,680 KB ≈ 128 MB.
    It fits completely into RAM - the same is true for the additional temp result set.
    So once the index has been read into the cache, I assume the query is a lot quicker than 4 hours.
    6 data volumes:
    the first 3 of size 51,200 KB,
    the other 3 of size 1,048,576 KB.
    Well, that's not the smartest thing to do.
    That way the larger volumes will get double the I/O requests, which eventually saturates the I/O channel.
    Yes, looking at the cardinality of the table, lots of I/O is required, but still, more than 4 hours is quite unrealistic. Some tuning is required.
    We're not talking about cardinality here - you want all the data.
    We talk pages, then.
    And as we've seen, the table is not touched for this query.
    Instead the smaller index is read completely in an index-only strategy.
    Loading 128 MB from disk, creating temporary data of the same size, and spilling out the information (thereby reading the 128 MB of temp data again) in 4 hours adds up to ca. 384 MB / 4 hours = 96 MB/hour = 1.6 MB/minute.
    Not too good, really - I suspect the I/O system here is not the quickest one.
    You may want to activate time measurement and set the DB Analyzer interval to 120 seconds.
    Then activate the Command and Resource Monitor and look for statements taking longer than 10 minutes.
    Now run your statement again, let us know the information from the Command/Resource Monitor, and check for warnings in the DB Analyzer output.
    regards,
    Lars

  • Request for the reasons of Query taking much time

    Hi,
    I have one SQL query. When I execute it from TOAD it takes some 120 seconds, but when I execute the same query from Forms (the front end) it takes nearly 5 minutes. I don't understand where the problem is. Can anyone please help me with the reasons and solutions (steps to overcome this)?
    Regards,
    Rao.

    Can you do an explain plan of the query in Toad?
    And a sql_trace of the form when it executes the query?
    If you have DBA rights you can enable sql_trace in the Forms session by using:
    dbms_system.set_sql_trace_in_session(..sid.., ..serial#.., true);
    where sid and serial# are the values of the Forms session (they can be found in v$session).
    Toon
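    For reference, a minimal sketch of that approach (the MODULE filter is an assumption - identify the Forms session by whatever criteria fit your environment):
    -- Locate the Forms session, then switch SQL trace on for it (needs DBA rights).
    select sid, serial#, username, module
      from v$session
     where upper(module) like 'FRM%';   -- hypothetical filter for a Forms session

    begin
      dbms_system.set_sql_trace_in_session(123, 4567, true);   -- sid/serial# from the query above
    end;
    /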

  • Request for query taking much time

    Hi,
    I have one SQL query. When I execute it from TOAD it takes some 120 seconds, but when I execute the same query from Forms (the front end) it takes nearly 5 minutes. I don't understand where the problem is. Can anyone please help me with the reasons and solutions (steps to overcome this)?
    Regards,
    Rao.

    Hi,
    There are many factors involved in this.
    Toad talks to the database directly, but with Forms the request goes to the application server, the server sends the query to the database, gets the rows, and passes the response back to the client.
    So factors such as network latency will also play a part in the performance.
    Take a look at the performance tuning guides:
    http://www.oracle.com/technology/products/forms/pdf/275191.pdf
    http://download.oracle.com/docs/cd/B25527_01/doc/frs/forms/B14032_02/tuning.htm
    -Arun

  • Discoverer report is taking much time to open

    Hi
    All the Discoverer reports are taking much time to open; even the query in an LOV takes 20-25 minutes. We have restarted the services, but with no result.
    Please suggest what can be done; my application is on 12.0.6.
    Regards

    This topic was discussed many times in the forum before; please see the old threads for details and for the docs you need to refer to -- https://forums.oracle.com/forums/search.jspa?threadID=&q=Discoverer+AND+Slow&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • Adding a column is taking much time. How to avoid it?

    ALTER TABLE CONTACT_DETAIL
    ADD (ISIMDSCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL
    ,ISREACHCONTACT_F NUMBER(1) DEFAULT 0 NOT NULL);
    Is there any way to speed up the execution of this statement?
    More than 24 hours have passed since the script started running.
    I do not know why it is taking so much time.
    The size of the table is 30 MB.

    To add a column with a default and NOT NULL, the row directory of every record must be rewritten.
    Obviously this will take time and produce redo.
    Whenever something is slow, the first question you need to answer is:
    'What is it waiting for?' You can find out by investigating the various v$ views.
    Also, after more than 200 'I can not be bothered to do any research on my own' questions, you should know you don't post here without a four-digit version number and a platform, as volunteers aren't mind readers.
    If you want to continue to withhold information, please consider NOT posting here.
    Sybrand Bakker
    Senior Oracle DBA
    Experts: those who did read the documentation and can be bothered to investigate their own problems.
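    As a concrete illustration of "What is it waiting for?", a quick check against the standard v$session_wait view might look like this (a sketch; :sid is whatever session is running the ALTER TABLE):
    -- Current wait event of the session running the slow DDL:
    select sid, event, state, seconds_in_wait
      from v$session_wait
     where sid = :sid;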

  • Query taking a long time (more than 24 hours) for extracting the data

    Hi,
    The query has been taking a long time - more than 24 hours - to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for FULL TABLE SCANs. Please suggest a way forward.
    SQL> explain plan for
    select a.account_id, round(a.account_balance,2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date,
                       to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
     where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and round(a.account_balance,2) > 0
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.current_balance > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       and a.account_balance > 0
     order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id  | Operation            | Name            | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT     |                 |   544K|    30M|       |   693K (20)|
    |   1 |  SORT ORDER BY       |                 |   544K|    30M|    75M|   693K (20)|
    |*  2 |   HASH JOIN          |                 |   544K|    30M|       |   689K (20)|
    |*  3 |    TABLE ACCESS FULL | ACCOUNT         | 20080 |   294K|       |  6220 (18) |
    |*  4 |    HASH JOIN OUTER   |                 |   131M|  5532M|  5155M|   678K (20)|
    |*  5 |     TABLE ACCESS FULL| ACCOUNT_HISTORY |   131M|  3646M|       |   197K (25)|
    |   6 |     TABLE ACCESS FULL| INVOICE         |   262M|  3758M|       |   306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index details:
    SQL> select index_owner, index_name, column_name, table_name
         from dba_ind_columns
         where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY')
         order by 4;
    INDEX_OWNER   INDEX_NAME                    COLUMN_NAME           TABLE_NAME
    OPS$SVM_SRV4  P_ACCOUNT                     ACCOUNT_ID            ACCOUNT
    OPS$SVM_SRV4  U_ACCOUNT_NAME                ACCOUNT_NAME          ACCOUNT
    OPS$SVM_SRV4  U_ACCOUNT                     CUSTOMER_NODE_ID      ACCOUNT
    OPS$SVM_SRV4  U_ACCOUNT                     ACCOUNT_TYPE_ID       ACCOUNT
    OPS$SVM_SRV4  I_ACCOUNT_ACCOUNT_TYPE        ACCOUNT_TYPE_ID       ACCOUNT
    OPS$SVM_SRV4  I_ACCOUNT_INVOICE             INVOICE_ID            ACCOUNT
    OPS$SVM_SRV4  I_ACCOUNT_PREVIOUS_INVOICE    PREVIOUS_INVOICE_ID   ACCOUNT
    OPS$SVM_SRV4  U_ACCOUNT_NAME_ID             ACCOUNT_NAME          ACCOUNT
    OPS$SVM_SRV4  U_ACCOUNT_NAME_ID             ACCOUNT_ID            ACCOUNT
    OPS$SVM_SRV4  I_LAST_MODIFIED_ACCOUNT       LAST_MODIFIED         ACCOUNT
    OPS$SVM_SRV4  I_ACCOUNT_INVOICE_ACCOUNT     INVOICE_ACCOUNT_ID    ACCOUNT
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_ACCOUNT     ACCOUNT_ID            ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_ACCOUNT     SEQNR                 ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_INVOICE     INVOICE_ID            ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_ADINV       INVOICE_ID            ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_CIA         CURRENT_BALANCE       ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_CIA         INVOICE_ID            ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_CIA         ADJUSTMENT_ID         ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_CIA         ACCOUNT_ID            ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_LMOD        LAST_MODIFIED         ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_ADINV       ADJUSTMENT_ID         ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_PAYMENT     PAYMENT_ID            ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_ADJUSTMENT  ADJUSTMENT_ID         ACCOUNT_HISTORY
    OPS$SVM_SRV4  I_ACCOUNT_HISTORY_APPLIED_DT  APPLIED_DATE          ACCOUNT_HISTORY
    OPS$SVM_SRV4  P_INVOICE                     INVOICE_ID            INVOICE
    OPS$SVM_SRV4  U_INVOICE                     CUSTOMER_INVOICE_STR  INVOICE
    OPS$SVM_SRV4  I_LAST_MODIFIED_INVOICE       LAST_MODIFIED         INVOICE
    OPS$SVM_SRV4  U_INVOICE_ACCOUNT             ACCOUNT_ID            INVOICE
    OPS$SVM_SRV4  U_INVOICE_ACCOUNT             BILL_RUN_ID           INVOICE
    OPS$SVM_SRV4  I_INVOICE_BILL_RUN            BILL_RUN_ID           INVOICE
    OPS$SVM_SRV4  I_INVOICE_INVOICE_TYPE        INVOICE_TYPE_ID       INVOICE
    OPS$SVM_SRV4  I_INVOICE_CUSTOMER_NODE       CUSTOMER_NODE_ID      INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
     order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY.
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so that we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;

  • LINQ query taking long time

    The following query I wrote returns 1,400 records, and the lines below are taking much time.
    1.5 seconds are taken by:
        count = quer != null ? quer.Count() : 0;
    and 2 seconds are taken by:
        candidateList = quer.Skip((pageIndex - 1) * pageSize).Take(pageSize).ToList();
    Please suggest.

    Hi Jon,
    In SharePoint, I suggest you use a CAML query. If you use LINQ, the performance isn't guaranteed.
    For the first query, you can use SPQuery.Count to achieve it; for the second query, you can build a proper CAML query to filter the data.
    Here are some detailed articles for your reference:
    SPList.GetItems method (SPQuery)
    SPQuery.Query Property
    Zhengyu Guo
    TechNet Community Support

  • Taking much time when trying to drill down on a characteristic in the BW report

    Hi All,
    When we execute the BW report it takes nearly 1 to 2 minutes, but when we try to drill down on a characteristic it takes much time, nearly 30 minutes to 1 hour, and throws the error message:
    "An error has occurred during loading. Please look in the upper frame for further information."
    I have executed this query in RSRT and checked the query properties.
    The query brings the data directly from aggregates, but some characteristics are not available in the aggregates.
    So after execution, when we try to drill down, it takes much time for the characteristics which are not available in the aggregates. For the characteristics which are available in the aggregates it takes only 2 to 3 minutes.
    How can we drill down on the characteristics which are not available in the aggregates without it taking much time or throwing the error?
    Could you kindly give any solution for this.
    Thanks & Regards,
    Raju. E

    Hi,
    The only solution is to include all the characteristics used in the report in the aggregates; otherwise this is the issue you will face.
    Just create a proposal for aggregates before creating any new ones, as it will give you an idea of which are most used.
    Also, you should make sure that all the navigation characteristics are part of the aggregates.
    Thanks
    Ajeet

  • Database table AUFM access is taking much time even though a secondary index was created

    Hi Friends,
    There is a report for goods movements related to service orders + account indicator.
    We have two testing systems (EBQ for the developer and PEQ from the client side).
    The EBQ system gets a replica of PEQ every month.
    This report is not taking much time in EBQ, but it is taking much time in PEQ. For the selection criteria I have given, both systems have the same data (getting the same output).
    The report has the following fields on the selection screen:
    A_MJAHR     Material Doc. Year (Mandatory)
    S_BLDAT     Document Date (Optional)
    S_BUDAT     Posting Date (Optional)
    S_LGORT     Storage Location (Optional)
    S_MATNR     Material (Optional)
    S_MBLNR     Material Document (Optional)
    S_WERKS     Plant (Optional)
    The client is not agreeing to make Material Document mandatory.
    The main (first) table hit is on the AUFM table. As there are non-key fields in the WHERE condition as well, we have created a secondary index for the AUFM table on the following fields:
    BLDAT
    BUDAT
    MATNR
    WERKS
    LGORT
    Even then, in the PEQ system the report takes a very long time, sometimes not even getting to the ALV output.
    What can be done to get the report executed very fast?
    <removed by moderator>
    The relevant part of the report source code is as below:
    <long code part removed by moderator>
    Thanks and Regards,
    Rama chary.P
    Moderator message: please stay within the 2500 character limit to preserve formatting, only post relevant portions of the code, also please read the following sticky thread before posting.
    Please Read before Posting in the Performance and Tuning Forum
    locked by: Thomas Zloch on Sep 15, 2010 11:40 AM


  • LOV is slow; after selecting a value it takes much time to default

    Hi,
    I have a dependent LOV. The master LOV executes fine and populates its field quickly. But the child LOV is very slow; after selecting a value, it takes much time to default.
    Can anyone please tell me if there is a way to make the value default quickly after it is selected?
    Thanks,
    Mahesh

    Hi Gyan,
    The same issue occurs in the TST and PROD instances.
    Even if my search criteria return just one record, after selecting that value it takes much time to default it into the field.
    Please advise. Thanks for your quick response.
    Thanks,
    Mahesh

  • ODS Activation is taking much time...

    Hi All,
    Sometimes ODS activation takes much time. Generally it takes 30 minutes, but sometimes it takes 6 hours.
    When the activation takes long and I check SM50, I can see one piece of SQL that is taking much time:
    SELECT COUNT(*), "RECORDMODE"
      FROM "/BIC/B0000814000"
     WHERE "REQUEST" = :A0 AND "DATAPAKID" = :A1
     GROUP BY "RECORDMODE"
    Could you please let me know the possibilities for solving this issue?
    thanks

    Hello,
    you have 2 options:
    1) as already mentioned, clean up some old PSA data or change log data from this PSA table, or
    2) create an additional index on RECORDMODE for this table via transaction SE11 -> Indexes.
    Regards, Patrick Rieken.
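    At the database level, the index described in option 2 would look roughly like the sketch below (illustrative only - in an SAP system you would create it via SE11 so the ABAP Dictionary stays consistent; the index name is hypothetical). Covering the WHERE columns plus RECORDMODE lets the database answer the statement from the index alone:
    -- REQUEST and DATAPAKID serve the WHERE clause; RECORDMODE serves the
    -- GROUP BY, so the table itself need not be read.
    CREATE INDEX "/BIC/B0000814000~Z01"
        ON "/BIC/B0000814000" ("REQUEST", "DATAPAKID", "RECORDMODE");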
