Help with query performance - thanks

Hi
I have a J2EE web application with an Oracle database.
I finished the feature by putting all the calculation inside an Oracle query (the function is (todayData - avg(historical data)) / stddev(historical data)), using GROUP BY and a subquery. But performance is very slow, about 3 minutes, which is far from the client's 10-second requirement.
My question is: if I take the raw data from the database and do the calculation part in Java, will it be faster? Is there some other way to speed up the process? (The historical data is big.)
Thanks for your time and help
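A minimal sketch of how the (today - avg(history)) / stddev(history) calculation can often be done in one pass with analytic functions instead of a GROUP BY plus subquery, assuming an Oracle release with analytic functions; the table and column names here are hypothetical:

-- Hypothetical table: readings(item_id, reading_date, value).
-- The CASE restricts the aggregates to historical rows, so today's
-- value is compared against the history only, in a single scan.
SELECT item_id,
       value AS today_value,
       (value - hist_avg) / NULLIF(hist_stddev, 0) AS z_score
FROM  (SELECT item_id,
              reading_date,
              value,
              AVG(CASE WHEN reading_date < TRUNC(SYSDATE) THEN value END)
                  OVER (PARTITION BY item_id) AS hist_avg,
              STDDEV(CASE WHEN reading_date < TRUNC(SYSDATE) THEN value END)
                  OVER (PARTITION BY item_id) AS hist_stddev
       FROM   readings)
WHERE  reading_date = TRUNC(SYSDATE);

If most of the 3 minutes goes into re-aggregating the history once per group via a subquery, a shape like this often helps; tracing the query (see the AUTOTRACE tip in the first similar message below) will show where the time actually goes.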

Create a function-based index. After creating it, check that your SQL actually uses the function-based index.
This should be faster.
regards
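For reference, a hedged sketch of what a function-based index looks like; the table, column, and expression are hypothetical and must match the expression the slow query actually filters on:

-- The index stores the precomputed expression, so a predicate
-- written exactly as TRUNC(reading_date) can use it.
CREATE INDEX readings_rd_trunc_idx ON readings (TRUNC(reading_date));

SELECT *
FROM   readings
WHERE  TRUNC(reading_date) = TRUNC(SYSDATE);

(On 8i/9i, function-based indexes additionally require QUERY_REWRITE_ENABLED = TRUE for the session.) Note that a function-based index only helps when the slow part is locating rows by an expression; if the time is spent aggregating a large history, it will not help much on its own.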

Similar Messages

  • [Oracle 8i] Help with query performance

    The following query is running VERY slowly for me:
    SELECT   oord.part_nbr
    ,        sopn.ord_nbr
    ,        sopn.sub_ord_nbr
    ,        sopn.major_seq_nbr
    ,        sopn.wctr_id
    ,        sopn.oper_desc
    ,        SUM(pact.act_dlrs_earned + pact.act_brdn_dls_earned + pact.tool_dlrs_earned + pact.act_fix_brdn_dls_ea)
    ,        pact.activity_date
    FROM     PACT pact
    ,        SOPN sopn
    ,        OORD oord
    WHERE    pact.order_nbr      = sopn.ord_nbr        AND
             pact.maj_seq_nbr    = sopn.major_seq_nbr  AND
             sopn.sub_ord_nbr    = pact.sub_order_nbr  AND
             sopn.ord_nbr        = oord.ord_nbr        AND
             sopn.sub_ord_nbr    = oord.sub_ord_nbr    AND
             pact.activity_date >= ?                   AND
             sopn.rework_ind     = 'N'                 AND
             (oord.part_nbr, sopn.major_seq_nbr, sopn.wctr_id)
             NOT IN ( SELECT rout.doc_nbr
                      ,      rout.major_seq_nbr
                      ,      rout.wctr_id
                      FROM   ROUT rout
                      WHERE  rout.begn_eff_dt <= SYSDATE AND
                             rout.end_eff_dt  >  SYSDATE AND
                             rout.po_rework_ind = 'N' )
    GROUP BY oord.part_nbr
    ,        sopn.ord_nbr
    ,        sopn.sub_ord_nbr
    ,        sopn.major_seq_nbr
    ,        sopn.wctr_id
    ,        sopn.oper_desc
    ,        pact.activity_date

    I sent a request off to my IT department (specifically asking for the explain plan and tkprof, as described in the [main post on this topic|http://forums.oracle.com/forums/thread.jspa?threadID=501834]), and they replied with a screen shot of the 'explain plan' the tool they use (Toad) provides.
    (Screenshot of the Toad explain plan: http://temp-sample.webs.com/explain_plan.jpg)
    I don't know if anyone can help me based off this, since I know it's not really what the main post says to provide, but it's all I was given.
    My IT department also made a few changes to my original query (see below) and told me it got rid of one of the full scans of the PACT table, but they aren't sure why it helped, and I have to say I'm suspicious of any fix where it's not understood why it helped.
    SELECT   oord.part_nbr
    ,        sopn.ord_nbr
    ,        sopn.sub_ord_nbr
    ,        sopn.major_seq_nbr
    ,        sopn.wctr_id
    ,        sopn.oper_desc
    ,        SUM(pact.act_dlrs_earned + pact.act_brdn_dls_earned + pact.tool_dlrs_earned + pact.act_fix_brdn_dls_ea)
    ,        pact.activity_date
    FROM     PACT pact
    ,        SOPN sopn
    ,        OORD oord
    WHERE    sopn.ord_nbr       = pact.order_nbr     AND
             sopn.major_seq_nbr = pact.maj_seq_nbr   AND
             pact.sub_order_nbr = sopn.sub_ord_nbr   AND
             oord.ord_nbr       = sopn.ord_nbr       AND
             oord.sub_ord_nbr   = sopn.sub_ord_nbr   AND
             (pact.activity_date >= ?)               AND
             'N'                = sopn.rework_ind    AND
             pact.order_nbr     = oord.ord_nbr       AND
             oord.sub_ord_nbr   = pact.sub_order_nbr AND
             (oord.part_nbr, pact.maj_seq_nbr, sopn.wctr_id) NOT IN
             ( SELECT /*+ INDEX_JOIN(ROUT) */ rout.doc_nbr
               ,      rout.major_seq_nbr
               ,      rout.wctr_id
               FROM   ROUT rout
               WHERE  rout.begn_eff_dt <= SYSDATE AND
                      rout.end_eff_dt  >  SYSDATE AND
                      'N' = rout.po_rework_ind )
    GROUP BY oord.part_nbr
    ,        sopn.ord_nbr
    ,        sopn.sub_ord_nbr
    ,        sopn.major_seq_nbr
    ,        sopn.wctr_id
    ,        sopn.oper_desc
    ,        pact.activity_date

    Any help on this would be appreciated... when I run this (right now) for 2-3 months of data, it takes a minimum of 3 hours to complete, and I'll eventually need to run this for up to a year's worth of data.

    Hi,
    Well, let's see.
    You get 156 rows returned using IN and 121 rows using EXISTS.
    You need to identify the 'missing' records and conclude whether that's correct or not; I'm not able to do that remotely, without knowing your data or system.
    It would be helpful if we could see costs and cardinalities; you (or your IT dept.) can get them easily by running the queries from your SQL*Plus prompt.
    Type
    SET AUTOTRACE TRACEONLY
    before running the queries.
    That gives you the explain plan and additional statistics (sorts in memory and sorts to disk).
    Since you use a GROUP BY, and you're on 8i, can you also post the results of these queries:
    select banner from v$version;
    select name, value, isdefault from v$parameter where name like '%area%';
    Finally, does the query below give you a different plan?
    select oord.part_nbr
    ,      oord.ord_nbr
    ,      oord.sub_ord_nbr
    ,      pact.major_seq_nbr
    ,      sopn.wctr_id
    ,      sopn.oper_desc
    ,      sum(pact.act_dlrs_earned + pact.act_brdn_dls_earned + pact.tool_dlrs_earned + pact.act_fix_brdn_dls_ea)
    ,      pact.activity_date
    from   oord oord 
    ,      pact pact
    ,      sopn sopn
    where  oord.ord_nbr       = pact.order_nbr
    and    oord.sub_ord_nbr   = pact.sub_order_nbr
    and    oord.ord_nbr       = sopn.ord_nbr
    and    oord.sub_ord_nbr   = sopn.sub_ord_nbr
    and    sopn.major_seq_nbr = pact.maj_seq_nbr
    and    (pact.activity_date >= ?)
    and    'N' = sopn.rework_ind
    and    (oord.part_nbr, pact.maj_seq_nbr, sopn.wctr_id) not in ( select rout.doc_nbr
                                                                    ,      rout.major_seq_nbr
                                                                    ,      rout.wctr_id
                                                                    from   rout rout
                                                                    where  rout.begn_eff_dt <= sysdate
                                                                    and    rout.end_eff_dt > sysdate
                                                                    and    'N' = rout.po_rework_ind)
    group  by oord.part_nbr
    ,         oord.ord_nbr
    ,         oord.sub_ord_nbr
    ,         pact.major_seq_nbr
    ,         sopn.wctr_id
    ,         sopn.oper_desc
    ,         pact.activity_date
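    Since the IN-versus-EXISTS row counts came up (156 vs. 121): a hedged sketch of the same NOT IN predicate rewritten as a correlated NOT EXISTS, which the 8i optimizer often executes quite differently. The two forms are only equivalent when rout.doc_nbr, rout.major_seq_nbr, and rout.wctr_id can never be NULL in the qualifying rows, which is itself a possible explanation for a row-count difference:

    and    not exists ( select null
                        from   rout rout
                        where  rout.doc_nbr       = oord.part_nbr
                        and    rout.major_seq_nbr = pact.maj_seq_nbr
                        and    rout.wctr_id       = sopn.wctr_id
                        and    rout.begn_eff_dt  <= sysdate
                        and    rout.end_eff_dt    > sysdate
                        and    'N' = rout.po_rework_ind )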

  • Help With Query Performance

    Hi Everyone,
    I have to query a table that stores daily information, and I only need to get the current month's data out of there. There is A LOT of data in there (4 years x 15 stores x 65 depts). I was wondering if there was a way to get the current month's data fast. Right now it's extremely slow, 5 minutes to query, and my users can't wait that long.
    Any suggestions?
    Thank You
    -Sam

    Assuming that emp_id is the PK of emp, then you do not need the MAX on the data from emp. Also you really do not need to put conditions against sysdate. If you only want the records for the current month, then this should work:
    SELECT TO_CHAR(cms_bh_dagent.row_date, 'mm/yyyy') row_date,
           emp.id, emp.short_name, emp.dept_code, emp.descr,
           SUM(cms_bh_dagent.ti_stafftime) AS ti_stafftime,
           SUM(cms_bh_dagent.ti_availtime) AS ti_availtime,
           SUM(cms_bh_dagent.acdcalls) AS acdcalls,
           SUM(cms_bh_dagent.acdtime) AS acdtime,
           SUM(cms_bh_dagent.acwtime) AS acwtime,
           SUM(cms_bh_dagent.holdcalls) AS holdcalls,
           SUM(cms_bh_dagent.holdtime) AS holdtime,
           SUM(cms_bh_dagent.ti_auxtime0) AS ti_auxtime0
    FROM cms_bh_dagent
            INNER JOIN emp ON cms_bh_dagent.logid = emp.acd_login_id
    WHERE cms_bh_dagent.row_date BETWEEN TRUNC(sysdate, 'MONTH') AND
                                         LAST_DAY(TRUNC(sysdate)) and
          emp.code = :tlcode
    GROUP BY TO_CHAR(cms_bh_dagent.row_date, 'mm/yyyy'),
             emp.id, emp.short_name, emp.dept_code, emp.descr

    It seems to me that an index on emp.code might be useful in this query, as well as one on cms_bh_dagent (logid, row_date).
    If you want to be able to specify the from and to date, then replace the predicate on cms_bh_dagent.row_date with:
    BETWEEN TO_DATE(start_date, 'your_format') AND TO_DATE(end_date, 'your_format')
    HTH
    John
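    As a hedged sketch of the index suggestion above (the index names here are invented):

    CREATE INDEX emp_code_idx ON emp (code);
    CREATE INDEX cms_bh_dagent_logid_date_idx ON cms_bh_dagent (logid, row_date);

    One caveat on the date range: BETWEEN ... LAST_DAY(TRUNC(sysdate)) stops at midnight at the start of the month's last day, so rows stamped later that day are missed. A half-open range avoids that:

    WHERE cms_bh_dagent.row_date >= TRUNC(SYSDATE, 'MONTH')
      AND cms_bh_dagent.row_date <  ADD_MONTHS(TRUNC(SYSDATE, 'MONTH'), 1)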

  • Please help in query performance

    select bg.billing_group_id,r.rating_result, count(*),rc.source,rc.description
    from rejected_record r,
    result_code@RD rc,
    terminal_device@cust td,
    personal_account_td@cust patd,
    personal_account@cust pa,
    billing_group@cust bg,
    -- tariff_plan@rd tp,
    (select pcs.primary_session_id, pcs.charging_date
    from pcs@umr_interface
    where pcs.range_cdr_file_source not like 'HOT%'
    --and lower(pcs.RANGE_CDR_FILE_SOURCE) not like 'rec%'
    -- and lower(pcs.RANGE_CDR_FILE_SOURCE) not like 'sms%'
    -- and lower(pcs.RANGE_CDR_FILE_SOURCE) not like 'data%'
    -- and pcs.CHARGING_TYPE_ID = 1
    and pcs.charging_date between '20/Aug/2010' and '21/Sep/2010') t
    where r.primary_session_id = t.primary_session_id
    and rc.code (+)= r.rating_result
    and td.terminal_device_id = patd.terminal_device_id
    and patd.personal_account_id = pa.personal_account_id
    and r.calling_party_number_rid = td.msisdn
    and pa.billing_group_id = bg.billing_group_id
    and r.is_precharged=0
    --and td.tariff_plan_id=tp.tariff_plan_id
    and (nvl(td.date_to,'01.01.2000')='01.01.2000')
    and (nvl(pa.date_to,'01.01.2000')='01.01.2000')
    and (nvl(patd.date_to,'01.01.2000')='01.01.2000')
    group by r.rating_result, rc.source, rc.description,bg.billing_group_id
    order by r.rating_result
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    PL/SQL Release 9.2.0.8.0 - Production
    CORE     9.2.0.8.0     Production
    TNS for Solaris: Version 9.2.0.8.0 - Production
    NLSRTL Version 9.2.0.8.0 - Production
    SQL>
    The above select query took only 10 minutes when those NVL functions were not there,
    but when the NVL functions are added it takes a lot of time.
    Please help to optimize it.
    First I tried with IS NULL; it took a lot of time. Then I replaced it with the NVL function.
    Please, somebody, help to optimize this query.

    Check out these links:
    {message:id=1812597}
    {thread:id=863295}
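    For what it's worth, a hedged sketch of the usual first aid for predicates like these, written against the aliases in the query above: NVL() wrapped around a column prevents a plain index on that column from being used, and comparing a DATE column to a string such as '01.01.2000' forces an implicit, NLS-dependent conversion. If the intent is "no end date, or the dummy end date", this shape says so directly with an explicit date literal:

    and ( td.date_to   IS NULL or td.date_to   = DATE '2000-01-01' )
    and ( pa.date_to   IS NULL or pa.date_to   = DATE '2000-01-01' )
    and ( patd.date_to IS NULL or patd.date_to = DATE '2000-01-01' )

    Also worth remembering that several of these tables sit behind database links (@cust, @RD), so the dominant cost may be the distributed join rather than the NVL calls themselves; an execution plan would confirm that.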

  • How does Index fragmentation and statistics affect the sql query performance

    Hi,
    How does Index fragmentation and statistics affect the sql query performance
    Thanks
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though it holds true for nonclustered ones as well), the time spent finding a value will be greater, as the query has to search the fragmented index to look for the data, and the additional empty space increases search time.
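    (This thread is about SQL Server, but in keeping with the rest of this page, here is a hedged Oracle-dialect sketch of the two generic remedies: refresh the statistics and rebuild the fragmented index. Schema, table, and index names are hypothetical.)

    -- Refresh optimizer statistics for one table:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'ORDERS', cascade => TRUE);
    END;
    /

    -- Rebuild a fragmented index to reclaim the wasted space:
    ALTER INDEX app.orders_pk REBUILD;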

  • Query Performance Tuning - Help

    Hello Experts,
    Good Day to all...
    TEST@ora10g>select * from v$version;
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    "CORE     10.2.0.4.0     Production"
    TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
    NLSRTL Version 10.2.0.4.0 - Production
    SELECT   fa.user_id,
             fa.notation_type,
             MAX(fa.created_date) maxDate,
             COUNT(*) bk_count
    FROM     book_notations fa
    WHERE    fa.user_id IN
             ( SELECT user_id
               FROM   ( SELECT /*+ INDEX(f2,FBK_AN_ID_IDX) */
                               f2.user_id,
                               MAX(f2.notatn_id) f2_annotation_id
                        FROM   book_notations f2,
                               title_relation tdpr
                        WHERE  f2.user_id IN ('100002616221644',
                                              '100002616221645',
                                              '100002616221646',
                                              '100002616221647',
                                              '100002616221648')
                          AND  f2.pack_id = tdpr.pack_id
                          AND  tdpr.title_id = 93402
                        GROUP BY f2.user_id
                        ORDER BY 2 DESC )
               WHERE  ROWNUM <= 10 )
    GROUP BY fa.user_id,
             fa.notation_type
    ORDER BY 3 DESC;

    Cost of the query is too much...
    Below is the explain plan of the query
    | Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                           |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   1 |  SORT ORDER BY                             |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   2 |   HASH GROUP BY                            |                                |    29 |  1305 |    52  (10)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    11 |   319 |     4   (0)| 00:00:01 |
    |   4 |     NESTED LOOPS                           |                                |    53 |  2385 |    50   (6)| 00:00:01 |
    |   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    29   (7)| 00:00:01 |
    |   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |
    |*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |
    |   8 |         VIEW                               |                                |     5 |    80 |    29   (7)| 00:00:01 |
    |*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  10 |           HASH GROUP BY                    |                                |     5 |   180 |    29   (7)| 00:00:01 |
    |  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5356 |   135K|    26   (0)| 00:00:01 |
    |  12 |             NESTED LOOPS                   |                                |  6917 |   243K|    27   (0)| 00:00:01 |
    |  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                         |     1 |    10 |     1   (0)| 00:00:01 |
    |* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |
    |  15 |              INLIST ITERATOR               |                                |       |       |            |          |
    |* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5356 |       |     4   (0)| 00:00:01 |
    |* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   746 |       |     1   (0)| 00:00:01 |
    Table Details
    SELECT COUNT(*) FROM book_notations; --111367
    Columns
    user_id -- nullable field - VARCHAR2(50 BYTE)
    pack_id -- NOT NULL --NUMBER
    notation_type--     VARCHAR2(50 BYTE)     -- nullable field
    CREATED_DATE     - DATE     -- nullable field
    notatn_id     - VARCHAR2(50 BYTE)     -- nullable field      
    Index
    FBK_AN_ID_IDX - Non unique - Composite columns --> (user_id and pack_id)
    SELECT COUNT(*) FROM title_relation; --12678
    Columns
    pack_id - not null - number(38) - PK
    title_id - not null - number(38)
    Index
    IDX_TITLE_ID - Non Unique - TITLE_ID
    Please help...
    Thanks...

    Linus wrote:
    Thanks Bravid for your reply; highly appreciate that.
    So, as you say, index creation on the NULL column doesn't have any impact. OK, fine.
    What happens to the execution plan, performance and the stats when you remove the index hint?
    Find below the Execution Plan and Predicate information
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 126058086"
    "| Id  | Operation                                  | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |"
    "|   0 | SELECT STATEMENT                           |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   1 |  SORT ORDER BY                             |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   2 |   HASH GROUP BY                            |                                |    25 |  1125 |    55  (11)| 00:00:01 |"
    "|   3 |    TABLE ACCESS BY INDEX ROWID             | book_notations                 |    10 |   290 |     4   (0)| 00:00:01 |"
    "|   4 |     NESTED LOOPS                           |                                |    50 |  2250 |    53   (8)| 00:00:01 |"
    "|   5 |      VIEW                                  | VW_NSO_1                       |     5 |    80 |    32  (10)| 00:00:01 |"
    "|   6 |       HASH UNIQUE                          |                                |     5 |    80 |            |          |"
    "|*  7 |        COUNT STOPKEY                       |                                |       |       |            |          |"
    "|   8 |         VIEW                               |                                |     5 |    80 |    32  (10)| 00:00:01 |"
    "|*  9 |          SORT ORDER BY STOPKEY             |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  10 |           HASH GROUP BY                    |                                |     5 |   180 |    32  (10)| 00:00:01 |"
    "|  11 |            TABLE ACCESS BY INDEX ROWID     | book_notations                 |  5875 |   149K|    28   (0)| 00:00:01 |"
    "|  12 |             NESTED LOOPS                   |                                |  7587 |   266K|    29   (0)| 00:00:01 |"
    "|  13 |              MAT_VIEW ACCESS BY INDEX ROWID| title_relation                      |     1 |    10 |     1   (0)| 00:00:01 |"
    "|* 14 |               INDEX RANGE SCAN             | IDX_TITLE_ID                   |     1 |       |     1   (0)| 00:00:01 |"
    "|  15 |              INLIST ITERATOR               |                                |       |       |            |          |"
    "|* 16 |               INDEX RANGE SCAN             | FBK_AN_ID_IDX                  |  5875 |       |     4   (0)| 00:00:01 |"
    "|* 17 |      INDEX RANGE SCAN                      | FBK_AN_ID_IDX                  |   775 |       |     1   (0)| 00:00:01 |"
    "Predicate Information (identified by operation id):"
    "   7 - filter(ROWNUM<=10)"
    "   9 - filter(ROWNUM<=10)"
    "  14 - access(""TDPR"".""TITLE_ID""=93402)"
    "  16 - access((""F2"".""USER_ID""='100002616221644' OR ""F2"".""USER_ID""='100002616221645' OR "
    "              ""F2"".""USER_ID""='100002616221646' OR ""F2"".""USER_ID""='100002616221647' OR "
    "              ""F2"".""USER_ID""='100002616221648') AND ""F2"".""PACK_ID""=""TDPR"".""PACK_ID"")"
    "  17 - access(""FA"".""USER_ID""=""$nso_col_1"")"
    The cost is the same because the plan is the same. The optimiser chose to use that index anyway. The point is, now that you have removed it, the optimiser is free to choose other indexes or a full table scan if it wants to.
    Statistics
    BEGIN
    DBMS_STATS.GATHER_TABLE_STATS ('TEST', 'BOOK_NOTATIONS');
    END;
    "COLUMN_NAME"     "NUM_DISTINCT"     "NUM_BUCKETS"     "HISTOGRAM"
    "NOTATION_ID"     110269     1     "NONE"
    "USER_ID"     213     212     "FREQUENCY"
    "PACK_ID"     20     20     "FREQUENCY"
    "NOTATION_TYPE"     8     8     "FREQUENCY"
    "CREATED_DATE"     87     87     "FREQUENCY"
    "CREATED_BY"     1     1     "NONE"
    "UPDATED_DATE"     2     1     "NONE"
    "UPDATED_BY"     2     1     "NONE"
    After removing the hint, the query still shows the same "COST".
    Autotrace
    recursive calls     1
    db block gets     0
    consistent gets     34706
    physical reads     0
    redo size     0
    bytes sent via SQL*Net to client     964
    bytes received via SQL*Net from client     1638
    SQL*Net roundtrips to/from client     2
    sorts (memory)     3
    sorts (disk)     0
    Output of query
    "USER_ID"     "NOTATION_TYPE"     "MAXDATE"     "COUNT"
    "100002616221647"     "WTF"     08-SEP-11     20000
    "100002616221645"     "LOL"     08-SEP-11     20000
    "100002616221644"     "OMG"     08-SEP-11     20000
    "100002616221648"     "ABC"     08-SEP-11     20000
    "100002616221646"     "MEH"     08-SEP-11     20000Thanks...I still don't know what we're working towards at the moment. WHat is the current run time? What is the expected run time?
    I can't tell you if there's a better way to write this query or if indeed there is another way to write this query because I don't know what it is attempting to achieve.
    I can see that you're accessing 100k rows from a 110k row table and it's using an index to look those rows up. That seems like a job for a full table scan rather than index lookups.
    David
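    A hedged sketch of one way to test David's full-scan suggestion without dropping anything: force the outer table access with a hint (the alias is from the query above) and compare the autotrace consistent gets with and without it:

    SELECT /*+ FULL(fa) */
           fa.user_id,
           fa.notation_type,
           MAX(fa.created_date) maxDate,
           COUNT(*) bk_count
    FROM   book_notations fa
    WHERE  fa.user_id IN
           ( SELECT user_id
             FROM   ( SELECT f2.user_id,
                             MAX(f2.notatn_id) f2_annotation_id
                      FROM   book_notations f2,
                             title_relation tdpr
                      WHERE  f2.user_id IN ('100002616221644', '100002616221645',
                                            '100002616221646', '100002616221647',
                                            '100002616221648')
                        AND  f2.pack_id = tdpr.pack_id
                        AND  tdpr.title_id = 93402
                      GROUP BY f2.user_id
                      ORDER BY 2 DESC )
             WHERE  ROWNUM <= 10 )
    GROUP BY fa.user_id, fa.notation_type
    ORDER BY 3 DESC;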

  • Hi, I can't restore my iPhone 4. When it's about to finish the restore, it stops. It won't let me turn on the iPhone, and it only shows me the Apple icon or a "load iTunes" icon. Can someone help me, please? Thank you!

    Hi, I can't restore my iPhone 4. When it's about to finish the restore, it stops. It won't let me turn on the iPhone, and it only shows me the Apple icon or a "load iTunes" icon. Can someone help me, please? Thank you!

    We are users like you and won't be able to extend your warranty. You will need to physically return the device to the USA for warranty service as you probably already know. Here is an article that deals with error #2001: Resolve iOS update and restore errors

  • SELECT query performance : One big table Vs many small tables

    Hello,
    We are using BDB 11g with SQLite support. I have a question about SELECT query performance when we have one huge table vs. multiple small tables.
    Basically, in our application we need to run the select query multiple times, and today we have one huge table. Do you think breaking it into multiple small tables will help?
    For test purposes we tried creating multiple tables, but the performance of the SELECT query was more or less the same. Would that be because all tables map to only one database in the backend with key/value pairs, so running a lookup (select query) on a small table or a big table won't make a difference?
    Thanks.

    Hello,
    There is some information on this topic in the FAQ at:
    http://www.oracle.com/technology/products/berkeley-db/faq/db_faq.html#9-63
    If this does not address your question, please just let me know.
    Thanks,
    Sandra

  • Query Performance Issues

    Hi Gurus,
    I'm working on performance tuning at the moment and want some tips regarding query performance tuning; I'd appreciate any help in that regard.
    The thing is that I have got an idea about the system, and the issues now are with DB space, ABAP dumps, free space in tablespaces, no number range buffering, cubes using too many aggregates, large InfoCubes, large ODS objects, and many others.
    So my question is: can anyone tell me how to resolve the issues with the large master data tables and the large ODS objects? One more important issue is KPIs exceeding their reference values; any idea how to deal with them?
    Waiting for your valuable responses.
    Thanks in advance
    Regards
    Amit

    Hi Amit
    For query performance issues you can go for:
    Aggregates: they will help a lot to make your query faster, because the query doesn't hit your cube; it hits the aggregates, which have far fewer records compared to your cube.
    Secondly, I would suggest you use CKFs (calculated key figures) in place of formulas, if there are any in the query.
    Another thing is to avoid, to the extent possible, the use of navigational attributes. If you want to use them, use them at a minimal level; the reason I say so is that during query execution, whenever there is a navigational attribute, it adds an extra join to your master data and thus decreases query performance.
    Be specific with rows and columns; if you are not sure of a key figure or characteristic, better put it in a free characteristic.
    Use filters if possible.
    If you follow these, I'm sure your query performance will increase.
    Thanks
    puneet

  • Query performance and Transport

    Guys (those of you who already work in the field may be able to better answer this question):
    Let's say an end user comes and complains about a certain query's performance, and the developer ends up creating aggregates on the cube. Would he do all of this on the dev box and then transport to QA and on to the production environment, or directly in the production environment, with no need to transport anything? Any kind of documentation would be helpful for this.
    Thanks,
    RG

    Aggregates are created directly in production.
    We create aggregates based on query statistics, which are based on the data read by the query and the various runtimes.
    Since this is based on data in prod, we develop directly in prod; I mean, dev won't have any data, so you don't have any basis to create the aggregate there. Even if you use statistics data from prod to create the aggregate in dev, you cannot check the performance improvement from the aggregate in dev, as it won't have the data prod has.

  • SQL query performance question

    So I had this long query that looked like this:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM   ideal d,
           ideal_term a,
           ( select deal_key, deal_term_key, max(createdOn) maxdate
             from   Ideal_term B
             where  createdOn <= '03-OCT-12 10.03.00 AM'
             group by deal_key, deal_term_key ) B
    WHERE  a.begin_date <= '20-MAR-09 01.01.00 AM'
      and  a.end_date   >= '19-MAR-09 01.00.00 AM'
      and  A.deal_key      = b.deal_key
      and  A.deal_term_key = b.deal_term_key
      and  a.createdOn     = b.maxdate
      and  d.deal_key      = a.deal_key
      and  d.name like 'MVPP1 B'
    order by a.begin_date, a.deal_key, a.deal_term_key;
    This performed very poorly for a record in one of the tables that has 43,000+ revisions. It took about 1 minute and 40 seconds. I asked the database guy at my company for help with it and he re-wrote it like so:
    SELECT a.BEGIN_DATE, a.END_DATE, a.DEAL_KEY, (select name from ideal dd where a.deal_key = dd.deal_key) DEALNAME, a.deal_term_key
    FROM ideal d
    INNER JOIN (SELECT deal_key,
    deal_term_key,
    MAX(createdOn) maxdate
    FROM Ideal_term B2
    WHERE '03-OCT-12 10.03.00 AM' >= createdOn
    GROUP BY deal_key, deal_term_key) B1
    ON d.deal_key = B1.deal_key
    INNER JOIN ideal_term a
    ON B1.deal_key = A.deal_key
    AND B1.deal_term_key = A.deal_term_key
    AND B1.maxdate = a.createdOn
    AND d.deal_key = a.deal_key + 0
    WHERE a.begin_date <= '20-MAR-09 01.01.00 AM'
    AND a.end_date >= '19-MAR-09 01.00.00 AM'
    AND d.name LIKE 'MVPP1 B'
    ORDER BY a.begin_date, a.deal_key, a.deal_term_key
    this works much better; it only takes 0.13 seconds. I've been trying to figure out why exactly his version performs so much better. His only explanation was that the "+ 0" in the WHERE clause prevented Oracle from using an index for that column, which had created a bad plan initially.
    I think there has to be more to it than that, though. Can someone give me a detailed explanation of why the second version of the query performed so much faster?
    Thanks.

    I used Autotrace in SQL developer. Is that sufficient? Here is the Autotrace and Explain for the slow query:
    and for the fast query:
    I said that I thought there was more to it because, when my team members and I looked at the reworked query the database guy sent us, our initial thought was that in the slow query some of the tables didn't have join conditions, so the query formed a Cartesian product, and this resulted in a huge 43,000+ row intermediate result.
    In his version all tables had joins properly defined and in addition he had that +0 which told it to ignore the index for the attribute deal_key of table ideal_term. I spoke with the database guy today and he confirmed our theory.
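    For reference, a hedged sketch of the index-suppression idiom the DBA used: appending a no-op expression to a column changes nothing in the result, but the predicate is now on an expression rather than the bare column, so a plain index on that column can't be used, which can steer the optimizer toward a different join order. The column and table names here are generic:

    -- Numeric column: adding 0 disables a plain index on t2.n_col.
    WHERE t1.n_col = t2.n_col + 0

    -- String column: concatenating the empty string does the same.
    WHERE t1.v_col = t2.v_col || ''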

  • Tuning query performance

    Dear experts,
    I have a question regarding the performance of a BW query.
    It takes 10 minutes to display about 23 thousand lines.
    This query reads the data from an ODS object.
    Based on the "where" clause in the "select" statement, monitored via the Oracle session while the query was running, I created an index for this ODS object.
    After rerunning the query, I found that the index was being used by Oracle in reading this table (the estimated cost was reduced to 2 from about 3000).
    However, it takes the same time as before.
    Is there any other reason, or other factors I should consider, in tuning the performance of this query?
    Thanks in advance

    Hi David,
    Query performance when reporting on an ODS object is slower compared to InfoCubes, InfoSets, MultiProviders, etc., because a DSO has no aggregates and other performance techniques.
    Basically, for a DSO/ODS you need to turn on the BEx reporting flag, which again is an overhead for query execution and affects performance.
    To improve performance when reporting on an ODS, you can create secondary indexes from the BW workbench.
    Please check the links below.
    [Re: performance issues of ODS]
    [Which criteria to follow to pick InfoObj. as secondary index of ODS?]
    Hope this helps.
    Regards,
    Haritha.

  • Forms - Query - Performance

    Hi,
    I am working on an application developed in Forms 10g and Oracle 10g.
    I have a few very large transaction tables in the DB, and most of the screens in my application are based on these tables.
    When a user performs a query (without any filter conditions), the whole table is loaded into memory, which takes a very long time. Further queries on the same screen perform better.
    How can I keep these tables in memory (buffered) at all times, to reduce the initial query time?
    or
    Is there any way to share the session buffers with other sessions, so that it does not take a long time in each session?
    or
    Any query performance tuning suggestions will be appreciated.
    Thanks in advance

    Thanks a lot for your posts; very large means around 12 million rows.

    Yep, that's a large table.

    I have set the query all records to "No".

    Which is good. It means only enough records are fetched to fill the initial block. That's probably about 10 records. All the other records are not fetched from the database, so they're also not kept in memory at the Forms server.

    Even when I try the query in SQL*Plus it is taking a long time.

    Sounds like a query performance problem, not a Forms issue. You're probably better off asking in the database or SQL forum. You could at least include the SELECT statement here if you want any help with it. We can't guess why a query is slow if we have no idea what the query is.

    My concern is: when I execute the same query again, or in another session (some other user or the same user), can I increase the performance because the tables are already in memory? Is there any possibility for this? Can I set any database parameters to share the data between sessions like that?

    The database already does this. If data is retrieved from disk for one user, it is cached in the SGA (Shared Global Area). Mind the word Shared. This cached information is shared by all sessions, so other users should benefit from it.
    Caching also has its limits. The most obvious one is the size of the SGA of the database server. If the table is 200 megabytes and the server only has 8 megabytes of cache available, then caching is of little use.

    Am I thinking in the right way? Or am I lost somewhere?

    Don't know.
    There are two approaches:
    - Try to tune the query or database for better performance. For starters, open SQL*Plus, execute "set timing on", then execute "set autotrace traceonly explain statistics", then execute your query and look at the results. It should give you an idea of how the database is executing the query and what improvements could be made. You could come back here with the SELECT statement and the timing and trace results, but the database or SQL forum is probably better.
    - MORE IMPORTANTLY: think about whether it is necessary for users to perform such time-consuming (and perhaps complex) queries. Do users really need the ability to query all records? Are they ever going to browse through millions of records?

    Thanks
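    As a concrete version of the first approach above, a minimal SQL*Plus sketch (the query and table name are hypothetical stand-ins for whatever the Forms block executes):

    SET TIMING ON
    SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS

    -- Run the same query the form issues, for example:
    SELECT * FROM big_transactions WHERE account_id = 12345;

    -- In the output, check the plan (full scan vs. index access) and the
    -- "consistent gets" / "physical reads" statistics across repeated runs.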

  • ABAP query performance

    Dear gurus,
    I want to use ABAP Query; however, I wonder about its performance.
    Would anyone please share your opinion of, and experience with, this tool?
    Does it take a long time to run? Will it affect the overall performance of the system if I use this tool? <b><REMOVED BY MODERATOR></b>
    Thanks
    Ammy
    Message was edited by:
            Alvaro Tejada Galindo

    Hello,
    please check these links:
    http://www.sap-img.com/abap/what-is-sap-queries.htm
    http://www.thespot4sap.com/Articles/SAP_ABAP_Queries_Authorizations.asp
    http://www.ies.state.pa.us/hr/lib/hr/BJ0033_Travel_SQ00_Transfer_Travel_Expense_Reporting_.pdf
    Executing Query
    http://help.sap.com/erp2005_ehp_02/helpdata/en/d2/cb4056455611d189710000e8322d00/frameset.htm
    Queries
    http://help.sap.com/erp2005_ehp_02/helpdata/en/d2/cb476b455611d189710000e8322d00/frameset.htm
    Sap Queries
    http://help.sap.com/erp2005_ehp_02/helpdata/en/4f/71e252448011d189f00000e81ddfac/frameset.htm
    Sap Query Components
    http://help.sap.com/erp2005_ehp_02/helpdata/en/d2/cb3f2e455611d189710000e8322d00/frameset.htm
    Sample Screen Shots
    http://www.olemiss.edu/projects/sap/SQ01_10_03.pdf
    <b><REMOVED BY MODERATOR></b>
    Regards,
    LIJO
    Message was edited by:
            Alvaro Tejada Galindo

  • Query Performance & Turning off graph

    Since we upgraded from BW 3.5 to BI 7.0, query performance has deteriorated. The same query now runs longer in BI 7 than it did in BW 3.5, especially for queries that return large sets of data.
    For example, one query took 4 minutes 30 seconds in BW 3.5, but takes 10 minutes in BI 7.0.
    File size also increased, from 6,500 KB in BW 3.5 to 16,000 KB in BI 7.0.
    I believe it takes longer to format the data after retrieval, and it was trying to plot 17,000 records on the embedded chart. After we unhid and deleted the chart, the file size dropped by almost half, to about 7,600 KB.
    I believe the embedded graph is a nice-to-have, but its meaningfulness depends entirely on how the data is plotted - which, in most cases for us, means the graph is not too meaningful.
    Does anyone know if there is a way to turn off / disable the automatic pre-plotting of the graph in BI 7?
    I believe this should improve query performance if it is disabled.
    Many thanks for your help.
    Lawrence

    Just to add that for smaller queries the time and size differences are not significant, but large queries are taking significantly longer (and are larger in file size) in BI 7.
