Query Takes Longer time as the Data Increases.

Hi,
We have the below query, which takes around 4 to 5 minutes to retrieve the data, and it gets noticeably slower as the data grows.
DB Version=10.2.0.4
OS=Solaris 10
tst_trd_owner@MIFEX3> explain plan for select * from TIBEX_OrderBook as of scn 7785234991 where meid='ME4';
Explained.
tst_trd_owner@MIFEX3> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
PLAN_TABLE_OUTPUT
Plan hash value: 3096779986
| Id  | Operation                     | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT              |                          |     1 |   303 |       |   609K  (1)| 01:46:38 |
|*  1 |  HASH JOIN SEMI               |                          |     1 |   303 |   135M|   609K  (1)| 01:46:38 |
|*  2 |   HASH JOIN                   |                          |   506K|   129M|       |   443K  (1)| 01:17:30 |
|   3 |    TABLE ACCESS BY INDEX ROWID| TIBEX_ORDERSTATUSENUM    |     1 |    14 |       |     2   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | TIBEX_ORDERSTAT_ID_DESC  |     1 |       |       |     1   (0)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL          | TIBEX_ORDER              |  3039K|   736M|       |   443K  (1)| 01:17:30 |
|   6 |   VIEW                        | VW_NSO_1                 |  7931K|   264M|       |   159K  (1)| 00:27:53 |
|   7 |    HASH GROUP BY              |                          |  7931K|   378M|   911M|   159K  (1)| 00:27:53 |
|*  8 |     HASH JOIN RIGHT ANTI      |                          |  7931K|   378M|       | 77299   (1)| 00:13:32 |
|*  9 |      VIEW                     | index$_join$_004         |     2 |    28 |       |     2  (50)| 00:00:01 |
|* 10 |       HASH JOIN               |                          |       |       |       |            |          |
|  11 |        INLIST ITERATOR        |                          |       |       |       |            |          |
|* 12 |         INDEX RANGE SCAN      | TIBEX_ORDERSTAT_ID_DESC  |     2 |    28 |       |     2   (0)| 00:00:01 |
|  13 |        INDEX FAST FULL SCAN   | XPKTIBEX_ORDERSTATUSENUM |     2 |    28 |       |     1   (0)| 00:00:01 |
|  14 |      INDEX FAST FULL SCAN     | IX_ORDERBOOK             |    11M|   408M|       | 77245   (1)| 00:13:31 |
Predicate Information (identified by operation id):
   1 - access("A"."MESSAGESEQUENCE"="$nso_col_1" AND "A"."ORDERID"="$nso_col_2")
   2 - access("A"."ORDERSTATUS"="ORDERSTATUS")
   4 - access("SHORTDESC"='ORD_OPEN')
   5 - filter("MEID"='ME4')
   8 - access("ORDERSTATUS"="ORDERSTATUS")
   9 - filter("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
  10 - access(ROWID=ROWID)
  12 - access("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
33 rows selected.
The TIBEX_OrderBook view is defined by the following query:
SELECT  ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
          BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID,
          PRICETYPE, PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL,
          DISCLOSEDQTY, REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE,
          ACCOUNTNO, CLEARINGAGENCY, 'OK' AS LASTINSTRESULT,
          LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE, TIMESTAMP,
          QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE, LASTEXECQTY,
          LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY,
          STOPPRICE, LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO,
          LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
          BOOKTIMESTAMP, ParticipantIDMM, MarketState, PartnerExId,
          LastExecSettlementCycle, LastExecPostTradeVenueType,
          PriceLevelPosition, PrevReferenceID, EXPIRYTIMESTAMP, matchType,
          lastExecutionRole, a.MDEntryID, a.PegOffset, a.haltReason,
          a.LastInstFixSequence, A.COMPARISONPRICE, A.ENTEREDPRICETYPE
    FROM  tibex_Order A
    WHERE (A.MessageSequence, A.OrderID) IN (
            SELECT  max(B.MessageSequence), B.OrderID
              FROM  tibex_Order B
              WHERE orderStatus NOT IN (
                      SELECT orderStatus
                        FROM tibex_orderStatusEnum
                        WHERE ShortDesc in ('ORD_REJECT', 'ORD_NOTFND')
                    )
              GROUP BY B.OrderID
          )
      AND A.OrderStatus IN (
            SELECT OrderStatus
              FROM  tibex_orderStatusEnum
              WHERE ShortDesc IN ('ORD_OPEN')
          )
/
Any helpful suggestions?
Regards
NM
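One rewrite that is often suggested for this kind of latest-row-per-OrderID view is an analytic-function version. The following is only a sketch, using the table and column names from the view above, assuming MessageSequence is unique per OrderID, and omitting most of the select list for brevity:

-- Sketch: analytic rewrite of the latest-row-per-OrderID logic (not a tested drop-in)
SELECT orderid, orderstatus, meid, messagesequence     -- remaining columns omitted here
FROM  (SELECT o.*,
              ROW_NUMBER() OVER (PARTITION BY o.orderid
                                 ORDER BY o.messagesequence DESC) rn
         FROM tibex_order o
        WHERE o.orderstatus NOT IN (SELECT orderstatus
                                      FROM tibex_orderstatusenum
                                     WHERE shortdesc IN ('ORD_REJECT', 'ORD_NOTFND')))
WHERE rn = 1
  AND orderstatus IN (SELECT orderstatus
                        FROM tibex_orderstatusenum
                       WHERE shortdesc = 'ORD_OPEN');

This shape reads tibex_order once instead of twice (the plan above scans both TIBEX_ORDER and IX_ORDERBOOK); whether it actually runs faster here would need testing against your data.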

Hi Centinul,
I tried your modified version of the query on the test machine. It used quite a lot of temp space (around 9GB) and finally ran out of disk space.
On the test machine I generated stats and executed the queries, but in production our stats will always be stale. The reason is:
In the morning we have 3,000 records in Tibex_Order, and as the day progresses the data grows to about 20 million records by the end of the day. We then generate stats and truncate the transaction tables (Tibex_Order = 20 million records), so the next day our stats are stale again, and if a user runs any query it takes ages to retrieve the data. The example below shows this.
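On the stale-stats point, one option (a sketch only, not something tested on your system) is to gather statistics once while tibex_order holds a representative volume and then lock them, or to set them explicitly, so the optimizer is not planning against the near-empty morning table:

-- Sketch: gather stats when tibex_order holds a representative volume, then lock them
-- so the nightly truncate does not leave the optimizer with stale numbers.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TIBEX_ORDER', cascade => TRUE);
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'TIBEX_ORDER');

-- Or set them explicitly to values close to the intraday volume (the 20,000,000 here is
-- only a placeholder taken from the description above):
EXEC DBMS_STATS.SET_TABLE_STATS(ownname => USER, tabname => 'TIBEX_ORDER', numrows => 20000000);

Dynamic sampling hints are another route; either way the point is only that the optimizer should see row counts closer to the intraday 20 million than to the 3,000 rows present in the morning.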
tst_trd_owner@MIFEX3>
tst_trd_owner@MIFEX3> CREATE OR REPLACE VIEW TIBEX_ORDERBOOK_TEMP
  2  (ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
  3   BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
  4   PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
  5   REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
  6   CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
  7   TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
  8   LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
  9   LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
10   BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
11   LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
12   LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, LASTINSTFIXSEQUENCE,
13   COMPARISONPRICE, ENTEREDPRICETYPE)
14  AS
15  SELECT orderid
16       , MAX(userorderid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
17       , MAX(orderside) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
18       , MAX(ordertype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
19       , MAX(orderstatus) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
20       , MAX(boardid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
21       , MAX(timeinforce) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
22       , MAX(instrumentid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
23       , MAX(referenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
24       , MAX(pricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
25       , MAX(price) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
26       , MAX(averageprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
27       , MAX(quantity) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
28       , MAX(minimumfill) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
29       , MAX(disclosedqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
30       , MAX(remainqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
31       , MAX(aon) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
32       , MAX(participantid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
33       , MAX(accounttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
34       , MAX(accountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
35       , MAX(clearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
36       , 'ok' as lastinstresult
37       , MAX(lastinstmessagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
38       , MAX(lastexecutionid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
39       , MAX(note) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
40       , MAX(timestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
41       , MAX(qtyfilled) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
42       , MAX(meid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
43       , MAX(lastinstrejectcode) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
44       , MAX(lastexecprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
45       , MAX(lastexecqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
46       , MAX(lastinsttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
47       , MAX(lastexecutioncounterparty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
48       , MAX(visibleqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
49       , MAX(stopprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
50       , MAX(lastexecclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
51       , MAX(lastexecaccountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
52       , MAX(lastexeccpclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
53       , MAX(messagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
54       , MAX(lastinstuseralias) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
55       , MAX(booktimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
56       , MAX(participantidmm) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
57       , MAX(marketstate) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
58       , MAX(partnerexid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
59       , MAX(lastexecsettlementcycle) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
60       , MAX(lastexecposttradevenuetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
61       , MAX(pricelevelposition) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
62       , MAX(prevreferenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
63       , MAX(expirytimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
64       , MAX(matchtype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
65       , MAX(lastexecutionrole) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
66       , MAX(mdentryid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
67       , MAX(pegoffset) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
68       , MAX(haltreason) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
69       , MAX(lastinstfixsequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
70       , MAX(comparisonprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
71       , MAX(enteredpricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
72  FROM   tibex_order
73  WHERE  orderstatus IN (
74                           SELECT orderstatus
75                           FROM   tibex_orderstatusenum
76                           WHERE  shortdesc IN ('ORD_OPEN')
77                        )
78  GROUP BY orderid
79  /
View created.
tst_trd_owner@MIFEX3> SELECT /*+ gather_plan_statistics */    *   FROM   TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4';
SELECT /*+ gather_plan_statistics */    *   FROM   TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4'
ERROR at line 1:
ORA-01114: IO error writing block to file %s (block # %s)
ERROR:
ORA-03114: not connected to ORACLE
Any suggestions will be helpful.
Regards
NM
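Since the run died with ORA-01114 after using around 9GB of temp, it may also be worth watching temp consumption while the query runs. A minimal check (a sketch, using standard 10g dictionary views) is:

-- Sketch: how much temp space each session is using while the query runs
SELECT s.sid,
       s.username,
       u.tablespace,
       ROUND(u.blocks * t.block_size / 1024 / 1024) AS mb_used
FROM   v$tempseg_usage u
       JOIN v$session s       ON s.saddr = u.session_addr
       JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
ORDER  BY mb_used DESC;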

Similar Messages

  • Why update query takes  long time ?

    Hello everyone;
    My update query takes a long time. The emp table (self testing) has just 2 records.
    When I issue the update query, it takes a long time:
    SQL> select  *  from  emp;
         EID  ENAME  EQUAL     ESALARY  ECITY      EPERK   ECONTACT_NO
           2  rose   mca         22000  calacutta           9999999999
           1  sona   msc         17280  pune                9999999999
    Elapsed: 00:00:00.05
    SQL> update emp set esalary=12000 where eid='1';
    update emp set esalary=12000 where eid='1'
    * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:01:11.72
    SQL> update emp set esalary=15000;
    update emp set esalary=15000
      * ERROR at line 1:
    ORA-01013: user requested cancel of current operation
    Elapsed: 00:02:22.27

    Hi  BCV;
    Thanks for your reply, but it doesn't provide the output; please see this.
    SQL> update emp set esalary=15000;
    ........... Lock already occurred.
    >> trying to trace  >>
    SQL> select HOLDING_SESSION from dba_blockers;
    HOLDING_SESSION
                144
    SQL> select sid , username, event from v$session where username='HR';
    SID USERNAME     EVENT
       144   HR    SQL*Net message from client
       151   HR    enq: TX - row lock contention
       159   HR    SQL*Net message from client
    >> It doesn't provide clear output about the transaction lock >>
    SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
      2  BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
      3  request
      4  FROM v$lock, v$session
      5  WHERE v$lock.TYPE = 'TX'
      6  AND v$lock.SID = v$session.SID
      7  AND v$session.username = USER;
      no rows selected
    SQL> select MACHINE from v$session where sid = :sid;
    SP2-0552: Bind variable "SID" not declared.
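    SP2-0552 just means the bind variable was never declared in SQL*Plus; a minimal sketch of the intended check, using the blocking SID 144 found above, would be:
    SQL> variable sid number
    SQL> exec :sid := 144
    SQL> select machine from v$session where sid = :sid;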

  • My query take long time..

    The output of tkprof of my trace file is :
    SELECT ENEXT.NUM_PRSN_EMPLY ,ENEXT.COD_BUSUN ,ENEXT.DAT_CALDE ,ENEXT.COD_SHFT
    FROM
    AAC_EMPLOYEE_ENTRY_EXITS5_VIW ENEXT ,PDS.PDS_EMPLOYEES EMPL ,
    PDS.PDS_EMPLOYMENT_TYPES EMPTYP ,PDS.PDS_PAY_CONDITIONS PAYCON WHERE
    ENEXT.DAT_CALDE BETWEEN :B6 AND :B5 AND ENEXT.NUM_PRSN_EMPLY IN (SELECT
    ATT21 FROM APPS.GLOBAL_TEMPS WHERE ATT1 = 'PRSN') AND ENEXT.NUM_PRSN_EMPLY =
    EMPL.NUM_PRSN_EMPLY AND EMPL.EMTYP_COD_EMTYP = EMPTYP.COD_EMTYP AND
    EMPTYP.LKP_COD_STA_PAY_EMTYP <> 3 AND
    NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1 AND EMPL.PCOND_COD_STA_PCOND
    = PAYCON.COD_STA_PCOND AND NVL(EMPL.LKP_MNTLY_WITHOUT_ENEXT_EMPLY,2) <> 1
    AND PAYCON.LKP_FLG_STA_PAY_PCOND = 1 AND ENEXT.DAT_CALDE >=
    EMPL.DAT_EMPLT_EMPLY AND ENEXT.DAT_CALDE <= NVL(EMPL.DAT_DSMSL_EMPLY,
    TO_DATE('15001229','YYYYMMDD')) AND 1 = (CASE WHEN
    ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND
    ENEXT.TYP_DAY BETWEEN 4 AND 6 THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2
    AND ENEXT.LKP_CAT_SHFT_SHTAB = 1 AND ENEXT.TYP_DAY NOT BETWEEN 4 AND 6 THEN
    1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 2 AND ENEXT.LKP_CAT_SHFT_SHTAB = 2
    THEN 0 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
    1 THEN 1 WHEN ENEXT.LKP_STA_HOLIDAY_CALNR = 1 AND ENEXT.LKP_CAT_SHFT_SHTAB =
    2 THEN 0 END) AND ENEXT.LKP_COD_DPUT_BUSUN = NVL(:B4 ,
    ENEXT.LKP_COD_DPUT_BUSUN) AND ENEXT.LKP_COD_MANAG_BUSUN = NVL(:B3 ,
    ENEXT.LKP_COD_MANAG_BUSUN) AND ENEXT.COD_BUSUN = NVL(:B2 , ENEXT.COD_BUSUN)
    AND ENEXT.COD_CAL = NVL(COD_CAL, ENEXT.COD_CAL) AND ENEXT.NUM_PRSN_EMPLY =
    NVL(:B1 , ENEXT.NUM_PRSN_EMPLY) AND ENEXT.COD_SHFT IN (SELECT
    SHFTBL.COD_SHTAB FROM AAC_SHIFT_TABLES SHFTBL WHERE
    SHFTBL.LKP_CAT_SHFT_SHTAB = 1) AND ENEXT.DAT_CALDE NOT IN (SELECT ABN.DAT
    FROM APPS.AAC_EMPL_EN_EX_ABNORMAL_VIW ABN WHERE ABN.PRSN =
    ENEXT.NUM_PRSN_EMPLY AND ABN.DAT BETWEEN :B6 AND :B5 ) AND ENEXT.DAT_CALDE
    IN (SELECT EMPENEXT.DAT_STR_SHFT_ENEXT FROM AAC.AAC_EMPLOYEE_ENTRY_EXITS
    EMPENEXT WHERE EMPENEXT.EMPLY_NUM_PRSN_EMPLY = EMPL.NUM_PRSN_EMPLY AND
    EMPENEXT.DAT_STR_SHFT_ENEXT BETWEEN :B6 AND :B5 AND
    EMPENEXT.LKP_FLG_STA_ENEXT <> 3) ORDER BY ENEXT.NUM_PRSN_EMPLY,
    ENEXT.DAT_CALDE
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        2      0.00       0.00          0          0          0           0
    Execute      2      0.00       0.00          0          0          0           0
    Fetch        2     40.45      40.30        306   17107740          0          24
    total        6     40.45      40.30        306   17107740          0          24
    What is wrong in my query?
    Why does it take so long?

    user13344656 wrote:
    What is wrong in my query?
    Why does it take so long?
    See the PL/SQL forum FAQ:
    https://forums.oracle.com/forums/ann.jspa?annID=1535
    *3. How to improve the performance of my query? / My query is running slow.*
    SQL and PL/SQL FAQ
    for instructions on what information to post and how to format it.

  • Using Word Easy Table Under Report Generation takes long time to add data points to table and generate report

    Hi All,
    We used the Report Generation Toolkit to generate the report in Word, and with the other APIs under it we get good reports.
    But when there are more data points (> 100 on all channels) it takes a long time to write all the data, create a table in Word, and generate the report.
    Any suggestions on how to make this happen in a few seconds?
    Please assist.

    Well, I just tried my suggestion.  I simulated a 24-channel data producer (I actually generated 25 numbers -- the first number was the row number, followed by 24 random numbers) and generated 100 of these for a total of 2500 double-precision values.  I then saved this table to Excel and closed the file.  I then opened Word (all using RGT), wrote a single text line "Text with Excel", inserted the previously-created "Excel Object", and saved and closed Word.
    First, it worked (sort of).  The Table in Word started on a new page, and was in a very tiny font (possibly trying to fit 25 columns on a page?  I didn't inspect it very carefully).  This is probably "too much data" to really try to write the whole table, unless you format it for, say, 3 significant figures.
    Now, timing.  I ran this four times, two duplicate sets, one with Excel and Word in "normal" mode, one in "minimized".  To my surprise, this didn't make a lot of difference (minimized was less than 10% faster).  Here are the approximate times:
         Generate the data -- about 1 millisecond.
         Write the Excel Report -- about 1.5 seconds
         Write the Word Report -- about 10.5 seconds
    Seems to me this is way faster than trying to do this directly in Word.
    Bob Schor

  • Query takes long time on multiprovider

    Hi,
    When I execute a query on the MultiProvider, it takes a very long time and doesn't show the results; it just keeps processing. I have executed the report for only one day, but it still doesn't show any result. When I execute it on the cube, however, it executes quickly and shows the result.
    Actually, I added one more cube to the MultiProvider and then transported that MultiProvider to QA and PRD. The transport went through successfully. After this I am unable to execute the reports on that MultiProvider. What might be the cause? Your help is appreciated.
    Thanks
    Annie

    Hi Annie.......
    Checklist for the performance of a Query........from a DOc........
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do review of order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    16. Check Sequential vs Parallel read on Multiproviders.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.
    24. Leverage characteristics or navigational attributes rather than hierarchies. Using a hierarchy requires reading temporary hierarchy tables and creates additional overhead compared to characteristics and navigational attributes. Therefore, characteristics or navigational attributes result in significantly better query performance than hierarchies, especially as the size of the hierarchy (e.g., the number of nodes and levels) and the complexity of the selection criteria increase.
    25. If hierarchies are used, minimize the number of nodes to include in the query results. Including all nodes in the query results (even the ones that are not needed or blank) slows down the query processing. The "not assigned" nodes in the hierarchy should be filtered out, and you should use a variable to reduce the number of hierarchy nodes selected.
    Also check this.........Recommendations for Modeling MultiProviders
    http://help.sap.com/saphelp_nw70/helpdata/EN/43/5617d903f03e2be10000000a1553f6/frameset.htm
    Hope this helps......
    Regards,
    Debjani......

  • Oracle SQL Select query takes long time than expected.

    Hi,
    I am facing a problem with a SQL select statement. The select query takes a long time to return from the database.
    The query is as follows.
    select /*+rule */ f1.id,f1.fdn,p1.attr_name,p1.attr_value from fdnmappingtable f1,parametertable p1 where p1.id = f1.id and ((f1.object_type ='ne_sub_type.780' )) and ( (f1.id in(select id from fdnmappingtable where fdn like '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#%')))order by f1.id asc
    This query is taking more than 4 seconds to get the results in a system where the DB is running for more than 1 month.
    The same query is taking very few milliseconds (50-100ms) in a system where the DB is freshly installed and the data in the tables are same in both the systems.
    Kindly advise what is going wrong.
    Regards,
    Purushotham

    SQL> @/alcatel/omc1/data/query.sql
    2 ;
    9 rows selected.
    Execution Plan
    Plan hash value: 3745571015
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    | 1 | SORT ORDER BY | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    | 4 | TABLE ACCESS FULL | PARAMETERTABLE |
    |* 5 | TABLE ACCESS BY INDEX ROWID| FDNMAPPINGTABLE |
    |* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    |* 7 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
    |* 8 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    Predicate Information (identified by operation id):
    5 - filter("F1"."OBJECT_TYPE"='ne_sub_type.780')
    6 - access("P1"."ID"="F1"."ID")
    7 - filter("FDN" LIKE '0=#1#/14=#S0058-3#/17=#S0058-3#/18=#1#/780=#5#
    8 - access("F1"."ID"="ID")
    Note
    - rule based optimizer used (consider using cbo)
    Statistics
    0 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    0 bytes sent via SQL*Net to client
    0 bytes received via SQL*Net from client
    0 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    9 rows processed
    SQL>
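    The Note at the bottom of that plan says the rule based optimizer was used, so one thing worth trying (a sketch using the table names from the query, not a guaranteed fix) is dropping the /*+rule */ hint and making sure both tables have current statistics:
    -- sketch: gather statistics so the cost-based optimizer can be used
    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'FDNMAPPINGTABLE', cascade => TRUE);
    EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'PARAMETERTABLE',  cascade => TRUE);
    The aged system behaving worse than the fresh install with the same data would be consistent with drifted or missing statistics.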

  • Select query take long time

    Hi All.
    When I execute the select query from the view it takes about 00:00:45:12 to pull the data, but when I execute the same query on another system (a different database with the same table structure) it takes about 00:00:02:05.
    1) I have tried dropping and recreating the index, then I tried exec dbms_stats.gather_table_stats; still no luck.
    Please help me understand the reason for the difference in response time.
    Thanks
    sankar

    did you run the EXPLAIN PLAN?
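    For reference, a minimal way to get and display the plan (a sketch; your_view is a placeholder for the real view name) is:
    SQL> EXPLAIN PLAN FOR SELECT * FROM your_view;
    SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);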

  • Query takes long time to return results.

    I am on Oracle database 10g Enterprise Edition Release 10.2.0.4.0 – 64 bit
    This query takes about 58 seconds to return 180 rows...
             SELECT order_num,
                    order_date,
                    company_num,
                    customer_num,
                    address_type,
                    create_date as address_create_date,
                    contact_name,
                    first_name,
                    middle_init,
                    last_name,
                    company_name,
                    street_address_1,
                    customer_class,
                    city,
                    state,
                    zip_code,
                    country_code,
                    MAX(decode(media_type,
                               'PHH',
                               phone_area_code || '''' || phone_number,
                               NULL)) home_phone,
                    MAX(decode(media_type,
                               'PHW',
                               phone_area_code || '''' || phone_number,
                               NULL)) work_phone,
                    address_seq_num,
                    street_address_2
               FROM (SELECT oh.order_num order_num,
                            oh.order_datetime order_date,
                            oh.company_num company_num,
                            oh.customer_num customer_num,
                            ad.address_type address_type,
                            c.create_date create_date,
                            con.first_name || '''' || con.last_name contact_name,
                            con.first_name first_name,
                            con.middle_init middle_init,
                            con.last_name last_name,
                            ad.company_name company_name,
                            ad.street_address_1 street_address_1,
                            c.customer_class customer_class,
                            ad.city city,
                            ad.state state,
                            ad.zip_code zip_code,
                            ad.country_code,
                            cph.media_type media_type,
                            cph.phone_area_code phone_area_code,
                            cph.phone_number phone_number,
                            ad.address_seq_num address_seq_num,
                            ad.street_address_2 street_address_2
                       FROM reporting_base.gt_gaft_orders gt,
                            doms.us_ordhdr   oh,
                            doms.us_address  ad,
                            doms.us_customer c,
                            doms.us_contact  con,
                            doms.us_contph   cph
                      WHERE oh.customer_num = c.customer_num(+)
                        AND oh.customer_num = ad.customer_num(+)
                        AND ad.customer_num = c.customer_num
                        AND (   ad.address_type = 'B'
                             OR (    ad.address_type = 'S'
                                 AND ad.address_seq_num = oh.ship_to_seq_num ) )
                        AND ad.customer_num = con.customer_num(+)
                        AND ad.address_type = con.address_type(+)
                        AND ad.address_seq_num = con.address_seq_num(+)
                        AND con.customer_num = cph.customer_num(+)
                        AND con.contact_id = cph.contact_id(+)
                        AND oh.order_num = gt.order_num
                        AND oh.business_unit_id = gt.business_unit_id)
              GROUP BY order_num,
                       order_date,
                       company_num,
                       customer_num,
                       address_type,
                       create_date,
                       contact_name,
                       first_name,
                       middle_init,
                       last_name,
                       company_name,
                       street_address_1,
                       customer_class,
                       city,
                       state,
                       zip_code,
                       country_code,
                       address_seq_num,
                       street_address_2;
    This is the explain plan for the query:
    Plan
    SELECT STATEMENT FIRST_ROWS Cost: 21 Bytes: 207 Cardinality: 1
         18 HASH GROUP BY Cost: 21 Bytes: 207 Cardinality: 1
               17 NESTED LOOPS OUTER Cost: 20 Bytes: 207 Cardinality: 1
                     14 NESTED LOOPS OUTER Cost: 16 Bytes: 183 Cardinality: 1
                           11 FILTER
                                 10 NESTED LOOPS OUTER Cost: 12 Bytes: 152 Cardinality: 1
                                       7 NESTED LOOPS OUTER Cost: 8 Bytes: 74 Cardinality: 1
                                             4 NESTED LOOPS OUTER Cost: 5 Bytes: 56 Cardinality: 1
                                                   1 TABLE ACCESS FULL TABLE (TEMP) REPORTING_BASE.GT_GAFT_ORDERS Cost: 2 Bytes: 26 Cardinality: 1
                                                   3 TABLE ACCESS BY INDEX ROWID TABLE DOMS.US_ORDHDR Cost: 3 Bytes: 30 Cardinality: 1
                                                         2 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USORDHDR_IXUPK_ORDNUMBUID Cost: 2 Cardinality: 1
                                             6 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CUSTOMER Cost: 3 Bytes: 18 Cardinality: 1 Partition #: 11
                                                   5 INDEX UNIQUE SCAN INDEX (UNIQUE) DOMS.USCUSTOMER_IXUPK_CUSTNUM Cost: 2 Cardinality: 1
                                       9 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_ADDRESS Cost: 4 Bytes: 156 Cardinality: 2 Partition #: 13
                                             8 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USADDR_IXUPK_CUSTATYPASEQ Cost: 3 Cardinality: 2
                           13 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTACT Cost: 4 Bytes: 31 Cardinality: 1 Partition #: 15
                                 12 INDEX RANGE SCAN INDEX DOMS.USCONT_IX_CNATAS Cost: 3 Cardinality: 1
                     16 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE DOMS.US_CONTPH Cost: 4 Bytes: 24 Cardinality: 1 Partition #: 17
                            15 INDEX RANGE SCAN INDEX (UNIQUE) DOMS.USCONTPH_IXUPK_CUSTCONTMEDSEQ Cost: 3 Cardinality: 1
    The cost is good and all indexes are used; however, the time to return the data is very high.
    Any ideas to make the query faster?
    Thanks

    Hi, here is the tkprof output as requested by Rob..
    TKPROF: Release 10.2.0.4.0 - Production on Mon Jul 13 09:07:09 2009
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Trace file: axispr1_ora_15293.trc
    Sort options: default
    count    = number of times OCI procedure was executed
    cpu      = cpu time in seconds executing
    elapsed  = elapsed time in seconds executing
    disk     = number of physical reads of buffers from disk
    query    = number of buffers gotten for consistent read
    current  = number of buffers gotten in current mode (usually for update)
    rows     = number of rows processed by the fetch or execute call
    SELECT ORDER_NUM, ORDER_DATE, COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE,
      CREATE_DATE AS ADDRESS_CREATE_DATE, CONTACT_NAME, FIRST_NAME, MIDDLE_INIT,
      LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1, CUSTOMER_CLASS, CITY, STATE,
      ZIP_CODE, COUNTRY_CODE, MAX(DECODE(MEDIA_TYPE, 'PHH', PHONE_AREA_CODE ||
      '''' || PHONE_NUMBER, NULL)) HOME_PHONE, MAX(DECODE(MEDIA_TYPE, 'PHW',
      PHONE_AREA_CODE || '''' || PHONE_NUMBER, NULL)) WORK_PHONE, ADDRESS_SEQ_NUM,
       STREET_ADDRESS_2
    FROM
    (SELECT OH.ORDER_NUM ORDER_NUM, OH.ORDER_DATETIME ORDER_DATE, OH.COMPANY_NUM
      COMPANY_NUM, OH.CUSTOMER_NUM CUSTOMER_NUM, AD.ADDRESS_TYPE ADDRESS_TYPE,
      C.CREATE_DATE CREATE_DATE, CON.FIRST_NAME || '''' || CON.LAST_NAME
      CONTACT_NAME, CON.FIRST_NAME FIRST_NAME, CON.MIDDLE_INIT MIDDLE_INIT,
      CON.LAST_NAME LAST_NAME, AD.COMPANY_NAME COMPANY_NAME, AD.STREET_ADDRESS_1
      STREET_ADDRESS_1, C.CUSTOMER_CLASS CUSTOMER_CLASS, AD.CITY CITY, AD.STATE
      STATE, AD.ZIP_CODE ZIP_CODE, AD.COUNTRY_CODE, CPH.MEDIA_TYPE MEDIA_TYPE,
      CPH.PHONE_AREA_CODE PHONE_AREA_CODE, CPH.PHONE_NUMBER PHONE_NUMBER,
      AD.ADDRESS_SEQ_NUM ADDRESS_SEQ_NUM, AD.STREET_ADDRESS_2 STREET_ADDRESS_2
      FROM REPORTING_BASE.GT_GAFT_ORDERS GT, DOMS.US_ORDHDR OH, DOMS.US_ADDRESS
      AD, DOMS.US_CUSTOMER C, DOMS.US_CONTACT CON, DOMS.US_CONTPH CPH WHERE
      OH.ORDER_NUM = GT.ORDER_NUM AND OH.BUSINESS_UNIT_ID = GT.BUSINESS_UNIT_ID
      AND OH.CUSTOMER_NUM = C.CUSTOMER_NUM(+) AND OH.CUSTOMER_NUM =
      AD.CUSTOMER_NUM(+) AND AD.CUSTOMER_NUM = C.CUSTOMER_NUM AND (
      AD.ADDRESS_TYPE = 'B' OR ( AD.ADDRESS_TYPE = 'S' AND AD.ADDRESS_SEQ_NUM =
      OH.SHIP_TO_SEQ_NUM ) ) AND AD.CUSTOMER_NUM = CON.CUSTOMER_NUM(+) AND
      AD.ADDRESS_TYPE = CON.ADDRESS_TYPE(+) AND AD.ADDRESS_SEQ_NUM =
      CON.ADDRESS_SEQ_NUM(+) AND CON.CUSTOMER_NUM = CPH.CUSTOMER_NUM(+) AND
      CON.CONTACT_ID = CPH.CONTACT_ID(+) ) GROUP BY ORDER_NUM, ORDER_DATE,
      COMPANY_NUM, CUSTOMER_NUM, ADDRESS_TYPE, CREATE_DATE, CONTACT_NAME,
      FIRST_NAME, MIDDLE_INIT, LAST_NAME, COMPANY_NAME, STREET_ADDRESS_1,
      CUSTOMER_CLASS, CITY, STATE, ZIP_CODE, COUNTRY_CODE, ADDRESS_SEQ_NUM,
      STREET_ADDRESS_2
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch      257      0.04       0.05         45          0          0        6421
    total      257      0.04       0.05         45          0          0        6421
    Misses in library cache during parse: 0
    Parsing user id: 126
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch      257      0.04       0.05         45          0          0        6421
    total      257      0.04       0.05         45          0          0        6421
    Misses in library cache during parse: 0
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      0      0.00       0.00          0          0          0           0
    Fetch        0      0.00       0.00          0          0          0           0
    total        0      0.00       0.00          0          0          0           0
    Misses in library cache during parse: 0
        1  user  SQL statements in session.
        0  internal SQL statements in session.
        1  SQL statements in session.
    Trace file: axispr1_ora_15293.trc
    Trace file compatibility: 10.01.00
    Sort options: default
           1  session in tracefile.
           1  user  SQL statements in trace file.
           0  internal SQL statements in trace file.
           1  SQL statements in trace file.
           1  unique SQL statements in trace file.
         289  lines in trace file.
           83  elapsed seconds in trace file.
    Thanks in advance!
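    One thing those numbers hint at (257 fetches for 6421 rows, and 83 elapsed seconds in the trace against 0.05s of fetch time) is that much of the wait may be between fetches rather than in the database; in SQL*Plus a larger fetch array is a cheap experiment (a sketch, not a diagnosis):
    SQL> SET ARRAYSIZE 500
    SQL> SET TIMING ON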

  • Select query takes long time....

    Hi Experts,
    I am using a select query in which the inspection lot is in one table and the order no. is in another table. This select query is taking a very long time; what is the problem with it? Please guide us.
    select b~PRUEFLOS b~MBLNR b~CPUDT a~AUFNR a~matnr a~LGORT a~bwart
    a~menge a~ummat a~sgtxt a~xauto
    into corresponding fields of table itab
    *into table itab
    from mseg as a inner join qamb as b
    on a~mblnr = b~mblnr
    and a~zeile = b~zeile
    where b~PRUEFLOS in insp
    and  b~cpudt in date1
    and b~typ = '3'
    and a~bwart = '321'
    and a~aufnr in aufnr1.
    Yusuf

    Hi,
    Instead of using 'into corresponding fields of table itab', use 'into table itab',
    because with 'corresponding fields' the system has to match every field before placing your data; declare an appropriate internal table and use 'into table itab' instead.
    One more thing: don't use joins, because joins will decrease your performance; use 'for all entries' instead and mention all the key fields in the where condition.
    OK.
    Reward points for helpful answers.

  • Query take long time in fetching when used within a procedure

    The Database is : Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    The query takes just a second from TOAD, but when used inside a procedure as a cursor it takes 3 to 5 minutes.
    The following is the tkprof information when running from the procedure.
    SELECT CHCLP.CLM_PRVDR_TYPE_LKPCD, CHCLP.PRVDR_LCTN_IID, TO_CHAR
    (CHCLP.MODIFIED_DATE, 'MM-dd-yyyy hh24:mi:ss') MODIFIED_DATE,
    CHCLP.PRVDR_LCTN_IDENTIFIER, CHCLP.CLM_HDR_CLM_LN_X_PVDR_LCTN_SID
    FROM
    CLM_HDR_CLM_LN_X_PRVDR_LCTN CHCLP WHERE CHCLP.CLAIM_HEADER_SID = :B1 AND
    CHCLP.CLAIM_LINE_SID IS NULL AND CHCLP.IDNTFR_TYPE_CID = 7
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        0      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1    110.79     247.79     568931     576111          0           3
    total        2    110.79     247.79     568931     576111          0           3
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 93 (CMSAPP) (recursive depth: 1)
    Rows     Execution Plan
    0 SELECT STATEMENT MODE: ALL_ROWS
    0 PARTITION RANGE (SINGLE) PARTITION:KEYKEY
    0 TABLE ACCESS MODE: ANALYZED (BY LOCAL INDEX ROWID) OF
    'CLM_HDR_CLM_LN_X_PRVDR_LCTN' (TABLE) PARTITION:KEYKEY
    0 INDEX MODE: ANALYZED (RANGE SCAN) OF
    'XAK1CLM_HDR_CLM_LN_X_PRVDR_LCT' (INDEX (UNIQUE))
    PARTITION:KEYKEY
    Execution plan when running just the query from TOAD is: (it comes out in a second)
    Plan
    SELECT STATEMENT ALL_ROWSCost: 6 Bytes: 100 Cardinality: 2                
         3 PARTITION RANGE SINGLE Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 1 Partitions accessed #13          
              2 TABLE ACCESS BY LOCAL INDEX ROWID TABLE CMSAPP.CLM_HDR_CLM_LN_X_PRVDR_LCTN Cost: 6 Bytes: 100 Cardinality: 2 Partition #: 2 Partitions accessed #13     
    Why would fetching take such a long time? Please let me know if you need any other information.
    Thank You.
    Edited by: spur230 on Apr 1, 2009 10:23 AM
    Edited by: spur230 on Apr 1, 2009 10:26 AM
    Edited by: spur230 on Apr 1, 2009 10:28 AM
    Edited by: spur230 on Apr 1, 2009 10:30 AM

    "Query just takes a second from toad"
    It's possible that the query starts returning rows in a second, but that's not the time required for the entire query.
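    A simple way (a sketch) to time the complete fetch in SQL*Plus, rather than the first screenful a GUI like TOAD shows, is:
    SQL> SET TIMING ON
    SQL> SET AUTOTRACE TRACEONLY STATISTICS   -- fetches every row but does not display them
    SQL> -- then run the same SELECT against CLM_HDR_CLM_LN_X_PRVDR_LCTN here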

  • Query Takes Longer time

    SELECT CAL_EMPCALENDAR.START_DATE as main,
    bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' ||
    CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
    TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
    TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
    bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' ||
    CAL_EMPCALENDAR.EMPLOYEE_ID as name,
    CAL_EMPCALENDAR.START_DATE as sdate,
    CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
    CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
    TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
    TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
    TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
    CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
    CAL_EMPCALENDAR.LATE_IN,
    CAL_EMPCALENDAR.EARLY_OUT,
    CAL_EMPCALENDAR.UNDER_TIME,
    CAL_EMPCALENDAR.OVERTIME,
    TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
    CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
    HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
    BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
    (SELECT shift_id
    FROM CAL_GRPWORKDAY
    WHERE CAL_GRPWORKDAY.calgrp_id =
    (SELECT calgrp_id
    FROM CAL_CALASSIGNMENT
    WHERE employee_id = CAL_EMPCALENDAR.employee_id
    AND CAL_CALASSIGNMENT.START_DATE <=
    CAL_EMPCALENDAR.START_DATE
    AND (CAL_CALASSIGNMENT.END_DATE is null or
    CAL_CALASSIGNMENT.END_DATE >=
    CAL_EMPCALENDAR.START_DATE))
    AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
    (SELECT max(entry_dt)
    FROM LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
    WHERE status = 'Approved'
    AND LV_APPSTATUSHIST.application_id = txn.application_id
    AND cale.reference_id = txn.txn_id
    AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id
    ) AS entry_dt,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 1
    and BIZUNIT_ID like 'SG')) F1,
    --TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,                            
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 2
    and bizunit_id like 'SG')) F2,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 3
    and bizunit_id like 'SG')) F3,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 4
    and bizunit_id like 'SG')) F4,
    (SELECT ENTITLEMENT + ADJUST
    FROM TAM_ALLOWANCE
    WHERE (WF_STATUS = 'Pending' OR WF_STATUS = 'Approved' OR
    WF_STATUS = 'Verified' OR WF_STATUS is Null OR
    WF_STATUS = 'No Action')
    and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
    AND ITEM_ID = (SELECT ITEM_ID
    FROM TAM_CLAIM_FORMAT
    WHERE SEQUENCE = 5
    and bizunit_id like 'SG')) F5
    From CAL_EMPCALENDAR, HRM_CURR_CAREER_V, CAL_SHIFT, HRM_EMPLOYEE
    Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
    AND (CAL_EMPCALENDAR.WF_STATUS = 'Approved' Or
    CAL_EMPCALENDAR.WF_STATUS = 'No Action')
    AND CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
    --and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
    AND CAL_EMPCALENDAR.START_DATE BETWEEN
    GREATEST(HRM_EMPLOYEE.COMMENCE_DATE,
    TO_DATE('1-4-2006', 'DD-MM-YYYY')) AND
    LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'),
    NVL(HRM_EMPLOYEE.CESSATION_DATE,
    TO_DATE('30-4-2006', 'DD-MM-YYYY')))
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
    And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
    -- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
    --AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
    --$P!{ExceptionSQL}
    --$P!{iHRFilterClause}
    --order by $P!{OrderBy}
    order by main
    Hi all, this query takes a very long time to run.
    In the explain plan, the table in bold uses a full table scan; the rest all go for index scans.
    The table has indexes on the columns referred to.
    Oracle version 9.2.0.6
    Message was edited by:
    Maran.E
    Message was edited by:
    Maran.E

    Maran,
    With code tags and indentation it should be easier to analyze, at least for you:
    SELECT CAL_EMPCALENDAR.START_DATE as main,
           bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' /' || CAL_EMPCALENDAR.EMPLOYEE_ID as secondary,
           TO_DATE('1-4-2006', 'DD-MM-YYYY') as FROM_DATE,
           TO_DATE('30-4-2006', 'DD-MM-YYYY') as TO_DATE,
           bit_empname(CAL_EMPCALENDAR.EMPLOYEE_ID) || ' / ' || CAL_EMPCALENDAR.EMPLOYEE_ID as name,
           CAL_EMPCALENDAR.START_DATE as sdate,
           CAL_EMPCALENDAR.OVERTIME_REASON as OTReason,
           CAL_EMPCALENDAR.POSTED_ON as POSTED_ON,
           TO_CHAR(CAL_EMPCALENDAR.START_DATE, 'Dy') as dayname,
           TAM_GET_ADJUSTED_IN(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_in,
           TAM_GET_ADJUSTED_OUT(CAL_EMPCALENDAR.EMPCALENDAR_ID) as adj_out,
           CAL_EMPCALENDAR.SHIFT_ID AS SHIFT_ABBREV,
           CAL_EMPCALENDAR.LATE_IN,
           CAL_EMPCALENDAR.EARLY_OUT,
           CAL_EMPCALENDAR.UNDER_TIME,
           CAL_EMPCALENDAR.OVERTIME,
           TAM_GET_LEAVE_DESC(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'ALL') Leave,
           CAL_EMPCALENDAR.EMPLOYEE_ID as empid,
           HRM_CURR_CAREER_V.DEPARTMENT_CODE as deptcode,
           BIT_CODEDESC(HRM_CURR_CAREER_V.DEPARTMENT_CODE) as deptname,
           (SELECT shift_id
            FROM   CAL_GRPWORKDAY
            WHERE  CAL_GRPWORKDAY.calgrp_id = (SELECT calgrp_id
                                               FROM   CAL_CALASSIGNMENT
                                               WHERE employee_id = CAL_EMPCALENDAR.employee_id
                                               AND CAL_CALASSIGNMENT.START_DATE <= CAL_EMPCALENDAR.START_DATE
                                               AND (   CAL_CALASSIGNMENT.END_DATE is null
                                                    or CAL_CALASSIGNMENT.END_DATE >= CAL_EMPCALENDAR.START_DATE))
            AND CAL_GRPWORKDAY.start_date = CAL_EMPCALENDAR.start_date) AS shift_id,
           (SELECT max(entry_dt)
        FROM   LV_APPSTATUSHIST, LV_TXN txn, CAL_EMPDAILYEVENT cale
            WHERE status = 'Approved'
            AND LV_APPSTATUSHIST.application_id = txn.application_id
            AND cale.reference_id = txn.txn_id
            AND cale.empcalendar_id = CAL_EMPCALENDAR.empcalendar_id) AS entry_dt,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE  SEQUENCE = 1
                           and BIZUNIT_ID like 'SG')) F1,
           --TAM_GET_ENT_AND_ADJUSTED(CAL_EMPCALENDAR.EMPCALENDAR_ID, 'SG', 1) F1,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE  SEQUENCE = 2
                           and    bizunit_id like 'SG')) F2,
           (SELECT ENTITLEMENT + ADJUST
            FROM   TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM   TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 3
                           and   bizunit_id like 'SG')) F3,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 4
                           and bizunit_id like 'SG')) F4,
           (SELECT ENTITLEMENT + ADJUST
            FROM TAM_ALLOWANCE
            WHERE (   WF_STATUS = 'Pending'
                   OR WF_STATUS = 'Approved'
                   OR WF_STATUS = 'Verified'
                   OR WF_STATUS is Null
                   OR WF_STATUS = 'No Action')
            and EMPCALENDAR_ID = CAL_EMPCALENDAR.EMPCALENDAR_ID
            AND ITEM_ID = (SELECT ITEM_ID
                           FROM TAM_CLAIM_FORMAT
                           WHERE SEQUENCE = 5
                           and bizunit_id like 'SG')) F5
    From CAL_EMPCALENDAR,
         HRM_CURR_CAREER_V,
         CAL_SHIFT,
         HRM_EMPLOYEE
    Where CAL_SHIFT.SHIFT_ID(+) = CAL_EMPCALENDAR.ACTUAL_SHIFT_ID
    AND   (   CAL_EMPCALENDAR.WF_STATUS = 'Approved'
           Or CAL_EMPCALENDAR.WF_STATUS = 'No Action')
    AND   CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_EMPLOYEE.EMPLOYEE_ID
    --and CAL_EMPCALENDAR.START_DATE between TO_DATE('1-4-2006','DD-MM-YYYY') AND TO_DATE('31-4-2006','DD-MM-YYYY')
    AND   CAL_EMPCALENDAR.START_DATE BETWEEN GREATEST(HRM_EMPLOYEE.COMMENCE_DATE, TO_DATE('1-4-2006', 'DD-MM-YYYY'))
                                         AND LEAST(TO_DATE('30-4-2006', 'DD-MM-YYYY'), NVL(HRM_EMPLOYEE.CESSATION_DATE, TO_DATE('30-4-2006', 'DD-MM-YYYY')))
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SG' || '%'
    And CAL_EMPCALENDAR.EMPLOYEE_ID like 'SGTAM001'
    And CAL_EMPCALENDAR.EMPLOYEE_ID = HRM_CURR_CAREER_V.EMPLOYEE_ID
    -- AND HRM_CURR_CAREER_V.DEPARTMENT_CODE like 'DPHR'
    --AND HRM_EMPLOYEE.EMPLOYMENT_TYPE_CODE like '$P!{EmploymentType}'
    --$P!{ExceptionSQL}
    --$P!{iHRFilterClause}
    --order by $P!{OrderBy}
    order by main
    Nicolas.
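    A further thought (a sketch only, built from the table and column names visible in the query, untested): the five per-row TAM_ALLOWANCE scalar subqueries differ only in TAM_CLAIM_FORMAT.SEQUENCE, so they could be collected in one pass and outer-joined to CAL_EMPCALENDAR on EMPCALENDAR_ID instead:
    SELECT a.empcalendar_id,
           MAX(CASE WHEN f.sequence = 1 THEN a.entitlement + a.adjust END) AS f1,
           MAX(CASE WHEN f.sequence = 2 THEN a.entitlement + a.adjust END) AS f2,
           MAX(CASE WHEN f.sequence = 3 THEN a.entitlement + a.adjust END) AS f3,
           MAX(CASE WHEN f.sequence = 4 THEN a.entitlement + a.adjust END) AS f4,
           MAX(CASE WHEN f.sequence = 5 THEN a.entitlement + a.adjust END) AS f5
    FROM   tam_allowance a
           JOIN tam_claim_format f ON f.item_id = a.item_id
                                  AND f.bizunit_id LIKE 'SG'
    WHERE  (a.wf_status IS NULL
            OR a.wf_status IN ('Pending', 'Approved', 'Verified', 'No Action'))
    GROUP BY a.empcalendar_id;
    That turns five correlated lookups per CAL_EMPCALENDAR row into one aggregate pass, at the cost of one extra join in the main query.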

  • DataBlAppend takes long time on registered data

    Greetings! I'm using DIAdem 2012 on a Win7/64-bit computer (16GB memory and solid-state hard drive).  I work with one tdms file at a time but that file can be up to 8GB so I bring it into the Data Portal via the Register Data option.  The tdms file contains about 40 channels and each channel has about 50M datapoints.  If it matters, the data type of each channel is U16 with appropriate scaling factors in the channel parameters.  I display one channel in View and my goal is to set the two cursors on either side of an "event" then copy that segment of data between the cursors to a new channel in another group.  Actually, there are about ten channels that I want to copy exactly the same segment out to ten new channels.  This is the standard technique for programmatically "copying-flagged-data-points", i.e. reading and using the X1,X2 cursor position.  I am using DataBlAppend to write these new channels (I have also tried DataBlCopy with identical results).  My VBS script works exactly as I desire.  The new channel group containing the segments will be written out as a tdms file using another script. 
    Copying out "small" segments takes a certain amount of time but copying larger segments takes an increasing amount of time, i.e. the increase is not linear.  I would like to do larger segments but I don't like waiting 20-30 minutes per segment.  The time culprit is the script line "Call DataBlAppend (CpyS, CurPosX1, CurPosX2-CurPosX1 +1, CpyT)" where CpyS and CpyT are strings containing the names of the source and target channels respectively (the empty target channels were previously created in the new group). 
    My question is, "is there a faster way to do this within DIAdem?"  The amount of data being written to the new group can range from 20-160MB but I need to be able to write up to 250MB.  TDMS files of this size can normally be loaded or written out quite quickly on this computer under normal circumstances, so what is slowing this process down?  Thanks!

    Greetings, Brad!! 
    I agree that DataBlCopy is fast when working "from channels loaded in the Data Portal" but the tdms file I am working with is only "registered" in the portal.  I do not know exactly why that makes a difference except that it must go out to the disk in order to read each channel.  The function DataBlCopy (or Append) is a black box to me so I was hoping for some insight as to why it is behaving like it is under these circumstances.  However, your suggestion to try the function DataFileLoadRed() may bear fruit!  I wrote up a little demo script to copy out a "large" segment from a 8GB file registered in the portal using DataFileLoadRed and it is much, much faster!  It was a little odd selecting "IntervalCount" as my method and the total number of intervals the same as the total number of data points between my begin and end points, and "eInterFirstValue" [in the interval] as the reduction method, but the results speak for themselves.  I will need to do some thorough checking to verify that I am getting exactly the data I want but DataFileLoadRed does look promising as an alternative.  Thanks!
    Chris

  • Query takes long time to load

    Hello experts,
    My users have queries on a MultiProvider. Today, the queries take a long time to load. When they want to put a filter on it, the selection pop-up takes a long time to show the possible selections.
    Is there any way to find out what causes the long wait time? And what can I do about it, to optimize it?
    Thanks in advance.

    Hi Erica,
    Please check the load on the system (how many users are logged in?).
    Please check : [Checklist for Query Performance|http://sapbwneelam.blogspot.com/2007/10/checklist-for-query-performance.html] some tips for query performance.
    Hope it Helps
    Srini

  • Query takes long time from one machine but 1 sec from another machine

    I have an update query, which is like an application patch, that takes 1 sec from one machine. I need to apply it on the other machine where the application is installed.
    Both applications are the same and connect to the same DB server. The query run from the second machine takes a very long time,
    but I can update other things from the second machine.
    Does this have anything to do with page size or line size?
    Urgent, please.

    Hi,
    Everything is the same except that it is run from a different machine.
    Could it be a client version issue, given that the script is quite wide, around 240 chars?
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes | Cost |
    | 0 | UPDATE STATEMENT | | | | |
    | 1 | UPDATE | IDI_INTERFACE_MST | | | |
    | 2 | INDEX UNIQUE SCAN | PK_IDI_INTMST | | | |
    Note: rule based optimization, 'PLAN_TABLE' is old version
    10 rows selected.
    Message was edited by:
    Maran.E
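    The note in that plan ('rule based optimization, PLAN_TABLE is old version') is worth clearing up first; a sketch of the usual steps (the table name is taken from the plan above) is:
    SQL> @?/rdbms/admin/utlxplan.sql   -- recreate PLAN_TABLE from the script shipped with the database
    SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'IDI_INTERFACE_MST', cascade => TRUE);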

  • Insert query takes long time

    Hi,
    I have written a procedure that does the following :
    1- Creates a temp table
    2- INSERT /*+ append */ INTO <temp table>
    (Select ......,
    (select sum(amt)
    from tbl1 b
    where b.col1=a.col2 and b.col3=a.col3 and (b.col4=a.col4 OR b.col5=a.col5) ...
    grp by col1,col2,col3),
    from tbl1 a
    where a.col1=.................
    3- Query to delete the duplicate rows from the temp table....
    4- Populate summarized data from TEMP TABLE to tbl1
    5- drops the temp table.
    Now this procedure takes around 2-3 minutes for 2 lakh records on the local DB, but when executed on the production DB, called from the application, it takes 4 hours.
    When the log was reviewed, it showed 4 hours just to insert the data into the temp table.
    (Note: this problem occurs only after the procedure has been executed continuously for a week or so. When the BEA server is restarted and the application is run, it works fine, but after a few days the performance deteriorates again.)
    What could be the reason for this weird problem? Please give me some tips that can help me figure it out.
    Thanks
    Shruts.

    **I have renamed the cols and table.
    CREATE OR REPLACE PROCEDURE PROC_TEST(p_1 IN number,p_2 IN number,p_3 IN varchar2)
    is
    vsql varchar2(5000);
    Error_Message varchar2(200);
    vcalc varchar2(150);
    v_rec_present number;
    begin
         SELECT nvl(count(*),0) into v_rec_present
         FROM USER_TABLES
         WHERE upper(TABLE_NAME)=upper('TMP_TEST');
         if v_rec_present>0 then
         EXECUTE IMMEDIATE 'DROP table TMP_TEST';
         end if;
         EXECUTE IMMEDIATE 'CREATE TABLE TMP_TEST as select * from TBL1 where 1=2';
         vsql:= 'select distinct NULL as col1,a.scol2,a.ncol3,a.scol4, a.ncol5,a.scol6,a.dcol7,a.dcol8,null as dcol9,a.scol10,';
         vsql:=vsql||' nvl((select sum(t.amount) as TOTAL_AMT ';
         vsql:=vsql||' from TBL1 t';
         vsql:=vsql||' where t.ncol3='||p_1;
         vsql:=vsql||' and t.ncol3=a.ncol3';
         vsql:=vsql||' and t.scol4=a.scol4 and t.scol10=a.scol10 ';
         vsql:=vsql||' and (t.ncol5=a.ncol5 OR t.scol6=a.scol6)';
         vsql:=vsql||' and t.ncol11=a.ncol11      and t.ccol12=a.ccol12';
         vsql:=vsql||' and t.dcol7=a.dcol7 and t.dcol8=a.dcol8';     
         vsql:=vsql||' group by t.ncol3,t.scol4,';
         vsql:=vsql||' t.dcol7,t.dcol8, t.scol10,t.ncol11, t.ccol12),0) as amount,';
         vsql:=vsql||' a.ccol12,a.ncol11,';
         vsql:=vsql||' a.ncol13, null as ncol14, null as scol15, null as dcol16, null as scol17, null as scol18,';
         vsql:=vsql||' a.scol19, a.ccol20,a.sUser, sysdate as date_ins, a.site_ins,null as description';
         vsql:=vsql||' from TBL1 a';
         vsql:=vsql||' where a.ncol3='||p_1;
         if p_2=1 then
         vcalc:=' SYSDATE ';
         else
              vcalc:='to_date((to_char(to_date('''||p_3||''',''dd-mon-yyyy hh:mi:ss am''),''mm/dd/yyyy'') ) ,''mm/dd/yyyy'')' ;
         end if;
         vsql:=vsql||' and (case when a.ccol20=''TC'' and a.ccol12=''R''';
         vsql:=vsql||' then (case when '||p_2||'=0 and to_date(to_char(a.dcol9,''mm/dd/yyyy''),''mm/dd/yyyy'')='||vcalc;
         vsql:=vsql||' then ''TRUE'' ELSE ''FALSE'' END)';
         vsql:=vsql||' else ''TRUE'' END)=''TRUE''';
         /* if accrual flag=0 and calcenddt<>NULL then */
         if p_2=0 and p_3 is not NULL then
              vsql:=vsql||' and (a.dcol9 is null oR     (to_char(a.dcol9,''YYYY'')<=to_char(sysdate,''YYYY'') and';
              vsql:=vsql||' to_date(to_char(a.dcol9,''mm/dd/yyyy hh24:mi:ss''),''mm/dd/yyyy hh24:mi:ss'')<=to_date((to_char(to_date('''||p_3||''',''dd-mon-yyyy hh:mi:ss am''),''mm/dd/yyyy'') || ''23:59:59'') ,''mm/dd/yyyy hh24:mi:ss'')))';
         elsif p_2=1 then
              vsql:=vsql||' and (a.dcol9 is null oR     ((to_char(a.dcol9,''YYYY'')<=to_char(sysdate,''YYYY'') or';
              vsql:=vsql||' to_char(a.dcol9,''YYYY'')>to_char(sysdate,''YYYY''))))';
    end if;
    vsql:= 'INSERT /*+ append */ INTO TMP_TEST '|| vsql;
    EXECUTE IMMEDIATE vsql;
    EXECUTE IMMEDIATE 'truncate table TBL1';
    EXECUTE IMMEDIATE 'INSERT INTO TBL1 SELECT * FROM TMP_TEST';
    EXECUTE IMMEDIATE 'DROP table TMP_TEST';
    vsql:='DELETE from TBL1 a';
    vsql:=vsql||' where a.ncol3='||p_1;
    vsql:=vsql||' and ROWID<>(select min(ROWID)';
    vsql:=vsql||' from TBL1 b';
    vsql:=vsql||' where (a.scol6=b.scol6 OR a.ncol5=b.ncol5)';
    vsql:=vsql||' AND b.ncol3='||p_1;
    vsql:=vsql||' and a.ncol3=b.ncol3 and a.scol4=b.scol4 and a.scol10=b.scol10';
    vsql:=vsql||' and a.dcol7=b.dcol7';
    vsql:=vsql||' and a.dcol8=b.dcol8';
    vsql:=vsql||' and a.ccol12=b.ccol12 and a.ncol11=b.ncol11';
    vsql:=vsql||' group by a.scol4,a.dcol7,a.dcol8,a.scol10,a.ccol12,a.ccol20)';
    EXECUTE IMMEDIATE vsql;
    EXCEPTION
         WHEN OTHERS THEN
              Error_Message := 'Error in executing SP "PROC_TEST" '|| chr(10) ||'Error Code is ' || SQLCODE || Chr(10) || 'Error Message is ' || SQLERRM;
              dbms_output.put_line ('ERROR:-'||Error_message);
              raise;
    end;
    ******************************************************************************************
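    A side note on the approach itself (a sketch only, not a fix for the 4-hour insert): the temp table could be created once as a global temporary table instead of being created and dropped inside the procedure, which removes the run-time DDL:
    -- created once, outside the procedure; rows are private to each session
    CREATE GLOBAL TEMPORARY TABLE tmp_test
      ON COMMIT PRESERVE ROWS
      AS SELECT * FROM tbl1 WHERE 1 = 2;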
