Query performance - dynamic indexing?

Hi Experts,
I have a query like this:
WITH A AS (
SELECT /*+ materialize */
       A.NAME,
       B.pname,
       B.ptype,
       DECODE(C.ttype, 'L', 'Lab', 'Non-Lab') ttype,
       (CASE WHEN C.ttype = 'L' THEN C.RES_CODE ELSE NULL END) RES_CODE,
       (SELECT first_name || ' ' || last_name FROM resources WHERE name = C.ROLE) ROLE_desc,
       D.currency    Bill_curr,
       D.TOTALCOST   Bill_cost,
       E.currency    Home_curr,
       E.TOTALCOST   Home_cost,
       C.role        role_code,
       C.HOURS       HRS,
       D.TOTALAMOUNT,
       E.TOTALAMOUNT Home_amount
FROM   WORKPROG     C,
       WORKPROG_VAL D,
       WORKPROG_VAL E,
       PROJECTS     A,
       periods      B
WHERE  C.TNO = D.TNO
AND    C.TNO = E.TNO
AND    D.CURR_TYP = 'BILL'
AND    E.CURR_TYP = 'HM'
AND    A.NAME = C.PROJ_CODE
AND    C.PROJ_CODE LIKE 'JA%'
AND    C.TDATE BETWEEN B.start_date AND B.end_date
AND    B.ptype = 'SMON'
)
SELECT name, pname, ptype, ttype, res_code, role_desc, role_code,
       Bill_curr, SUM(Bill_cost), Home_curr, SUM(Home_cost),
       SUM(hrs), SUM(totalamount), SUM(Home_amount)
FROM   A
GROUP BY name, pname, ptype, ttype, res_code, role_desc, role_code, Bill_curr, Home_curr

The problem is that the WITH part takes no time to run; it is very fast. But the GROUP BY part takes ages, because the WITH part returns millions of records.
Is there any way that I can dynamically index the result set of the WITH query so that the GROUP BY runs faster?
Thanks.
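
For reference, a sketch of the usual alternative (column list abbreviated, and only one of several possible rewrites): Oracle cannot build an index on the result of an inline WITH subquery, so instead of materializing millions of rows and grouping them afterwards, the /*+ materialize */ hint can be dropped and the aggregation done directly, so the intermediate result never has to be spooled and re-read:

-- Sketch only: table and column names are taken from the query above, select list abbreviated.
SELECT a.name, b.pname, b.ptype,
       DECODE(c.ttype, 'L', 'Lab', 'Non-Lab')      AS ttype,
       CASE WHEN c.ttype = 'L' THEN c.res_code END AS res_code,
       c.role                                      AS role_code,
       d.currency                                  AS bill_curr,
       e.currency                                  AS home_curr,
       SUM(d.totalcost)                            AS bill_cost,
       SUM(e.totalcost)                            AS home_cost,
       SUM(c.hours)                                AS hrs,
       SUM(d.totalamount)                          AS bill_amount,
       SUM(e.totalamount)                          AS home_amount
FROM   workprog c, workprog_val d, workprog_val e, projects a, periods b
WHERE  c.tno = d.tno AND d.curr_typ = 'BILL'
AND    c.tno = e.tno AND e.curr_typ = 'HM'
AND    a.name = c.proj_code
AND    c.proj_code LIKE 'JA%'
AND    c.tdate BETWEEN b.start_date AND b.end_date
AND    b.ptype = 'SMON'
GROUP  BY a.name, b.pname, b.ptype,
       DECODE(c.ttype, 'L', 'Lab', 'Non-Lab'),
       CASE WHEN c.ttype = 'L' THEN c.res_code END,
       c.role, d.currency, e.currency;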

Caitanya wrote:
Thanks Someone.
By ages... I mean it kept running for 45 minutes after which I aborted it.
The indexes are being used and there is no 'FULL TABLE ACCESS' in the explain plan. We are using Oracle 9i.

It is quite unlikely (although possible) that fetching millions of rows is done effectively by using indexes; you should actually expect, and want, full table scans here. Only if the tables contained billions of rows would it be reasonable for the optimizer to choose an index access, but it will take hours to get millions of rows via index lookups and the corresponding single-row random accesses to the tables.
I assume that the statistics are not up to date, because a query that returns millions of rows is unlikely to have a realistic cost of 102. That cost would mean that Oracle estimates the query to complete in the time of 102 single-block reads; with a conservative guess of 10 ms per random single-block access, that is roughly 1 second.
So please post your explain plan here and, if possible, the total number of rows in the tables accessed by the query and the approximate number of rows the query is supposed to return, as base information.
Here are some (generic) instructions on how to generate a reasonable EXPLAIN PLAN output:
Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please put the \[code\] tag before and after the output (or the {code} tag before and after) to enhance the readability of what you post:
In SQL*Plus:
SET LINESIZE 130
EXPLAIN PLAN FOR <your statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Note that DBMS_XPLAN.DISPLAY is only available from 9i on.
In previous versions you could run the following in SQL*Plus (on the server) instead:
@?/rdbms/admin/utlxpls
A different approach in SQL*Plus:
SET AUTOTRACE ON EXPLAIN
<run your statement>;
This will also show the execution plan.
In order to get a better understanding of where your statement spends its time, you might want to turn on SQL trace as described here:
[When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
and post the "tkprof" output here, too.
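A generic SQL*Plus sketch of that approach (the trace file identifier and names here are illustrative, not taken from the linked thread):

ALTER SESSION SET tracefile_identifier = 'slow_groupby';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- run the slow statement here
ALTER SESSION SET EVENTS '10046 trace name context off';
-- then format the raw trace file found in user_dump_dest on the server:
-- tkprof <instance>_ora_<pid>_slow_groupby.trc slow_groupby.txt sys=no sort=prsela,exeela,fchela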
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/

Similar Messages

  • Select query performance improvement - Index on EDIDC table

    Hi Experts,
    I have a scenario wherein I have to select data from the table EDIDC. The select query being used is given below.
      SELECT  docnum
              direct
              mestyp
              mescod
              rcvprn
              sndprn
              upddat
              updtim
      INTO CORRESPONDING FIELDS OF TABLE t_edidc
      FROM edidc
      FOR ALL ENTRIES IN t_error_idoc
      WHERE
      upddat GE gv_date1 AND
      upddat LE gv_date2 AND
      updtim GE p_time AND
      status EQ t_error_idoc-status.
    As the volume of data is very high, our client requested that we add an index, or use an existing one, to improve the performance of the data selection query.
    Question:
    1.    How do we identify the index to be used?
    2.    On which fields should the indexing be done to improve the performance (if the available indexes don't cater to our case)?
    3.    What will be the impact on the table performance if we create a new index?
    Regards ,
    Raghav

    Question:
    1.    How do we identify the index to be used?
    Generally the index is selected automatically by the database optimizer. (You can still mention the index name in your select query by changing the syntax.)
    For your select query the second index will be picked automatically by the optimizer, because the select query has 'upddat' and 'updtim' in the sequence before 'status'.
    2.    On which fields should the indexing be done to improve the performance (if the available indexes don't cater to our case)?
    Create a new index with MANDT and the fields that are in the WHERE clause, in that sequence.
    3.    What will be the impact on the table performance if we create a new index?
    Since the newly created index is only the fourth index on the table, there shouldn't be any side effects.
    After creating the index, check the change in performance of the current program and also of some other programs that have select queries on EDIDC (preferably with various types of WHERE clauses) to verify that the new index does not have a negative impact on their performance. Additionally, if possible, check whether you can avoid INTO CORRESPONDING FIELDS.
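    At the database level, the index suggested above would look roughly like the following (the name "EDIDC~Z01" is an illustrative customer-index name; in an SAP system such an index is normally defined in SE11 and transported rather than created directly with SQL):

    -- Hypothetical secondary index: MANDT first, then the WHERE-clause fields in sequence.
    CREATE INDEX "EDIDC~Z01" ON edidc (mandt, upddat, updtim, status);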
    Regards ,
    Seth

  • SQL query performance difference with Index Hint in Oracle 10g

    Hi,
    I was having a problem with a SQL SELECT query that was taking around 20 seconds to return results. So, by hit and trial, I added an Oracle INDEX hint to the same query with the list of indexes of the tables, and now the results are retrieved within 10 milliseconds. I am not sure how this is working with the index hint.
    The query with out Index Hint:
    select /*+rule*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3  where FdnTab.id=52787 and paramTab1.id= FdnTab.id  and paramTab3.id = FdnTab.id  and paramTab3.attr_value = FdnTab2.fdn  and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2  where TemplateTab2.id=FdnTab.id  and ParamTab2.id=TemplateTab2.template_id  and ParamTab2.id=FdnTab2.id  and ParamTab2.attr_name=paramTab1.attr_name)  ==> EXECUTION TIME: 20 secs
    The same query with Index Hint:
    select /*+INDEX(fdnmappingtable[PRIMARY_KY_FDNMAPPINGTABLE],parametertable[PRIMARY_KY_PARAMETERTABLE])*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3 where FdnTab.id=52787 and paramTab1.id= FdnTab.id and paramTab3.id = FdnTab.id and paramTab3.attr_value = FdnTab2.fdn and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2 where TemplateTab2.id=FdnTab.id and ParamTab2.id=TemplateTab2.template_id and ParamTab2.id=FdnTab2.id and ParamTab2.attr_name=paramTab1.attr_name) ==> EXECUTION TIME: 10 milli secs
    Can anyone suggest what the real problem could be?
    Regards,
    Purushotham

    Sorry,
    The right query and the explain plan:
    select /*+rule*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3  where FdnTab.id=52787 and paramTab1.id= FdnTab.id  and paramTab3.id = FdnTab.id  and paramTab3.attr_value = FdnTab2.fdn  and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2  where TemplateTab2.id=FdnTab.id  and ParamTab2.id=TemplateTab2.template_id  and ParamTab2.id=FdnTab2.id  and ParamTab2.attr_name=paramTab1.attr_name) 
    SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 651267974
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    |* 1 | FILTER | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    | 4 | NESTED LOOPS | |
    |* 5 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    |* 6 | TABLE ACCESS BY INDEX ROWID| PARAMETERTABLE |
    |* 7 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE |
    | 8 | TABLE ACCESS BY INDEX ROWID | PARAMETERTABLE |
    |* 9 | INDEX RANGE SCAN | PRIMARY_KY_PARAMETERTABLE |
    | 10 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
    |* 11 | INDEX UNIQUE SCAN | SYS_C005695 |
    | 12 | NESTED LOOPS | |
    |* 13 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE |
    |* 14 | INDEX UNIQUE SCAN | PRIMARY_KEY_TRTABLE |
    Predicate Information (identified by operation id):
    1 - filter( EXISTS (SELECT 0 FROM "TEMPLATERELATIONTABLE"
    "TEMPLATETAB2","PARAMETERTABLE" "PARAMTAB2" WHERE
    "PARAMTAB2"."ATTR_NAME"=:B1 AND "PARAMTAB2"."ID"=:B2 AND
    "PARAMTAB2"."ID"="TEMPLATETAB2"."TEMPLATE_ID" AND
    "TEMPLATETAB2"."ID"=:B3))
    5 - access("FDNTAB"."ID"=52787)
    6 - filter("PARAMTAB1"."ATTR_VALUE"<>'DEFAULT')
    7 - access("PARAMTAB1"."ID"="FDNTAB"."ID" AND "PARAMTAB1"."ATTR_NAME"='harqUsersMax')
    9 - access("PARAMTAB3"."ID"="FDNTAB"."ID")
    11 - access("PARAMTAB3"."ATTR_VALUE"="FDNTAB2"."FDN")
    13 - access("PARAMTAB2"."ID"=:B1 AND "PARAMTAB2"."ATTR_NAME"=:B2)
    14 - access("TEMPLATETAB2"."ID"=:B1 AND
    "PARAMTAB2"."ID"="TEMPLATETAB2"."TEMPLATE_ID")
    Note
    - rule based optimizer used (consider using cbo)
    43 rows selected.
    WITH INDEX HINT:
    select /*+INDEX(fdnmappingtable[PRIMARY_KY_FDNMAPPINGTABLE],parametertable[PRIMARY_KY_PARAMETERTABLE])*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3 where FdnTab.id=52787 and paramTab1.id= FdnTab.id and paramTab3.id = FdnTab.id and paramTab3.attr_value = FdnTab2.fdn and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2 where TemplateTab2.id=FdnTab.id and ParamTab2.id=TemplateTab2.template_id and ParamTab2.id=FdnTab2.id and ParamTab2.attr_name=paramTab1.attr_name);
    SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2924316070
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 916 | 6 (0)| 00:00:01 |
    |* 1 | FILTER | | | | | |
    | 2 | NESTED LOOPS | | 1 | 916 | 4 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 1 | 401 | 3 (0)| 00:00:01 |
    | 4 | NESTED LOOPS | | 1 | 207 | 2 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PARAMETERTABLE | 1 | 194 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | | 1 (0)| 00:00:01 |
    |* 7 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE | 1 | 13 | 1 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID | PARAMETERTABLE | 1 | 194 | 1 (0)| 00:00:01 |
    |* 9 | INDEX RANGE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | | 1 (0)| 00:00:01 |
    | 10 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE | 1 | 515 | 1 (0)| 00:00:01 |
    |* 11 | INDEX UNIQUE SCAN | SYS_C005695 | 1 | | 1 (0)| 00:00:01 |
    | 12 | NESTED LOOPS | | 1 | 91 | 2 (0)| 00:00:01 |
    |* 13 | INDEX UNIQUE SCAN | PRIMARY_KEY_TRTABLE | 1 | 26 | 1 (0)| 00:00:01 |
    |* 14 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | 65 | 1 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    1 - filter( EXISTS (SELECT /*+ */ 0 FROM "TEMPLATERELATIONTABLE" "TEMPLATETAB2","PARAMETERTABLE" "PARAMTAB2" WHERE "PARAMTAB2"."ATTR_NAME"=:B1 AND "PARAMTAB2"."ID"=:B2 AND "TEMPLATETAB2"."TEMPLATE_ID"=:B3 AND "TEMPLATETAB2"."ID"=:B4))
    5 - filter("PARAMTAB1"."ATTR_VALUE"<>'DEFAULT')
    6 - access("PARAMTAB1"."ID"=52787 AND "PARAMTAB1"."ATTR_NAME"='harqUsersMax')
    7 - access("FDNTAB"."ID"=52787)
    9 - access("PARAMTAB3"."ID"=52787)
    11 - access("PARAMTAB3"."ATTR_VALUE"="FDNTAB2"."FDN")
    13 - access("TEMPLATETAB2"."ID"=:B1 AND "TEMPLATETAB2"."TEMPLATE_ID"=:B2)
    14 - access("PARAMTAB2"."ID"=:B1 AND "PARAMTAB2"."ATTR_NAME"=:B2)
    Note
    - dynamic sampling used for this statement
    39 rows selected.
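    Both plans point to missing optimizer statistics: the un-hinted run fell back to the rule-based optimizer ("rule based optimizer used") and the hinted run needed dynamic sampling. A minimal sketch, assuming the tables live in the current schema, of gathering statistics so the cost-based optimizer can choose the fast plan without hints:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'FDNMAPPINGTABLE',       cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'PARAMETERTABLE',        cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEMPLATERELATIONTABLE', cascade => TRUE);
    END;
    /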

  • Can an index that is not being used still affect query performance?

    Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
    After creating the indexes, when I reviewed the execution plan of the query, the cost had been reduced, but I noticed that index B is not being used,
    and if I try to force the query to use index B with a HINT the cost increases, so I decided to drop index B.
    Once I dropped index B I checked the execution plan again, and I noticed that the cost of the query had increased; if I recreate index B the explain plan
    shows a lower cost even though it is not being used by the execution plan.
    Does anyone know why this is happening?
    Can an index that is not used by the execution plan still affect a query's performance?

    user11173393 wrote:
    Hi, I have a query with a high cost, so I created two indexes, A and B, to improve its performance.
    After creating the indexes, when I reviewed the execution plan of the query, the cost had been reduced, but I noticed that index B is not being used,
    and if I try to force the query to use index B with a HINT the cost increases, so I decided to drop index B.
    Once I dropped index B I checked the execution plan again, and I noticed that the cost of the query had increased; if I recreate index B the explain plan
    shows a lower cost even though it is not being used by the execution plan.
    Does anyone know why this is happening?
    Can an index that is not used by the execution plan still affect a query's performance?

    You said that is what is happening, and I believe you.

  • How do index fragmentation and statistics affect SQL query performance

    Hi,
    How do index fragmentation and statistics affect SQL query performance?
    Thanks
    Shashikala

    How does Index fragmentation and statistics affect the sql query performance
    Very simple answer: outdated statistics will lead the optimizer to create bad plans, which in turn will require more resources, and this will impact performance. If an index is fragmented (mainly the clustered index, though it holds true for non-clustered indexes as well), the time spent finding a value will be greater, because the query has to search a fragmented index to look for the data, and the additional empty space increases search time.
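    A minimal SQL Server sketch of acting on that (the table name dbo.MyTable is a placeholder): measure fragmentation, rebuild or reorganize the indexes, and refresh statistics.

    SELECT i.name, ips.avg_fragmentation_in_percent
    FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
    JOIN   sys.indexes AS i
           ON i.object_id = ips.object_id AND i.index_id = ips.index_id;
    ALTER INDEX ALL ON dbo.MyTable REBUILD;        -- or REORGANIZE for light fragmentation
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;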
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
    My TechNet Wiki Articles

  • Is the aggregate DB index required for query performance?

    Hi,
    In the cube Manage tab, when I check the aggregate DB index it shows red. My doubt is: is the DB index required for query performance?
    Should the check statistics be green?
    Please let me know
    Regards,
    Kiran

    Hi,
    Yes, it improves the reports' performance.
    Using the Check Indexes button, you can check whether indexes already exist and whether these existing indexes are of the correct type (bitmap indexes).
    Yellow status display: There are indexes of the wrong type
    Red status display: No indexes exist, or one or more indexes are faulty
    You can also list missing indexes using transaction DB02, pushbutton Missing Indexes. If a lot of indexes are missing, it can be useful to run the ABAP reports SAP_UPDATE_DBDIFF and SAP_INFOCUBE_INDEXES_REPAIR.
    See SAP Help
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    Thanks
    Reddy

  • Performance tuning of a query using bitmap indexes

    Hello guys
    I just have a quick question about tuning the performance of sql query using bitmap indexes..
    Currently, there are 2 tables, date and fact. The fact table has about 1 billion rows and the date dimension has about 1 million. These 2 tables are joined on 2 columns:
    Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber
    I have query that needs to be run as the following:
    Select dates.dayofweek, dates,dates, fact.opened_amount from dates, facts
    where Date.Dateid = Fact.snapshot.dates and Date.companyid = fact.companynumber and dates.dayofweek = 'monday'.
    Currently this query runs forever. I think it is the join that takes most of the time. I have created a bitmap index on the dayofweek column because it has few distinct values, but that didn't seem to speed up the query.
    I'd like to know what other indexes would be helpful. I am thinking of creating another one on companynumber since it also has few distinct values.
    The query is generated by front-end tools like OBIEE, so I can't change the SQL, nor can I purge data or create a smaller table; I have to work with what I have.
    So please let me know your thoughts on performance tuning.
    Thanks

    The explain plan is:
    Row cost Bytes
    Select statement optimizer 1 1
    nested loops 1 1 299
    partition list all 1 0 266
    index full scan RD_T.PK_FACTS_SNPSH 1 0 266
    TABLE ACCESS BY INDEX ROWID DATES_DIM 1 1 33
    INDEX UNIQUE SCAN DATES_DIM_DATE 1 1
    There are no changes or wait states, but the query takes 18 minutes to return results. When it does, it returns 1 billion rows, which is the same number of rows as the fact table... (strange?)

    That's not a bitmap plan. Plans using bitmaps should have steps indicating bitmap conversions; this plan is listing ordinary b-tree index access. The rows and bytes on the plan have to be incorrect for the volume of data you suggested (1 row instead of 1B?).
    What version of the data base are you using?
    What is your partition key?
    Are the partitioned table indexes global or local? Is the partition key part of the join columns, and is it indexed?
    Analyze the tables and all indexes (use dbms_stats) and see if the statistics get better. If that doesn't work try the dynamic sampling hint (there is some overhead for this) to get statistics at runtime.
    I have seen stats like the ones you listed appear in 10g myself.
    Edited by: riedelme on Oct 30, 2009 10:37 AM
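    If bitmap access is really the goal, a rough sketch of what it usually takes (column and index names below are guesses based on the post, and LOCAL assumes the fact table is partitioned as the plan suggests): bitmap indexes on the low-cardinality join/filter columns of the fact table, plus fresh statistics, so the optimizer can combine them via bitmap conversion / BITMAP AND steps.

    CREATE BITMAP INDEX facts_snapshot_dates_bix ON facts (snapshot_dates) LOCAL;
    CREATE BITMAP INDEX facts_companynumber_bix  ON facts (companynumber)  LOCAL;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FACTS', cascade => TRUE)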

  • Query performance on RAC is a lot slower than single instance

    I simply followed the steps provided by oracle to install a rac db of 2 nodes.
    The performance of insertions (Java, thin OJDBC) is pretty much the same compared to a single instance on NFS.
    However, the performance of select queries is very slow compared to a single instance.
    I have tried using different methods for the storage configuration (ASM with raw devices, OCFS2) but the performance is still slow.
    When I shut down one instance, leaving only one instance up, the query performance is very fast (as fast as a single instance).
    I am using rhel5 64 bit (16G of physical memory) and oracle 11.1.0.6 with patchset 11.1.0.7
    Could someone help me how to debug this problem?
    Thanks,
    Chau
    Edited by: user638637 on Aug 6, 2009 8:31 AM

    Top 5 timed foreground events:
    DB CPU: time 943 s, %DB time 47.5%
    cursor: pin S wait on X: waits 13,940, time 321 s, avg wait 23 ms, %DB time 16.15%
    direct path read: waits 95,436, time 288 s, avg wait 3 ms, %DB time 14.51%
    IPC send completion sync: waits 546,712, time 149 s, avg wait 0 ms, %DB time 7.49%
    gc cr multi block request: waits 7,574, time 78 s, avg wait 10 ms, %DB time 4.0%
    Another thing I see is that the "avg global cache cr block flush time (ms)" is 37.6 ms.

    The DB CPU Oracle metric is the amount of CPU time (in microseconds) spent on database user-level calls.
    You should check the SQL statements from the report and tune them.
    - Check the execution plan.
    - If no index is being used, determine whether to use one.
    SQL> set autot trace explain
    SQL> sql statement;
    cursor: pin S wait on X:
    A session waits on this event when requesting a mutex for sharable operations related to pins (such as executing a cursor), but the mutex cannot be granted because it is being held exclusively by another session (which is most likely parsing the cursor).
    Use bind variables and avoid dynamic SQL (a small illustration follows the link below).
    http://blog.tanelpoder.com/2008/08/03/library-cache-latches-gone-in-oracle-11g/
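    A small illustration of that advice (table and column names are made up): a literal forces a new hard parse for every distinct value, while a bind variable lets every execution share a single cursor.

    VARIABLE v_cust_id NUMBER
    EXEC :v_cust_id := 42
    SELECT /* literal: new cursor per value        */ COUNT(*) FROM orders WHERE customer_id = 42;
    SELECT /* bind: one shared cursor for all runs */ COUNT(*) FROM orders WHERE customer_id = :v_cust_id;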
    Check the MEMORY_TARGET initialization parameter.
    By the way, you have a high "DB CPU" (47.5%), so you should tune your SQL statements (check the SQL in the report and tune it).
    Good Luck

  • How to improve query performance at the report level and designer level

    How do I improve query performance at the report level and at the designer level?
    Please let me know in detail.

    First, it is all based on the design of the database, the universe and the report.
    At the universe level, you have to check your contexts very well to get the optimal performance out of the universe, and also your joins; keeping your joins on key fields will give you the best performance.
    At the report level, try to make the reports as dynamic as you can (parameters and so on), and when you create a parameter try to match it with key fields in the database.
    Good luck
    Amr

  • Report burst:To increase query performance in xcelsius

    Is there any way to increase query performance in Xcelsius by using report bursting?

    Fremlin,
    Report bursting is only for distributing your reports to your end users.
    You can improve performance only by following the [Best practices|https://www.sdn.sap.com/irj/boc/index?rid=/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac] in xcelsius.
    -Anil

  • Query performance and data loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes.
    What data loading performance issues do we need to take care of? Please explain and let me know the transaction codes.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Query Performance issue in Oracle Forms

    Hi All,
    I am using oracle 9i DB and forms 6i.
    In the query form, the query takes a long time to load the data into the form.
    There are two tables used here.
    One table (A) contains 5 crore records; the other table (B) has 2 crore records.
    The records fetched are in the range of 1-500 records.
    Table A had no index on the main columns; after creating an index on the main columns of table A, the query fetches the data quickly.
    But the DBA team doesn't want to create the index on table A because of a tablespace problem.
    If we create the index on the main table (A), then there is a performance overhead in production.
    Concurrent user capacity is 1500.
    Are there any alternative methods to handle this problem?
    Regards,
    RS

    1) What is a crore? Wikipedia seems to indicate that it's either 10,000,000 or 500,000
    http://en.wikipedia.org/wiki/Crore
    I'll assume that we're talking about tables with 50 million and 20 million rows, respectively.
    2) Large tables with no indexes are definitely going to be slow. If you don't have the disk space to create an appropriate index, surely the right answer is to throw a bit of disk into the system.
    3) I don't understand the comment "If create the index on main table (A) ,then performance overhead in production." That seems to contradict the comment you made earlier that the query performs well when you add the index. Are you talking about some other performance overhead?
    Justin

  • How to improve the performance of a query built on an ODS

    Hi,
    I've built a report on the FI_GL ODS (BW 3.5). The report execution takes almost 1 hour.
    Is there any method to improve or optimize the performance of a query built on an ODS?
    The ODS holds a huge volume of data, ~300 million records for 2 years.
    Thanx in advance,
    Guru.

    Hi Raj,
    Here are a few tips that will help you improve your query performance.
    Checklist for Query Performance
    1. If exclusions exist, make sure they exist in the global filter area. Try to remove exclusions by subtracting out inclusions.
    2. Use Constant Selection to ignore filters in order to move more filters to the global filter area. (Use ABAPer to test and validate that this ensures better code)
    3. Within structures, make sure the filter order exists with the highest level filter first.
    4. Check code for all exit variables used in a report.
    5. Move Time restrictions to a global filter whenever possible.
    6. Within structures, use user exit variables to calculate things like QTD, YTD. This should generate better code than using overlapping restrictions to achieve the same thing. (Use ABAPer to test and validate that this ensures better code).
    7. When queries are written on multiproviders, restrict to InfoProvider in global filter whenever possible. MultiProvider (MultiCube) queries require additional database table joins to read data compared to those queries against standard InfoCubes (InfoProviders), and you should therefore hardcode the infoprovider in the global filter whenever possible to eliminate this problem.
    8. Move all global calculated and restricted key figures to local as to analyze any filters that can be removed and moved to the global definition in a query. Then you can change the calculated key figure and go back to utilizing the global calculated key figure if desired
    9. If Alternative UOM solution is used, turn off query cache.
    10. Set read mode of query based on static or dynamic. Reading data during navigation minimizes the impact on the R/3 database and application server resources because only data that the user requires will be retrieved. For queries involving large hierarchies with many nodes, it would be wise to select Read data during navigation and when expanding the hierarchy option to avoid reading data for the hierarchy nodes that are not expanded. Reserve the Read all data mode for special queries, for instance when a majority of the users need a given query to slice and dice against all dimensions, or when the data is needed for data mining. This mode places heavy demand on database and memory resources and might impact other SAP BW processes and tasks.
    11. Turn off formatting and results rows to minimize Frontend time whenever possible.
    12. Check for nested hierarchies. Always a bad idea.
    13. If "Display as hierarchy" is being used, look for other options to remove it to increase performance.
    14. Use Constant Selection instead of SUMCT and SUMGT within formulas.
    15. Do a review of the order of restrictions in formulas. Do as many restrictions as you can before calculations. Try to avoid calculations before restrictions.
    17. Turn off warning messages on queries.
    18. Check to see if performance improves by removing text display (Use ABAPer to test and validate that this ensures better code).
    19. Check to see where currency conversions are happening if they are used.
    20. Check aggregation and exception aggregation on calculated key figures. Before aggregation is generally slower and should not be used unless explicitly needed.
    21. Avoid Cell Editor use if at all possible.
    22. Make sure queries are regenerated in production using RSRT after changes to statistics, consistency changes, or aggregates.
    23. Within the free characteristics, filter on the least granular objects first and make sure those come first in the order.

  • Query performance in two environments

    Hi all,
    I have developed simple select queries on a MultiProvider and I am facing issues with query performance in the quality box. A query runs pretty fast in dev and returns results, while the same one dumps in the quality environment with a time-out error. This is all the more strange because our dev box currently has comparatively more records than the quality environment.
    On analyzing the query path in both environments, we noticed that the query does an index scan in dev but not in the quality environment, especially when the selection is such that the query is supposed to return a lot of records. Since the query does a sequential scan in quality, it dumps. Is there any setting that I need to make separately in the quality environment?
    Any tips on query optimization would be great help. Thanks
    Regards
    Niranjana

    Execute some of the RSRT tests in QA for the query using the "Execute + Debug" option, use some of the MultiProvider and database checks in it, and try to compare with dev as well.
    Hope it Helps
    Chetan
    @CP..

  • Poor query performance when joining CONTAINS to another table

    We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH we find that the query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of the contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user does not have access to view.
    Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
    Thanks!!

    Sometimes it can be good to separate the tables into separate sub-query factoring (with) clauses or inline views in the from clause or an in clause as a where condition. Although there are some differences, using a sub-query factoring (with) clause is similar to using an inline view in the from clause. However, you should avoid duplication. You should not have the same table in two different places, as in your original query. You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization, optimization, and periodically rebuild or drop and recreate to keep it performing with maximum efficiency. The following demonstration uses a composite domain index (cdi) with filter by, as suggested by Roger, then shows the explained plans for your original query, and various others. Your original query has nested loops. All of the others have the same plan without the nested loops. You could also add index hints.
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>
