Essbase ASO Cube query performance from OBI EE

Hi all
I have serious performance problems when I query an ASO cube from OBI EE. The problem appears when I implement a filter on some dimensions of the model in the Business Model and Mapping layer. The filter is at level 0 of the dimension, and its values come from a session variable in OBI EE; the objective is to apply filters depending on the user. For the session variable I have a table in a relational database relating each user to their "access" values, and my dimensions (not all of them) have the users' "access" values as level-0 (duplicate) members.
The session variable in OBI EE is initialized with the row-wise option, so it holds all the "access" values that correspond to the user (the :USER system variable).
When I query only one of these filtered dimensions, the response is very fast. When I query one of these filtered dimensions plus a metric, the response is still fast (10 seconds). But when I query two of these filtered dimensions plus a metric, the response takes 25 minutes. I checked the Essbase application log and found this:
[Mon Nov 15 19:56:01 2010]Local/TestSec5/TestSec5/admin/Info(1013091)
Received Command [MdxReport] from user [admin]
[Mon Nov 15 20:28:28 2010]Local/TestSec5/TestSec5/admin/Info(1260039)
MaxL DML Execution Elapsed Time : [1947.18] seconds
Looking at the MDX query generated by OBI EE, I can see that the aggregation over the filtered members of the two-dimension crossjoin is done on the fly:
With
set [CATALOGO_INSTITUCIONAL2] as '[CATALOGO_INSTITUCIONAL].Generations(2).members'
set [CATALOGO_PRESUPUESTARIO2] as '[CATALOGO_PRESUPUESTARIO].Generations(2).members'
member [METRICAS_PRESUPUESTARIAS].[MS1] as
  'AGGREGATE(
     filter(
       crossjoin(
         Descendants([CATALOGO_INSTITUCIONAL].currentmember, [CATALOGO_INSTITUCIONAL].Generations(7)),
         Descendants([CATALOGO_PRESUPUESTARIO].currentmember, [CATALOGO_PRESUPUESTARIO].Generations(7))),
       ([CATALOGO_INSTITUCIONAL].CurrentMember.MEMBER_ALIAS = "01.01" OR [CATALOGO_INSTITUCIONAL].CurrentMember.MEMBER_Name = "01.01")
       AND (([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "G" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "G")
            OR ([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "I0101" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "I0101")
            OR ([CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_ALIAS = "S01" OR [CATALOGO_PRESUPUESTARIO].CurrentMember.MEMBER_Name = "S01"))),
     METRICAS_PRESUPUESTARIAS.[Compromiso])', SOLVE_ORDER = 100
select
{ [METRICAS_PRESUPUESTARIAS].[MS1]
} on columns,
NON EMPTY {crossjoin ({[CATALOGO_INSTITUCIONAL2]},{[CATALOGO_PRESUPUESTARIO2]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [TestSec5.TestSec5]
Can somebody tell me whether it is possible to change the way OBI EE builds the query, or whether the query can make use of Essbase's previously materialized aggregations?
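For comparison, here is a hand-written sketch of the same request with the security filter applied through Intersect against an explicit member list, instead of a FILTER scan over the whole generation-7 crossjoin; with the filter expressed as a plain set, Essbase only touches the selected members and can answer from materialized aggregate views where they exist. The member references ([01.01], [G], [I0101], [S01]) are copied from the generated query above and may need qualified names, since our level-0 "access" values are duplicate members. This is only a baseline to time in MaxL, not something OBI EE generates today:
With
set [CATALOGO_INSTITUCIONAL2] as '[CATALOGO_INSTITUCIONAL].Generations(2).members'
set [CATALOGO_PRESUPUESTARIO2] as '[CATALOGO_PRESUPUESTARIO].Generations(2).members'
member [METRICAS_PRESUPUESTARIAS].[MS1] as
  'Aggregate(
     crossjoin(
       Intersect(
         Descendants([CATALOGO_INSTITUCIONAL].currentmember, [CATALOGO_INSTITUCIONAL].Generations(7)),
         {[CATALOGO_INSTITUCIONAL].[01.01]}),
       Intersect(
         Descendants([CATALOGO_PRESUPUESTARIO].currentmember, [CATALOGO_PRESUPUESTARIO].Generations(7)),
         {[CATALOGO_PRESUPUESTARIO].[G], [CATALOGO_PRESUPUESTARIO].[I0101], [CATALOGO_PRESUPUESTARIO].[S01]})),
     METRICAS_PRESUPUESTARIAS.[Compromiso])', SOLVE_ORDER = 100
select
{ [METRICAS_PRESUPUESTARIAS].[MS1] } on columns,
NON EMPTY {crossjoin ({[CATALOGO_INSTITUCIONAL2]},{[CATALOGO_PRESUPUESTARIO2]})} on rows
from [TestSec5.TestSec5]
If this version returns in seconds, the cost is in the FILTER over the crossjoin rather than in the aggregation itself.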

hi Amol,
1. On what basis did you estimate your cube at around 400 GB to 600 GB?
2. If ASO is an option, its huge advantage lies in space: it does not take much space, unlike BSO.
3. I have seen cubes whose size was around 300-400 GB in BSO; when the same cube was built as ASO, it consumed 40-45 GB.
Hope this helps
Sandeep Reddy Enti
HCC
http://hyperionconsutlancy.com/

Similar Messages

  • Accessing Essbase ASO Cube from Oracle Relational database

    Hi All,
    I am an Oracle database developer. We have a requirement to access Hyperion Essbase ASO cube data directly from the relational database. We have identified the options below.
    1. Use the Hyperion web services and the Oracle UTL_HTTP utility
    2. Use the Java API to access the ASO cube; the Java code would be written in an Informatica Java Transformation.
    Unfortunately, I am not finding good resources on Google on how to do this.
    I would appreciate it if someone who has implemented this could share their knowledge.

    I am not competent to recommend any particular approach but Essbase.ru has some blog entries on using XMLA / 11.1.2.2 services and a Google Code project...
    http://essbase.ru/archives/category/performance/essbase-api/xmla
    Google will translate if you don't read Russian!
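    For option 1, a rough PL/SQL sketch of what the UTL_HTTP call against the Provider Services XMLA endpoint could look like (host, port, and the SOAP/XMLA envelope are placeholders to fill in; this is a shape, not a tested report):
    DECLARE
      l_req  UTL_HTTP.req;
      l_resp UTL_HTTP.resp;
      -- placeholder: a real XMLA Execute envelope wraps your MDX statement
      l_body VARCHAR2(4000) := '<SOAP-ENV:Envelope>...Execute...MDX...</SOAP-ENV:Envelope>';
      l_line VARCHAR2(32767);
    BEGIN
      l_req := UTL_HTTP.begin_request('http://aps-host:13080/aps/XMLA', 'POST', 'HTTP/1.1');
      UTL_HTTP.set_header(l_req, 'Content-Type', 'text/xml; charset=utf-8');
      UTL_HTTP.set_header(l_req, 'Content-Length', TO_CHAR(LENGTH(l_body)));
      UTL_HTTP.write_text(l_req, l_body);
      l_resp := UTL_HTTP.get_response(l_req);
      BEGIN
        LOOP
          UTL_HTTP.read_line(l_resp, l_line, TRUE);
          DBMS_OUTPUT.put_line(l_line);  -- parse the XMLA response here
        END LOOP;
      EXCEPTION
        WHEN UTL_HTTP.end_of_body THEN
          UTL_HTTP.end_response(l_resp);
      END;
    END;
    /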

  • ASO cube Migration Steps from Backend

    Hi Gurus,
    What would be the steps to migrate an Essbase ASO cube from the back end?
    Edited by: Hkag on 18-Apr-2013 04:11

    found answer
    Edited by: Softperson on 19/8/2010 17:53

  • FDM to load data in Essbase ASO cube

    Has anybody used FDM to load data into an Essbase ASO cube? How do you clear and run a calc on an ASO cube?
    Thanks

    Does the Essbase Adapter for FDM Support ASO Cubes? [ID 1168153.1]
    Modified 17-AUG-2010 Type HOWTO Status PUBLISHED
    Applies to:
    Hyperion Financial Data Quality Management - Version: 11.1.1.3.00 and later [Release: 11.1 and later ]
    Information in this document applies to any platform.
    Goal:
    Does the Essbase adapter for FDQM support ASO cubes?
    Solution:
    ASO cubes are not currently supported in FDQM.
    Unpublished Enhancement 6568323 has been created and it is currently under consideration for a future release.
    References
    BUG:6568323 - 8-529236080 - CUSTOMER WANTS TO TAKE ADVANTAGE OF THE ASO FUNCTIONS IN ESSBASE.
    Related
    Products
    Middleware > Enterprise Performance Management > Financial Data Quality Management > Hyperion Financial Data Quality Management
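    On the second question (clearing and running a "calc" on an ASO cube outside FDM): ASO databases are not calculated with calc scripts; you clear the data, reload, and optionally build aggregate views. A minimal MaxL sketch, with hypothetical server, credential, and application/database names:
    login admin identified by password on essbase_server;
    /* remove all data from the ASO database */
    alter database ASOApp.ASODb reset data;
    /* after reloading level-0 data, build default aggregate views,
       stopping when aggregate storage exceeds 1.2 times the level-0 size */
    execute aggregate process on database ASOApp.ASODb stopping when total_size exceeds 1.2;
    logout;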

  • Wrong date value in Essbase ASO cube

    Hi All,
    I'm trying to load a date value in mm-dd-yy format into an Essbase ASO cube from a tab-delimited txt file. The load rule is working fine, and the outline properties are set with the proper format "mm-dd-yy". I loaded the data, but when I retrieve it using Smart View, all the dates are decreased by one day in my Smart View report.
    Would you have any ideas why that is happening?
    Thanks

    This is a bug, fixed in 11.1.2.

  • Data load in Essbase ASO cube

    Hi,
    I have not used ASO cubes before and had worked only on BSO cubes. Now I have a requirement to create a rule file to load data into an ASO Essbase cube. I created the data load rule file the same way as for a BSO cube, and it validates correctly. However, when I run the data load I get the following warning:
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"
    I investigated further and found that an ASO cube does not allow data to be loaded at upper levels or into members calculated through formulas. I have since made sure that I am loading data only into level-0 members that are not calculated through a formula. But I am still unable to load the data and keep getting the same warning.
    Could you please help me and let me know if there is anything else which I am missing here?
    Thanks in advance...
    AKW

    Hi AKW,
    "Aggregate storage applications ignore update to derived cells. [480] cells skipped"This is only a warning message that means only those many cells were skipped might be for some reasons like any member pointing to those cells will be missing.
    If you want to copy the Data of your BSO cube to an ASO Application why dont you use an PARTIONING it will copy your whole data from BSO to ASO (If Outline is common in both then copy any member of Sparse dimension like "Scenario 1" from Source i.e. BSO, to same member like "Scenario 1" in Target i.e ASO ),
    This is only an alternate wayThanks
    Avneet Singh Bhatia
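    For reference, a minimal MaxL sketch of a level-0 ASO data load (server, application, data-file, and rules-file names are placeholders); once every record maps to a level-0 member with no member formula, the "derived cells" warning should disappear:
    login admin identified by password on essbase_server;
    /* rejected records go to the error file instead of failing the load */
    import database ASOApp.ASODb data
      from server data_file 'exp_data.txt'
      using server rules_file 'ldaso'
      on error write to 'load_errors.txt';
    logout;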

  • OAF page : How to get its query performance from Oracle Apps Screen?

    Hi Team,
    How can I get the query performance of an OAF page from the Oracle Apps screen?
    regards
    sridhar

    Go through this link
    Any tools to validate performance of an OAF Page?
    However, do let us know, as the performance of these queries can also be checked from the back end.
    Thanks
    --Anil
    http://oracleanil.blogspot.com/

  • Issue with federation between OBIEE 11.1.1.5 and Essbase ASO Cube 11.1.2.1

    Hi All,
    I am trying to retrieve relational attributes from a conformed dimension into the Essbase cube. However, I am facing a lot of issues/errors during RPD development.
    Both the Essbase cube hierarchy and the table that contains the attributes come from the same relational source.
    Essbase 11.1.2.1 and OBIEE 11.1.1.5 are compatible versions.
    I read this blog and didn't find the solution:
    http://www.rittmanmead.com/2009/11/oracle-bi-ee-10-1-3-4-1-essbase-connectivity-enriching-essbase-reports-with-relational-attributes/
    Can someone suggest a better option to proceed further with horizontal federation?
    Thanks,
    SatyaB

    You shouldn't have to restart Essbase as it should automatically sync, there may be additional information in SharedServices_Security_Client.log though it is probably worth logging with Oracle.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Query performance from start to variablescreen in a webtemplate

    We have users who complain about performance when starting web templates that contain some variables. Sometimes the queries hit a timeout before the variable screen appears. The variables are open for user input; they are not populated by a user exit or anything similar.
    As the person responsible for administering the application from the IT side, I would be happy to get a hint on how to measure the response time of the segment from start to variable screen, so I can find the reason for the problem (perhaps ST03N or RSDDSTAT, and which parameter?).
    Eva

    Check this post:
    Webtemplate performance analysis
    Hope it helps.
    Regards

  • Cube Query performance

    Hi All,
    I am working on a data warehousing project with 7 dimensions and one fact table. I am planning to create a materialized view with aggregate values based on the dimension keys. In my MView query I use the parallel option and a GROUP BY CUBE clause with 9 keys. EXPLAIN PLAN gives the query a cost of 8537, but the query takes forever to run; I let it run for more than 3 hours and still did not get any output.
    Can anyone please tell me whether it is advisable to use this many keys in CUBE? I read in the documentation that CUBE is expensive, so I tried a GROUPING SETS clause with 2-key combinations instead: the cost was around 50k and I got the result in 5 minutes. Then I changed the GROUPING SETS clause to 4 keys; the cost jumped to 800k, and the query ran for 25 minutes before failing with the error "Unable to extend temp segment".
    DB settings are 128 GB of memory and 80 GB of temp tablespace; the Oracle version is 11g.
    Any help/inputs are greatly appreciated.
    Thanks
    Hari

    What I recommend is computing only the combinations you need. For example with 9 entries in the group by clause Oracle has to compute 2^9 or 512 combinations. This obviously can take a significant amount of time. Once you figure out the combinations you need you may be able to achieve the required results with partial cubes, or combinations of grouping sets and rollup statements.
    See the following link for a lot of good examples: [SQL for Aggregation in Data Warehouses|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/aggreg.htm#i1007428]
    HTH!
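    To make that concrete, here is a small sketch against a hypothetical fact_table with key columns d1..d9: CUBE computes every one of the 512 groupings, while GROUPING SETS computes only the ones you list.
    -- CUBE: all 2^9 = 512 combinations of the nine keys
    SELECT d1, d2, d3, d4, d5, d6, d7, d8, d9, SUM(amount) AS amt
    FROM   fact_table
    GROUP  BY CUBE (d1, d2, d3, d4, d5, d6, d7, d8, d9);
    -- GROUPING SETS: only the combinations the reports actually need
    SELECT d1, d2, d3, SUM(amount) AS amt
    FROM   fact_table
    GROUP  BY GROUPING SETS ((d1, d2, d3),  -- full detail
                             (d1, d2),      -- d3 rolled up
                             (d1),          -- d2 and d3 rolled up
                             ());           -- grand total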

  • Jobs and Tasks for Automation of Data Loads in Essbase ASO Cube

    Hi all,
    my question is about creating jobs or tasks to automate loading data into Essbase cubes.
    I use a .bat file which runs a MaxL script that loads data from an Oracle database into an Essbase cube. The .bat file is run by a Windows task.
    Are there other ways to run the .bat file besides Windows tasks? Maybe there are special utilities in Oracle EPM System or elsewhere.
    I am using Essbase version 11.1.1.2.0.
    Thanks for any reply!

    There is no internal scheduler. You either have to use Windows Scheduler, Unix cron, or a third-party tool. Take a look at Star Analytics Command Center; it is designed specifically for Hyperion applications: http://www.staranalytics.com/products/command_center.htm
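    Whichever scheduler you pick ultimately just runs essmsh against a MaxL script (for example, a Windows task or cron entry invoking essmsh C:\scripts\load.mxl). A sketch of such a script, with placeholder names and spooling added so unattended failures can be diagnosed:
    /* load.mxl: nightly load; server, credentials, and paths are placeholders */
    spool on to 'C:\logs\load.log';
    login admin identified by password on essbase_server;
    import database App.Db data
      from data_file 'C:\data\daily.txt'
      using server rules_file 'dayld'
      on error write to 'C:\logs\load_errors.txt';
    logout;
    spool off;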

  • Building drill-through reports from an ASO cube using ODI in Web Analysis 11.1.3

    I need some urgent help, as I have an important requirement for an Essbase ASO cube. I was trying to establish a database connection to build a drill-through report in Web Analysis against our Oracle data warehouse; the integration tool we use to move data from the Oracle data warehouse to Hyperion is ODI. My questions are:
    1) Can drill-through reports be built from an ASO cube using ODI, as with EIS? If yes, what is the procedure?
    2) Are there alternate ways to bring transaction-level or relational data into Web Analysis for reporting?
    regards,
    praveen.
    Edited by: user13070887 on Oct 11, 2010 3:48 PM

    Hi Glenn,
    We tried optimizing the drill-through SQL query, but while it takes 23 seconds when we run it directly in TOAD, the drill-through on the same intersection takes more than 25 minutes. Our query structure is as follows:
    SELECT *
    FROM "Table A" cp_594
    INNER JOIN "Table B" cp_595 ON (cp_594.key = cp_595.key)
    WHERE Upper(cp_595."Dim1") IN
      (SELECT Upper(CHILD)
       FROM (SELECT * FROM DIM_TABLE_1 WHERE CUBE = 'ALL')
       WHERE CONNECT_BY_ISLEAF = 1
       START WITH PARENT = $$Dim1$$
       CONNECT BY PRIOR CHILD = PARENT
       UNION ALL
       SELECT Upper(CHILD) FROM DIM_TABLE_1
       WHERE CUBE = 'ALL'
       AND REPLACE('GL_'||CHILD, 'GL_IC_', 'IC_') = $$Dim1$$)
    AND -- the same pattern repeats for 5 more dimensions
    Can you suggest some improvement? Please advise.
    Thanks
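    One thing worth testing (an assumption on my part, reusing the table and column names from the query above): materialize the CONNECT BY leaf expansion into a plain table, refreshed nightly, so each drill-through does an indexed IN lookup instead of re-walking the hierarchy per dimension per execution. The UNION ALL branch with the GL_/IC_ rename could be folded into the same table.
    -- nightly materialization of (ancestor, leaf) pairs
    CREATE TABLE dim1_leaves AS
    SELECT CONNECT_BY_ROOT PARENT AS anchor,
           UPPER(CHILD)           AS leaf
    FROM   (SELECT * FROM DIM_TABLE_1 WHERE CUBE = 'ALL')
    WHERE  CONNECT_BY_ISLEAF = 1
    CONNECT BY PRIOR CHILD = PARENT;
    CREATE INDEX dim1_leaves_ix ON dim1_leaves (anchor, leaf);
    -- the per-dimension predicate then collapses to:
    --   WHERE UPPER(cp_595."Dim1") IN
    --         (SELECT leaf FROM dim1_leaves WHERE anchor = $$Dim1$$)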

  • Income Statement & Balance Sheet Reporting via a single ASO cube

    Hi All,
    I wanted to get some perspective on industry best practices for performing Income Statement and Balance Sheet reporting via a single Essbase ASO cube. As both areas share a lot of common dimensions, do most companies implement a single cube for integrated reporting, or split the cubes and have tools like HFR, Web Analysis, etc. combine the information from those cubes for integrated reporting?
    Appreciate any thoughts on this.
    Thanks!

    In 16 years of Essbase/Hyperion experience (14 of them consulting and training), I have seen just as many clients combining income statements and balance sheets in the same cube (BSO or ASO) as I have seen separate the two into individual cubes. Just as the BSO/ASO decision should factor in data volumes and hierarchy size, the combined/separate cube decision must come from good analysis of the situation at hand. Sometimes it is just a design preference for the company at hand.
    Your question seems weighted toward ASO, so I would encourage you to make sure you are using ASO for the right reasons. I reserve ASO for cubes where the current or planned amount of history carried will be massive and/or where a very large hierarchy is required. Otherwise, I prefer to have the full flexibility of BSO.
    One favorite project was for a re-insurance company in Bermuda (you can probably guess some of the other reasons I considered it a favorite project!). The consolidated P&L and B/S were in the same cube. This allowed us to properly "connect" the two statements so that retained earnings at the end of a quarter could flow over to the balance sheet. Of course, a two-cube design wouldn't necessarily prevent this, thanks to @XREF (which didn't exist back in those days) or partitioning, to name a couple of alternatives. This makes my point that, to some extent, it all comes back to a matter of design preference for you and your project team.
    Darrell Barr

  • Income Statement & Balance Sheet Reporting via a single ASO cube

    Hi All,
    I wanted to get some perspective on industry best practices for performing Income Statement and Balance Sheet reporting via a single Essbase ASO cube. As both areas share a lot of common dimensions, do most companies implement a single cube for integrated reporting, or split the cubes and have tools like HFR, Web Analysis, etc. combine the information from those cubes for integrated reporting?
    Appreciate any thoughts on this.
    Thanks!

    You have asked this question in the Essbase forum as well: Income Statement & Balance Sheet Reporting via a single ASO cube
    Best to keep it to one forum, seeing it is a purely Essbase question.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Poor query performance when joining CONTAINS to another table

    We recently began evaluating Oracle Text as a search solution. We need to be able to search a table that can have over 20 million rows, while each user may have visibility to only a tiny fraction of those rows. The goal is to have a single Oracle Text index covering all of the searchable columns in the table (a multi-column datastore) and to get a score for each search result so we can sort results in descending order by score. Query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH clause, query performance degrades significantly.
    For example, we can find all the records a user has access to from our base table by the following query:
    SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID;
    This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
    Our search query looks like this:
    SELECT score(1), d.*
    FROM duns d
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
    2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
    WITH subset
    AS
    (SELECT d.duns_loc
    FROM duns d
    JOIN primary_contact pc
    ON d.duns_loc = pc.duns_loc
    AND pc.emp_id = :employeeID)
    SELECT score(1), d.*
    FROM duns d
    JOIN subset s
    ON d.duns_loc = s.duns_loc
    WHERE CONTAINS(TEXT_KEY, :search,1) > 0
    ORDER BY score(1) DESC;
    For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why it performs worse than a wide-open search. The wide-open search is not ideal, as it would end up returning records the user doesn't have access to view.
    Has anyone ever run into something like this? Any suggestions on what to look at or where to go next? If more information would help with diagnosis, let me know and I'll be happy to provide it here.
    Thanks!!

    Sometimes it can be good to separate the tables into separate sub-query factoring (WITH) clauses or inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a sub-query factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query.
    You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt or dropped and recreated to keep it performing with maximum efficiency.
    The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
    SCOTT@orcl_11gR2> -- tables:
    SCOTT@orcl_11gR2> CREATE TABLE duns
      2    (duns_loc  NUMBER,
      3       text_key  VARCHAR2 (30))
      4  /
    Table created.
    SCOTT@orcl_11gR2> CREATE TABLE primary_contact
      2    (duns_loc  NUMBER,
      3       emp_id       NUMBER)
      4  /
    Table created.
    SCOTT@orcl_11gR2> -- data:
    SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
      2  /
    1 row created.
    SCOTT@orcl_11gR2> INSERT INTO duns
      2  SELECT object_id, object_name
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> INSERT INTO primary_contact
      2  SELECT object_id, namespace
      3  FROM   all_objects
      4  WHERE  object_id > 1
      5  /
    76027 rows created.
    SCOTT@orcl_11gR2> -- indexes:
    SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
      2  ON duns (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
      2  ON primary_contact (duns_loc)
      3  /
    Index created.
    SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
    SCOTT@orcl_11gR2> -- as suggested by Roger:
    SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
      2  ON duns (text_key)
      3  INDEXTYPE IS CTXSYS.CONTEXT
      4  FILTER BY duns_loc
      5  /
    Index created.
    SCOTT@orcl_11gR2> -- gather statistics:
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- variables:
    SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
    SCOTT@orcl_11gR2> EXEC :employeeid := 1
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
    SCOTT@orcl_11gR2> EXEC :search := 'highway'
    PL/SQL procedure successfully completed.
    SCOTT@orcl_11gR2> -- original query:
    SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
    SCOTT@orcl_11gR2> WITH
      2    subset AS
      3        (SELECT d.duns_loc
      4         FROM      duns d
      5         JOIN      primary_contact pc
      6         ON      d.duns_loc = pc.duns_loc
      7         AND      pc.emp_id = :employeeID)
      8  SELECT score(1), d.*
      9  FROM   duns d
    10  JOIN   subset s
    11  ON     d.duns_loc = s.duns_loc
    12  WHERE  CONTAINS (TEXT_KEY, :search,1) > 0
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 4228563783
    | Id  | Operation                      | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT               |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |   1 |  SORT ORDER BY                 |                   |     2 |    84 |   121   (4)| 00:00:02 |
    |*  2 |   HASH JOIN                    |                   |     2 |    84 |   120   (3)| 00:00:02 |
    |   3 |    NESTED LOOPS                |                   |    38 |  1292 |    50   (2)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  5 |      DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN           | DUNS_DUNS_LOC_IDX |     1 |     5 |     1   (0)| 00:00:01 |
    |*  7 |    TABLE ACCESS FULL           | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
       5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
       6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
       7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
    SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
    SCOTT@orcl_11gR2> WITH
      2    subset1 AS
      3        (SELECT pc.duns_loc
      4         FROM      primary_contact pc
      5         WHERE  pc.emp_id = :employeeID),
      6    subset2 AS
      7        (SELECT score(1), d.*
      8         FROM      duns d
      9         WHERE  CONTAINS (TEXT_KEY, :search,1) > 0)
    10  SELECT subset2.*
    11  FROM   subset1, subset2
    12  WHERE  subset1.duns_loc = subset2.duns_loc
    13  ORDER  BY score(1) DESC
    14  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
    SCOTT@orcl_11gR2> SELECT subset2.*
      2  FROM   (SELECT pc.duns_loc
      3            FROM   primary_contact pc
      4            WHERE  pc.emp_id = :employeeID) subset1,
      5           (SELECT score(1), d.*
      6            FROM   duns d
      7            WHERE  CONTAINS (TEXT_KEY, :search,1) > 0) subset2
      8  WHERE  subset1.duns_loc = subset2.duns_loc
      9  ORDER  BY score(1) DESC
    10  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- ansi join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  JOIN   primary_contact
      4  ON     duns.duns_loc = primary_contact.duns_loc
      5  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      6  AND    primary_contact.emp_id = :employeeid
      7  ORDER  BY SCORE(1) DESC
      8  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- old join:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns, primary_contact
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc = primary_contact.duns_loc
      5  AND    primary_contact.emp_id = :employeeid
      6  ORDER  BY SCORE(1) DESC
      7  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 153618227
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN                   |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2> -- in clause:
    SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
      2  FROM   duns
      3  WHERE  CONTAINS (duns.text_key, :search, 1) > 0
      4  AND    duns.duns_loc IN
      5           (SELECT primary_contact.duns_loc
      6            FROM   primary_contact
      7            WHERE  primary_contact.emp_id = :employeeid)
      8  ORDER  BY SCORE(1) DESC
      9  /
      SCORE(1)   DUNS_LOC TEXT_KEY
            18          1 highway
    1 row selected.
    Execution Plan
    Plan hash value: 3825821668
    | Id  | Operation                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |   1 |  SORT ORDER BY                |                   |    38 |  1406 |    83   (5)| 00:00:01 |
    |*  2 |   HASH JOIN SEMI              |                   |    38 |  1406 |    82   (4)| 00:00:01 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| DUNS              |    38 |  1102 |    11   (0)| 00:00:01 |
    |*  4 |     DOMAIN INDEX              | DUNS_TEXT_KEY_IDX |       |       |     4   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | PRIMARY_CONTACT   |  4224 | 33792 |    70   (3)| 00:00:01 |
    Predicate Information (identified by operation id):
       2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
       4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
       5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
    SCOTT@orcl_11gR2>
