Federating Essbase and Relational Data in 11g

Hi,
I am trying to display Time Entry measures from both Essbase and a relational data source (OLTP) in one report.
The People hierarchy in Essbase has 3 levels, and I mapped the 3rd level (Gen3 People - Member Key, which is actually the Employee Id) to the Employee Id column in OLTP. I cannot map the other levels to OLTP because there are no columns that match.
When I create a report with the following fields: Gen3 People Member Key, Essbase Measure, OLTP Measure, I see correct values. But when I add Gen3 People, the OLTP Measure column becomes blank. When I add Gen2 People, I get "No fact table exists at the requested level of detail".
Does this mean that I also have to map Gen2 People and Gen3 People to OLTP before I can use them in my report?
Thanks for the help!

Similar Messages

  • ESSBASE and relational Integration in OBIEE

    Hi,
    Currently I am trying to do Essbase and relational DB integration in OBIEE. Is it possible to join a dimension of Essbase to a dimension in a relational table? E.g., I have the name of a person in Essbase; can I get his attributes, like phone number, from the dimension in the relational table? I have a proper join condition between these dimensions. If it is possible, how do I do that?
    Thanks in advance for the help.
    Many Thanks
    RS

    Hi,
    Thanks a lot for the help.
    I am currently facing the following issues in the integration.
    1. In my Essbase outline, the child level has some aliases. So when I import into OBIEE, I get the alias value instead of the actual member name. How do I get the actual data in OBIEE?
    2. If I use external data for only one dimension, the report is generated, but all the measures, like Sales, return no value. But if I use Sales and some existing cube dimension column, I get the values.
    3. In order to use the other dimension, I followed the creation of a total level and added the dummy fact. I still get the error 'Unable to navigate requested expression'.
    Thanks in advance,
    Many Thanks,
    RS

  • Multiple data sources Essbase and RDBMS in BIEE 11g

    Hi,
    I want to link an Essbase cube with SQL Server in BIEE 11. I tried to do it and found that they can be linked in the physical layer, but it does not allow me to set the linkage in the "Business Model and Mapping" layer. When I drag and drop a report, it returns this error:
    =================
    View Display Error
    ODBC Driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 15018] Incorrectly defined logical table source (for fact table dummy_gen2_info) does not contain mapping for [Product.Gen2,Product - Member Key]. (HY000)
    =================
    My steps:
    ==========================
    1) Use "Import Metadata" in BI Admin tools to import data source from Essbase (e.g. TBC, TBC_ASO)
    2) Use "Import Metadata" in BI Admin tools to import SQL server table (e.g. testdb.dbo.p_gen2_info)
    3) create a linkage between table testdb.dbo.p_gen2_info.gen_id and TBC_ASO.[Product.Gen2,Product - Member Key] in the "Physical layer"
    4) drag and drop TBC (Essbase), TBC_ASO (Essbase), and testdb (sql server) from physical layer to "Business Model and Mapping" layer
    5) drag and drop table testdb.dbo.p_gen2_info to TBC_ASO (since it doesn't allow setting a linkage in the "Business Model and Mapping" layer between different data sources)
    6) drag and drop TBC, TBC_ASO, testdb from "Business Model and Mapping" layer to Presentation Layer.
    ==========================
    Did I do anything wrong? Please advise.

    Check your settings under Tools -> Embedded OC4J Preferences in the data source node.

  • What file acts as bridge between Planning and relational data source?

    Hi, if anyone knows the answer, please reply.

    Hi,
    Planning communicates to the relational data source through a number of Java classes using JDBC.
    Business Rules also use the Hyperion RMI service to communicate with Planning.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Joining cubes and relational data

    Hi, we run Standard 2008 R2. My experience is that voluminous dims have no place in OLAP cubes. If I remember correctly, Power Pivot can join a cube to a relational table. Is that correct?
    I'm contemplating going that route, but I wonder whether the same performance issues you get when you incorrectly add a voluminous dim to OLAP will also occur if instead I set up Power Pivot to join that voluminous dim from a relational table to my cube.
    I'm also worried about any other gotchas Power Pivot brings to the table when a newbie thinks it is going to be a viable self-service BI tool for his users.
    The customer dim I'm focused on has 158k values and is growing.

    Well, I didn't mean to run Tabular in a hybrid mode, but to switch your data model to tabular completely.
    Good that you mentioned it. Don't think there's any advantage in doing a hybrid on that constellation. 
    Here are some links re tabular:
    https://technet.microsoft.com/en-us/library/hh212940(v=sql.110).aspx
    https://msdn.microsoft.com/en-us/library/hh994774.aspx
    Don't know how much you actually tried to optimize query performance in your current environment - do you know this Whitepaper?: http://www.microsoft.com/en-us/download/details.aspx?id=17303
    Maybe it would make more sense to use partitioning in your current environment - but then an upgrade to Enterprise or at least Business Intelligence Edition (2014) would be needed (as it would be for running SSAS in Tabular anyway)
    2008:
    https://msdn.microsoft.com/en-us/library/cc645993(v=sql.105).aspx
    2014: https://msdn.microsoft.com/library/cc645993.aspx
    Imke

  • Essbase and iSeries data record locks

    Hello,
    I was wondering if there are any known issues with using Essbase 7.1 on iSeries where data can only be updated by one user at a time. Essentially the equivalent of MSAS's record locks, where when one person is creating a new table or adding entries to one, it is locked for all other users.
    If this is a known limitation, what would be required to unlock it? E.g., a new version of Essbase?
    Thanks everyone.

    "Is it true that if someone opens the outline to make changes then the outline will be locked."
    ^^^I just opened the outline of my Very Favorite Essbase Database In The Whole Wide World (Sample.Basic) in EAS and then went into Excel and was able to change a data value. A locked outline does not prevent users from changing data.
    "Users can’t update the cube when someone else is editing it?"
    ^^^The outline lock is exclusive, i.e., only one process can have the outline locked.
    Or are you talking about the actual writing of the outline/updates through load rules and the concomitant restructure (dense/sparse/outline)?
    Per the DBAG:
    You cannot build dimensions while other users are reading or writing to the database. After you build dimensions, Essbase restructures the outline and locks the database for the duration of the restructure operation.
    See: http://download.oracle.com/docs/cd/E17236_01/epm.1112/esb_dbag/frameset.htm?ddlintro.html#ddlintro1029815
    Regards,
    Cameron Lackpour

  • I want to delete my Game Center user because of some games which have saved data remaining there. I want to start the games all over again, but when I delete the games and related data on my iPod, the data is still there in Game Center!!..help?!

    I need help..!

    1, you can't at the moment, though with iOS 5 in the Autumn, from http://www.apple.com/ios/ios5/features.html#photos :
    Even organize your photos in albums — right on your device
    2, by removing it from where you synced from and re-syncing. Only photos taken with the iPad, copied to it via the camera connection kit, or saved from emails/websites etc. can be deleted directly on the iPad (either via the trashcan icon in the top right corner when viewing the photo in full screen, or via the icon of the box with the arrow coming out of it in thumbnail view)
    3, the location of the photos that you synced to the iPad should be listed on the iPad's Photos tab when connected to your computer's iTunes.
    4, you can copy the photos from your iPad to your computer : http://support.apple.com/kb/HT4083 . You should also be able to delete them from the iPad as part of the transfer process to your computer, and it's then your choice whether to add them to your sync photo list so as to copy them back to the iPad. Copying them to your computer would allow you to organise them into folders and therefore be able to sync them back into separate albums.
    5, I don't use Dropbox either. There are some third-party browser apps in the iTunes App Store that allow you to download pages so that you can view them when offline e.g. Atomic Web (the whole page is saved within Atomic Web, it doesn't place a photo into the Photos app)
    6, deleting content should help. If you remove an app from your iPad then you also remove the content that it's got on the iPad - so if you then decide to reinstall it back onto the iPad then you will need to manually add back any content that you want in it. None of the Apple built-in apps (including Photos) can be removed from the iPad

  • Using relational data from SQL data source in Planning and Essbase

    Hi,
    How do I take sample data from a SQL data source and bring it into a Hyperion Planning application? I understand that when creating Planning applications, a link between a relational data source and Essbase must be established, because the relational database holds the metadata while the database outline is stored with Essbase. However, all I am currently able to do is load the data into Planning applications via EAS, where I right-click on the application database, hit Load Data, and select either a .txt file or an Excel file. Do I need Oracle Data Integrator? Any help or insight would be greatly appreciated, as well as corrections to any incorrect assumptions I may have made in this post. Thank you.

    When you import your file (Excel or text), you're importing it using a Load Rule in EAS. To load from SQL, you simply create a SQL load rule. You'll load data the exact same way (via EAS), but with a different type of load rule. The load rule will contain the SQL that queries the database. You can preview your data in the load rule the same way you would with a file.
    If your SQL is very complex, I'd recommend creating a view and loading from that view. But otherwise it's pretty straight-forward.
    The only catch is that you need to configure a database connection (to your relational database) on the Essbase server. The Essbase DBA guide will show you how to do this.
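    Purely for illustration (the view, table, and column names below are invented), the view for a SQL load rule might look something like this, with the load rule's SQL then being a simple select from it:
    -- illustrative only: a view that flattens the source tables for the load rule
    create or replace view V_SALES_LOAD as
    select p.PERIOD_NAME,
           a.ACCOUNT_NAME,
           pr.PRODUCT_NAME,
           f.SALES_AMOUNT
    from   FACT_SALES f
           join DIM_PERIOD  p  on p.PERIOD_ID   = f.PERIOD_ID
           join DIM_ACCOUNT a  on a.ACCOUNT_ID  = f.ACCOUNT_ID
           join DIM_PRODUCT pr on pr.PRODUCT_ID = f.PRODUCT_ID;
    -- the SQL load rule can then simply run: select * from V_SALES_LOAD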
    You COULD use ODI, but I tend to only use it for loading metadata.
    Hope this helps,
    - Jake

  • How to create the relationship between ESSBASE 11 and DM  in OBIEE 11G?

    Hi Experts,
    I have a requirement: there is a property table named 'Store Master' in the DW, and it contains a lot of attributes, such as Open Date, Close Date, Is 24 Hour, etc.
    The other data source is Essbase, and all the reports are based on this source.
    In Essbase there is one dimension and hierarchy, Location, and it has four levels: Country (L1), Region (L2), Province (L3), Store (L4).
    So I want to know how to create the relationship between Location (Essbase) and Store Master (DM).
    I tried to create a relationship in the physical layer between Gen4,Location and Store, then dragged Open Date and Close Date into the Location dimension in the BMM, then into the Presentation layer.
    When I drag the columns 'Open Date', 'Gen4,Location' and 'Sales' into a report, it generates the following error message:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14020] None of the fact tables are compatible with the query request Dim Region.Store Open Date. (HY000)
    However, when I remove the column 'Open Date', it is OK.
    So what steps am I missing? Please help me. Thanks.

    >
    '2. Now, pull the 'Store' column from relational DB onto the Gen5, Location column from Essbase. This action now creates two logical sources for your 'Store' column.'
    If the length from the different data sources is not the same, such as 1001 (DM) and L_1001 (Essbase), can I drag the 'Store' column from the relational DB onto the Gen5, Location column from Essbase? I think it does not work. Right?
    Hi,
    I am not sure if you are talking about the length (as in varchar(128)) of the member value being different in the different sources, or the member itself being different in the two sources.
    I am assuming that you are referring to the members not being the same in both sources. If so, the whole concept of federation is based on conforming dimensions: the same dimension information has to be present in both sources, and only then can we analyze the numbers based on this dimension. So either the dimension being different in the two sources, or the members not being present in both dimensions, may lead to incorrect numbers.
    'So I select Store Attributes in the relational DB and Location in Essbase in the physical layer, then create the physical join, such as right("Hour Sales"."H_Sales".""."H_Sales"."Gen6,Location",4) = "Authorization".""."EDW"."T_EDW_MDM_STORE"."US_CODE", then drag OPEN_DATE and CLOSE_DATE from the relational DB to Location in Essbase in the BMM, and finally drag them into the Presentation layer.'
    We create physical layer relationships to send that same relation to the underlying database during querying, so creating a physical relationship between an Essbase cube and a relational database would not help here.
    When you set up this federation, the BI Server sends individual queries to each source and maps the conforming dimension members internally.
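    Purely as an illustration (assuming the Essbase member key is simply the store code prefixed with 'L_'; the view name is invented), one way to conform the member values is to expose a view on the relational side and map that view as the physical source for the shared logical column:
    -- illustrative only: conform the relational store code (1001) to the Essbase member format (L_1001)
    create or replace view V_STORE_MASTER as
    select 'L_' || US_CODE as LOCATION_MEMBER_KEY,
           OPEN_DATE,
           CLOSE_DATE
    from   EDW.T_EDW_MDM_STORE;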
    Hope I was clear, and this helps.
    Thank you,
    Dhar

  • Essbase vs. Relational Data Warehouse (Which one is the fact table in DW)?

    Guys, thanks in advance for your feedback, but below is a simple question I am trying to get feedback on. I am trying to compare an Essbase cube to a relational data warehouse containing the same set of information:
    Essbase Dimensions
    Time, Account, Product, Scenario (making it easy)
    Relational Data Warehouse
    Time (dim), Account (dim), Product (dim), Scenario (Fact table)
    OR
    Time (dim), Product (dim), Scenario (dim), Account (Fact Table)
    Which of the relational lines is correct? Is Account the fact table, or is Scenario the fact table? Account will contain your usual P&L accounts. Scenario will contain your usual Actual, Budget, and Forecast scenarios.
    Thanks,

    I am so not a DW guy, it's amazing, but I've never let little more than a brush with a product stop me from posting...
    Wouldn't all of your dimensions need to be in your fact table? How else would you join from the fact table to the dimensions?
    In either layout, wouldn't you have the keys for Product, Time, Scenario, Account, and then data in the fact table?
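    Something like this, I'd guess (all of the names below are invented, just to illustrate the grain):
    create table FACT_SCENARIO_DATA (
      TIME_KEY     number not null,  -- FK to DIM_TIME
      ACCOUNT_KEY  number not null,  -- FK to DIM_ACCOUNT
      PRODUCT_KEY  number not null,  -- FK to DIM_PRODUCT
      SCENARIO_KEY number not null,  -- FK to DIM_SCENARIO
      AMOUNT       number            -- the measure value
    );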
    Or are you talking about the last dimension in your layouts being the columns? If that were the case (and I don't know that it is), I would guess that Scenario changes less, so it would be in columns, although I can definitely see that not being efficient, as you are likely to pull all or some of the Accounts for a given time, product and scenario versus all of the scenarios for a given time, product, and account.
    I'm really curious about this as I am just the consumer of star schemas, never (thankfully, and obviously, given the above insane ramblings) the designer of them.
    Regards,
    Cameron Lackpour

  • First and Last Date.  Is is possible in Essbase?

    Hello All,
    Here are the dimensions in my outline:
    - User
    - Application
    - Dates
    - Measures (Count, FirstLoginDate, LastLoginDate)
    I am trying to create a report in Essbase that shows the first and last date that each user accessed each Essbase application. I created an ETL process that builds a relational table with one record for each unique combination of User/Application/Day where a user was logged into an application. This process gets its source data from the Essbase application log files. I am loading a count of 1 to each intersection of User/Application/Day. I also want to add the first and last day to the cube but cannot figure out how.
    1) Is there an easier way to get at the source data than the application log files?
    2) If the application log files are the way to go, how do I indicate in the cube what the first and last login days are for each combination of User/Application?
    I was thinking of loading the serial dates to the FirstLoginDate and LastLoginDate measures. The trouble I am having is where to load the dates data. Do I load them at level 0 (individual dates) and then use time balance to get the dates up to the higher levels (month, year), or load them at the higher levels? If I should load them at the higher levels, do I assume correctly that I will need to create another relational table with one record for each User/Application/Month and User/Application/Year that shows the first and last dates? Any other ideas?
    Thank you in advance,
    Bill Handelman
    847-989-1758
    [email protected]

    While you may have natural ordering in your date dimensions, Essbase doesn't handle first and last dates well. The one area where there is at least some functionality in date manipulation is in attribute dimensions; however, the easy use there allows only one date attribute using normal date processing. It gets a great deal more complex using two dates.
    Look up the DBAG references to the date type of attribute dimensions; you might find it a partial solution to your problem.
    If you go the serial date method, load at level 0 and want to use time balance to bring the values up, you still need a time dimension. If I were designing something, I might use the first access date as a time dimension and the last access date in an attribute dimension, but I'm rambling rather than analyzing.
    In any case, look at the date type attribute dimension as one possible option for your cube.

  • ADF 11g can not select and copy data from cell of readonly table in IE

    hi,
    In ADF 11g, when rendering a view object as a read-only table with single row selection, in the IE browser we cannot select and copy data from a cell, but it works in Firefox.
    is it a bug?
    Edited by: kent2066 on 2009-5-18 8:46 AM

    Hi Timo,
    Sorry forgot to mention versions.
    We are using 11.1.1.7 and IE 9.
    I tried Google but could not find a solution.
    Kindly let me know solution for this.
    PavanKumar

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>}
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
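    For reference, here is a purely illustrative sketch of the "functionally-similar SQL query on relational data" mentioned above (assuming, hypothetically, that the element codes were shredded into a relational child table REC_ELEMENTS; all names are invented). This is the kind of statement that gets the hash-join plan:
    -- hypothetical relational equivalent of the code lookups (table and column names invented)
    select r.ssn,
           e.element_no,
           c1.description as subelement1_desc,
           c2.description as subelement2_desc,
           c3.description as subelement3_desc
    from   records_rel r
           join rec_elements e  on e.ssn   = r.ssn
           join codes        c1 on c1.code = e.subelement1_code
           join codes        c2 on c2.code = e.subelement2_code
           join codes        c3 on c3.code = e.subelement3_code
    where  r.ssn = '10000';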

  • OBIEE 11g write back to Essbase and run calc script feature

    Hi,
    I have a requirement to write back to an Essbase cube and run a calc script from an OBIEE dashboard.
    From what I have found searching Google, we must deploy additional JavaScript into WebLogic, but that was before OBIEE 11.1.1.6.
    I have 2 questions:
    - Does OBIEE 11.1.1.6 already support native write-back to Essbase and running a calc script?
    - Does anyone have an example of the custom JavaScript for write-back and running a calc script?
    And another question: if there are requirements like this, is it better to install the Essbase Add-in for Microsoft Excel and do the what-if analysis there, and then just display the report on an OBIEE dashboard (based on user-friendliness and maintenance complexity)?
    Thanks in advance.

    Hi,
    I am also trying to achieve the same thing you mentioned, but I think it is not possible to achieve easily in OBIEE 11.1.1.6, though we do have a workaround to perform a write-back to an Essbase cube using the JAPI, as mentioned below.
    We can also call Hyperion reports from OBIEE using Action Links and pass parameters to them, but I don't know if that runs a calculation script.
    The link below could be useful for the write-back workaround.
    http://oraclebizint.wordpress.com/2009/05/25/oracle-bi-ee-10-1-3-4-1-writebacks-to-essbase-using-japi-and-custom-html-part-1/
    Let me know in case you have found out anything else related to same.
    Thanks,

  • How to create a table which contains relational data and Document data

    Hi all,
    I need to create a table which contains relational data (I mean columns whose data types are NUMBER, VARCHAR) and documents (like an XML file/HTML file/image) using iFS.
    When I store the document data (XML data/HTML data) in iFS, it is stored as a Document object. So how do I indicate that this document object belongs to a particular row in a table?
    Please guide me.
    thanks

    Please see reply at http://technet.oracle.com:89/ubb/Forum36/HTML/000778.html
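    As a rough sketch only (all names are invented, and the document-reference column is just one assumed way of linking the two), the relational table could carry a column that stores the iFS document's path or ID:
    -- illustrative only: relational columns plus a reference to the document stored in iFS
    create table CLAIM_RECORDS (
      CLAIM_ID     number        primary key,
      CLAIM_NAME   varchar2(100),
      CLAIM_AMOUNT number,
      DOC_PATH     varchar2(512)  -- path (or ID) of the related document in iFS
    );
    -- once the document has been saved in iFS, store its path on the row, e.g.:
    -- insert into CLAIM_RECORDS (CLAIM_ID, CLAIM_NAME, CLAIM_AMOUNT, DOC_PATH)
    -- values (1, 'Sample claim', 1000, '/home/claims/claim1.xml');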
