Linking relational data to folders in XMLDB Repository by using metadata

Hi,
We want to use the XML DB Repository to store documents (PDF, Word, etc.) belonging to customers: dossiers of customers, invoices of customers, and so on. To accomplish this we are thinking of a folder hierarchy with customer folders on the first level, dossier/invoice folders on the second level, and the relevant documents and subfolders within each of these.
When querying a customer via SQL, we want to determine the correct folder in the repository by storing the primary key of the customer row as user metadata on that folder. We then fetch the folders and documents under this folder with the under_path function. Some folders represent a dossier or invoice, and the primary key of that dossier/invoice is stored on the folder as metadata. When querying these folders with SQL we want to retrieve additional info stored in the relational tables: customer info from a customer table, dossier info from a dossier table, invoice info from an invoice table, etc. In principle all available info must be retrievable, so this info should preferably not be duplicated as metadata (only the primary key and a type identifying the target row and table).
My question: is this the right way to go, or are we going to face problems with this architecture? Do we need to store all info as metadata, or can it be done as I describe? In short, we want to link info from different tables to folders/documents in the repository. Because each folder can have metadata pointing to a different table, we are facing performance issues even with a small data set. Can someone point me in the right direction?
Thanks,
Piotr Chabot Stadhouders
Timeff
The Netherlands

Here's an example; tell me if this is what you need.
Setup: the following creates a table to store a specific type of metadata, creates two folders, and finally creates a resource (a JPEG image) with its associated metadata:
SQL> create table character_metadata (
  2    character_id   number(6)
  3  , character_name varchar2(80)
  4  , origin         varchar2(80)
  5  , category       varchar2(30)
  6  );
Table created.
SQL> declare
  2    res boolean;
  3  begin
  4    res := dbms_xdb.CreateFolder('/ComicBooks');
  5    res := dbms_xdb.CreateFolder('/ComicBooks/Characters');
  6  end;
  7  /
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL> declare
  2
  3    v_img_name     varchar2(260) := 'odie.jpg';
  4    v_metadata_id  character_metadata.character_id%type;
  5    res            boolean;
  6
  7  begin
  8
  9    /* Create the resource from the image file*/
10    res := dbms_xdb.CreateResource('/ComicBooks/Characters/' || v_img_name, bfilename('TEST_DIR', v_img_name));
11
12    /* Create the metadata in the dedicated table */
13    insert into character_metadata (character_id, character_name, origin, category)
14    values(1, 'Odie', 'Garfield', 'Dog')
15    returning character_id into v_metadata_id;
16
17    /* Add the pointer in the resource as user-defined metadata (non schema-based) */
18    dbms_xdb.appendResourceMetadata(
19      '/ComicBooks/Characters/' || v_img_name
20    , xmltype( '<cm:CharacterMetadata xmlns:cm="http://mycompany.com/ComicBooks/Characters"><cm:id>' ||
21               to_char(v_metadata_id) ||
22               '</cm:id></cm:CharacterMetadata>' )
23    );
24
25  end;
26  /
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.

A possible query would look like:
SQL> select cm.*
  2       , x.character_pic
  3  from resource_view v
  4     , xmltable(
  5         xmlnamespaces(
  6           'http://mycompany.com/ComicBooks/Characters' as "cm"
  7         , default 'http://xmlns.oracle.com/xdb/XDBResource.xsd'
  8         )
  9       , '/Resource'
10         passing v.res
11         columns metadata_id    number path 'cm:CharacterMetadata/cm:id'
12               , character_pic  blob   path 'XMLLob'
13       ) x
14     , character_metadata cm
15  where under_path(v.res, '/ComicBooks/Characters') = 1
16  and cm.character_id = x.metadata_id
17  ;
CHARACTER_ID CHARACTER_NAME  ORIGIN          CATEGORY        CHARACTER_PIC
           1 Odie            Garfield        Dog             FFD8FFE000104A4649460001010000
                                                             0100010000FFDB0084000906061412
                                                             111414121416141514171717161718
                                                             1815181D17171617151816151A1718
                                                             1C261E1719231918141F2F2223272A
                                                             2C2C2C161E

The image content is retrieved as a BLOB, along with its additional data.
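Applied to your customer/dossier layout, the same pattern would give something like the query below. This is only a sketch: the /Customers folder, the cust namespace, and the CUSTOMERS table with its CUSTOMER_ID primary key are assumptions standing in for your real names.
-- Hypothetical: each folder under /Customers carries a <cust:id> element
-- holding the CUSTOMER_ID of the matching row in the CUSTOMERS table.
select c.*
     , v.any_path
from resource_view v
   , xmltable(
       xmlnamespaces(
         'http://mycompany.com/Customers' as "cust"
       , default 'http://xmlns.oracle.com/xdb/XDBResource.xsd'
       )
     , '/Resource'
       passing v.res
       columns customer_id number path 'cust:CustomerMetadata/cust:id'
     ) x
   , customers c
where under_path(v.res, '/Customers') = 1
and c.customer_id = x.customer_id;
The same shape repeats for dossier and invoice folders, each joining its own table on the key stored in that folder's metadata.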

Similar Messages

  • BI Beans for relational data?

    Hi,
    Discoverer 10g BI can operate on relational data too, and Discoverer Viewer 10g BI uses the same BI Beans. Does someone know if Oracle plans to make BI Beans usable for relational data too?
    Regards,
    Tamas Szecsy

    In case someone might be interested, this is what Metalink Support answered:
    - The presentation builder beans are not available for relational data sources.
    - A BI Beans graph can be created for a relational data source; however, there are feature limitations compared to an OLAP-sourced graph:
    http://bibeans.us.oracle.com:8888/ohw/help/?topic=bi_specifying_data_graph_html
    - A BI Beans graph can be created in Reports and Forms, and can be customized a little:
    http://www.oracle.com/technology/products/reports/htdocs/faq/Graph_FAQ_with_style.html

  • How to pass relative dates (Yesterday, Last week) via Command Parameters in a linked subreport

    I have been struggling with this one and would sure appreciate your help.
    I have a report that runs fine but takes too long to run. To optimise it I set it up as a subreport and created a main report linked to it via a shared field ({?ServiceID}). This has reduced the report time, but I would like to go even further by letting the user select dates, e.g. startdate/enddate, or a relative value such as 'Yesterday' or 'Last Week'.
    I can do the selectable start and end date fine, but I'm failing to make use of 'Yesterday'/'Last Week' within the command parameters. Below is an example of the code I have in the report's select expert; I want similar filtering in the command parameter, so the filtering happens straight in the DB rather than in Crystal.
    IF  {?Relative Date}= "Yesterday" THEN currentdate-1
    ELSE IF {?Relative Date}="Last Week" THEN LastFullWeek

    Good Day Guys,
    Apologies for the late response. I have been looking at and trying the different suggestions you pointed out, unfortunately without any success.
    Stored Proc
    Nrupal, I tried setting one up but I'm getting an error I cannot seem to get past. Below is my condition; any help would be appreciated (see also the sketch at the end of this thread):
    where cd1.Service_ID = @Service_ID
    and cd1.CallStartDt in (
      Case @RelativeDate
      When 'Today' then CONVERT(date, getdate())
      When 'Yesterday' Then DATEADD(DAY, -1, CONVERT(date, getdate()))
      When 'Last Week' then between @LastWeekStart and @LastWeekEnd    -- syntax error before BETWEEN; also tried using 'IN', same result
      When 'Last Month' then between @LastMonthStart and @LastMonthEnd
    else between @CallStart and @CallEnd
    END )
    End
    Crystal command parameter code
    where cd1.Service_ID in ({?Service_ID}) and
    convert(datetime,cd1.CallStartdt) in (
    Case {?RelativeDate}
    When 'Yesterday' Then GetDate()-1
    else  {?CallStart}
    END)
    Abhilash,
    I have tried both your proposed command parameters but they still fail.
    I'm getting an "invalid column name 'Yesterday'" error. I'm not sure if Crystal accepts this type of command formatting using command parameters; I would appreciate your help, as I would like to steer clear of the SP unless it's the only alternative.
    Dell
    I will start working in your re
    Thanks Again
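    For reference, here is a sketch of the stored-proc condition with the CASE removed. T-SQL's CASE is an expression, so it cannot return a BETWEEN predicate; each relative-date choice has to become its own boolean branch instead. Parameter and column names are taken from the post above; treat the rest as an untested sketch.
    where cd1.Service_ID = @Service_ID
    and (
         (@RelativeDate = 'Today'      and cd1.CallStartDt = CONVERT(date, GETDATE()))
      or (@RelativeDate = 'Yesterday'  and cd1.CallStartDt = DATEADD(DAY, -1, CONVERT(date, GETDATE())))
      or (@RelativeDate = 'Last Week'  and cd1.CallStartDt between @LastWeekStart  and @LastWeekEnd)
      or (@RelativeDate = 'Last Month' and cd1.CallStartDt between @LastMonthStart and @LastMonthEnd)
      -- fall through to the explicit date range when no relative value matches
      or (@RelativeDate not in ('Today', 'Yesterday', 'Last Week', 'Last Month')
          and cd1.CallStartDt between @CallStart and @CallEnd)
    )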

  • Where to search for specific Dimension-related data

    Hi,
    I guess Hyperion Planning stores the dimension-related data (parent, child, UDA, attributes, consolidation operator, data storage, etc.) in some relational tables of the Planning application. Can anybody help me understand where and how that data is stored, and which table names I should look at for a particular dimension's data?
    Actually I need to look into the Planning RDBMS tables to get the member names of one particular dimension, then search another huge Oracle database for those members and retrieve the relevant data with a query. I am using Planning 9.3.1.
    Please revert back for any clarification.
    Regards.

    Hi,
    Take a look at the tables below in your application repository schema (DB); they are all linked through ID fields and they contain the dimensional information.
    HSP_OBJECT
    HSP_OBJECT_TYPE
    HSP_DIMENSION
    HSP_MEMBER
    HSP_ALIAS
    You get the detail information from HSP_OBJECT, which keeps the details for every metadata object. The other tables will help you understand the relations, positions, etc. A sample query is sketched below.
    Cheers,
    Alp
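    To illustrate, a query along these lines might pull the member names of one dimension. This is a sketch only: the join columns MEMBER_ID and DIM_ID are assumptions based on common Planning schemas, so verify them against your 9.3.1 repository.
    -- Hypothetical: list the member names of the 'Entity' dimension.
    select m.OBJECT_NAME as member_name
    from HSP_OBJECT m
         join HSP_MEMBER hm on hm.MEMBER_ID = m.OBJECT_ID
         join HSP_OBJECT d on d.OBJECT_ID = hm.DIM_ID
    where d.OBJECT_NAME = 'Entity';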

  • Events on XMLDB repository. How ?

    Hi all,
    is there any way to fire custom logic on XMLDB repository events from any access protocol (FTP, WebDAV, DBMS_XMLDB)?
    (i.e. Create/Delete/Modify a Folder/Link/Resource, etc.)
    I don't use XMLType, only unstructured data (CLOB).
    If triggers are the solution, on which table/view should they be created?
    Any example?
    Thanks a lot

    None that are considered a 'supported' solution. This functionality is being considered for a future release of the database.

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take so much longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and with XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
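    In case it helps the discussion, the SQL-side rewrite I have in mind would look something like the sketch below, using the sample tables above (only Subelement1 shown). Shredding the codes with XMLTABLE and joining CODES in the SQL WHERE clause should leave the optimizer free to pick a hash join instead of the nested-loop lookups:
    -- Sketch: join the code table in SQL rather than inside the XQuery.
    select r.ssn, x.code, c.description
    from records r
       , xmltable('/Root/Element/Subelement1'
           passing r.xmlrec
           columns code varchar2(4) path 'Code') x
       , codes c
    where r.ssn = '10000'
    and c.code = x.code;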

  • Help Required: Working with relational data

    Hi,
    I'm looking for some advice
    Scenario
    We have a customer who has an SE relational database with Apex for transaction processing. They require reporting functionality and are debating whether to buy BI SE One or BI EE; they will not buy the Enterprise Edition database, though.
    Questions
    Here are my questions:
    1. In order to use Answers and Dashboards, does the relational data have to be in a dimensional format? Is this recommended?
    2. We were debating whether to create data marts for various business requirements. Is this standard procedure for relational data (which is not too complex)?
    3. If the data has to be in a dimensional format, is this best modelled using Warehouse Builder or the model layer of the repository using BI Administrator? (Bearing in mind that the customer is an SME using an SE database.)
    The main problem we're having, being fairly new to warehouse building, is that we don't know the best path to take when working with relational data. Do we create dimensional models? What should we use? Is a relational structure suitable enough?
    Can anyone shed light on our problems
    Regards
    Kevin

    1. In order to use Answers and Dashboards you do not need to have a Data Warehouse. It can report out of both Transactional database as well as a data warehouse.
    2. It depends on what your business requirements are. BI EE/SE1 does not require you to have data marts. Data marts would give performance gains and would help you visualize the transactional data from a business standpoint; they are a big plus but not a mandate. If you want to go the route of creating data marts, you can easily do that using BI SE1, since it gives you Warehouse Builder, which can be used both as a data modelling tool and as an ETL tool.
    3. Warehouse Builder is an ETL tool. It helps you visualize and deploy data warehouses in databases. The BI EE Admin tool, by contrast, is a logical dimensional modelling tool: the facts and dimensions do not exist physically, but the tool uses them to do the querying/processing.
    Since you are just starting, what you can do is use BI EE directly on top of the transactional system for the time being. But as a long-term solution you can start working on your data marts, and as soon as they are ready you can port BI EE on top of them rather than the transactional system.
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com

  • Using relational data from SQL data source in Planning and Essbase

    Hi,
    How do I take sample data from a SQL data source and bring it into a Hyperion Planning application? I understand that when creating Planning applications, a link between a relational data source and Essbase must be established, because the relational database holds the metadata while the database outline is stored in Essbase. However, all I am currently able to do is load data into Planning applications via EAS, where I right-click on the application database, hit load data, and select either a .txt file or an Excel file. Do I need Oracle Data Integrator? Any help or insight would be greatly appreciated, as well as corrections to any incorrect assumptions I may have made in this post. Thank you.

    When you import your file (Excel or text), you're importing it using a load rule in EAS. To load from SQL, you simply create a SQL load rule. You'll load data the exact same way (via EAS), but with a different type of load rule. The load rule will contain the SQL that queries the database. You can preview your data in the load rule the same way you would with a file.
    If your SQL is very complex, I'd recommend creating a view and loading from that view (see the sketch below). But otherwise it's pretty straightforward.
    The only catch is that you need to configure a database connection (to your relational database) on the Essbase server. The Essbase DBA guide will show you how to do this.
    You COULD use ODI, but I tend to only use it for loading metadata.
    Hope this helps,
    - Jake
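    For example, a simple view along these lines could sit behind the SQL load rule. This is only a sketch: FACT_GL and its columns are made-up names standing in for your real source table.
    -- Hypothetical view to keep the load rule's SQL trivial.
    create or replace view v_planning_load as
    select period, entity, account, amount
    from fact_gl
    where scenario = 'Actual';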

  • How to fetch relational data from an XML file registered in XDB

    Hi,
    I have to register an XML file into the XDB repository and then fetch the data of the XML file as a relational structure through a SELECT statement.
    I used the block below to register the XML file in XDB:
    DECLARE
    v_return BOOLEAN;
    BEGIN
    v_return := DBMS_XDB.CREATERESOURCE(
    abspath => '/public/demo/xml/db_objects.xml',
    data => BFILENAME('XML_DIR', 'db_objects.xml')
    );
    COMMIT;
    END;
    Now I have to fetch the values in the XML file as relational data.
    Is that possible?
    Can anyone help me?
    Regards,
    suresh.

    When you transform your XML data to an XMLType you can do something like this, for example:
    select
    extractvalue(value(p),'/XMLRecord/Session_Id') session_id,
    extractvalue(value(p),'/XMLRecord/StatementId') StatementId,
    extractvalue(value(p),'/XMLRecord/EntryId') EntryId
    from
    table(xmlsequence(extract(xmltype('
    <XMLdemo>
    <FormatModifiers><FormatModifier>UTFEncoding</FormatModifier></FormatModifiers>
    <XMLRecord>
    <Session_Id>117715</Session_Id>
    <StatementId>6</StatementId>
    <EntryId>1</EntryId>
    </XMLRecord>
    </XMLdemo>
    '),'/XMLdemo/*'))) p
    where extractvalue(value(p),'/XMLRecord/Session_Id') is not null;
    For this sample I've put readable XML in plain text and converted it to XMLType, so you can run it on your own database.
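    As a side note, on more recent releases the same shredding is usually written with XMLTABLE rather than the deprecated extract/extractvalue. A sketch over the same sample document:
    -- Hypothetical XMLTABLE equivalent of the query above.
    select x.session_id, x.statementid, x.entryid
    from xmltable('/XMLdemo/XMLRecord'
           passing xmltype('<XMLdemo>
    <FormatModifiers><FormatModifier>UTFEncoding</FormatModifier></FormatModifiers>
    <XMLRecord>
    <Session_Id>117715</Session_Id>
    <StatementId>6</StatementId>
    <EntryId>1</EntryId>
    </XMLRecord>
    </XMLdemo>')
           columns session_id number path 'Session_Id'
                 , statementid number path 'StatementId'
                 , entryid number path 'EntryId') x;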

  • OWB 10g -- Can't Create Database Links for Data Source and Target

    We installed OWB 10g server components on a Unix box running an Oracle 10g (R2) database. The Designer Repository is in one instance; the Runtime Repository and the Target are in another instance. The OWB client component was installed on Windows XP. We created a data source module and a target module in OWB. The data source is on another Unix box running an Oracle 9i (R2) database. We tried to create database links for the data source module and the target module, respectively, but when we created and tested the DB links, both failed.
    For the database link of data source, we got the following error message:
    Testing...
    Failed.
    SQL Exception
    Repository Error:SQL Exception..
    Class Name: CacheMediator.
    Method Name: getDDEntryFromDB.
    Repository Error Message: ORA-12170: TNS:Connect timeout occurred
    For the database link of target , we got the following error message:
    Testing...
    Failed.
    API2215: Cannot create database link. Please contact Oracle Support with the stack trace and the details on how to reproduce it.
    Repository Error:SQL Exception..
    Class Name: oracle.wh.ui.integrator.common.RepositoryUtils.
    Method Name: createDBLink(String, String, String, String).
    Method Name: -1.
    Repository Error Message: java.sql.SQLException: ORA-00933: SQL command not properly ended.
    However, we could connect to the two databases (data source and target) using OWB's SQL*Plus utility.
    Please help us to solve this problem. Thank you.

    As I said before, database link creation should work from within the OWB client (also in 10g).
    Regarding your issue when deploying: have you registered your target locations in the Deployment Manager, and did you first deploy your target location's connector, which points to your source?
    I myself had some problems with database link creation in the past. I can't remember exactly what they were, but they had something to do with:
    - the use of unusual characters in the database link name
    - a long domain name used as names.default_domain in my sqlnet.ora file
    What you can do is check the actual script created when deploying the database link to see if there's something strange, and check whether executing the created script manually works or not (see the sketch below).
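    To check outside OWB, something like this can be run by hand. A sketch only: the link name, credentials, and TNS alias are placeholders.
    -- Hypothetical manual test of a database link, independent of OWB.
    create database link src_link
    connect to scott identified by tiger
    using 'SRCDB'; -- TNS alias from tnsnames.ora
    select sysdate from dual@src_link; -- fails fast if the link is broken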

  • Schema registration for storing XML documents in XMLDB repository

    Hi,
    Can I store only XML Schema-based documents in the XMLDB repository? Can't I save documents which do not have a schema?
    Thanks in advance,
    Sirisha.

    Testing with 10.1.0.3.0 I get
    SQL*Plus: Release 10.1.0.3.0 - Production on Thu Sep 16 23:53:30 2004
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    SQL> spool registerSchema_&4..log
    SQL> set trimspool on
    SQL> connect &1/&2
    Connected.
    SQL> --
    SQL> declare
    2 result boolean;
    3 begin
    4 result := dbms_xdb.createResource('/home/&1/xsd/&4',
    5 bfilename(USER,'&4'),nls_charset_id('AL32UTF8'));
    6 end;
    7 /
    old 4: result := dbms_xdb.createResource('/home/&1/xsd/&4',
    new 4: result := dbms_xdb.createResource('/home/OTNTEST/xsd/pdbx.xsd',
    old 5: bfilename(USER,'&4'),nls_charset_id('AL32UTF8'));
    new 5: bfilename(USER,'pdbx.xsd'),nls_charset_id('AL32UTF8'));
    PL/SQL procedure successfully completed.
    SQL> commit
    2 /
    Commit complete.
    SQL> alter session set events='31098 trace name context forever'
    2 /
    Session altered.
    SQL> begin
    2 dbms_xmlschema.registerSchema
    3 (
    4 schemaURL => '&3',
    5 schemaDoc => xdbURIType('/home/&1/xsd/&4').getClob(),
    6 local => TRUE,
    7 genTypes => TRUE,
    8 genBean => FALSE,
    9 genTables => &5
    10 );
    11 end;
    12 /
    old 4: schemaURL => '&3',
    new 4: schemaURL => 'pdbx.xsd',
    old 5: schemaDoc => xdbURIType('/home/&1/xsd/&4').getClob(),
    new 5: schemaDoc => xdbURIType('/home/OTNTEST/xsd/pdbx.xsd').getClob(),
    old 9: genTables => &5
    new 9: genTables => TRUE
    begin
    ERROR at line 1:
    ORA-31084: error while creating table "OTNTEST"."datablock1708_TAB" for element
    "datablock"
    ORA-01792: maximum number of columns in a table or view is 1000
    ORA-02310: exceeded maximum number of allowable columns in table
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 17
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 26
    ORA-06512: at line 2
    SQL> quit
    Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - Production
    With the Partitioning, OLAP and Data Mining options
    $
    This issue is dealt with in
    http://download-west.oracle.com/docs/cd/B13789_01/appdev.101/b10790/xdb03usg.htm#sthref181
    See the section "Working with Large XML Schemas".

  • Bill of material related data.

    Dear experts,
    I want to know whether there are any standard DataSources to extract Bill of Material-related data (material, component) to BI, as the data is in standard tables like MAST, STPO, and STKO.
    Pts will be assigned.
    With Regards,
    Meiyappan

    Hi,
    Check these links; I hope they are of some help:
    http://help.sap.com/saphelp_erp2005/helpdata/en/ea/e9b7234c7211d189520000e829fbbd/frameset.htm
    http://help.sap.com/saphelp_erp2005/helpdata/en/d1/2e4114a61711d2b423006094b9d648/frameset.htm
    http://www.sap-img.com/sap-sd/sales-bom-implementation.htm
    http://www.sap-basis-abap.com/sappp007.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/43/40b8aeaa5bba4d9a81c7332119a4b4/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/d9/6eeedf6d44f242902eafc361924026/content.htm
    Sasi

  • How do you schedule an appointment using a relative date?

    Is it possible to schedule a meeting using a relative date (e.g. every 3rd Saturday of each month)? It was very simple to do this on the BlackBerry calendar; however, I have not been able to do it using the iPhone calendar.

    1. If it is still covered by your Applecare plan then find the website for your local Applestore and look for the link for  "genius bar" appointments. The only way to make an appointment is online.
    2. If it is not covered by an Applecare then you can either make appointment as discussed above and pay Apple to fix or replace it (they choose which) or you can try a third party repair facility.  I've had good luck with PDASmart --  google for their webpage.

  • Index same KM links in two diffrent folders

    Hi
    I have an issue regarding indexing of a folder that consists of internal KM links. I have copied a taxonomy structure, which consists of a bunch of folders and links to KM documents, to another KM repository. When I browse this copied structure and perform a “search from here” from a specific folder, I get an empty search result. When I perform the same action in the taxonomy structure I get a correct search result.
    I have tried to create a new index for the copied structure and crawl all the links again; the result is that the links are found but not indexed, because the documents are already known.
    Does anyone have a workaround so it’s possible to use the “search from here” command on a copied folder which consists of links?
    I am currently running on NetWeaver 04 Stack 17.
    Thanks in advance
    Cheers
    John

    Hi,
    You can do that using taxonomies. You upload the document, assign metadata, and then it is classified by means of the TREX index.
    You create KM Navigation iViews that point to these taxonomy folders.
      You can read about taxonomies:  http://help.sap.com/saphelp_nw04s/helpdata/en/6c/5145b1d1de11d6b2cc00508b6b8b11/frameset.htm
    Patricio.

  • Link the data in excel to dreamweaver

    I don't know how to link to the data in a cell of an Excel (2003) sheet; it isn't the whole Excel file, just the data in a cell or column.
    I want to display the data in a web page.
    Thinking about a solution, I tried Excel's publish function and succeeded in publishing the page, but there is a problem: the data only updates when the page itself is refreshed.
    This is difficult for me, so could you give me some advice about it?

    My thoughts are:
    To me (IMHO) your problem is that
    - the Excel spreadsheet is not on the webserver
    - a database is a better repository for storing/updating/displaying data
    If people need the data in a spreadsheet from time to time, the data (from the database) could be formatted in a query and exported to a local machine, where a CSV file could be imported into Excel.
