Query related to XML

Hi all,
I am generating an XML file using a DOM parser and filling it with values retrieved from a database. In my XML file I have tags like
<importProject>
<ImportStaticFile>
<ImportContent>
<ImportSite>
My query for retrieving data from 3 tables is
String query = "SELECT Project_Master.proj_name, Project_Master.proj_desc, Project_Master.proj_location, folder_navigation.nav_name, content_document.cont_name, content_document.cont_type FROM Project_Master INNER JOIN folder_navigation ON Project_Master.proj_id = folder_navigation.proj_id INNER JOIN content_document ON content_document.nav_id = folder_navigation.nav_id";
Now my table contains 2 rows, so every tag is getting created twice. But I want that if there are 2 projects then 2 project tags should be created, and if no static file is retrieved from the tables then no static file tag should be there.
How should I do this?
Should I modify my join condition, or is there some other way?

select distinct field1, field2, field3 from table1
I didn't go deeper into your problem, but I got the feeling this could solve it?
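If the duplication comes from the join fanning out (one row per project x navigation x content combination), DISTINCT alone will not produce nested XML. Two options: keep the JDBC + DOM code but ORDER BY proj_id and start a new importProject element only when the project id changes; or, assuming the database is Oracle (not stated in the post), let SQL/XML build the nesting directly. A sketch using the poster's table names:

select xmlelement("importProject",
         xmlforest(p.proj_name, p.proj_desc, p.proj_location),
         -- XMLAgg returns NULL over an empty set, so a project with
         -- no content rows simply gets no child tags at all
         (select xmlagg(xmlelement("ImportContent",
                          xmlforest(c.cont_name, c.cont_type)))
          from folder_navigation n
          join content_document c on c.nav_id = n.nav_id
          where n.proj_id = p.proj_id))
from Project_Master p;

Each project row produces exactly one importProject element, and a missing static file or content row produces no tag, which matches both requirements.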

Similar Messages

  • Examples related to XML and XSLT

    Hello friends,
    In relation to XML and XSLT:
    1. How can I transform an XML file using XSLT?
    2. How do I view an XML file using XSLT?
    3. Are XSL and XSLT files the same?
    Thanks for the support

    http://docs.oracle.com/javaee/1.4/tutorial/doc/JAXPXSLT6.html
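    If the transformation needs to happen inside an Oracle database rather than in Java (an assumption; the JAXP tutorial linked above covers the Java route), the XMLTransform() SQL function applies an XSLT stylesheet directly:

    select xmltransform(
             xmltype('<root><name>test</name></root>'),
             xmltype('<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                        <xsl:template match="/">
                          <out><xsl:value-of select="/root/name"/></out>
                        </xsl:template>
                      </xsl:stylesheet>')) as result
    from dual;

    As for question 3: XSLT is the transformation language; an .xsl file normally contains an XSLT stylesheet, so in practice the two names refer to the same thing.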

  • Best Practices - Relational to XML?

    Oracle 10gR2 Enterprise Edition 10.2.0.4.0
    Read the OpenWorld 2009 Oracle XML DB Design Guidelines, paraphrased as:
    - Use Case:
    -- Data already in relational form
    -- Need to generate different XML shape for presentation, exchange, reporting
    - Recommendations:
    -- Do Use XMLElement(), XMLForest(), XMLAgg() SQL/XML to define views
    -- Do Use XQuery with ora:view() for complex XML report generation
    -- Don't use DBMS_XMLGEN(), DBMS_XMLQUERY(), XSU
    --- Lower perf, less declarative
    However, Mark Drake's post in the thread "Performance problems with XMLTABLE and XMLQUERY involving relational data" (quoted in full later on this page) has me wondering whether the "Do Use XQuery with ora:view() for complex XML report generation" point has some caveats.
    I'm starting to build a solution to send relational data as XML to another system, where the relational data consists of several joined tables, including many one-to-one joins and a few one-to-many joins (including a pair of one-to-many joins where the rows from two joined tables must be interleaved). ( Note - To date, all of our XML processing has been for presentation and has been done on Windows .NET client apps. )
    I have built some simple SQL/XML views with XMLElement(), XMLAttributes(), XMLForest(), ... to begin building my SQL/XML and XML DB experience base, and am now working on an XQuery-based approach for the real solution.
    Questions:
    1. For performance with multiple 1:1 joined tables, should SQL views be preferred to building multiple FLWOR joins in XQuery?
    2. Can relational tables with CLOB columns be referenced in the XQuery for clause of a SELECT XMLQuery() query?
    Note: Q2 is due to performance issues I encountered - when I added an XQuery for clause referencing a relational table containing several CLOB columns, the SELECT XMLQuery() query never returned.
    Alex

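    For reference, a minimal sketch of the recommended SQL/XML view approach (the demo dept/emp tables stand in for real ones):

    create or replace view dept_xml as
    select xmlelement("Department",
             xmlattributes(d.deptno as "id"),
             xmlforest(d.dname as "Name"),
             (select xmlagg(xmlelement("Employee", xmlforest(e.ename, e.job)))
              from emp e
              where e.deptno = d.deptno)) as xml_doc
    from dept d;

    Each 1:1 join becomes more XMLForest()/XMLElement() arguments; each 1:N join becomes a correlated XMLAgg() subquery.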

  • Schema include with a relative path coming up relative to xml doc

    I have a main schema document that includes my other schema documents to make up the entire schema. All the documents are in the same default (no) namespace. All the includes are relative to the main schema doc (they are all in the same directory). I set the schemaSource property on the SAXParser to the main document. Then I try to parse, and it isn't finding any of the included files. I have searched and searched these forums and found no answer.
    What I have tried is making an EntityResolver, and when I print out the systemId in resolveEntity, it prints a location relative to the XML document being parsed, not a path relative to the main schema where all the schema files are. If I set the schemaLocation in the include to the full path, it works fine. Why is it looking for them in the directory where the XML document is, and not the schema?
    Any help would be greatly appreciated. Thanks!

    Probably because some standard says that's what it is supposed to do. If you think it is wrong (and have a reference to some standards document to back up your opinion) you could complain to the people who maintain your parser.

  • Related to XML attributes

    Now my xml.dtd file has a structure like this:
    <!ATTLIST nete:forward
    filter CDATA #IMPLIED>
    That means the <nete:forward> element can contain only one attribute, so in the .xml file I can specify <nete:forward filter="myfilter">...</nete:forward>.
    Now the requirement is that it should support multiple attributes (unbounded).
    So instead of hard-coding the DTD file like
    <!ATTLIST nete:forward
    filter CDATA #IMPLIED
    filter1 CDATA #IMPLIED
    filter2 CDATA #IMPLIED ...>
    is there any way to specify multiple attributes for an element, like
    <!ATTLIST nete:forward
    filter* CDATA #IMPLIED>
    So my final requirement is that the customer can add as many attributes as they want, like
    <nete:forward filter="xxx" filter1="yyy" filter2="zzz" ...>...</nete:forward>
    I think the question is clear?

    I would expect element <nete:forward> to contain a set of <filter> elements rather than a set of filter attributes.
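    A sketch of that suggestion: a DTD cannot declare an open-ended set of attribute names (there is no filter* for attributes), but it can declare a repeatable child element. The names below are illustrative.
    <!ELEMENT nete:forward (filter*)>
    <!ELEMENT filter (#PCDATA)>
    The instance document then becomes:
    <nete:forward>
      <filter>xxx</filter>
      <filter>yyy</filter>
      <filter>zzz</filter>
    </nete:forward>
    If the attribute shape must be kept, a schema language with wildcards (e.g., xs:anyAttribute in XML Schema) would be needed; DTDs have no equivalent.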

  • Query related to ABAP Query

    Hi,
    I am creating an ABAP Query using SQ03, SQ02 and SQ01.
    I have two tables to fetch data - MAST AND MARA.
    The only common field in the two is MATNR.
    First I created the Infoset using table MAST.
    I clicked on the Join radio button in SQ02.
    And then inserted the MARA table using the Insert Table button.
    It automatically linked the two tables on the MATNR field because that is the only common field in the two tables.
    I can see the fields of the MAST table in SQ02 and SQ01,
    but I can't see the fields of MARA, which I have to display on the input screen as well as the output screen.
    Help needed.
    Thanks in advance,
    Ishaq.

    Hi Ishaq,
    Have you selected the fields into field groups?
    Check my article which explains a sample scenario in the link below.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/40bec8da-4cd8-2910-27a9-81f5ce10676c
    Regards
    Sailaja.
    (Intelligroup).

  • Hi, I had a query related to CUPS

    Hello,
    The query is whether I can broadcast text messages to the phone without a user logging in to IPPM on the phone.

    Also be advised that IPPM is end-of-life and is removed starting in CUP 9.0.
    You may want to look at paging products. Both Syn-Apps and InformaCast include text broadcast abilities in their products.
    Please remember to rate helpful responses and identify helpful or correct answers.

  • Relational to XML Question

    I need to extract from a parent table and 4 child tables and store the results in a CLOB field in another table. I have been going over the documentation and don't see a clear answer. I have tried using 'select xmlquery' and also 'select dbms_xmlquery.getXML', but don't see an obvious answer to what I am trying to do. Can someone please provide a suggestion as to what would be best for my scenario?
    Thank you very much for your suggestion!
    Jerry

    One important piece of information: I am using version 10.2.0.3.
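    A sketch that should work on 10.2 (table and column names are hypothetical): build the nested document with the SQL/XML functions and serialize it with getClobVal(), avoiding the legacy DBMS_XMLQUERY package entirely.

    insert into xml_out (doc)
    select xmlelement("Parent",
             xmlforest(p.id, p.name),
             (select xmlagg(xmlelement("Child", xmlforest(c.col1, c.col2)))
              from child1 c
              where c.parent_id = p.id)
             -- add one correlated XMLAgg subquery like the one above
             -- for each of the other three child tables
           ).getClobVal()
    from parent p;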

  • Query related to settlement rule

    Hi,
    I have given the default receiver as CTR and entered a cost center in IE02. Now when creating an order, I am able to get the default settlement rule, but I am getting 2 line items in the settlement rule, both with the same settlement category (CTR), the same cost center, and a settlement percentage of 100%. The difference is that the first line item has FUL as the settlement type and the second has PER. I would like to know where we control always getting 2 distribution lines in the default settlement rule.
    Hope I am clear.
    regards
    giri

    Giri,
    Have a look at these transactions:
    KSR1_ORI - Maintenance Order Strategies
    KSR2_ORI - Strategy Sequences for PM-Orders
    KSR3_ORI - Strategy Sequence - Ordtyp PM-Orders
    PeteA

  • Query related to F.Locations

    Hi,
    I have 3 levels of functional locations, and my equipment is installed at the third level. On this equipment I have created some orders. Now in IW38, when I enter the third-level F.Loc I get a list of orders, but when I enter the second-level F.Loc I do not get any order list. So is there any possibility to get the list of orders even at the second level of F.Loc?
    Note: My third-level F.Loc is assigned to the second-level F.Loc.
    regards
    giri

    Dear Giri,
    We have defined functional locations up to 6 levels, such as RMHP-LOC1-CONVEY-0000A7-SDV (RMHP LOC1 CONA7 SHUTTLE DRIVE),
    with equipment installed at the lower levels. To get the list of orders in IW38, we enter RMHP* in the FL field of the selection criteria and execute; it displays all orders below this functional location, because we have structured all our functional locations according to one structure indicator (i.e., all FLs follow a particular structure indicator).
    I agree with the solution provided by Sebastien. Further, note that when you give an FL code in the selection criteria of IW38, you get the list of orders for that location only, and it includes orders for all equipment installed there. Otherwise, think of the requirement where I want the list of orders for one particular functional location only: how would I get it if it showed all orders below it? A functional location is an individual technical object where history is to be captured. Hence, by structuring FLs in a hierarchy you can achieve your requirement.
    Regards
    S P Behera

  • Query related to the Fix call option in IP10

    Hi,
    For performance-based plans in IP10, the Fix call option is greyed out for a scheduled plan, whereas the same option is present in time-based plans. Is there any specific reason for this?
    regards
    giri

    You can't use the Fix call option for a multiple counter plan, I feel.
    Please go through below link.
    http://help.sap.com/saphelp_erp60_sp/helpdata/en/3c/abb5da413911d1893d0000e8323c4f/content.htm

  • XML BLOB/CLOB to relational structure

    Hi
    I would like people's views on the best approach to the following scenario.
    We are implementing a new payments engine (major bank); the vendor's DB is a mix of normal relational tables and XML, including XML CLOB/BLOB. In the latter case, the original incoming payment message and the enriched outgoing payment message are stored as XML CLOBs. The vendor has no OOTB reporting capabilities, so our approach is to replicate the production database using GoldenGate. We then plan to create a fully relational Oracle database as an ODS (near real time) with B.O. sitting over the top for reporting. We will use Oracle Data Integrator to extract the data from the replicated copy of the live DB every 10 minutes, shred the XML CLOB components, and store them in normal relational tables in the ODS.
    Does anyone see major problems with this approach, and can anyone suggest a better approach?
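    Assuming Oracle on the ODS side, the shred step would look something like this (all names hypothetical; a real payment message needs one XMLTABLE per repeating level):

    insert into ods_payments (msg_id, amount, currency)
    select x.msg_id, x.amount, x.currency
    from replicated_payments s,
         xmltable('/Payment'
                  passing xmltype(s.payment_clob)
                  columns msg_id   varchar2(35) path 'MsgId',
                          amount   number       path 'Amt',
                          currency varchar2(3)  path 'Ccy') x;

    One caution: as the XMLTABLE/XMLQUERY performance thread quoted later on this page shows, shredding large batches can hit version-specific performance problems, so the 10-minute ODI window is worth load-testing early.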


  • XML Publisher - Data Template - Help

    Hi
    We are using XML Publisher with R12, and we are using a data template (.xml file) in the data definition.
    We have a requirement for a master-detail report. For that we have a parent query and a child query; the data coming from the parent query should be the parameter to the child query.
    Example: Parent query: select empno from emp
    Child query: select * from dept where empno = :p_empno (p_empno = empno from the parent query)
    For this requirement we are creating a data template (.xml file). We can successfully write the parent query, but we fail when it comes to the child query. Please help us with how it can be written in the data template, and send any example related to this issue.

    This forum is in no way related to XML Publisher. I already redirected you to the actual BI Publisher forum once. You should post it there.
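    For completeness, a hedged sketch of the usual data template shape for a master-detail pair (untested; nesting the child group inside the parent group is what lets the child query bind the parent column, so :p_empno becomes :EMPNO here):
    <dataTemplate name="EMP_DEPT" description="Master-detail example">
      <dataQuery>
        <sqlStatement name="Q_EMP"><![CDATA[select empno from emp]]></sqlStatement>
        <sqlStatement name="Q_DEPT"><![CDATA[select * from dept where empno = :EMPNO]]></sqlStatement>
      </dataQuery>
      <dataStructure>
        <group name="G_EMP" source="Q_EMP">
          <element name="EMPNO" value="EMPNO"/>
          <group name="G_DEPT" source="Q_DEPT">
            <element name="DEPTNO" value="DEPTNO"/>
          </group>
        </group>
      </dataStructure>
    </dataTemplate>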

  • Is XML Publisher causing shared memory problem..?

    Hi Experts,
    Since this week, many of the Requisitions/POs are erroring out with the errors below or similar:
    - ORA-04031: unable to allocate 15504 bytes of shared memory ("shared pool","PO_REQAPPROVAL_INIT1APPS","PL/SQL MPCODE","BAMIMA: Bam Buffer")
    ORA-06508: PL/SQL: could not find program unit being called.
    -Error Name WFENG_COMMIT_INSIDE
    3146: Commit happened in activity/function
    'CREATE_AND_APPROVE_DOC:LAUNCH_PO_APPROVAL/PO_AUTOCREATE_DOC.LAUNCH_PO_APPROVAL'
    Process Error: ORA-06508: PL/SQL: could not find program unit being called
    Few days back we were getting heap memory error for one of the XML Publisher report.
    I heard that XML Publisher requires a lot of memory for its sources/features, so I want to know whether XML Publisher can be one of the causes of the memory problem, or whether this shared memory issue is not related to XML Publisher sources at all.
    Please advise.
    Many thanks..
    Suman
    Edited by: suman.g on 25-Nov-2009 04:03

    Hi Robert,
    Thanks for your quick reply...
    Apps version: 11.5.10.2
    database version: 9.2.0.8.0
    As I am a beginner in this, I don't know much about it. Can you please guide me on this?
    The DBAs have increased the shared memory and the problem is resolved, but I am more concerned with whether XML Publisher was, or can be, one of the causes of the shared memory problem. Is there any way to check that, or does this occur randomly so that we cannot check it?
    Please advise.
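    If it helps while investigating, one generic way to see what is consuming the shared pool around the time of an ORA-04031 (not specific to XML Publisher; requires access to V$SGASTAT):

    -- largest consumers in the shared pool
    select pool, name, bytes
    from v$sgastat
    where pool = 'shared pool'
    order by bytes desc;

    ORA-04031 is usually a sizing or fragmentation issue rather than one guilty application, so a memory-hungry XML Publisher report can be a contributor without being the root cause.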

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 10 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
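    As a hedged sketch of that direction (untested, built only from the example tables above): shred the repeating elements with XMLTABLE, join CODES relationally so the optimizer gets an ordinary join it can reorder or hash, then re-aggregate the result:

    select xmlelement("Root",
             xmlelement("Id", '123456789'),
             xmlagg(
               xmlelement("Element",
                 xmlelement("Subelement1",
                   xmlelement("Code", x.code1),
                   xmlelement("Description", c1.description)))))
           -- Subelement2/Subelement3 would each add another XMLTABLE column
           -- and another join to CODES
    from records r,
         xmltable('/Root/Element'
                  passing r.xmlrec
                  columns code1 varchar2(4) path 'Subelement1/Code') x,
         codes c1
    where r.ssn = '10000'
      and c1.code = x.code1
    group by r.ssn;

    Whether this beats the XQuery + ora:view() form is exactly the open question here, but it at least expresses the code lookup as a join the optimizer understands.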
