XML is slow

Hi there,
We have a lot of customers that provide us with data via XML; we are effectively building a kind of central data store.
Validating the XML and storing it in relational tables takes a lot of time.
One XML message may contain about 50 million records. It takes us 0.1 seconds to store 100 records, so 50 million will take about 14 hours. This job has to run every night...!
We are using a 10.2 database and DBMS_XMLStore for storing the XML. That part is fast, but XMLType.existsNode is slow and getRootElement is also very slow.
When I try the same action with PL/SQL tables I can show that my PL/SQL program for processing and checking the data is fast and correct, because then I can store the same amount of records in about 20 minutes.
Any suggestions on how to speed up the XML processing?
Thanks very much

Can you post a sample XML with a few rows, and the program code that processes this XML?
Why do you need the existsNode and getRootElement calls when using DBMS_XMLStore?
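One direction worth testing (a sketch only - the staging table, target columns, and XPath below are assumptions, since no sample XML has been posted yet) is to shred the documents set-wise with XMLTable, so the per-record existsNode calls disappear and validation plus insert happen in a single SQL statement:

```sql
-- Hypothetical document shape: /records/rec with an id attribute and a name element.
-- xml_staging(xml_doc XMLTYPE) and target_table(id, name) are illustrative names.
INSERT /*+ APPEND */ INTO target_table (id, name)
SELECT x.id, x.name
FROM   xml_staging s,
       XMLTable('/records/rec'
                PASSING s.xml_doc
                COLUMNS id   NUMBER        PATH '@id',
                        name VARCHAR2(100) PATH 'name') x;
```

Whether this beats the 20-minute PL/SQL-table baseline on 10.2 depends heavily on how the XMLType is stored, so it has to be measured against the real documents.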

Similar Messages

  • Creation of IDoc XML very slow compared to IDoc Flatfile

    Hi everyone,
    I did a test in our ERP6 system and created IDoc files for a few invoices. The result is that creating an XML IDoc is 22x slower than creating a flatfile IDoc. Creating one large IDoc file with 1000 invoices took ~15 seconds as a flatfile and ~5:30 minutes as XML. So the choice between flatfile and XML is really a performance question for me. Is that normal? I expected a difference, but not that much.
    Can I speed up the XML creation somehow? XML is better to process and I really want to use it, but I cannot with this bad performance...
    Thank you!
    Regards
    Andreas

    So the choice between flatfile or XML is really a performance question for me. Is that normal?
    Yes, it is normal. For every byte of data in the XML file, it needs to create multiple bytes of structure according to the XML format.
    But I am wondering why you need to go for XML when there is no PI installed,
    because the end output file is always in flatfile format.

  • Output report XML very slow

    Hi,
    I designed a report whose output includes 40 columns and 100,000 rows. However, the request runs and the output opens very slowly in XML format on the client.
    Can you suggest an optimal solution to the problem above?
    (Version R12)
    Thank alot!
    Edited by: user12193364 on 14:03 24-04-2013

    Please post the details of the application release, database version and OS.
    Hi,
    I designed a report whose output includes 40 columns and 100,000 rows. However, the request runs and the output opens very slowly in XML format on the client. Can you suggest an optimal solution to the problem above?
    Are you running the report from the application?
    Is the issue with processing the request or just with opening the output file?
    What is the size of the concurrent request output file?
    R12: Troubleshooting Known XML Publisher and E-Business Suite (EBS) Integration Issues [ID 1410160.1]
    EBS - Technology Area - Webcast Recording 'BI Publisher / XML Publisher Overview & Best Practices - Webinar' [video] [ID 1513592.1]
    Poor Performance /High Buffer Gets In FND_CONC_TEMPLATES [ID 1085293.1]
    Performance Issue - PDF Generated With BI Publisher [ID 956902.1]
    Overview of Available Patches for Oracle XML Publisher embedded in the Oracle E-Business Suite [ID 1138602.1]
    Tuning Output Post Processor (OPP) to Improve Performance [ID 1399454.1]
    Thanks,
    Hussein

  • Large Catalog Site Using XML Too Slow

    Hello,
    I have been tremendously helped by the forum here - and yet I return for more help. I have a catalog site that uses a ViewStack, TileList, ItemRenderer, and XML files as data providers. The site works great except that there is a huge wait time before the photos load. I am concerned about site abandonment by customers. Would using ColdFusion and MySQL provide a better solution to speed up this site? Or is there a better way to speed up the load time of the photos? Grateful for any suggestions.

    Thanks for the reply. Question: would using a repeater with the TileList to address the XML provide a possible solution? Some images do show up already; however, they kind of randomly pop in one at a time until all are there. This seems to be kind of nerve-racking to my customers, so they say. I was thinking that perhaps the larger photos were the problem, but the thumbnails are very slow as well.
    I am looking for the best way to alleviate this problem, because it is a photo-driven catalog site. I really need to cut the load time down if possible. Or perhaps Flex Builder 3.0 is not the solution needed and PHP with a database is? Still looking for more possibilities.
    Thanks
    evware

  • XML file slow to load

    I have created a Spry Data Set from an XML file with approx. 4700 rows (not 4700 items). The problem is that it takes about 4-5 seconds to load when you click on the website. Is there a way I can improve the performance of this?
    Also, the search function performs slowly in IE but seems OK in Firefox.
    Can anyone suggest any way to improve the performance?
    Many thanks,
    Colin

    this is the select query code:
    mysql_select_db($database_myprojectbrowser, $myprojectbrowser);
    $query_product_table = "SELECT projects.project_code, projects.project_name, organisations.orgranisation, projects.date_closed, heirarchy_names.description, 'products_table.php', projects.p_id FROM projects Inner Join organisations ON projects.organisationID = organisations.organisationID Inner Join project_assignments ON projects.p_id = project_assignments.p_id Inner Join heirarchy_names ON project_assignments.heirID = heirarchy_names.heirID WHERE projects.n_name =  'scotwat' GROUP BY projects.p_id ORDER BY organisations.organisationID ASC, projects.project_code ASC ";
    $product_table = mysql_query($query_product_table, $myprojectbrowser) or die(mysql_error());
    $row_product_table = mysql_fetch_assoc($product_table);
    $totalRows_product_table = mysql_num_rows($product_table);
    print json_encode($product_table);
    As said above, I get an error message.

  • Reference Library for Converting Between LabVIEW and XML Data (GXML)

    Please provide feedback, comments and questions on the Reference Library for Converting Between LabVIEW and XML Data (GXML) in this thread.
    The latest version of the NI GXML Library is available in VIPM on the NI LabVIEW Tools Network repository.

    Francesco, thank you for the feedback. With this component it was my intention to make a more "terse" version of the LabVIEW Flatten to XML VI that was also supported on RT and that gave the user more flexibility regarding the structure of the parsing type definition.
    I think you are right that the XML parser is not compliant with section 2.11 of the XML spec. The parser specifically looks for a #D#A, and this appears to be an oversight on my part. Please confirm for me: the specification is saying that the XML parser should be able to recognize three possibilities as an "end of line" character: #D#A, #D, or #A. Am I reading this right?
    There are more efficient (and in some cases much more efficient) ways of sharing data between LabVIEW and LabVIEW: some examples are flattened binary strings and the datalog binary format. XML is slower than these options, but the upside is that it is human readable. Furthermore, XML is inherently hierarchical, which is convenient for complex data structures like clusters of arrays of clusters, etc. If you don't care about human readability then you are correct, XML doesn't make as much sense.
    I will return to the GXML source code and try to fix this in the near future, but I would hope that instead of creating yet another custom VI from scratch you could reuse what I have provided for you. I included enough documentation in the source code so that users could make some modifications themselves.
    The target application for this reference library was LabVIEW-to-LabVIEW communication. As such, I documented the schema on the dev zone document from a LabVIEW perspective. It includes all the supported datatypes and all the supported data structures (clusters, arrays, multidimensional arrays, clusters of multidimensional arrays, etc.). I do see some value in making a more conventional XML spec, but the time investment required didn't really line up with my intended use case.
    Were there any other downsides to GXML that I have missed?
    Best Regards,
    Jeff Tipps
    Systems Engineer - Sound and Vibration
    Message Edited by Jeff T. on 04-21-2010 10:09 AM

  • Migrating XML stored as unstructured to binary structured column

    Hi, I can't find any topic on that subject, so I'll start one myself. Does anyone have any experience with migrating to the binary XML storage option? Especially how it affects existing SQL procedures, and how to move the data.

    In some limited testing I did using 11.1.0.6, with an XML document that was probably in the 1 - 2 MB range, it more than doubled my INSERT time when switching from using
    CREATE TABLE load_temp_result OF XMLTYPE XMLTYPE STORE AS BASICFILE CLOB;
    to using
    CREATE TABLE load_temp_result OF XMLTYPE XMLTYPE STORE AS SECUREFILE BINARY XML;
    Of course, this was only taking the time from the 0.1 second range into the 0.2 second range. For me, the slowness was more than made up for on the parsing side of that XML, where it dropped from 15+ seconds to sub-second. So yes, like anything else, there is a trade-off. With BINARY XML, the slowness comes from Oracle parsing the XML and creating a binary representation of what you are storing. This binary representation is what allows it to read the XML faster when making updates. I never did test UPDATE speeds.
    I also expect this time difference to have decreased now, given improvements made to Oracle since the first release of that storage concept.
    So any negative impact due to increased INSERT time should be offset by increased performance when reading or UPDATEing the XML. Of course, your mileage may vary.
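The parse-side gain described above is what you see on XPath-driven reads; a sketch against the load_temp_result table from the example (the path and column names here are illustrative, not from the original test):

```sql
-- With SECUREFILE BINARY XML the document is stored pre-parsed, so this
-- XMLTable read avoids re-parsing the raw XML text on every query.
SELECT x.item_name
FROM   load_temp_result t,
       XMLTable('/result/item'
                PASSING t.OBJECT_VALUE
                COLUMNS item_name VARCHAR2(100) PATH 'name') x;
```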

  • Why can't I find the answer to this anywhere?

    Hi, please help a beginner...
    I'm trying to build an online catalog with PHP and MySQL, using Dreamweaver.
    I have a problem with filters... here it is:
    I have some variables... let's take 1 for ex. $manuf in which I want to store the manufacturer (for the notebooks table, like "acer", "alienware" etc.).
    I figured I'll do the filters one of 2 ways:
    1. through dropdown list/menu and I've populated the list with the values "acer" "alienware" etc.
    and in "OnChange" event of the list/menu I want to assign the selected value of the list/menu to the variable $manuf.
    HOW do I do this??? I can't find anything like a property list1.selected for ex. to simply write <?php $manuf = list1.selected ?>
    where list1. should be the name of the list/menu (right?...)
    or
    2. it looks nicer with a SpryMenu.
    So I have a Spry Menu like "Manufacturer" etc.
    And Submenus like "Acer", "Alienware" etc.
    now... when I click the submenu labeled "Acer" I can just do the php $prod = "acer" BUT HOW can I also change the menu's caption to say "Manufacturer - Acer" instead of just "Manufacturer" ??? ('cause I want people to see what filter is applied). Again, I can't find any property like sprymenu1.caption or something like that.
    P.S.
    regarding the list/menu... I can populate it dynamically, from the database... with the values from $manuf field. BUT they repeat, because there are more than 1 "acer" for ex. in the catalog (database). HOW can I dynamically populate the list from the database, without repeating any value???

    For the time being I think I am going to start with just problem 1, because I think as I go through explaining this to you, the answer to #2 will become clearer.
    Javascript is like ActionScript (Flash) and is executed at runtime, meaning that if you execute Javascript on a page you can see the results immediately. PHP, on the other hand, is a preprocessor, meaning that the code is executed before the page loads and cannot be executed at runtime like Javascript can. Your mix of the two for this instance is really not needed unless you want the form to act dynamically. Because of this there are 2 ways to approach the situation:
    1. If you want to stick with PHP: when you submit the form, the values will be stored in the superglobal variables ( http://php.net/manual/en/reserved.variables.php ). How you extract the data depends on the method of your form. For instance, if your form method is "POST" and the drop-down field named "manuf" has a value of "Dell", then when you submit the form, the variable $_POST['manuf'] will be equal to "Dell".
    With this example the form has to be submitted because Javascript cannot send commands to PHP once the page has been loaded because the PHP has already been executed.
    2. This way is a lot more complicated, so I won't go into much detail. But the Spry Dataset functions are intended for updating data live without the need to submit a form and force a page refresh for the PHP to execute. What happens with the Spry dataset is that you store all the information in MySQL and then the Spry Dataset will convert this into XML for you. PHP is used to run the initial query of your database and, depending on the size of your database, can be used to filter the data so you do not end up with an extremely large XML file slowing your page load time. I do have a basic example I did awhile back located here ( http://www.exitplaystation.com/warhawk/trophies.php ). That used a static XML document at the time. I do have a more dynamic example. It was actually done for work, but I might be able to modify it with bogus information and upload it to show you a dynamic version. In the meantime, if you want to see a professional example of this in action:
    http://msn.foxsports.com/nfl/draft-tracker#round=1&team=ALL&school=ALL&position=ALL
    FoxSports uses the Spry Dataset sort functions and some extra code for their live NFL draft system.  Refreshes automatically and sorts from the top drop-down menus.  So as you can see all of their data is loaded from their database connection and refreshed in real-time.  This is a very complicated example and requires modifications to the Spry code, but it shows the power of what Spry can do.
    Let me know which way you would prefer to go with this. Personally I would recommend #1 for starters, until you get going and feel more comfortable with PHP and Javascript. Also, if you give a little more detail on your code I could help you implement it if you are having difficulty understanding.
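The server-side filtering mentioned in option 2 can be sketched as pushing the filter and a row cap into the query itself, so only matching rows are ever serialized into XML (the table and column names below are made up for illustration):

```sql
-- Only rows for one manufacturer leave the database, capped at 100,
-- instead of the whole catalog landing in one huge XML file.
SELECT id, model, price
FROM   notebooks
WHERE  manuf = 'acer'
ORDER  BY model
LIMIT  100;
```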

  • XmlSockets speed

    What is the usual lag (ping) of XML Flash sockets? And also, what are their limitations related to speed?
    By the way, are Flash XML sockets slower than, for instance, Java TCP sockets?
    Is it a good idea to embed Java and Flash in a browser to create a real-time game using 1) TCP 2) UDP sockets?
    Thank you


  • Slow extraction in big XML-Files with PL/SQL

    Hello,
    I have a performance problem with the extraction of attributes from big XML files. I tested with a size of ~30 MB.
    The XML file is the response of a web service. This response includes some metadata of a document and the document itself. The document is embedded inline, Base64-encoded. Here is an example of an XML file I want to analyse:
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
       <soap:Body>
          <ns2:GetDocumentByIDResponse xmlns:ns2="***">
             <ArchivedDocument>
                <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***">
                   <Metadata archiveDate="2013-08-01+02:00" documentID="123">
                      <Descriptor type="Integer" name="fachlicheId">
                          <Value>123</Value>
                      </Descriptor>
                      <Descriptor type="String" name="user">
                         <Value>***</Value>
                      </Descriptor>
                      <InternalDescriptor type="Date" ID="DocumentDate">
                         <Value>2013-08-01+02:00</Value>
                      </InternalDescriptor>
                      <!-- Here some more InternalDescriptor Nodes -->
                   </Metadata>
                   <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream">
                      <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/>
                   </RepresentationDescription>
                </ArchivedDocumentDescription>
                <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0">
                   <Data fileName="20mb.test">
                      <BinaryData>
                        <!-- Here is the BASE64 converted document -->
                      </BinaryData>
                   </Data>
                </DocumentPart>
             </ArchivedDocument>
          </ns2:GetDocumentByIDResponse>
       </soap:Body>
    </soap:Envelope>
    Now I want to extract the filename and the Base64-encoded document from this XML response.
    For the extraction of the filename I use the following command:
    v_filename := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    For the extraction of the binary data I use the following command:
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    My problem is the performance of this extraction. Here is a summary of the start and end times for the commands:

    Start Time                      End Time                        Difference        Command
    10.09.13 - 15:46:11,402668000   10.09.13 - 15:47:21,407895000   00:01:10,005227   v_filename_bcm := apex_web_service.parse_xml(v_xml, '//ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName');
    10.09.13 - 15:47:21,407895000   10.09.13 - 15:47:22,336786000   00:00:00,928891   v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    As you can see, the extraction of the filename is slower than the document extraction: the filename extraction takes about 1:10 minutes.
    I wondered about this and started some tests.
    I tried to use an exact - non-dynamic - filename. So I have these commands:
    v_filename := '20mb_1.test';
    v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    Under these conditions the time for the document extraction soars. You can see this in the following table:
    Start Time                      End Time                        Difference        Command
    10.09.13 - 16:02:33,212035000   10.09.13 - 16:02:33,212542000   00:00:00,000507   v_filename_bcm := '20mb_1.test';
    10.09.13 - 16:02:33,212542000   10.09.13 - 16:03:40,342396000   00:01:07,129854   v_clob := apex_web_service.parse_xml_clob(v_xml, '//ArchivedDocument/DocumentPart/Data/BinaryData/text()');
    So I'm looking for a faster extraction from the XML file. Do you have any ideas? If you need more information, please ask me.
    Thank you,
    Matthias
    PS: I use Oracle 11.2.0.2.0

    Although using an XML schema is good advice for an XML-centric application, I think it's a little overkill in this situation.
    Here are two approaches you can test :
    Using the DOM interface over your XMLType variable, for example :
    DECLARE
      v_xml    xmltype := xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> 
           <soap:Body> 
              <ns2:GetDocumentByIDResponse xmlns:ns2="***"> 
                 <ArchivedDocument> 
                    <ArchivedDocumentDescription version="1" currentVersion="true" documentClassName="Allgemeines Dokument" csbDocumentID="***"> 
                       <Metadata archiveDate="2013-08-01+02:00" documentID="123"> 
                          <Descriptor type="Integer" name="fachlicheId"> 
                             <Value>123</Value> 
                          </Descriptor> 
                          <Descriptor type="String" name="user"> 
                             <Value>***</Value> 
                          </Descriptor> 
                          <InternalDescriptor type="Date" ID="DocumentDate"> 
                             <Value>2013-08-01+02:00</Value> 
                          </InternalDescriptor> 
                          <!-- Here some more InternalDescriptor Nodes --> 
                       </Metadata> 
                       <RepresentationDescription default="true" description="Description" documentPartCount="1" mimeType="application/octet-stream"> 
                          <DocumentPartDescription fileName="20mb.test" mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" hashValue=""/> 
                       </RepresentationDescription> 
                    </ArchivedDocumentDescription> 
                    <DocumentPart mimeType="application/octet-stream" length="20971520 " documentPartNumber="0" representationNumber="0"> 
                       <Data fileName="20mb.test"> 
                          <BinaryData> 
                            ABC123 
                          </BinaryData> 
                       </Data> 
                    </DocumentPart> 
                 </ArchivedDocument> 
              </ns2:GetDocumentByIDResponse> 
           </soap:Body> 
        </soap:Envelope>');
      domDoc    dbms_xmldom.DOMDocument;
      docNode   dbms_xmldom.DOMNode;
      node      dbms_xmldom.DOMNode;
      nsmap     varchar2(2000) := 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns2="***"';
      xpath_pfx varchar2(2000) := '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/';
      istream   sys.utl_characterinputstream;
      buf       varchar2(32767);
      numRead   pls_integer := 1;
      filename       varchar2(30);
      base64clob     clob;
    BEGIN
      domDoc := dbms_xmldom.newDOMDocument(v_xml);
      docNode := dbms_xmldom.makeNode(domdoc);
      filename := dbms_xslprocessor.valueOf(
                    docNode
                  , xpath_pfx || 'ArchivedDocument/ArchivedDocumentDescription/RepresentationDescription/DocumentPartDescription/@fileName'
                  , nsmap
                  );
      node := dbms_xslprocessor.selectSingleNode(
                docNode
              , xpath_pfx || 'ArchivedDocument/DocumentPart/Data/BinaryData/text()'
              , nsmap
              );
      --create an input stream to read the node content :
      istream := dbms_xmldom.getNodeValueAsCharacterStream(node);
      dbms_lob.createtemporary(base64clob, false);
      -- read the content in 32k chunks and append data to the CLOB :
      loop
        numRead := 32767;
        istream.read(buf, numRead);
        exit when numRead = 0;
        dbms_lob.writeappend(base64clob, numRead, buf);
      end loop;
      -- free resources :
      istream.close();
      dbms_xmldom.freeDocument(domDoc);
    END;
    Using a temporary XMLType storage (binary XML) :
    create table tmp_xml of xmltype
    xmltype store as securefile binary xml;
    insert into tmp_xml values( v_xml );
    select x.*
    from tmp_xml t
        , xmltable(
            xmlnamespaces(
              'http://schemas.xmlsoap.org/soap/envelope/' as "soap"
            , '***' as "ns2"
            )
          , '/soap:Envelope/soap:Body/ns2:GetDocumentByIDResponse/ArchivedDocument/DocumentPart/Data'
            passing t.object_value
            columns filename    varchar2(30) path '@fileName'
                  , base64clob  clob         path 'BinaryData'
          ) x;

  • Why isn't the xmlindex being used in a slow query on the binary XML table eval?

    I am running a slow, simple query on Oracle database server 11.2.0.1 that is not using the xmlindex. Instead, a full table scan of the eval binary XML table occurs. Here is the query:
    select -- /*+ NO_XMLINDEX_REWRITE no_parallel(eval)*/
          defid from eval,
          XMLTable(XMLNAMESPACES(DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
          'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7"),
          '$doc/eval/derivedFacts/ns7:derivedFact' passing eval.object_value as "doc" columns defid varchar2(100) path 'ns7:defId'
           ) eval_xml
    where eval_xml.defid in ('59543','55208');
    The predicate is not selective at all - the returned row count is the same as the table row count (325,550 XML documents in the eval table). When different values are used, bringing the row count down to ~33%, the xmlindex still isn't used - as would be expected in a purely relational non-XML environment.
    My question is: why wouldn't the xmlindex be used in a fast full scan manner, versus a full table scan traversing the XML in each eval table document record?
    Would a FFS hint be applicable to an xmlindex domain-type index?
    Here is the xmlindex definition:
    CREATE INDEX "EVAL_XMLINDEX_IX" ON "EVAL" (OBJECT_VALUE)
      INDEXTYPE IS "XDB"."XMLINDEX" PARAMETERS
      ('XMLTable eval_idx_tab XMLNamespaces(DEFAULT ''http://www.cigna.com/acme/domains/eval/2010/03'',
      ''http://www.cigna.com/acme/domains/derived/fact/2010/03'' AS "ns7"),''/eval''
         COLUMNS defId VARCHAR2(100) path ''/derivedFacts/ns7:derivedFact/ns7:defId''');
    Here is the eval table definition:
    CREATE
      TABLE "N98991"."EVAL" OF XMLTYPE
        CONSTRAINT "EVAL_ID_PK" PRIMARY KEY ("EVAL_ID") USING INDEX PCTFREE 10
        INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT
        1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
        FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
        DEFAULT) TABLESPACE "ACME_DATA" ENABLE
      XMLTYPE STORE AS SECUREFILE BINARY XML
        TABLESPACE "ACME_DATA" ENABLE STORAGE IN ROW CHUNK 8192 CACHE NOCOMPRESS
        KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
        2147483645 PCTINCREASE 0 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT)
      ALLOW NONSCHEMA ALLOW ANYSCHEMA VIRTUAL COLUMNS
        "EVAL_DT" AS (SYS_EXTRACT_UTC(CAST(TO_TIMESTAMP_TZ(SYS_XQ_UPKXML2SQL(
        SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03"; (::)
    /eval/@eval_dt'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2),'SYYYY-MM-DD"T"HH24:MI:SS.FFTZH:TZM') AS TIMESTAMP
    WITH
      TIME ZONE))),
        "EVAL_CAT" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@category'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "ACME_MBR_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@acmeMemberId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50))),
        "EVAL_ID" AS (CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(XMLQUERY(
        'declare default element namespace "http://www.cigna.com/acme/domains/eval/2010/03";/eval/@evalId'
        PASSING BY VALUE SYS_MAKEXML(128,"XMLDATA") RETURNING CONTENT ),0,0,
        16777216,0),50,1,2) AS VARCHAR2(50)))
      PCTFREE 0 PCTUSED 80 INITRANS 4 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0
        FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
        CELL_FLASH_CACHE DEFAULT
      TABLESPACE "ACME_DATA" ;
    Sample cleansed XML snippet:
    <?xml version = '1.0' encoding = 'UTF-8' standalone = 'yes'?><eval createdById="xxxx" hhhhMemberId="37e6f05a-88dc-41e9-a8df-2a2ac6d822c9" category="eeeeeeee" eval_dt="2012-02-11T23:47:02.645Z" evalId="12e007f5-b7c3-4da2-b8b8-4bf066675d1a" xmlns="http://www.xxxxx.com/vvvv/domains/eval/2010/03" xmlns:ns2="http://www.cigna.com/nnnn/domains/derived/fact/2010/03" xmlns:ns3="http://www.xxxxx.com/vvvv/domains/common/2010/03">
       <derivedFacts>
          <ns2:derivedFact>
             <ns2:defId>12345</ns2:defId>
             <ns2:defUrn>urn:mmmmrunner:Medical:Definition:DerivedFact:52657:1</ns2:defUrn>
             <ns2:factSource>tttt Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>boolean</ns2:type>
                <ns2:value>true</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
          <ns2:derivedFact>
             <ns2:defId>52600</ns2:defId>
             <ns2:defUrn>urn:ddddrunner:Medical:Definition:DerivedFact:52600:2</ns2:defUrn>
             <ns2:factSource>cccc Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>string</ns2:type>
                <ns2:value>null</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
          <ns2:derivedFact>
             <ns2:defId>59543</ns2:defId>
             <ns2:defUrn>urn:ddddunner:Medical:Definition:DerivedFact:52599:1</ns2:defUrn>
             <ns2:factSource>dddd Member</ns2:factSource>
             <ns2:origInferred_dt>2012-02-11T23:47:02.645Z</ns2:origInferred_dt>
             <ns2:factValue>
                <ns2:type>string</ns2:type>
                <ns2:value>INT</ns2:value>
             </ns2:factValue>
          </ns2:derivedFact>
    ...with the repeating <ns2:derivedFact> element continuing under <derivedFacts>.
    The Oracle XML DB Developer's Guide 11g Release 2 isn't helping much...
    Any assistance much appreciated.
    Regards,
    Rick Blanchard

    odie 63, et. al.;
    Attached are the reworked select query, xmlindex, and secondary indexes. Note: though namespaces are used, we're not registering any schema definitions.
    SELECT /*+ NO_USE_HASH(eval) */ --/*+ NO_QUERY_REWRITE no_parallel(eval)*/
    eval_xml.eval_catt, df.defid FROM eval,
    --df.defid FROM eval,
    XMLTable(XMLNamespaces( DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03',
                            'http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7" ),
            '/eval' passing eval.object_value
             COLUMNS
               eval_catt VARCHAR2(50) path '@category',
               derivedFact XMLTYPE path '/derivedFacts/ns7:derivedFact')eval_xml,
    XMLTable(XMLNamespaces('http://www.cigna.com/acme/domains/derived/fact/2010/03' AS "ns7",
                              DEFAULT 'http://www.cigna.com/acme/domains/eval/2010/03'),
            '/ns7:derivedFact' passing eval_xml.derivedFact
             COLUMNS
               defid VARCHAR2(100) path 'ns7:defId') df
    WHERE df.defid IN ('52657','52599') AND eval_xml.eval_catt LIKE 'external';
    --where df.defid = '52657';
    --where df.defid = '52657'; create index defid_2ndary_ix on eval_idx_tab_II (defID);
         eval_catt VARCHAR2(50) path ''@CATEGORY''');
    create index eval_catt_2ndary_ix on eval_idx_tab_I (eval_catt);
    The xmlindex is getting picked up, but there are a couple of problems:
    1. In the development environment, no XML source records for defid '52657' or '52599' are being displayed - just an empty output set, despite these values being present in the stored source XML.
    This really has me stumped, as I can query the eval table and see that the XML defid values '52657' and '52599' exist. Something appears off with the query - which didn't return records even before the corresponding xmlindex was created.
    2. The query still performs slowly, in spite of using the xmlindex. The execution plan shows a full table scan of eval occurring whether a HASH JOIN or a MERGE JOIN is used (the MERGE JOIN replaces the HASH JOIN when the NO_USE_HASH(eval) hint is used).
    3. Single-column secondary indexes created respectively for eval_catt and defid are not used in the execution plan - which may be expected upon further consideration.
    I'm running stats at this moment to see if performance improves....
    At this point, I'm really after why item 1 is occurring.
    Edited by: RickBlanchardSRSCigna on Apr 16, 2012 1:33 PM

  • [CS5 Win] Switching to InDesign is slow in CS5 for certain documents. XML links?

    Hi,
    I'm putting this question in the scripting forum, since you might be the ones to notice things like this, and it may relate to importing XML by scripting. I also got no positive response in one of the other forums when "complaining" about the sluggish interface redraw when setting focus to InDesign CS5. But now I can see more clearly that it is not slow for all documents.
    Having certain documents open in InDesign CS5 makes it very slow to switch to. The CPU peaks at 80-100% and it takes about a second or two until InDesign responds.
    The documents do not necessarily need to be large in size, but it appears that it mostly occurs on documents that I've imported data into (or that our customers have), using our own code, which takes an XML file and portions it out into templates.
    One strange thing I just noticed was that selecting "relink all" by alt-clicking the relink button in the Links palette makes InDesign look for (presumably) previously imported XML files. How can that be?
    I never store any links to the imported XML - as far as I know. Where should I look for that kind of links?
    How and why would previously imported XML file paths be stored? I just import the XML into the structure, "deal with it", and delete the node that it was imported to. (The data is left in a structure that I build up myself.)
    Does anyone know if there is a change related to any of the facts stated above, in InDesign CS5?
    I'll attach a screen shot of the links dialogue (above), when it "looks for" an XML file. Could this kind of "missing file" cause the abovementioned "slowness" every time switching to InDesign from another application?
    (InDesign documents that don't relate to XML don't make switching to InDesign slow - but it gets slow when such a document is loaded, even in a tab that is not in focus.)
    Best regards,
    Andreas

    I've uploaded a video to demonstrate that InDesign sometimes can keep references to links:
    http://www.youtube.com/watch?v=OoQOSIlfYl4
    It's obvious that there are thousands of "orphans" in the documents. All XML files ever imported are kept in some way... The links (or the names of them) are obviously not removed from the document when deleting the Element in the XML Structure. Since the communication of the code with a database is built around XML import, the number of imported files is very large.
    Can these orphaned link references somehow be removed? When manually checking for missing files, InDesign loops though all of the XML files as seen in the YouTube video above, but ends up saying that there are no missing files. The next time I do the same check, all links are looped through again.
    Perhaps we should "link to file", using .xmlImportPreferences.createLinkToXML = true
    This property has not been explicitly set, and is therefore false. But is there any way to get rid of all the old link names that are inside the document somehow?
    CS4 does the same thing, but there is "no 2 second lag time" switching from another application to InDesign CS4 with this kind of document open.

  • Slow processing when parsing XML files

    Hi
    I have written a utility in Java (using JAXP) which parses an .xml file. This xml file
    contains references to n other xml files. From there I direct the parser to each of these individual xml files, where it searches for elements with a particular tag name, let's say 'node'. As in,
    <node name= "Parent A">
    <entry key="type" value="car"/>....
    </node>
    <node name= "Parent B">
    <entry key="type" value="Truck"/>
    </node>
    I collect all the 'node' elements from these n xml files and then finally build a single xml file which contains only the 'node' element and its 'name' attribute value. As in,
    <node name="Parent A"/>
    <node name="Parent B"/>
    In most cases n is greater than 100, and each of these n xml files runs to 2,000-3,000 lines.
    NOW the issue is: the xml parser takes more than an hour to go through just 10-15 xml files, collect the 'node' elements, and build a new DOM object, which I finally publish using an XML Transformer.
    In effect, it defeats the whole purpose of writing this utility, as no programmer will stick around for an hour to watch it happen/finish.
    Apart from maybe further fine-tuning my logic, which I've almost exhausted, is there any 'faster' way of doing the whole thing?
    Please reply. Any comment would be greatly appreciated.
    I am using JAXP 1.3 specs to parse and build the DOM.

    DOM is slow.
    Do it yourself using a simple SAX parser. For each startElement, check if it is a "node", and then write your output!
    Xerces is faster than the built-in Crimson SAX parser in Java.
    Parsing with Xerces on a 2 GHz machine manages about 5-6 MB/s of XML.
    Or use STX with Joost - although it's not THAT much faster:
    http://www.xml.com/pub/a/2003/02/26/stx.html
    http://joost.sourceforge.net/
    Gil
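    A minimal sketch of the SAX approach Gil describes, using the JDK's built-in JAXP SAX parser (the class and method names here are my own, for illustration only):

    ```java
    import java.io.StringReader;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.InputSource;
    import org.xml.sax.helpers.DefaultHandler;

    // Streams the XML and collects only the name attribute of each <node>,
    // never building a DOM in memory.
    public class NodeCollector {

        public static String collectNodes(String xml) throws Exception {
            final StringBuilder out = new StringBuilder();
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
                @Override
                public void startElement(String uri, String localName,
                                         String qName, Attributes attrs) {
                    // For each start tag, keep only <node> elements.
                    if ("node".equals(qName)) {
                        out.append("<node name=\"")
                           .append(attrs.getValue("name"))
                           .append("\"/>\n");
                    }
                }
            });
            return out.toString();
        }

        public static void main(String[] args) throws Exception {
            String xml = "<nodes>"
                       + "<node name=\"Parent A\"><entry key=\"type\" value=\"car\"/></node>"
                       + "<node name=\"Parent B\"><entry key=\"type\" value=\"Truck\"/></node>"
                       + "</nodes>";
            System.out.print(collectNodes(xml));
            // Prints:
            // <node name="Parent A"/>
            // <node name="Parent B"/>
        }
    }
    ```

    In a real run you would parse each of the n files from disk (e.g. via parser.parse(new File(...), handler)) and append to one output writer, so memory stays flat no matter how many files are processed.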

  • Slow performance with javax.xml.ws.Endpoint.publish method

    I've published an endpoint on my computer with the javax.xml.ws.Endpoint.publish method. When I load test my endpoint on the local machine, with the client side in another JVM, the endpoint reacts very fast (server side (endpoint) and client side on the same computer). There's no performance problem there.
    But when I run a load test with the server-side endpoint on my local computer and the client side on another computer, the endpoint reacts slowly - very slowly compared to the local scenario. Instead of 500 requests/second I get about 3 requests/second. Why?
    When I look at the traffic between the client and the server running on different machines, it's about 4.5 kB/sec on a 100 Mbit connection. And almost no CPU activity (on either server or client).
    When I have a web server like Tomcat or Sun Java Application Server and deploy my endpoint there, the traffic goes up to 400 kB/sec. So it works fine with good performance over the same network, same IP address, same port and everything.
    Why is my endpoint so slow when I publish it with javax.xml.ws.Endpoint.publish instead of on, for example, Tomcat? And why is the endpoint fast when I'm running client and server on the same machine?

    The timeout is most likely a client-side thing; you need to set the HTTP request timeout on the client.

  • Does the XML approach for passing parameters in a query make the query slow?

    Hi,
    I am using an XML approach for passing parameters to a query. This runs very slowly, but when I pass comma-separated values in the parameter, it runs very fast.
    So I conclude that we should not use the XML approach for passing parameters. Please confirm whether I am right or wrong.
    I have also googled to clear my doubt but haven't found a solution yet. Please help me.
    Regards,
    Sachin

    914014 wrote:
    Hi,
    I am using an XML approach for passing parameters to a query. This runs very slowly, but when I pass comma-separated values in the parameter, it runs very fast.
    So I conclude that we should not use the XML approach for passing parameters. Please confirm whether I am right or wrong.
    I have also googled to clear my doubt but haven't found a solution yet. Please help me.
    Regards,
    Sachin
    Show us what you are doing - create a simple yet complete example we can run on our own Oracle instances.
    Then we will know exactly what you mean, and can comment appropriately.
    Cheers,
