Different behaviour of XMLType storage

hi,
I have a problem with the different behaviour of the storage type "BINARY XML" versus the default storage (a simple CLOB, I guess) for the XMLType datatype.
Setup
- Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
- an XML file ("Receipt.xml") with a structure like:
<?xml version="1.0" encoding="UTF-8"?>
<ESBReceiptMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
     <ESBMessageHeader>
          <MsgSeqNumber>4713</MsgSeqNumber>
          <MessageType>Receipt</MessageType>
          <MessageVersion>1.1</MessageVersion>
     </ESBMessageHeader>
     <Receipt>
          <ReceiptKey>1234567-03</ReceiptKey>          
          <ReceiptLines>
               <ReceiptLine><Material><MaterialKey>00011-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
               <ReceiptLine><Material><MaterialKey>00021-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
               <ReceiptLine><Material><MaterialKey>00031-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
.....etc....etc.....etc...
               <ReceiptLine><Material><MaterialKey>09991-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
               <ReceiptLine><Material><MaterialKey>10001-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
               <ReceiptLine><Material><MaterialKey>10011-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
          </ReceiptLines>
     </Receipt>
</ESBReceiptMessage>
=> 1 header element "Receipt" and exactly 1001 "ReceiptLine" elements.
Problem:
Test 1 :
drop table xml_ddb;
CREATE TABLE xml_ddb (id NUMBER, xml_doc XMLType);
INSERT INTO xml_ddb (id, xml_doc)
  VALUES (4716, XMLType(bfilename('XMLDIR', 'Receipt.xml'), nls_charset_id('AL32UTF8')));
select count(1) from (
SELECT dd.id, ta.receiptkey, li.materialkey, li.qty
   FROM xml_ddb dd,
        XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                 COLUMNS ReceiptKey  VARCHAR2(28) PATH 'ReceiptKey',
                         ReceiptLine XMLType      PATH 'ReceiptLines/ReceiptLine') ta,
        XMLTable('ReceiptLine' PASSING ta.ReceiptLine
                 COLUMNS materialkey VARCHAR2(14) PATH 'Material/MaterialKey',
                         qty         NUMBER(10)   PATH 'Qty') li
);

  COUNT(1)
      1001

1 row selected.

The storage of the XMLType column has not been specified.
=> All 1001 detail rows are selected.
=> Everything is fine.
Test 2 :
drop table xml_ddb;
CREATE TABLE xml_ddb (id NUMBER, xml_doc XMLType)
  XMLType xml_doc STORE AS BINARY XML; -- <---- different storage type
INSERT INTO xml_ddb (id, xml_doc)
  VALUES (4716, XMLType(bfilename('XMLDIR', 'Receipt.xml'), nls_charset_id('AL32UTF8')));
select count(1) from (
SELECT dd.id, ta.receiptkey, li.materialkey, li.qty
   FROM xml_ddb dd,
        XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                 COLUMNS ReceiptKey  VARCHAR2(28) PATH 'ReceiptKey',
                         ReceiptLine XMLType      PATH 'ReceiptLines/ReceiptLine') ta,
        XMLTable('ReceiptLine' PASSING ta.ReceiptLine
                 COLUMNS materialkey VARCHAR2(14) PATH 'Material/MaterialKey',
                         qty         NUMBER(10)   PATH 'Qty') li
);

  COUNT(1)
      1000

1 row selected.

The storage of the XMLType column has been defined as "BINARY XML".
=> Only 1000 rows are selected.
=> One row is missing.
After some tests: there seems to be a "hard limit" of 1000 rows that comes with this storage type (so if you put 2000 ReceiptLine elements into the XML, you will still get only 1000 rows back).
Question
As I am a newbie in XML DB:
- Is the "construction" with the nested XMLTables in the select statement perhaps not recommended/allowed?
- Are there other ways to get back the "head" + "line" elements in a relational structure (even if there are more than 1000 lines)?
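One alternative I could imagine (just a sketch, not yet verified on 11.1.0.6): drive a single XMLTable directly at the ReceiptLine level and reach the header value with a relative parent path, so that no intermediate XMLType fragment has to be passed between two XMLTable calls:

```sql
-- Sketch: one XMLTable over the repeating element; ReceiptKey is fetched
-- via the parent axis instead of a second, joined XMLTable.
SELECT dd.id, li.receiptkey, li.materialkey, li.qty
  FROM xml_ddb dd,
       XMLTable('/ESBReceiptMessage/Receipt/ReceiptLines/ReceiptLine'
                PASSING dd.xml_doc
                COLUMNS receiptkey  VARCHAR2(28) PATH '../../ReceiptKey',
                        materialkey VARCHAR2(14) PATH 'Material/MaterialKey',
                        qty         NUMBER(10)   PATH 'Qty') li;
```

Whether the parent-axis path gets rewritten efficiently for binary XML storage, I don't know.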
Thanks in advance
Bye
Stefan

hi,
General
You are right, I have a predefined XSD structure, and I am now trying to find a way to handle this in Oracle (up to now we have been doing the XML handling in Java, with JAXB, outside the DB).
=> So I will take a look at the "object-relational" storage. Thanks for that hint.
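For reference, here is roughly what schema-based storage looks like; the schema URL and file name below are hypothetical placeholders, not our real XSD:

```sql
-- Sketch: register the XSD, then create the table with object-relational
-- storage derived from it. URL and file names are made up for illustration.
BEGIN
  DBMS_XMLSCHEMA.registerSchema(
    schemaurl => 'http://example.com/ESBReceiptMessage.xsd',
    schemadoc => bfilename('XMLDIR', 'ESBReceiptMessage.xsd'),
    csid      => nls_charset_id('AL32UTF8'));
END;
/
CREATE TABLE xml_ddb (id NUMBER, xml_doc XMLType)
  XMLType xml_doc STORE AS OBJECT RELATIONAL
  XMLSCHEMA "http://example.com/ESBReceiptMessage.xsd"
  ELEMENT "ESBReceiptMessage";
```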
Current thread
The question whether there is an "artificial" limit of 1000 rows when joining 2 XML tables together is still open....
(although it might not be interesting for me anymore :-), maybe somebody else will need the answer...)
Bye
Stefan

Similar Messages

  • TS4009 I seem to have 2 iCloud accounts with different names - a free storage that came with my Macbook and one I used when I bought some iCloud space. How can I cancel the free account?

    I seem to have 2 iCloud accounts with different names: a free storage account that came with my MacBook and one I used when I bought some iCloud space. How can I cancel the free account? Because of this I can't access iMatch and the iCloud account I paid for on my MacBook.
    Any ideas? It's frustrating that I can't access something I paid for!

    Sign out of the account you do not wish to use, and sign into the one that you do wish to use.

  • SQL Server Management Studio and Native Client different behaviour on delete

    I have a problem with a transaction containing an insert and a delete on the same table, plus some selects/inserts/updates on some other tables. The problematic table has a primary key defined as the combination of column1 and column2.
    When two different instances using the Native Client execute this code simultaneously and make inserts into the table, the delete part of the code causes a deadlock. However, this doesn't happen when trying the same situation in an MS SQL Server Management Studio query.
    Is there some option missing from the Native Client connection string which could cause this different behaviour?

    Hello,
    I don't think there is a difference in the behaviour. SSMS uses ADO.NET, and that provider is based on the Native Client.
    The difference is more likely in when the transaction is committed and thus the locks released. I guess your application keeps the transaction open (much) longer; you should commit a transaction as soon as possible to avoid long-held locks and thus
    deadlocks.
    Olaf Helper
    [ Blog] [ Xing] [ MVP]
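    Not from the thread itself, but to illustrate Olaf's point, a minimal T-SQL sketch (table and variable names hypothetical) of keeping such a transaction short:

    ```sql
    -- Hypothetical table with composite primary key (column1, column2).
    -- Do the delete + insert in one short transaction and commit at once,
    -- so the locks are released before another session can collide.
    BEGIN TRANSACTION;
    DELETE FROM dbo.ProblemTable WHERE column1 = @c1 AND column2 = @c2;
    INSERT INTO dbo.ProblemTable (column1, column2) VALUES (@c1, @c2);
    COMMIT TRANSACTION; -- no user interaction or long work before this point
    ```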

  • In-different behaviour of join condition

    My query shows inconsistent behaviour with a join condition between a VARCHAR2 column and a NUMBER column. I am using the following join condition:
    CM.UPDATED_BY=to_char(LM.LOGIN_ID)
    where CM.UPDATED_BY is a VARCHAR2 column and LM.LOGIN_ID is a NUMBER column. Now, CM.UPDATED_BY also holds numbers only, but some previous, old data is character data. So, for that reason, I put the to_char around LM.LOGIN_ID; otherwise just
    CM.UPDATED_BY=to_char(LM.LOGIN_ID)
    would have been okay. Now, my real question is that the query with the condition
    CM.UPDATED_BY=to_char(LM.LOGIN_ID)
    works fine as long as there is no 'character' data in CM.UPDATED_BY. If I put the condition
    CM.UPDATED_BY=to_char(LM.LOGIN_ID)
    then the query takes too long and the output never comes. Please help me resolve this doubt, as I need it resolved urgently.

    1) Did you intend for all 4 join conditions to be identical? From the text of your question, it sounds like some of the conditions should be different.
    2) How do you compare the running time when CM.UPDATED_BY has non-numeric data and when it has only numeric data? If you are altering the data in the table, are you re-analyzing the table between runs? Is there a difference in the query plans in the two cases?
    Justin
    Distributed Database Consulting, Inc.
    www.ddbcinc.com/askDDBC

  • Different behaviour between 1.4,1.5_05, 1.5_07.

    Hi and thanks in advance for your help,
    Looking for pointers to a solution for a problem I have.
    A set of Java classes subscribe to a subscription service and listen for updates on a network. To do this the JVM uses JNI (actually a JIntegra vendor product) to talk to the Windows DLL files on the server; it's these files that actually listen for the updates. The information is then passed back up to the JVM (via the JIntegra libraries). The strange behaviour I get is as follows:
    1. No problems with JDK 1.4.*.
    2. JDK 1.5.0_05: the program runs fine for a few minutes and then crashes out, BUT with no stack trace or errors at all (it just terminates).
    3. JDK 1.5.0_07: the program runs OK to start with but then consumes all the file handles on the Windows server. After approx. 4 hrs the program hangs, as it has consumed all the server's spare file handles (in fact 3,800,000 of them).
    The added problem in debugging this is that we use a vendor product for the Java/COM+ interaction and we don't have the source code. But I have tried different versions of JIntegra without any change in the behaviour (so I don't think it's that). It appears to be the JVM version that causes the change in behaviour.
    I do not understand why I get such different behaviour from the different versions of Java, although neither JDK 1.5 build works.
    I was wondering if someone can point me in the right direction for getting JVM information output when JDK 1.5.0_05 crashes out.
    How can I put a hook in the JVM in my code to output information when it exits? I realise it is exiting unexpectedly, so this type of solution might not work, but I find it strange that no error is thrown.
    Also: does anyone know of any bugs that might possibly explain any of this behaviour?
    Any ideas, anyone?

    Jintegra recommend using 1.5. I will check the JNI bugs database to see if anything like this has happened before:
    Got this one stacktrace recently:
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # Internal Error (4D555445583F57494E13120E4350500080), pid=1944, tid=1500
    # Java VM: Java HotSpot(TM) Client VM (1.5.0_07-b03 mixed mode, sharing)
    # Can not save log file, dump to screen..
    # An unexpected error has been detected by HotSpot Virtual Machine:
    # Internal Error (4D555445583F57494E13120E4350500080), pid=1944, tid=1500
    # Java VM: Java HotSpot(TM) Client VM (1.5.0_07-b03 mixed mode, sharing)
    --------------- T H R E A D ---------------
    Current thread is native thread
    Stack: [0x045e0000,0x04620000), sp=0x0461fb08, free space=254k
    Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
    V [jvm.dll+0x11f2eb]
    V [jvm.dll+0x62f13]
    V [jvm.dll+0xd1741]
    V [jvm.dll+0xd18f0]
    V [jvm.dll+0x90d16]
    --------------- P R O C E S S ---------------
    Java Threads: ( => current thread )
    0x03029730 JavaThread "J-Integra COM initialization thread (please don't touch)" daemon [_thread_blocked, id=1684]
    0x00237c28 JavaThread "DestroyJavaVM" [_thread_blocked, id=3112]
    0x009fc7c8 JavaThread "Thread-0" [_thread_blocked, id=2996]
    0x009a16a8 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=2860]
    0x00238478 JavaThread "CompilerThread0" daemon [_thread_blocked, id=1464]
    0x0099f730 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=3168]
    0x009998e0 JavaThread "Finalizer" daemon [_thread_blocked, id=2928]
    0x0023fae0 JavaThread "Reference Handler" daemon [_thread_blocked, id=2268]
    Other Threads:
    0x009982d8 VMThread [id=2036]
    0x009a2ce8 WatcherThread [id=532]
    VM state:not at safepoint (normal execution)
    VM Mutex/Monitor currently owned by a thread: None
    Heap
    def new generation total 576K, used 244K [0x22ab0000, 0x22b50000, 0x22f90000)
    eden space 512K, 47% used [0x22ab0000, 0x22aec330, 0x22b30000)
    from space 64K, 6% used [0x22b40000, 0x22b410a8, 0x22b50000)
    to space 64K, 0% used [0x22b30000, 0x22b30000, 0x22b40000)
    tenured generation total 1408K, used 917K [0x22f90000, 0x230f0000, 0x26ab0000)
    the space 1408K, 65% used [0x22f90000, 0x23075430, 0x23075600, 0x230f0000)
    compacting perm gen total 8192K, used 1354K [0x26ab0000, 0x272b0000, 0x2aab0000)
    the space 8192K, 16% used [0x26ab0000, 0x26c02a20, 0x26c02c00, 0x272b0000)
    ro space 8192K, 67% used [0x2aab0000, 0x2b00d9f8, 0x2b00da00, 0x2b2b0000)
    rw space 12288K, 46% used [0x2b2b0000, 0x2b853808, 0x2b853a00, 0x2beb0000)
    Not sure what 'Internal Error (4D555445583F57494E13120E4350500080)' represents.
    Thanks for your advice.

  • Different behaviour of Flash content when on server

    Hi
    I have noticed different behaviour of my Flash movie depending on whether
    a/ I am checking the offline content (SWF) using the default view (Show All), or
    b/ I am checking the SWF in an HTML file on a server.
    More concretely: depending on a number of conditions, I attach a movie clip to a certain object. This works fine in offline mode.
    Another example: depending on a certain zoom level of the application, I unload some SWFs.
    All works fine offline.
    When I check this online, the attach-movie-clip function only works in some cases, and the unload of the SWFs does not work.
    What can be the cause?
    Best regards
    eG

    Not sure about most of your questions -- this is all new stuff for me, too. But on this one item, I hope this helps:
    "For example it tells me that a plugin (which has already been installed) needs to be installed."
    If you installed the Mozilla browser after initially installing the Java plugin in IE, then the plugin needs to be installed within the Mozilla browser's plugins directory. And I have never been able to get Netscape or Firefox to install a Forms plugin properly; it seems like I need to run IE to get the plugin installed automatically. Then I go back to the other browser and all is OK. Looking into the Mozilla-based browsers' plugins folders, I can see that running the plugin install through IE also copies the corresponding .dll file into all the Mozilla-based browsers' plugins folders.
    But since you have already installed the plugin in IE, I am not sure how you would get it to work for the other browser. If you can identify the .dll required, just copy it yourself into your Mozilla browser's plugins folder.

  • Different behaviour of j_security_check between 6.1 and 8.1

    Hi,
    we have a web-application that is secured via security-constraints in
    the web.xml. We are authenticating using FORM-based authentification and
    had this app deployed on a wls6.1. Now we have moved to 8.1 and have a
    different behaviour between 6.1 and 8.1 for log-in forms using the
    j_security_check. We have login-boxes on most pages:
    <form method="POST" action="j_security_check">
                                  Username<br><input type="text" name="j_username" size=11
    class="texteingabe"><br>
                                  Password<br><input type="password" name="j_password" size=8
    class="texteingabe"> <input type=image
    src="/dvrWebApp/htdocs/images/pfeil_rechts_hi.gif" border=0><br>
    </form>
    that on 6.1 redirected after the login to the page they were placed on -
    on 8.1 the login redirects to the root of the web-app, regardless of the
    page they are placed- can i change this back to the old 6.1-Behaviour?!
    cheers
    stf

    This sounds like bollocks; I was doing some form-based security recently under 8.1 and there was no attempt to redirect to any page I was not trying to access.
    The idea is: you try to access a protected resource, you're not authorized, so you are authenticated, and then you continue on to that resource.
    Are you sure that the resource you're accessing does not direct you there because of some inbuilt business logic in your JSP/servlet?

  • Different behaviour inside a Thread

    Hello ,
    I am developing software to help automate certain tasks related to a voice switch; I connect to the switch using SSH. I generally send commands using a button that prints the content of a textbox into the output stream. Everything works fine as long as I print the command into the output stream from the code of the button-press event, but this hangs the interface, because I have to wait for a specific output format. So I used a thread to execute the same code so that the interface doesn't hang. The strange thing is that after the thread finishes I am not able to receive anything from the switch (although the printing into the output stream doesn't produce any exceptions, and this is not the case if I execute the same code without the thread). The question is: why does the program behave differently inside a thread and outside it?
    Best Regards ,

    Sounds like this is actually more related to Swing than concurrency (if I'm correct). I think everything is working, but you are failing to update the UI with the result. You are only allowed to update the UI from the AWT event dispatch thread, so you probably need to publish the result using invokeLater or use a SwingWorker.

  • Different Batch Classification at storage location level

    Dear guru,
    I have tested that I can insert different values of batch classification for the same material and the same batch number, but in different plants.
    In the MSC1N/MSC2N transactions the system allows insertion at storage-location level.
    Is it possible to insert different values of batch classification for the same material, the same batch number and the same plant, but in different storage locations?
    Thanks.

    hi,
    Check in OMCT which batch level the current system is set to. If the level is material level or client level and you make a change to the batch classification, it will be reflected in all plants across the clients.
    Batch classification is not maintained at the storage-location level.

  • Improve XML readability in Oracle 11g for binary XMLType storage for huge files

    I have one requirement in which I have to process huge XML files. That means there might be around 1000 XML files, and the whole size of these files would be around 2 GB.
    What I need is to store all the data in these files in my Oracle DB. For this I have used SQL*Loader for bulk uploading of all my XML files into my DB, where they are stored as binary XMLType. Now I need to query these files and store the data in relational tables. For this I have used XMLTable XPath queries. Everything is fine when I query a single XML file within my DB, but querying all those files takes too much time, which is not acceptable.
    Here's my one sample xml content:
    <ABCD>
      <EMPLOYEE id="11" date="25-Apr-1983">
        <NameDetails>
          <Name NameType="a">
            <NameValue>
              <FirstName>ABCD</FirstName>
              <Surname>PQR</Surname>
              <OriginalName>TEST1</OriginalName>
              <OriginalName>TEST2</OriginalName>
            </NameValue>
          </Name>
          <Name NameType="b">
            <NameValue>
              <FirstName>TEST3</FirstName>
              <Surname>TEST3</Surname>
            </NameValue>
            <NameValue>
              <FirstName>TEST5</FirstName>
              <MiddleName>TEST6</MiddleName>
              <Surname>TEST7</Surname>
              <OriginalName>JAB1</OriginalName>
            </NameValue>
            <NameValue>
              <FirstName>HER</FirstName>
              <MiddleName>HIS</MiddleName>
              <Surname>LOO</Surname>
            </NameValue>
          </Name>
          <Name NameType="c">
            <NameValue>
              <FirstName>CDS</FirstName>
              <MiddleName>DRE</MiddleName>
              <Surname>QWE</Surname>
            </NameValue>
            <NameValue>
              <FirstName>CCD</FirstName>
              <MiddleName>YTD</MiddleName>
              <Surname>QQA</Surname>
            </NameValue>
            <NameValue>
              <FirstName>DS</FirstName>
              <Surname>AzDFz</Surname>
            </NameValue>
          </Name>
        </NameDetails>
      </EMPLOYEE>
    </ABCD>
    Please note that this is just one small record inside one big XML. Each XML file contains around 5000 similar records. Similarly, there are more than 400 files, each about 4 MB in size.
    My xmltable query :
    SELECT t.personid,n.nametypeid,t.titlehonorofic,t.firstname,
            t.middlename,
            t.surname,
            replace(replace(t.maidenname, '<MaidenName>'),'</MaidenName>', '#@#') maidenname,
            replace(replace(t.suffix, '<Suffix>'),'</Suffix>', '#@#') suffix,
            replace(replace(t.singleStringName, '<SingleStringName>'),'</SingleStringName>', '#@#') singleStringName,
            replace(replace(t.entityname, '<EntityName>'),'</EntityName>', '#@#') entityname,
            replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName
    FROM xmlperson p,master_nametypes n,
             XMLTABLE (
              --'ABCD/EMPLOYEE/NameDetails/Name/NameValue'
              'for $i in ABCD/EMPLOYEE/NameDetails/Name/NameValue        
               return <row>
                        {$i/../../../@id}
                         {$i/../@NameType}
                         {$i/TitleHonorific}{$i/Suffix}{$i/SingleStringName}
                        {$i/FirstName}{$i/MiddleName}{$i/OriginalName}
                        {$i/Surname}{$i/MaidenName}{$i/EntityName}
                    </row>'
            PASSING p.filecontent
            COLUMNS
                    personid     NUMBER         PATH '@id',
                    nametypeid   VARCHAR2(255)  PATH '@NameType',
                    titlehonorofic VARCHAR2(4000) PATH 'TitleHonorific',
                     firstname    VARCHAR2(4000) PATH 'FirstName',
                     middlename  VARCHAR2(4000) PATH 'MiddleName',
                    surname     VARCHAR2(4000) PATH 'Surname',
                     maidenname   XMLTYPE PATH 'MaidenName',
                     suffix XMLTYPE PATH 'Suffix',
                     singleStringName XMLTYPE PATH 'SingleStringName',
                     entityname XMLTYPE PATH 'EntityName',
                    originalName XMLTYPE        PATH 'OriginalName'
                    ) t where t.nametypeid = n.nametype and n.recordtype = 'Person'
    But this takes too much time to query all that huge data. The result set of this query would return millions of rows. I tried to index the table using this query:
    CREATE INDEX myindex_xmlperson on xml_files(filecontent) indextype is xdb.xmlindex parameters ('paths(include(ABCD/EMPLOYEE//*))');
    My Database version :
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production"
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    The index is created, but still no improvement in performance: it takes more than 20 minutes to query even a set of 10 similar XML files. Now you can imagine how long it will take to query all 1000 XML files.
    Could someone please suggest how to improve the performance of my database? Since I am new to this, I am not sure whether I am doing it the proper way. If there is a better solution, please suggest it. Your help will be greatly appreciated.
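    One option I have been reading about, sketched here with hypothetical names and not yet tried: a structured XMLIndex (available from 11.2) that projects exactly the queried paths into a relational side table, instead of the unstructured path index above:

    ```sql
    -- Sketch: structured XMLIndex over the repeating NameValue element.
    -- Index name, side-table name and column list are made up for illustration.
    CREATE INDEX myindex_struct ON xmlperson (filecontent)
      INDEXTYPE IS XDB.XMLINDEX
      PARAMETERS ('XMLTable namevalue_idx_tab
                   ''/ABCD/EMPLOYEE/NameDetails/Name/NameValue''
                   COLUMNS firstname VARCHAR2(4000) PATH ''FirstName'',
                           surname   VARCHAR2(4000) PATH ''Surname''');
    ```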

    Hi Odie..
    I tried to run your code over all the XML files, but it is taking too much time; it had not finished even after 3 hours.
    When I tried a single insert-select statement for one single XML file it works, but it is still in the range of ~10 sec.
    Please find my execution plan for one single xml file with your code.
    PLAN_TABLE_OUTPUT
    Plan hash value: 2771779566
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)| Time      |
    |   0 | INSERT STATEMENT          |                       |  499G |  121T |  434M  (2) | 999:59:59 |
    |   1 |  LOAD TABLE CONVENTIONAL  | WATCHLIST_NAMEDETAILS |       |       |            |           |
    |   2 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   3 |    XPATH EVALUATION       |                       |       |       |            |           |
    |   4 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   5 |    XPATH EVALUATION       |                       |       |       |            |           |
    |   6 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   7 |    XPATH EVALUATION       |                       |       |       |            |           |
    |   8 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   9 |    XPATH EVALUATION       |                       |       |       |            |           |
    |  10 |   NESTED LOOPS            |                       |  499G |  121T |  434M  (2) | 999:59:59 |
    |  11 |    NESTED LOOPS           |                       |   61M |   14G | 1222K  (1) | 04:04:28  |
    |  12 |     NESTED LOOPS          |                       | 44924 |   10M |   61   (2) | 00:00:01  |
    |  13 |      MERGE JOIN CARTESIAN |                       |     5 |  1235 |    6   (0) | 00:00:01  |
    |* 14 |       TABLE ACCESS FULL   | XMLPERSON             |     1 |   221 |    2   (0) | 00:00:01  |
    |  15 |       BUFFER SORT         |                       |     6 |   156 |    4   (0) | 00:00:01  |
    |* 16 |        TABLE ACCESS FULL  | MASTER_NAMETYPES      |     6 |   156 |    3   (0) | 00:00:01  |
    |  17 |      XPATH EVALUATION     |                       |       |       |            |           |
    |* 18 |     XPATH EVALUATION      |                       |       |       |            |           |
    |  19 |    XPATH EVALUATION       |                       |       |       |            |           |
    Predicate Information (identified by operation id):
      14 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
      16 - filter("N"."RECORDTYPE"='Person')
      18 - filter("N"."NAMETYPE"=CAST("P1"."C_01$" AS VARCHAR2(255)))
    Note
       - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
    Please note that this is for a single xml file. I have like more than 400 similar files in the same table.
    And to answer yours as well as Jason's question:
    What are you trying to accomplish with
    replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName 
    originalName XMLTYPE PATH 'OriginalName'
    Like Jason, I also wonder what's the purpose of all those XMLType projections and strange replaces in the SELECT clause
    What I was trying to achieve was to create a table containing separate rows for all the multi-item child nodes of this particular XML.
    But since there was an error because of multiple child nodes like 'OriginalName' under the 'NameValue' node, I tried this script to insert those values by providing a delimiter and replacing the tag names.
    Please see the link for more details - http://stackoverflow.com/questions/16835323/construct-xmltype-query-to-store-data-in-oracle11g
    This was the execution plan for one single xml file with my code :
    Plan hash value: 2851325155
    | Id  | Operation                               | Name                   | Rows | Bytes | Cost (%CPU)| Time     |   TQ   | IN-OUT | PQ Distrib |
    |   0 | SELECT STATEMENT                        |                        | 7487 | 1820K |   37   (3) | 00:00:01 |        |        |            |
    |*  1 |  HASH JOIN                              |                        | 7487 | 1820K |   37   (3) | 00:00:01 |        |        |            |
    |*  2 |   TABLE ACCESS FULL                     | MASTER_NAMETYPES       |    6 |   156 |    3   (0) | 00:00:01 |        |        |            |
    |   3 |   NESTED LOOPS                          |                        | 8168 | 1778K |   33   (0) | 00:00:01 |        |        |            |
    |   4 |    PX COORDINATOR                       |                        |      |       |            |          |        |        |            |
    |   5 |     PX SEND QC (RANDOM)                 | :TQ10000               |    1 |   221 |    2   (0) | 00:00:01 |  Q1,00 | P->S   | QC (RAND)  |
    |   6 |      PX BLOCK ITERATOR                  |                        |    1 |   221 |    2   (0) | 00:00:01 |  Q1,00 | PCWC   |            |
    |*  7 |       TABLE ACCESS FULL                 | XMLPERSON              |    1 |   221 |    2   (0) | 00:00:01 |  Q1,00 | PCWP   |            |
    |   8 |    COLLECTION ITERATOR PICKLER FETCH    | XQSEQUENCEFROMXMLTYPE  | 8168 | 16336 |   29   (0) | 00:00:01 |        |        |            |
    Predicate Information (identified by operation id):
       1 - access("N"."NAMETYPE"=CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(SYS_XQEXTRACT(VALUE(KOKBF$),'/*/@NameType'),0,0,20971520,0),50,1,2
                  ) AS VARCHAR2(255)  ))
       2 - filter("N"."RECORDTYPE"='Person')
       7 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
    Note
       - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
    Please let me know whether this has helped.
    My intention is to save the details in the XML to different relational tables so that I can easily query them from my application. I have many similar queries which insert the XML values into different tables, like the one I have mentioned here. I was thinking of creating a stored procedure to insert all these values into the relational tables once I receive the XML files, but even a single query is taking too much time to complete. Could you please help me in this regard? Waiting for your valuable feedback.

  • Different behaviour in MAX vs. LabVIEW when writing to IMAQdx GigE attribute

    Hi, I am controlling a Dalsa GigE camera in LabVIEW RT using IMAQdx. Apart from a couple of quirks with the interface, we are acquiring images without many problems at the moment.
    However, there are one or two issues that are confusing us. In this case, it is possible to set an attribute in MAX (a command attribute that instructs the camera to perform an internal calibration), but when setting the same attribute in LabVIEW the error 0xBFF69010 (-1074360304) "Unable to set attribute" is thrown. See attached images.
    I check whether the attribute is writable before attempting a write. It is; however, the write is unsuccessful, and reading the is-writable attribute afterwards returns false. In MAX I can write to this attribute without any issues.
    Is there anything that I need to configure/read/write in my LabVIEW code that MAX does? Does MAX write to all attributes (based on the values in the XML file) when it opens the camera, or does it simply read all the values from the camera? When LabVIEW opens a camera reference, does it perform the same steps as MAX does? I'm trying to figure out what the difference between MAX and LabVIEW could be that could be causing this behaviour.
    Any help will be appreciated.
    Attachments:
    Diagram.png ‏15 KB
    FrontPanel.png ‏8 KB
    It works in MAX.png ‏20 KB

    AnthonV wrote:
    Hi, I am controlling a Dalsa GigE camera in LabVIEW RT using IMAQdx.  Apart from a couple of quirks with the interface, we are acquiring images without many problems at the moment.
    However, there are one or two issues that are confusing us.  In this case, it is possible to set an attribute in MAX (a command attribute that instructs the camera to perform an internal calibration), but when setting the same attribute in LabVIEW the error 0xBFF69010 (-1074360304) "Unable to set attribute" is thrown.  See attached images.
    I check whether the attribute is writable before attempting a write.  It is; however, the write is unsuccessful, and reading the IsWritable attribute afterwards returns false.  In MAX I can write to this attribute without any issues.
    Is there anything that I need to configure/read/write in my LabVIEW code that MAX does?  Does MAX write to all attributes (based on the values in the XML file) when it opens the camera, or does it simply read all the values from the camera?  When LabVIEW opens a camera reference, does it perform the same steps as MAX?  I'm trying to figure out what difference between MAX and LabVIEW could be causing this behaviour.
    Any help will be appreciated.
    Hi AnthonV,
    "Quirky" is a good way to describe the Spyder3 when it comes to the GigE Vision/GenICam interface (as opposed to Dalsa's driver which communicates using custom serial commands to the camera over ethernet)....
    The Spyder3 has a lot of timing-dependent issues. It is possible that the delay between opening the camera and setting that feature is different via MAX vs. your code in LabVIEW. Also, there are certain cases where MAX will suppress the error from being displayed. Ignoring whether the error is shown or not, do you see the feature take effect in either of the two cases?
    The basic behavior between MAX and LabVIEW is the same. In both cases when you open the camera all the settings are loaded from our camera file which has the saved camera settings. This file is created the first time you open the camera and is updated whenever you click Save in MAX or call an API function to save the settings. In any case, I do know that the Spyder3 has various issues saving/restoring settings to our camera files.
    I suggest talking with Dalsa about the issues you are having. They might be able to set you up with newer firmware that addresses some of these problems (we have worked with them in the past to identify many of them).
    Eric

  • Different behaviour in JMS Cluster automatic failover

    Hi,
              I have a problem with JMS clustering; let me explain the scenario.
              I have 2 managed servers participating in a WebLogic cluster. Since JMS is a singleton service, I created 2 JMS servers and targeted them to managed servers 1 and 2 respectively. I also created a distributed destination and deployed it with the deployment "wizard" ("autodeploy") to all the members of the cluster.
              In my case I created two different types of client:
              asynchronous and synchronous.
              The first one registers itself as a MessageListener and also as an ExceptionListener. When I bring down the managed server to which the client is connected, the callback method onException is called.
              The second client instead registers itself as an ExceptionListener but not as a MessageListener. It calls the receive method on the destination in a different thread.
              In this case, if I bring down the managed server to which the client is connected, the callback method onException is NOT called; instead I receive a JMSException on all the "receive" calls.
              I expected the behaviour to be the same as with the first client.
              Thanks in advance.
              dani

    It's not clear from your description what you're trying to do, as typical apps use a single module, including those that use distributed destinations, and typical apps do not use the convention of specifying a module name in their JNDI name. (The "!" syntax makes me suspect that you're not using JNDI to look up destinations, but rather the rarely recommended JMS session "createQueue()" call.)
    Nevertheless, I suspect the problem is simply that you're using a distributed queue and haven't realized that queue browsers and consumers pin themselves to a single queue member. To ensure full coverage of a distributed queue, the best practice is to use a WebLogic MDB: WebLogic MDBs automatically ensure that each queue member has consumers.
    By the way, if you are using distributed queues, then the best-practice config is as follows for each homogeneous set of JMS servers:
    -- Configure a custom WL store per server, target to the server's default migratable target.
    -- Configure a JMS server per server, target to the server's default migratable target, set the store for the JMS server to be the same as the custom store.
    -- Configure a single JMS module, target to the cluster.
    -- Configure a single subdeployment for the module that references each JMS server (and nothing else).
    -- Configure one or more distributed queues for the module. Never use default targeting -- instead use advanced subdeployment targeting to target each distributed queue to the single subdeployment you defined earlier.
    -- Configure one or more custom connection factories in the module, use default targeting.
    I recommend that you read through the JMS admin and programmer's guides in the edocs if you haven't done so already. You might find that the JMS chapter of the new book "Professional Oracle WebLogic" is helpful.
    Tom
    Edited by: TomB on Nov 4, 2009 10:12 AM

  • Material with different concentration in one storage location

    We are working in a chemical industry that makes caustic soda. They sell two caustic soda materials with two different concentrations (33% and 50%). The problem is that they move 33% caustic (one material) into one storage location and, at the same time, move 50% caustic (a second material) into the same tank. In this way the concentration of the caustic changes, say to 45%. If they want to increase the concentration, they move caustic with a greater concentration in order to bring the stock back to 50%. How do we handle this situation? Every time they move these two materials into the same tank, the concentration of the material changes.
    Regards
    Ahmed

    Ahmed,
    Can you please let us know,
    1. Is your material batch managed? If yes, would you have multiple batches in the storage location?
    2. How is the material transferred to the tank? Is it through a manual 311 movement?
    Regards,
    Prasobh

  • Calculate XMLTYPE storage

    Hello,
    We have a table with a XMLTYPE column.
    I want to find how much space this is taking up.
    When I execute this query on the table name, I don't believe I'm picking up the XMLTYPE column.
    select segment_name table_name,
           sum(bytes)/(1024*1024) table_size_mb
    from   user_extents
    where  segment_type = 'TABLE'
    and    lower(segment_name) in ('table_name')
    group by segment_name
    We are using 11G R2 Oracle database.
    Thanks.

    That's not binary storage. That's CLOB storage.
    This would be binary storage
    XMLTYPE COLUMN "XML_DATA" STORE AS SECUREFILE BINARY XML
    I don't currently know the answer to your question, but here is something to start from (it relates to 11.1):
    http://www.liberidu.com/blog/2008/09/05/xmldb-performance-xml-binary-xml-storage-models/
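    As a starting point: the out-of-line LOB segment that backs a CLOB-based XMLType column has its own segment name, so it never shows up under segment_type = 'TABLE'. A sketch like the following (the table name XML_DDB is just an example; substitute your own, in uppercase) picks up the table segment, the LOB segment, and the LOB index together:

    ```sql
    -- Sum the table segment plus the LOB segment and LOB index that back
    -- the XMLType column. USER_LOBS maps the table/column to both names.
    SELECT s.segment_name, s.segment_type, s.bytes/1024/1024 AS size_mb
    FROM   user_segments s
    WHERE  s.segment_name = 'XML_DDB'
       OR  s.segment_name IN (SELECT l.segment_name
                              FROM   user_lobs l
                              WHERE  l.table_name = 'XML_DDB')
       OR  s.segment_name IN (SELECT l.index_name
                              FROM   user_lobs l
                              WHERE  l.table_name = 'XML_DDB');
    ```

    With SECUREFILE BINARY XML storage the data still lives in a LOB segment, so the same USER_LOBS-based lookup applies.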

  • DW CS3 to CS4 : different behaviour on click event on TabbedPanels

    Hi all,
    In the CS3 Spry TabbedPanels widget, if you add an href link in a tab, the linked page is shown correctly.
    In CS4 it doesn't work; the tab is simply shown!
    To try it, see the code below.
    Is there a way to use an HREF in a tab in CS4?
    Best regards
    Pascal
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Document sans nom</title>
    <script src="SpryAssets/SpryTabbedPanels.js" type="text/javascript"></script>
    <link href="SpryAssets/SpryTabbedPanels.css" rel="stylesheet" type="text/css" />
    </head>
    <body>
    <div id="TabbedPanels1" class="TabbedPanels">
      <ul class="TabbedPanelsTabGroup">
          <li class="TabbedPanelsTab" tabindex="0">Onglet 2</li>
          <li class="TabbedPanelsTab" tabindex="0"><a href="index.html">Click to go on index.html</a></li>
      </ul>
      <div class="TabbedPanelsContentGroup">
        <div class="TabbedPanelsContent">Contenu 1</div>
        <div class="TabbedPanelsContent">Contenu 2</div>
      </div>
    </div>
    <script type="text/javascript">
    <!--
    var TabbedPanels1 = new Spry.Widget.TabbedPanels("TabbedPanels1");
    //-->
    </script>
    </body>
    </html>

    Spry overrides the browser's default actions to allow different markup for the widget. To restore the default behaviour you can add a small hack to your <a> element:
    onclick="window.location = this.href;"
