Calculate XMLTYPE storage

Hello,
We have a table with an XMLTYPE column.
I want to find out how much space it is taking up.
When I execute this query against the table name, I don't believe I'm picking up the XMLTYPE column's storage:
select segment_name table_name,
       sum(bytes)/(1024*1024) table_size_mb
  from user_extents
 where segment_type = 'TABLE'
   and lower(segment_name) in ('table_name')
 group by segment_name;
We are using an Oracle 11g R2 database.
Thanks.

That's not binary storage. That's CLOB storage.
This would be binary storage
XMLTYPE COLUMN "XML_DATA" STORE AS SECUREFILE BINARY XML
I don't currently know the answer to your question, but here is something to start from (it relates to 11.1):
http://www.liberidu.com/blog/2008/09/05/xmldb-performance-xml-binary-xml-storage-models/
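To measure the full footprint you need the table segment plus the LOB segment and LOB index that back the XMLType column. A sketch (TABLE_NAME is a placeholder for your table; the LOB segment names come from user_lobs, and dictionary names are normally stored in upper case):

```sql
-- Total size of the table segment plus the LOB segment and LOB index
-- backing the XMLTYPE column (applies to CLOB and SecureFile binary storage).
select sum(bytes)/1024/1024 as total_mb
  from user_segments
 where segment_name = 'TABLE_NAME'
    or segment_name in (select segment_name from user_lobs
                         where table_name = 'TABLE_NAME')
    or segment_name in (select index_name from user_lobs
                         where table_name = 'TABLE_NAME');
```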

Similar Messages

  • Different behaviour of XMLType storage

    hi,
    I have a problem with the different behaviour of storage type "BINARY XML" versus the regular storage (a simple CLOB, I guess) of the XMLType datatype.
    Setup
    - Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
    - XML file ( "Receipt.xml" ) with a structure like :
    <?xml version="1.0" encoding="UTF-8"?>
    <ESBReceiptMessage xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
         <ESBMessageHeader>
              <MsgSeqNumber>4713</MsgSeqNumber>
              <MessageType>Receipt</MessageType>
              <MessageVersion>1.1</MessageVersion>
         </ESBMessageHeader>
         <Receipt>
              <ReceiptKey>1234567-03</ReceiptKey>          
              <ReceiptLines>
                   <ReceiptLine><Material><MaterialKey>00011-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>00021-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>00031-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
    .....etc....etc.....etc...
                   <ReceiptLine><Material><MaterialKey>09991-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>10001-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
                   <ReceiptLine><Material><MaterialKey>10011-015-000</MaterialKey></Material><Qty>47.0</Qty></ReceiptLine>
              </ReceiptLines>
         </Receipt>
    </ESBReceiptMessage>
    => 1 header element "Receipt" and exactly 1001 "ReceiptLine" elements.
    Problem:
    Test 1 :
    drop table xml_ddb;
    CREATE TABLE xml_ddb (id number,xml_doc XMLType);
    INSERT INTO xml_ddb (id, xml_doc)  VALUES (4716,XMLType(bfilename('XMLDIR', 'Receipt.xml'),nls_charset_id('AL32UTF8')));
    select count(1) from (
    SELECT dd.id,ta.Receiptkey,li.materialkey,li.qty
       FROM xml_ddb dd,
            XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                     COLUMNS ReceiptKey VARCHAR2(28) PATH 'ReceiptKey',
                             ReceiptLine XMLType PATH 'ReceiptLines/ReceiptLine') ta,
            XMLTable('ReceiptLine' PASSING ta.ReceiptLine
                     COLUMNS materialkey VARCHAR2(14)  PATH 'Material/MaterialKey',
                             qty         NUMBER(10)    PATH 'Qty') li
    );
      COUNT(1)
          1001
    1 row selected.
    The storage of the XMLType column has not been specified.
    => All 1001 detailed rows are selected.
    => Everything is fine.
    Test 2 :
    drop table xml_ddb;
    CREATE TABLE xml_ddb (id number,xml_doc XMLType) XMLType xml_doc store AS BINARY XML; -- <---- Different storage type
    INSERT INTO xml_ddb (id, xml_doc)  VALUES (4716,XMLType(bfilename('XMLDIR', 'Receipt.xml'),nls_charset_id('AL32UTF8')));
    select count(1) from (
    SELECT dd.id,ta.Receiptkey,li.materialkey,li.qty
       FROM xml_ddb dd,
            XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                     COLUMNS ReceiptKey VARCHAR2(28) PATH 'ReceiptKey',
                             ReceiptLine XMLType PATH 'ReceiptLines/ReceiptLine') ta,
            XMLTable('ReceiptLine' PASSING ta.ReceiptLine
                     COLUMNS materialkey VARCHAR2(14)  PATH 'Material/MaterialKey',
                             qty         NUMBER(10)    PATH 'Qty') li
    );
      COUNT(1)
          1000
    1 row selected.
    Storage of the XMLType column has been defined as "BINARY XML".
    => Only 1000 rows are selected.
    => One row is missing.
    After some tests: there seems to be a "hard border" of 1000 rows that comes with this storage type (so if you put 2000 rows into the XML, you still get only 1000 rows back).
    Question
    As I am a newbie in XMLDB :
    - Is the "construction" with nested XMLTables in the select statement perhaps not recommended/"allowed"?
    - Are there other ways to get back "Head" + "Line" elements in a relational structure (even if there are more than 1000 lines)?
    Thanks in advance
    Bye
    Stefan

    hi,
    General
    You are right. I have a predefined XSD structure, and now I am trying to find a way to handle this in Oracle (up to now we have been doing the XML handling in Java, with JAXB, outside the DB).
    => So I will take a look at the "object-relational" storage. Thanks for that hint.
    Current thread
    The question whether there is an "artificial" border of 1000 rows when joining two XML tables together is still open...
    (although it may not be interesting for me anymore :-), maybe somebody else will need the answer...)
    Bye
    Stefan
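One workaround worth trying (a sketch, not verified against 11.1): pass the single ReceiptLines element to the second XMLTable instead of a fragment containing all the ReceiptLine nodes, and let the second XMLTable do the iteration:

```sql
-- Chained XMLTable that passes one ReceiptLines node (rather than a
-- multi-node fragment) to the inner XMLTable, which unnests the lines.
SELECT dd.id, ta.ReceiptKey, li.materialkey, li.qty
  FROM xml_ddb dd,
       XMLTable('/ESBReceiptMessage/Receipt' PASSING dd.xml_doc
                COLUMNS ReceiptKey   VARCHAR2(28) PATH 'ReceiptKey',
                        ReceiptLines XMLType      PATH 'ReceiptLines') ta,
       XMLTable('ReceiptLines/ReceiptLine' PASSING ta.ReceiptLines
                COLUMNS materialkey VARCHAR2(14) PATH 'Material/MaterialKey',
                        qty         NUMBER(10)   PATH 'Qty') li;
```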

  • Improve XML readability in Oracle 11g for binary XMLType storage for huge files

    I have a requirement to process huge XML files: around 1000 XML files with a combined size of about 2 GB.
    I need to store all the data in these files in my Oracle DB. I used SQL*Loader to bulk load the XML files, and they are stored as binary XMLTYPE in the database. Now I need to query these files and store the data in relational tables, for which I use XMLTable XPath queries. Everything is fine when I query a single XML file within the DB, but querying all of the files takes far too long to be acceptable.
    Here's my one sample xml content:
    <ABCD>
      <EMPLOYEE id="11" date="25-Apr-1983">
        <NameDetails>
          <Name NameType="a">
            <NameValue>
              <FirstName>ABCD</FirstName>
              <Surname>PQR</Surname>
              <OriginalName>TEST1</OriginalName>
              <OriginalName>TEST2</OriginalName>
            </NameValue>
          </Name>
          <Name NameType="b">
            <NameValue>
              <FirstName>TEST3</FirstName>
              <Surname>TEST3</Surname>
            </NameValue>
            <NameValue>
              <FirstName>TEST5</FirstName>
              <MiddleName>TEST6</MiddleName>
              <Surname>TEST7</Surname>
              <OriginalName>JAB1</OriginalName>
            </NameValue>
            <NameValue>
              <FirstName>HER</FirstName>
              <MiddleName>HIS</MiddleName>
              <Surname>LOO</Surname>
            </NameValue>
          </Name>
          <Name NameType="c">
            <NameValue>
              <FirstName>CDS</FirstName>
              <MiddleName>DRE</MiddleName>
              <Surname>QWE</Surname>
            </NameValue>
            <NameValue>
              <FirstName>CCD</FirstName>
              <MiddleName>YTD</MiddleName>
              <Surname>QQA</Surname>
            </NameValue>
            <NameValue>
              <FirstName>DS</FirstName>
              <Surname>AzDFz</Surname>
            </NameValue>
          </Name>
        </NameDetails>
      </EMPLOYEE >
    </ABCD>
    Please note that this is just one small record inside one big XML. Each XML contains around 5000 similar records, and there are more than 400 files, each around 4 MB in size.
    My xmltable query :
    SELECT t.personid,n.nametypeid,t.titlehonorofic,t.firstname,
            t.middlename,
            t.surname,
            replace(replace(t.maidenname, '<MaidenName>'),'</MaidenName>', '#@#') maidenname,
            replace(replace(t.suffix, '<Suffix>'),'</Suffix>', '#@#') suffix,
            replace(replace(t.singleStringName, '<SingleStringName>'),'</SingleStringName>', '#@#') singleStringName,
            replace(replace(t.entityname, '<EntityName>'),'</EntityName>', '#@#') entityname,
            replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName
    FROM xmlperson p,master_nametypes n,
             XMLTABLE (
              --'ABCD/EMPLOYEE/NameDetails/Name/NameValue'
              'for $i in ABCD/EMPLOYEE/NameDetails/Name/NameValue        
               return <row>
                        {$i/../../../@id}
                         {$i/../@NameType}
                         {$i/TitleHonorific}{$i/Suffix}{$i/SingleStringName}
                        {$i/FirstName}{$i/MiddleName}{$i/OriginalName}
                        {$i/Surname}{$i/MaidenName}{$i/EntityName}
                    </row>'
            PASSING p.filecontent
            COLUMNS
                    personid     NUMBER         PATH '@id',
                    nametypeid   VARCHAR2(255)  PATH '@NameType',
                    titlehonorofic VARCHAR2(4000) PATH 'TitleHonorific',
                     firstname    VARCHAR2(4000) PATH 'FirstName',
                     middlename  VARCHAR2(4000) PATH 'MiddleName',
                    surname     VARCHAR2(4000) PATH 'Surname',
                     maidenname   XMLTYPE PATH 'MaidenName',
                     suffix XMLTYPE PATH 'Suffix',
                     singleStringName XMLTYPE PATH 'SingleStringName',
                     entityname XMLTYPE PATH 'EntityName',
                    originalName XMLTYPE        PATH 'OriginalName'
                    ) t where t.nametypeid = n.nametype and n.recordtype = 'Person'
    But this takes too much time over that huge volume of data; the result set of this query would contain millions of rows. I tried to index the table using this query:
    CREATE INDEX myindex_xmlperson on xml_files(filecontent) indextype is xdb.xmlindex parameters ('paths(include(ABCD/EMPLOYEE//*))');
    My Database version :
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    "CORE 11.2.0.2.0 Production"
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    The index is created, but there is still no improvement in performance. It takes more than 20 minutes to query even a set of 10 similar XML files, so you can imagine how long it would take to query all 1000.
    Could someone please suggest how to improve the performance? Since I am new to this, I am not sure whether I am doing it the proper way. If there is a better solution, please suggest it. Your help will be greatly appreciated.

    Hi Odie,
    I tried to run your code over all the XML files, but it takes too much time; it had not finished even after 3 hours.
    A single INSERT ... SELECT statement for one XML file works, but it is still in the range of ~10 seconds.
    Please find my execution plan for one single XML file with your code:
    PLAN_TABLE_OUTPUT
    Plan hash value: 2771779566
    | Id  | Operation                 | Name                  | Rows  | Bytes | Cost (%CPU)| Time      |
    |   0 | INSERT STATEMENT          |                       |  499G |  121T |  434M  (2) | 999:59:59 |
    |   1 |  LOAD TABLE CONVENTIONAL  | WATCHLIST_NAMEDETAILS |       |       |            |           |
    |   2 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   3 |    XPATH EVALUATION       |                       |       |       |            |           |
    |   4 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   5 |    XPATH EVALUATION       |                       |       |       |            |           |
    |   6 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   7 |    XPATH EVALUATION       |                       |       |       |            |           |
    |   8 |   SORT AGGREGATE          |                       |     1 |     2 |            |           |
    |   9 |    XPATH EVALUATION       |                       |       |       |            |           |
    |  10 |   NESTED LOOPS            |                       |  499G |  121T |  434M  (2) | 999:59:59 |
    |  11 |    NESTED LOOPS           |                       |   61M |   14G | 1222K  (1) | 04:04:28  |
    |  12 |     NESTED LOOPS          |                       | 44924 |   10M |    61  (2) | 00:00:01  |
    |  13 |      MERGE JOIN CARTESIAN |                       |     5 |  1235 |     6  (0) | 00:00:01  |
    |* 14 |       TABLE ACCESS FULL   | XMLPERSON             |     1 |   221 |     2  (0) | 00:00:01  |
    |  15 |       BUFFER SORT         |                       |     6 |   156 |     4  (0) | 00:00:01  |
    |* 16 |        TABLE ACCESS FULL  | MASTER_NAMETYPES      |     6 |   156 |     3  (0) | 00:00:01  |
    |  17 |      XPATH EVALUATION     |                       |       |       |            |           |
    |* 18 |     XPATH EVALUATION      |                       |       |       |            |           |
    |  19 |    XPATH EVALUATION       |                       |       |       |            |           |
    Predicate Information (identified by operation id):
      14 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
      16 - filter("N"."RECORDTYPE"='Person')
      18 - filter("N"."NAMETYPE"=CAST("P1"."C_01$" AS VARCHAR2(255)))
    Note
       - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
    Please note that this is for a single XML file; I have more than 400 similar files in the same table.
    And for yours as well as Jason's question:
    What are you trying to accomplish with
    replace(replace(t.originalName, '<OriginalName>'),'</OriginalName>', '#@#') originalName
    originalName XMLTYPE PATH 'OriginalName'
    Like Jason, I also wonder what's the purpose of all those XMLType projections and strange replaces in the SELECT clause
    What I was trying to achieve was a table containing separate rows for all the multi-item child nodes of this particular XML.
    But since there was an error because of multiple child nodes like 'OriginalName' under the 'NameValue' node, I tried this script to insert those values by providing a delimiter and replacing the tag names.
    Please see the link for more details - http://stackoverflow.com/questions/16835323/construct-xmltype-query-to-store-data-in-oracle11g
    This was the execution plan for one single xml file with my code :
    Plan hash value: 2851325155
    | Id  | Operation                            | Name                  | Rows | Bytes | Cost (%CPU)| Time     |    TQ  | IN-OUT | PQ Distrib |
    |   0 | SELECT STATEMENT                     |                       | 7487 | 1820K |    37  (3) | 00:00:01 |        |        |            |
    |*  1 |  HASH JOIN                           |                       | 7487 | 1820K |    37  (3) | 00:00:01 |        |        |            |
    |*  2 |   TABLE ACCESS FULL                  | MASTER_NAMETYPES      |    6 |   156 |     3  (0) | 00:00:01 |        |        |            |
    |   3 |   NESTED LOOPS                       |                       | 8168 | 1778K |    33  (0) | 00:00:01 |        |        |            |
    |   4 |    PX COORDINATOR                    |                       |      |       |            |          |        |        |            |
    |   5 |     PX SEND QC (RANDOM)              | :TQ10000              |    1 |   221 |     2  (0) | 00:00:01 |  Q1,00 | P->S   | QC (RAND)  |
    |   6 |      PX BLOCK ITERATOR               |                       |    1 |   221 |     2  (0) | 00:00:01 |  Q1,00 | PCWC   |            |
    |*  7 |       TABLE ACCESS FULL              | XMLPERSON             |    1 |   221 |     2  (0) | 00:00:01 |  Q1,00 | PCWP   |            |
    |   8 |    COLLECTION ITERATOR PICKLER FETCH | XQSEQUENCEFROMXMLTYPE | 8168 | 16336 |    29  (0) | 00:00:01 |        |        |            |
    Predicate Information (identified by operation id):
       1 - access("N"."NAMETYPE"=CAST(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(SYS_XQEXTRACT(VALUE(KOKBF$),'/*/@NameType'),0,0,20971520,0),50,1,2) AS VARCHAR2(255)))
       2 - filter("N"."RECORDTYPE"='Person')
       7 - filter("P"."FILENAME"='PFA2_95001_100000_F.xml')
    Note
       - Unoptimized XML construct detected (enable XMLOptimizationCheck for more information)
    Please let me know whether this has helped.
    My intention is to save the details in the XML to different relational tables so that I can easily query them from my application. I have many similar queries that insert the XML values into different tables, like the one I mentioned here. I was thinking of creating a stored procedure to insert all these values into the relational tables once I receive the XML files, but even a single query takes too much time to complete. Could you please help me in this regard? Waiting for your valuable feedback.
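Instead of projecting the repeating elements as XMLType and post-processing them with replace(), the repeating node can be unnested with a second, chained XMLTable, giving one relational row per OriginalName. A sketch using the poster's table and column names (note that NameValue entries without an OriginalName drop out of this inner join; keeping them would need an outer join):

```sql
-- One row per OriginalName: the outer XMLTable walks the NameValue nodes,
-- the inner XMLTable unnests the repeating OriginalName children.
SELECT t.personid, t.nametypeid, t.firstname, t.surname, o.originalname
  FROM xmlperson p,
       XMLTable('/ABCD/EMPLOYEE/NameDetails/Name/NameValue'
                PASSING p.filecontent
                COLUMNS personid   NUMBER         PATH './../../../@id',
                        nametypeid VARCHAR2(255)  PATH './../@NameType',
                        firstname  VARCHAR2(4000) PATH 'FirstName',
                        surname    VARCHAR2(4000) PATH 'Surname',
                        namevalue  XMLType        PATH '.') t,
       XMLTable('NameValue/OriginalName'
                PASSING t.namevalue
                COLUMNS originalname VARCHAR2(4000) PATH '.') o;
```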

  • O-R XMLtype versus relational table storage with XMLType

    Hi,
    I am working on an application that centers around ingesting/storing, editing, and publishing a huge XML file (spanning several days of a calendar).
    I am confused as to whether I should store the XML as O-R XMLType (by registering the schema etc.) or just as an XMLType column in a table.
    The requirements are:
    1. Fairly frequent edits, both on leaves and on branches.
    2. Merging of XML when a new one comes in.
    3. The XML itself is on the order of tens of gigabytes.
    4. Publishing as XML.
    5. Middle tier in JBoss.
    6. Validation before ingesting (could be done in the middle tier).
    7. Applying edits to the XML after it is stored and just before it is published.
    8. XSLT transformations of an incoming XML to convert it to my format before storing (could be done in the middle tier).
    This is on Oracle 11g.
    Thanks in advance.
    Regards,
    Vishal

    Best place to start would be with an Oracle document at
    http://www.oracle.com/technetwork/database/features/xmldb/index.html
    Look at "Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case (PDF)"

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every XML session I could. There was one where a Mr. Drake explained something about not storing the XML as CLOB, and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20
    TABLESPACE "####_####_DATA"
    XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table? Yes, we intend to register the schema against an XSD.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirements, i.e. how the XML data is accessed.
    There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though it is still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is Binary XML, a post-parsed binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of BASICFILE CLOB.
    Schema-based Binary XML is also available; it adds another layer of "awareness" that helps Oracle manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
    The other common approach for schema-based XML is Object-Relational storage.
    BTW... you may want to post in the dedicated forum next time: {forum:id=34}
    Mark Drake is one of the regulars there, along with Marco Gralike, whom you've probably also seen at OOW.
    Edited by: odie_63 on 18 oct. 2012 21:55
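As a sketch of the suggested direction (the schema URL, directory object, and file name here are placeholders, and the exact registerSchema parameter list should be checked against your version's DBMS_XMLSCHEMA documentation):

```sql
-- Register the XSD for binary XML use, then create the table with
-- SECUREFILE BINARY XML storage instead of BASICFILE CLOB.
BEGIN
  DBMS_XMLSCHEMA.registerSchema(
    schemaurl => 'http://mycompanynamehere.com/xdb/Invoice.xsd',
    schemadoc => BFILENAME('XSD_DIR', 'Invoice.xsd'),
    gentypes  => FALSE,
    gentables => FALSE,
    options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML);
END;
/

CREATE TABLE invoice_doc (
  invoice_id NUMBER NOT NULL,
  doc        XMLTYPE NOT NULL
)
XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML
XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice.xsd" ELEMENT "Invoice";
```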

  • How to get storage type of XMLType through OCI

    How can you get the storage type of an XMLType through OCI? An XMLType column can be created as BINARY, CLOB, or OBJECT RELATIONAL type, is there any way to get this information through OCI? SQLPlus seems to know how to detect these types with the DESCRIBE command, is it possible to get this information programmatically?
    SQL> describe T_SRC_XML_COL_CLOB_UTF8;
     Name   Null?     Type
     COL1   NOT NULL  NUMBER(8)
     COL2             SYS.XMLTYPE
    SQL> describe T_SRC_XML_COL_BINARY;
     Name   Null?     Type
     COL1   NOT NULL  NUMBER(8)
     COL2             SYS.XMLTYPE STORAGE BINARY
    SQL> describe T_SRC_XML_COL_OBJECT;
     Name   Null?     Type
     COL1   NOT NULL  NUMBER(8)
     COL2             SYS.XMLTYPE(XMLSchema "http://www.oracle.com" Element "Parent") STORAGE Object-relational TYPE "Parent808_T"

    Hi,
    Here's one possible (simplified) way to determine this (assumes all handles are allocated, etc.):
    - get a describe handle for the table via OCIDescribeAny
    - get a parameter handle via OCIAttrGet on the describe handle
    - get the number of columns in the table via OCIAttrGet on the parameter handle
    - get the column list handle via OCIAttrGet on the parameter handle
    - loop over the columns:
      - use OCIAttrGet to get the column data type
      - use OCIAttrGet to check whether the column uses a specific storage type
    Here's what the part that determines whether a column uses a specific storage type would look like:
    /* determine if storage type is binary for this xmltype column */
    rc = OCIAttrGet((void *) p_col,
                    OCI_DTYPE_PARAM,
                    (void *) &colstorage,
                    (ub4 *) 0,
                    (ub4) OCI_ATTR_XMLTYPE_BINARY_XML,
                    p_err);
    If the column is declared with binary XML storage then colstorage will be set to 1 after the call, 0 if not.
    OCI_ATTR_XMLTYPE_BINARY_XML is defined in oci.h (as is OCI_ATTR_XMLTYPE_STORED_OBJ)
    Perhaps that will be enough to get you what you need.
    Regards,
    Mark

  • Why doesn't this insert into XMLTYPE work?

    Hi again. Hopefully I'll be answering questions soon, but meanwhile I've got another one.
    I'm working in this environment...
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    The encoding for the database is WE8ISO8859P1, and I'm working in SQL*Plus. I created a table with an XMLTYPE column stored as binary. Here's the desc...
    PS: BWDSTG> desc bwddoc;
     Name              Null?  Type
     SUNAME                   VARCHAR2(100)
     SOURCE_DOC_TEXT          CLOB
     DOC_TEXT                 SYS.XMLTYPE STORAGE BINARY
     LAST_UPDATE_DATE         DATE
    PS: BWDSTG>
    The following error also occurred when I created the same table with a storage type of CLOB for DOC_TEXT. Here's the error I can't figure out...
    PS: BWDSTG> insert into bwddoc (doc_text) values ('<?xml version="1.0" encoding="UTF-8"?>
    2 <a>&#8211;</a>
    3 ');
    insert into bwddoc (doc_text) values ('<?xml version="1.0" encoding="UTF-8"?>
    ERROR at line 1:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00217: invalid character 8211 (U+2013)
    Error at line 2
    It accepts the command if I replace the &#8211; with plain text. Why does it care what the character entity reference is? It changes the encoding pseudo-attribute in the XML declaration to US-ASCII anyway, and this character reference should be perfectly acceptable. I'd appreciate it if anyone knows the reason for this (or what I'm not understanding, which as always is a distinct possibility).

    Sorry, let me try again. SQL*Plus doesn't have a problem with the multiple lines, so I'm just trying to insert the XML.
    PS: BWDSTG> insert into bwddoc (doc_text) values (xmltype('<?xml version="1.0" encoding="US-ASCII"?>
    2 <a>&#8212;</a>
    3 '));
    insert into bwddoc (doc_text) values (xmltype('<?xml version="1.0" encoding="US-ASCII"?>
    ERROR at line 1:
    ORA-31011: XML parsing failed
    ORA-19202: Error occurred in XML processing
    LPX-00217: invalid character 8212 (U+2014)
    Error at line 2
    ORA-06512: at "SYS.XMLTYPE", line 310
    ORA-06512: at line 1
    My problem is that...
    <?xml version="1.0" encoding="US-ASCII"?>
    <a>&#8212;</a>
    should be perfectly good XML. libxml2 and expat both parse it without any problem; they just leave &#8212; (an em dash) alone. But Oracle XMLType doesn't like it for some reason. I need to load a lot of data that has numeric character references like this, but I can't until I get this resolved.

  • Storage for SAP under Unix

    Hi Friends,
    I want to calculate the total storage space on our UNIX box.
    From the following, if I want to calculate the total storage allocated for the SID (BEP), should I add 4194304 + 9355264 + 20971520 + 2097152 + 1048576 + 1044480 etc., or should I take the storage for /usr/sap/SID and consider that the same?
    /dev/vgdbciBEP00_01/lvol2
                       4194304  808578 3177224   20% /export/sapmnt/BEP
    /dev/vgdbciBEP00_01/lvol3
                       9355264 3959636 5058695   44% /export/usr/sap/trans/BEP
    /dev/vgdbciBEP00_01/lvol4
                       20971520 4390010 15545206   22% /oracle/BEP
    /dev/vgdbciBEP00_01/lvol5
                       2097152  459134 1535760   23% /oracle/BEP/sapreorg
    /dev/vgdbciBEP00_01/lvol6
                       1048576  432382  577734   43% /usr/sap/BEP/DVEBMGS00
    /dev/vgdbciBEP00_02/lvol1
                       26214400 16417145 9185178   64% /oracle/BEP/saparch
    /dev/vgdbciBEP00_03/lvol1
                       1044480  269124  726902   27% /oracle/BEP/origlogA
    /dev/vgdbciBEP00_03/lvol2
                       1044480  181204  809327   18% /oracle/BEP/origlogB
    /dev/vgdbciBEP00_04/lvol1
                       220200960 89871976 129310896   41% /oracle/BEP/sapdata1
    /dev/vgdbciBEP00_04/lvol2
                       188743680 91848896 96137904   49% /oracle/BEP/sapdata2
    /dev/vgdbciBEP00_04/lvol3
                       188743680 115036344 73132952   61% /oracle/BEP/sapdata3
    /dev/vgdbciBEP00_04/lvol4
                       220200960 119653304 99762168   55% /oracle/BEP/sapdata4
    /dev/vgdbciBEP00_04/lvol5
                       534773760 272340952 260382624   51% /oracle/BEP/sapdata5
    /dev/vgdbciBEP00_04/lvol6
                       513802240 247002256 264715672   48% /oracle/BEP/sapdata6
    Please help me regarding the same.
    I will definitely give points.
    RAMA.

    Hi ,
    /dev/vg00/lvol23   2097152  168618 1808044    9% /usr/sap/BEP
    /dev/vgdbciBEP00_01/lvol2
                       4194304  807695 3178219   20% /export/sapmnt/BEP
    /dev/vgdbciBEP00_01/lvol3
                       9355264 3959753 5058581   44% /export/usr/sap/trans/BEP
    /dev/vgdbciBEP00_01/lvol4
                       20971520 4393288 15542122   22% /oracle/BEP
    /dev/vgdbciBEP00_01/lvol5
                       2097152  581940 1420628   29% /oracle/BEP/sapreorg
    /dev/vgdbciBEP00_01/lvol6
                       1048576  309243  693176   31% /usr/sap/BEP/DVEBMGS00
    /dev/vgdbciBEP00_02/lvol1
                       26214400 5607606 19319187   22% /oracle/BEP/saparch
    /dev/vgdbciBEP00_03/lvol1
                       1044480  271012  725132   27% /oracle/BEP/origlogA
    /dev/vgdbciBEP00_03/lvol2
                       1044480  181204  809327   18% /oracle/BEP/origlogB
    /dev/vgdbciBEP00_04/lvol1
                       220200960 89873864 129309024   41% /oracle/BEP/sapdata1
    /dev/vgdbciBEP00_04/lvol2
                       188743680 91848896 96137904   49% /oracle/BEP/sapdata2
    /dev/vgdbciBEP00_04/lvol3
                       188743680 135516424 52812872   72% /oracle/BEP/sapdata3
    /dev/vgdbciBEP00_04/lvol4
                       220200960 119653304 99762168   55% /oracle/BEP/sapdata4
    /dev/vgdbciBEP00_04/lvol5
                       534773760 272340952 260382624   51% /oracle/BEP/sapdata5
    /dev/vgdbciBEP00_04/lvol6
                       513802240 247002256 264715672   48% /oracle/BEP/sapdata6
    dbciBEP:/export/sapmnt/BEP
                       4194304  807696 3178216   20% /sapmnt/BEP
    I just want to make sure before I give the details to the user; they want everything under the SID.
    Sorry for asking again.
    From the above, the total storage for BEP will be:
    2097152 + 4194304 + 9355264 + 20971520 + 2097152 + 1048576 + 26214400 + 1044480 + 1044480 + 220200960 + 188743680 + 220200960
    + 534773760 + 513802240 + 4194304
    From the above it will be BEP =  1749983232
    Please reply whether I am right or wrong.
    Thanks
    rama
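    For what it's worth, the addition can be checked mechanically. A minimal sketch in Python (the 1K-block sizes are copied from the df output above; note, as an observation, that the poster's sum appears to skip sapdata3 and to count sapmnt twice, once locally and once via its NFS re-mount of the same storage, so the total below differs from 1749983232):

```python
# 1K-block sizes from the df listing above, one entry per local
# filesystem; the NFS re-mount dbciBEP:/export/sapmnt/BEP is
# excluded because it is the same storage as /export/sapmnt/BEP.
blocks = {
    "/usr/sap/BEP": 2097152,
    "/export/sapmnt/BEP": 4194304,
    "/export/usr/sap/trans/BEP": 9355264,
    "/oracle/BEP": 20971520,
    "/oracle/BEP/sapreorg": 2097152,
    "/usr/sap/BEP/DVEBMGS00": 1048576,
    "/oracle/BEP/saparch": 26214400,
    "/oracle/BEP/origlogA": 1044480,
    "/oracle/BEP/origlogB": 1044480,
    "/oracle/BEP/sapdata1": 220200960,
    "/oracle/BEP/sapdata2": 188743680,
    "/oracle/BEP/sapdata3": 188743680,
    "/oracle/BEP/sapdata4": 220200960,
    "/oracle/BEP/sapdata5": 534773760,
    "/oracle/BEP/sapdata6": 513802240,
}
total_kb = sum(blocks.values())
print(total_kb)            # total size in 1K blocks
print(total_kb / 1024**2)  # same total expressed in GB
```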

  • How to create XMLTYPE View from the XMLType table

    Hi:
    I have a large XML file and have inserted it into an XMLTYPE table.
    For XQuery purposes I would like to create an XML view of the table.
    The examples I got from Oracle for creating XML views are for small files.
    Can someone help me create an XMLType view for large XML files (20,000 lines)?
    Ali_2

    Have a look at the examples given on XMLType Views (based on relational tables) or standard views (based on XMLType storage) in the FAQ url located on the main page of this forum site regarding XMLDB.

  • Semi structured storage

    hi,
    What are the steps to follow to store or import/export native XML data into semi-structured storage in Oracle 11g?
    How can I test the performance of the semi-structured data?
    Edited by: user11269819 on Jul 17, 2009 11:00 AM

    I'm not sure which method you are referring to as "semi-structured" (hybrid?), so I'll point you to [Oracle 11g – XMLType Storage Options | http://www.liberidu.com/blog/?p=203], which is derived from [Oracle® XML DB Developer's Guide 11g Release 1 | http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28369/toc.htm]. You can also find more storage-related information on Marco's blog under the [storage category | http://www.liberidu.com/blog/?cat=23].

  • Inserting large xml data into xmltype

    Hi all,
    In my project I need to insert very large XML data into an XMLType column.
    My table:
    CREATE TABLE TransDetailstblCLOB (id NUMBER, data_xml XMLTYPE) XMLTYPE COLUMN data_xml STORE AS CLOB;
    I am using a JDBC approach to insert the values. It works fine for data under 4000 bytes when using preparedStatement.setString(1, xmlData). Since I have to insert XML data larger than 4000 bytes, I am now using the preparedStatement.setClob() methods.
    My code works fine for a table whose column is declared as CLOB explicitly. But for TransDetailstblCLOB, where the column is declared as XMLTYPE with CLOB storage, I am getting the error: "ORA-01461: can bind a LONG value only for insert into a LONG column".
    This error means that there is a mismatch between my setClob() call and the column, which means I am not actually binding to a CLOB column.
    I read on the Oracle site that:
    "When you create an XMLType column without any XML schema specification, a hidden CLOB column is automatically created to store the XML data. The XMLType column itself becomes a virtual column over this hidden CLOB column. It is not possible to directly access the CLOB column; however, you can set the storage characteristics for the column using the XMLType storage clause."
    I don't understand: it's stated here that there is a hidden CLOB column, so why can't I use setClob()? It worked fine for a pure CLOB column (another table), so why does it give such an error for the XMLTYPE table?
    I have been stuck on this for 3 days. Can anyone help me, please?
    My code snippet:
    query = "INSERT INTO po_xml_tab VALUES (?, XMLType(?))";
    // Get the statement object
    pstmt = (OraclePreparedStatement) conn.prepareStatement(query);
    // If the temporary CLOB has not yet been created, create it now
    temporaryClob = oracle.sql.CLOB.createTemporary(conn, true, CLOB.DURATION_SESSION);
    // Open the temporary CLOB in read/write mode to enable writing
    temporaryClob.open(CLOB.MODE_READWRITE);
    log.debug("tempClob opened; length " + temporaryClob.getLength()
            + ", buffer size " + temporaryClob.getBufferSize()
            + ", chunk size " + temporaryClob.getChunkSize());
    // Stream the XML string into the temporary CLOB in chunks
    OutputStream out = temporaryClob.getAsciiOutputStream();
    InputStream in = new ByteArrayInputStream(xmlData.getBytes());
    int length;
    int wrote = 0;
    byte[] buf = new byte[temporaryClob.getChunkSize()];
    while ((length = in.read(buf)) != -1) {
        out.write(buf, 0, length);
        wrote += length;
    }
    out.close();
    log.debug("Wrote length " + wrote);
    // Bind the temporary CLOB (not the raw String) to the prepared statement
    pstmt.setInt(1, 100);
    pstmt.setClob(2, temporaryClob);
    int i = pstmt.executeUpdate();
    if (i == 1) {
        log.debug("Record successfully inserted!");
    }

    Try this; it works in ADOdb:
    declare poXML CLOB;
    BEGIN
    poXML := '<OIDS><OID>large text</OID></OIDS>';
    UPDATE a_po_xml_tab set podoc=XMLType(poXML) WHERE poid = 102;
    END;
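    The chunked copy loop in the JDBC snippet above follows a general pattern: read a fixed-size buffer until EOF, write each chunk out, and track the byte count. A minimal sketch of the same pattern in Python, independent of Oracle (the sample payload is made up for illustration):

```python
import io

def copy_in_chunks(src, dst, chunk_size=8192):
    """Copy src to dst chunk by chunk; return the number of bytes copied."""
    wrote = 0
    while True:
        buf = src.read(chunk_size)
        if not buf:  # EOF: read() returned an empty chunk
            break
        dst.write(buf)
        wrote += len(buf)
    return wrote

# Hypothetical payload, larger than any single chunk
xml_data = b"<OIDS><OID>large text</OID></OIDS>" * 1000
src = io.BytesIO(xml_data)
dst = io.BytesIO()
print(copy_in_chunks(src, dst))  # bytes copied
```

    The same loop shape works for any stream pair; the JDBC version only differs in that the destination is the temporary CLOB's ASCII output stream.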

  • Storage Footprint on SC 2012 EP managed clients.

    I have a scenario in which we are deploying the SCCM 2012 agent and EP as part of a VDI solution. In this solution, we would only be allowed 10 GB of use for the user and any changes the user makes to their virtual machine. Because of this we need to manage space requirements and allocation closely.
    I am trying to calculate the storage footprint of Endpoint, specifically the definition files and how the updates folder might bloat over time. If anyone knows of a way to do this, or has anecdotal experience from their own deployments, I'd really appreciate it.
    http://support.microsoft.com/kb/977939 - A support KB on Forefront definition updates. It defines new packages as 40 MB for the initial "full" install, and 1-15 MB for incremental/delta definition updates. When I checked our test installs, the definition update folder was 200 MB for the initial install and 2 days of definition updates, which doesn't seem to add up to this KB. It's too small a sample to know what this will look like over a week or a month, but it has me nervous.
    http://blogs.technet.com/b/configmgrteam/archive/2012/04/12/software-update-content-cleanup-in-system-center-2012-configuration-manager.aspx - A blog article that discusses the process for cleaning up old definition files, specifically for the reason we're concerned about with the VDI deployment: storage. It appears that this only addresses the repository and does not remove the definitions from the managed client collections. How might we clean up definition files if space becomes an issue?
    Again, any feedback appreciated. Thanks in advance!

    Hi,
    I checked our computers that have had the SCEP 2012 agent installed for months. The sizes of the definition update folders differ (130 MB, 208 MB, 180 MB, ...).
    Best Regards,
    Joyce

  • ICloud storage to open iPad storage

    Hi,
    If I back up my iPad data (movies, music, TV shows) to iCloud, will that help free up space on my iPad?

    Welcome to the Apple community.
    Can you tell us exactly what happens when you try? Have you given it a little time to display the details? It can take a few moments to calculate the storage used by each and every app.

  • XML Storage and shredding

    Hello.
    I've been doing a POC wrt storing and extracting data from an XML document in Oracle. I've found that this is fairly simple using the following method:
    1. INSERT into <tab with XMLTYPE column> VALUES XMLTYPE(BFILENAME etc.)
    2. Use XMLTABLE to shred the XML, leaving the data for me to insert into a table.
    I was set on using this approach, until I found that processing really large XML files (> 100000 records) causes ORA-31186: Document contains too many nodes.
    Seeking alternatives, I then found this, courtesy of Sean Dillon, on the AskTom website:
    "Lastly, you can store the XMLType column in an object-relational storage
    architecture. This means that when you load the XML document, Oracle automatically
    shreds the document into objects and relational rows for you, behind the scenes."
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:18017595372392
    I'm particularly interested in the last sentence of the paragraph above. Can anyone show me an example of how this might work (i.e. inserting an XML doc automatically shreds it into relational data)?
    I'm guessing this involves schema registration? Is there a node limit with this approach?
    Thanks,
    Ray

    Cool creative use of Text File Splitter ;-)
    I know it is 11g related, but if you take "binary xml" out of the equation then you have concepts you can apply to your 10.2.0.4 environment.
    There was a reason I/we pointed you to OBJECT RELATIONAL storage: performance.
    The first question you ask is performance related...
    If you describe your storage objects via a DBMS_METADATA.GET_DDL('TABLE','RAY_TEST_11') or DBMS_METADATA.GET_DDL('TABLE','RAY_TEST_EXTRACT11'), what do you get?
    Would you think it is CLOB, XMLType CLOB or XMLType OR based?
    If you reread http://www.liberidu.com/blog/?p=203 and then look at the use cases http://www.liberidu.com/blog/images/xml-use-cases-and-xmltype-storage-models.JPG (This is an image taken from the XMLDB Developers Guide 11g chapter 1 or 2, that deal with design questions like "What storage model should I pick to get a decent performance.....), how do you think that relates to what you want...?
    What is big...?
    An XML document of 1.5 MB can be huge in XML, especially if the structure is very complex and/or nested.
    By "rows" you mean XML documents, records in a "relational world" sense.
    That analogy is not correct.
    An XML document (= ONE document) is like an Oracle schema containing a PeopleSoft/SAP environment with thousands of tables where every table contains only one record. Some of them link back to themselves in the same table, containing multiple records. All records, although stored in different tables, are related to each other. A cascade delete would wipe out all data in all those thousands of tables...
    That is the XML world we are talking about.
    What is big? In XML this can mean a file of 1.5 MB.
    Message was edited by:
    Marco Gralike

  • Difference between an XMLType table and a table with an XMLType column?

    Hi all,
    Still trying to get my mind around all this XML stuff.
    Can someone concisely explain the difference between:
    create table this_is_xmltype_tab of xmltype;
    and
    create table this_is_tab_w_xmltpe_col(id number, document xmltype);
    What are the relative advantages and disadvantages of each approach? How do they really differ?
    Thanks,
    -Mark

    There is another pointer Mark, that I realized when I was thinking about the differences...
    If you look up "xdb:annotations" in the manual, you will learn about a method of using an XML Schema to generate, out of the box, your whole design in terms of physical layout and/or design principles. In my mind this should be the preferred solution if you are dealing with very complex XML Schema environments. Taking your XML Schema as your single point of design, which during the actual implementation automatically generates and builds all the database objects and physical structures you need, has great advantages in terms of design version management etc., but...
    ...it will automatically create an XMLType table (based on OR, binary XML or "hybrid" storage principles, i.e. the ones that are XML Schema driven) and not, AFAIK, an XMLType column structure: so, as in "our" case, a table with an id column and an xmltype column.
    In principle you could relate to this relationally as: "I have created an EER diagram and a physical diagram, and I mix the content/info of those two into one diagram. Then I execute it in the database, and the end result is a database user/schema that has all the xxxx amount of physical objects I need, the way I want them to be..."
    ...but it will be in the form of an XMLType table structure...
    xdb:annotations can be used to create things like:
    - enforce database/company naming conventions
    - DOM validation enabled or not
    - automatic IOT or BTree index creation (for instance in OR XMLType storage)
    - sort search order enforced or not
    - default tablenames and owners
    - extra column or table property settings like for partitioning XML data
    - database encoding/mapping used for SQL and binary storage
    - avoid automatic creation of Oracle objects (tables/types/etc), for instance, via xdb:defaultTable="" annotations
    - etc...
    See here for more info: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#ADXDB4519
    and / or for more detailed info:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030452
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030995
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#CHDCEBAG
    ...
