Java 5 Document into XMLType

Hi,
I had a look at the example on OTN and found how to get an XMLType into a Java 5 Document.
I want to do exactly the opposite.
So far, I transform my XML Document into a string
(the XML data comes from a file, is parsed with a JAXP DOM parser, validated against an XSD, and then quite a lot of operations are performed on the document).
Then I create an XMLType from that string, and finally insert it into the database.
I'd like to go faster and initialize my XMLType directly from the DOM Document
(to avoid any transformation and any additional parsing by the Oracle XMLType).
So,
Document ==> XMLType, insert
should be faster than
Document ==> String, String ==> XMLType, insert
Any ideas?
Thanks
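One possibility, assuming a JDBC 4.0 driver (ojdbc6 or later, so newer than the Java 5 timeframe of this thread; treat it as a sketch): the standard java.sql.SQLXML type can take the DOM tree directly, so the Document ==> String step disappears on the client side. The table xml_table is the one created in the reply below; everything else is illustrative.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLXML;
import javax.xml.transform.dom.DOMResult;
import org.w3c.dom.Document;

public class XmlTypeInsert {
    // Sketch: bind a parsed DOM Document straight into an XMLType column.
    public static void insertDocument(Connection conn, int id, Document doc) throws Exception {
        SQLXML sqlxml = conn.createSQLXML();
        DOMResult result = sqlxml.setResult(DOMResult.class);
        result.setNode(doc); // hand over the DOM tree; no manual serialization to String
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO xml_table (id, xml_data) VALUES (?, ?)");
        try {
            ps.setInt(1, id);
            ps.setSQLXML(2, sqlxml);
            ps.executeUpdate();
        } finally {
            ps.close();
            sqlxml.free();
        }
    }
}
Whether this is actually faster than going through a String depends on the driver; it does at least avoid the extra serialization and re-parse in your own code.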

Create a directory object DIR_TEMP in the database pointing to the physical path where your XML files are available.
Note that the path must be on the database server, not on the client, and the files must also be present on the database server, not the client, for this to work.
create table xml_table (id number, xml_data xmltype)
CREATE OR REPLACE FUNCTION getClobDocument
( filename  in varchar2,
  charset   in varchar2 default NULL,
  dir       IN VARCHAR2 default 'DIR_TEMP'
) RETURN CLOB deterministic is
  file          bfile := bfilename(dir,filename);
  charContent   CLOB := ' ';
  targetFile    bfile;
  lang_ctx      number := DBMS_LOB.default_lang_ctx;
  charset_id    number := 0;
  src_offset    number := 1 ;
  dst_offset    number := 1 ;
  warning       number;
BEGIN
  IF charset is not null then
      charset_id := NLS_CHARSET_ID(charset);
  end if;
  targetFile := file;
  DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
  DBMS_LOB.LOADCLOBFROMFILE(charContent, targetFile,
  DBMS_LOB.getLength(targetFile), src_offset, dst_offset,
  charset_id, lang_ctx,warning);
  DBMS_LOB.fileclose(targetFile);
  return charContent;
end;
/
insert into xml_table values(1, XMLTYPE(getClobDocument('purchaseorder.xml')))
/
The function getClobDocument returns the content of the XML file as a CLOB.
Before the insert, this is converted into XMLTYPE.

Similar Messages

  • Insert an XSL document into XMLType

    Hi dear reader,
    I want to insert an XSL document into an XMLType field, but the XSL is 5900 characters long and I have a problem loading and executing the INSERT statement in the database.
    To solve the load problem, I use a bind variable and try to execute it with EXECUTE IMMEDIATE str,
    and get the following error:
    ORA-01704: string literal too long
    Please advise;
    thanks
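    ORA-01704 comes from embedding the 5900-character XSL as a string literal in the SQL text, which is capped at 4000 bytes; a bound value has no such cap. In PL/SQL that means EXECUTE IMMEDIATE ... USING with the XSL in a bind variable. From the client side, a minimal JDBC sketch (xml_table is the table from the reply below; the rest is illustrative):
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class XslInsert {
        // Sketch: bind the XSL text instead of inlining it as a literal,
        // so the 4000-byte literal limit (ORA-01704) never applies.
        public static void insertXsl(Connection conn, int id, String xslText) throws Exception {
            PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO xml_table (id, xml_data) VALUES (?, XMLTYPE(?))");
            try {
                ps.setInt(1, id);
                ps.setString(2, xslText);
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }
    }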

    Create a directory object DIR_TEMP in the database pointing to the physical path where your XML files are available.
    Note that the path must be on the database server, not on the client, and the files must also be present on the database server, not the client, for this to work.
    create table xml_table (id number, xml_data xmltype)
    CREATE OR REPLACE FUNCTION getClobDocument
    ( filename  in varchar2,
      charset   in varchar2 default NULL,
      dir       IN VARCHAR2 default 'DIR_TEMP'
    ) RETURN CLOB deterministic is
      file          bfile := bfilename(dir,filename);
      charContent   CLOB := ' ';
      targetFile    bfile;
      lang_ctx      number := DBMS_LOB.default_lang_ctx;
      charset_id    number := 0;
      src_offset    number := 1 ;
      dst_offset    number := 1 ;
      warning       number;
    BEGIN
      IF charset is not null then
          charset_id := NLS_CHARSET_ID(charset);
      end if;
      targetFile := file;
      DBMS_LOB.fileopen(targetFile, DBMS_LOB.file_readonly);
      DBMS_LOB.LOADCLOBFROMFILE(charContent, targetFile,
      DBMS_LOB.getLength(targetFile), src_offset, dst_offset,
      charset_id, lang_ctx,warning);
      DBMS_LOB.fileclose(targetFile);
      return charContent;
    end;
    /
    insert into xml_table values(1, XMLTYPE(getClobDocument('purchaseorder.xml')))
    /
    The function getClobDocument returns the content of the XML file as a CLOB.
    Before the insert, this is converted into XMLTYPE.

  • Inserting a long XML document into XMLType

    I'm trying to load the following document into an XMLType column in 10.2. I've tried every example I can find, and I can push the data into CLOBs using the Java workaround just fine (http://www.oracle.com/technology/sample_code/tech/java/codesnippet/xmldb/HowToLoadLargeXML.html).
    Can anyone provide a solution, or let me know if there is a limitation, please?
    Given the table:
    SQL> describe xmltable_1
    Name                 Null?    Type
    -------------------- -------- ----------
    DOC_ID                        NUMBER
    XML_DATA                      XMLTYPE
    How do I load this data into 'XML_DATA'?
    <?xml version="1.0" encoding="UTF-8"?>
    <metadata>
    <idinfo>
    <citation>
    <citeinfo>
    <origin>Rand McNally and ESRI</origin>
    <pubdate>1996</pubdate>
    <title>ESRI Cities Geodata Set</title>
    <geoform>vector digital data</geoform>
    <onlink>\\OIS23\C$\Files\Working\Metadata\world\cities.shp</onlink>
    </citeinfo>
    </citation>
    <descript>
    <abstract>World Cities contains locations of major cities around the world. The cities include national capitals for each of the countries in World Countries 1998 as well as major population centers and landmark cities. World Cities was derived from ESRI's ArcWorld database and supplemented with other data from the Rand McNally New International Atlas</abstract>
    <purpose>606 points, 4 descriptive fields. Describes major world cities.</purpose>
    </descript>
    <timeperd>
    <timeinfo>
    <sngdate>
    <caldate>1996</caldate>
    </sngdate>
    </timeinfo>
    <current>publication date</current>
    </timeperd>
    <status>
    <progress>Complete</progress>
    <update>None planned</update>
    </status>
    <spdom>
    <bounding>
    <westbc>
    -165.270004</westbc>
    <eastbc>
    177.130188</eastbc>
    <northbc>
    78.199997</northbc>
    <southbc>
    -53.150002</southbc>
    </bounding>
    </spdom>
    <keywords>
    <theme>
    <themekt>city</themekt>
    <themekey>cities</themekey>
    </theme>
    </keywords>
    <accconst>none</accconst>
    <useconst>none</useconst>
    <ptcontac>
    <cntinfo>
    <cntperp>
    <cntper>unknown</cntper>
    <cntorg>unknown</cntorg>
    </cntperp>
    <cntpos>unknown</cntpos>
    <cntvoice>555-1212</cntvoice>
    </cntinfo>
    </ptcontac>
    <datacred>ESRI</datacred>
    <native>Microsoft Windows NT Version 4.0 (Build 1381) Service Pack 6; ESRI ArcCatalog 8.1.0.570</native>
    </idinfo>
    <dataqual>
    <attracc>
    <attraccr>no report available</attraccr>
    <qattracc>
    <attraccv>1000000</attraccv>
    <attracce>no report available</attracce>
    </qattracc>
    </attracc>
    <logic>no report available</logic>
    <complete>no report available</complete>
    <posacc>
    <horizpa>
    <horizpar>no report available</horizpar>
    </horizpa>
    <vertacc>
    <vertaccr>no report available</vertaccr>
    </vertacc>
    </posacc>
    <lineage>
    <srcinfo>
    <srccite>
    <citeinfo>
    <title>ESRI</title>
    </citeinfo>
    </srccite>
    <srcscale>20000000</srcscale>
    <typesrc>CD-ROM</typesrc>
    <srctime>
    <timeinfo>
    <sngdate>
    <caldate>1996</caldate>
    </sngdate>
    </timeinfo>
    <srccurr>publication date</srccurr>
    </srctime>
    <srccontr>no report available</srccontr>
    </srcinfo>
    <procstep>
    <procdesc>no report available</procdesc>
    <procdate>Unknown</procdate>
    </procstep>
    </lineage>
    </dataqual>
    <spdoinfo>
    <direct>Vector</direct>
    <ptvctinf>
    <sdtsterm>
    <sdtstype>Entity point</sdtstype>
    <ptvctcnt>606</ptvctcnt>
    </sdtsterm>
    </ptvctinf>
    </spdoinfo>
    <spref>
    <horizsys>
    <geograph>
    <latres>0.000001</latres>
    <longres>0.000001</longres>
    <geogunit>Decimal degrees</geogunit>
    </geograph>
    <geodetic>
    <horizdn>North American Datum of 1927</horizdn>
    <ellips>Clarke 1866</ellips>
    <semiaxis>6378206.400000</semiaxis>
    <denflat>294.978698</denflat>
    </geodetic>
    </horizsys>
    </spref>
    <eainfo>
    <detailed>
    <enttyp>
    <enttypl>
    cities</enttypl>
    </enttyp>
    <attr>
    <attrlabl>FID</attrlabl>
    <attrdef>Internal feature number.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Sequential unique whole numbers that are automatically generated.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>Shape</attrlabl>
    <attrdef>Feature geometry.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Coordinates defining the features.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>NAME</attrlabl>
    <attrdef>The city name. Spellings are based on Board of Geographic Names standards and commercial atlases.</attrdef>
    <attrdefs>ESRI</attrdefs>
    </attr>
    <attr>
    <attrlabl>COUNTRY</attrlabl>
    <attrdef>An abbreviated country name.</attrdef>
    </attr>
    <attr>
    <attrlabl>POPULATION</attrlabl>
    <attrdef>Total population for the entire metropolitan area. Values are from recent census or estimates.</attrdef>
    </attr>
    <attr>
    <attrlabl>CAPITAL</attrlabl>
    <attrdef>Indicates whether a city is a national capital (Y/N).</attrdef>
    </attr>
    </detailed>
    <overview>
    <eaover>none</eaover>
    <eadetcit>none</eadetcit>
    </overview>
    </eainfo>
    <distinfo>
    <stdorder>
    <digform>
    <digtinfo>
    <transize>0.080</transize>
    </digtinfo>
    </digform>
    </stdorder>
    </distinfo>
    <metainfo>
    <metd>20010509</metd>
    <metc>
    <cntinfo>
    <cntorgp>
    <cntorg>ESRI</cntorg>
    <cntper>unknown</cntper>
    </cntorgp>
    <cntaddr>
    <addrtype>unknown</addrtype>
    <city>unknown</city>
    <state>unknown</state>
    <postal>00000</postal>
    </cntaddr>
    <cntvoice>555-1212</cntvoice>
    </cntinfo>
    </metc>
    <metstdn>FGDC Content Standards for Digital Geospatial Metadata</metstdn>
    <metstdv>FGDC-STD-001-1998</metstdv>
    <mettc>local time</mettc>
    <metextns>
    <onlink>http://www.esri.com/metadata/esriprof80.html</onlink>
    <metprof>ESRI Metadata Profile</metprof>
    </metextns>
    </metainfo>
    </metadata>
    <vertacce>Vertical Positional Accuracy is expressed in meters. Vertical accuracy figures were developed by comparing elevation contour locations on 1:24,000 scale maps to elevation values at the same location within the digital database. Some manual interpolation was necessary to complete this test. The analysis results are expressed as linear error at a 90% confidence interval.</vertacce>
    </qvertpa>
    </vertacc>
    </posacc>
    <lineage>
    <srcinfo>
    <srccite>
    <citeinfo>
    <origin>National Imagery and Mapping Agency</origin>
    <pubdate>1994</pubdate>
    <title>Operational Navigational Chart</title>
    <geoform>map</geoform>
    <pubinfo>
    <pubplace>St.Louis, MO</pubplace>
    <publish>National Imagery and Mapping Agency</publish>
    </pubinfo>
    </citeinfo>
    </srccite>
    <srcscale>1000000</srcscale>
    <typesrc>stable-base material</typesrc>
    <srctime>
    <timeinfo>
    <rngdates>
    <begdate>1974</begdate>
    <enddate>1994</enddate>
    </rngdates>
    </timeinfo>
    <srccurr>Publication dates</srccurr>
    </srctime>
    <srccitea>ONC</srccitea>
    <srccontr>All information found on the source with the exception of aeronautical data</srccontr>
    </srcinfo>
    <srcinfo>
    <srccite>
    <citeinfo>
    <origin>National Imagery and Mapping Agency</origin>
    <pubdate>199406</pubdate>
    <title>Digital Aeronautical Flight Information File</title>
    <geoform>model</geoform>
    <pubinfo>
    <pubplace>St. Louis, MO</pubplace>
    <publish>National Imagery and Mapping Agency</publish>
    </pubinfo>
    </citeinfo>
    </srccite>
    <typesrc>magnetic tape</typesrc>
    <srctime>
    <timeinfo>
    <sngdate>
    <caldate>1994</caldate>
    </sngdate>
    </timeinfo>
    <srccurr>Publication date</srccurr>
    </srctime>
    <srccitea>DAFIF</srccitea>
    <srccontr>Airport records (name, International Civil Aviation Organization, position, elevation, and type)</srccontr>
    </srcinfo>
    <srcinfo>
    <srccite>
    <citeinfo>
    <origin>Defense Mapping Agency</origin>
    <pubdate>1994</pubdate>
    <title>Jet Navigational Chart</title>
    <geoform>map</geoform>
    <pubinfo>
    <pubplace>St.Louis, MO</pubplace>
    <publish>Defense Mapping Agency</publish>
    </pubinfo>
    </citeinfo>
    </srccite>
    <srcscale>2,000,000</srcscale>
    <typesrc>stable-base material</typesrc>
    <srctime>
    <timeinfo>
    <rngdates>
    <begdate>1974</begdate>
    <enddate>1991</enddate>
    </rngdates>
    </timeinfo>
    <srccurr>Publication date</srccurr>
    </srctime>
    <srccitea>JNC</srccitea>
    <srccontr>All information found on the source with the exception of aeronautical data. JNCs were used as source for the Antartica region only.</srccontr>
    </srcinfo>
    <srcinfo>
    <srccite>
    <citeinfo>
    <origin>USGS EROS Data Center</origin>
    <pubdate></pubdate>
    <title>Advance Very High Resolution Radiometer</title>
    <geoform>remote-sensing image</geoform>
    <pubinfo>
    <pubplace>Sioux Falls, SD</pubplace>
    <publish>EROS Data Center</publish>
    </pubinfo>
    </citeinfo>
    </srccite>
    <srcscale>1000000</srcscale>
    <typesrc>magnetic tape</typesrc>
    <srctime>
    <timeinfo>
    <rngdates>
    <begdate>199003</begdate>
    <enddate>199011</enddate>
    </rngdates>
    </timeinfo>
    <srccurr>Publication date</srccurr>
    </srctime>
    <srccitea>AVHRR</srccitea>
    <srccontr>6 vegetation types covering the continental US and Canada</srccontr>
    </srcinfo>
    <procstep>
    <procdesc>For the first edition DCW, stable-based positives were produced from the original reproduction negatives (up to 35 per ONC sheet). These were digitized either through a scanning-raster to vector conversion or hand digitized into vector form. The vector data was then tagged with attribute information using ARC-INFO software. Transformation to geographic coordinates was performed using the projection graticules for each sheet. Digital information was edge matched between sheets to create large regional datasets. These were then subdivided into 5 x 5 degree tiles and converted from ARC/INFO to VPF. The data was then pre-mastered for CD-ROM. QC was performed by a separate group for each step in the production process.</procdesc>
    <procdate>199112</procdate>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>Environmental Systems Research Institute</cntorg>
    </cntorgp>
    <cntpos>Applications Division</cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address>380 New York St.</address>
    <city>Redlands</city>
    <state>CA</state>
    <postal>92373</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>909-793-2853</cntvoice>
    <cntfax>909-793-5953</cntfax>
    </cntinfo>
    </proccont>
    </procstep>
    <procstep>
    <procdate>199404</procdate>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>Geonex</cntorg>
    </cntorgp>
    <cntpos></cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address>8950 North 9th Ave.</address>
    <city>St. Petersburg</city>
    <state>FL</state>
    <postal>33702</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>(813)578-0100</cntvoice>
    <cntfax>(813)577-6946</cntfax>
    </cntinfo>
    </proccont>
    </procstep>
    <procstep>
    <procdesc>Transferred digitally directly into the VPF files.</procdesc>
    <procdate>199408</procdate>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>Geonex</cntorg>
    </cntorgp>
    <cntpos></cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address>8950 North 9th Ave.</address>
    <city>St. Petersburg</city>
    <state>FL</state>
    <postal>33702</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>813-578-0100</cntvoice>
    <cntfax>813-577-6946</cntfax>
    </cntinfo>
    </proccont>
    </procstep>
    <procstep>
    <procdesc>Stable-based positives were produced from the original reproduction negatives (up to 35 per ONC sheet). These were digitized either through a scanning-raster to vector conversion or hand digitized into vector form. The vector data was then tagged with attribute information using ARC-INFO software. Transformation to geographic coordinates was performed using the projection graticules for each sheet. Digital information was edge matched between sheets to create large regional datasets. These were then subdivided into 5 x 5 degree tiles and converted from ARC/INFO to VPF. The data was then pre-mastered for CD-ROM. QC was performed by a separate group for each step in the production process.</procdesc>
    <procdate>199112</procdate>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>Environmental Systems Research Institute</cntorg>
    </cntorgp>
    <cntpos>Applications Division</cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address>380 New York St.</address>
    <city>Redlands</city>
    <state>CA</state>
    <postal>92373</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>909-793-2853</cntvoice>
    <cntfax>909-793-5953</cntfax>
    </cntinfo>
    </proccont>
    </procstep>
    <procstep>
    <procdesc>Daily AVHRR images were averaged for two week time periods over the entire US growing season. These averaged images, their rates of change, elevation information, and other data were used to produce a single land classification image of the contental US. The VMap-0 data set extended this coverage over the Canadian land mass, however vegetation classification was further subdivided into nine vegetation types.</procdesc>
    <procdate>199402</procdate>
    <srcprod>EROS data</srcprod>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>USGS Eros Data Center</cntorg>
    </cntorgp>
    <cntpos></cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address></address>
    <city>Sioux Falls</city>
    <state>SD</state>
    <postal></postal>
    <country>US</country>
    </cntaddr>
    <cntvoice></cntvoice>
    <cntfax></cntfax>
    </cntinfo>
    </proccont>
    </procstep>
    <procstep>
    <procdesc>The Eros data (raster files) were converted to vector polygon, splined (remove stairstepping), thinned (all ploygons under 2km2 were deleted), and tied to existing DCW polygons (water bodies, built-up areas). The resulting file was tiled and converted to a VPF Vegetation coverage for use in the DCW. All processing was performed using ARC-INFO software.</procdesc>
    <procdate>199412</procdate>
    <srcprod>VMap-0 Vegetation Coverage</srcprod>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>Geonex</cntorg>
    </cntorgp>
    <cntpos></cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address>8950 North 9th Ave.</address>
    <city>St. Petersburg</city>
    <state>FL</state>
    <postal>33702</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>813-578-0100</cntvoice>
    <cntfax>813-577-6946</cntfax>
    </cntinfo>
    </proccont>
    </procstep>
    <procstep>
    <procdesc>Data was translated from VPF format to ArcInfo Coverage format. The coverages were then loaded into a seamless ArcSDE layer.</procdesc>
    <procdate>02152001</procdate>
    <proccont>
    <cntinfo>
    <cntorgp>
    <cntorg>Geodesy Team, Harvard University</cntorg>
    </cntorgp>
    <cntemail>[email protected]</cntemail>
    </cntinfo>
    </proccont>
    </procstep>
    </lineage>
    </dataqual>
    <spdoinfo>
    <direct>Vector</direct>
    <ptvctinf>
    <sdtsterm>
    <sdtstype>Complete chain</sdtstype>
    </sdtsterm>
    <sdtsterm>
    <sdtstype>Label point</sdtstype>
    </sdtsterm>
    <sdtsterm>
    <sdtstype>GT-polygon composed of chains</sdtstype>
    </sdtsterm>
    <sdtsterm>
    <sdtstype>Point</sdtstype>
    </sdtsterm>
    <vpfterm>
    <vpflevel>3</vpflevel>
    <vpfinfo>
    <vpftype>Node</vpftype>
    </vpfinfo>
    <vpfinfo>
    <vpftype>Edge</vpftype>
    </vpfinfo>
    <vpfinfo>
    <vpftype>Face</vpftype>
    </vpfinfo>
    </vpfterm>
    </ptvctinf>
    </spdoinfo>
    <spref>
    <horizsys>
    <geograph>
    <latres>0.000000</latres>
    <longres>0.000000</longres>
    <geogunit>Decimal degrees</geogunit>
    </geograph>
    <geodetic>
    <horizdn>D_WGS_1984</horizdn>
    <ellips>WGS_1984</ellips>
    <semiaxis>6378137.000000</semiaxis>
    <denflat>298.257224</denflat>
    </geodetic>
    </horizsys>
    <vertdef>
    <altsys>
    <altdatum>Mean Sea Level</altdatum>
    <altunits>1.0</altunits>
    </altsys>
    </vertdef>
    </spref>
    <eainfo>
    <detailed>
    <enttyp>
    <enttypl>
    lc.pat</enttypl>
    </enttyp>
    <attr>
    <attrlabl>FID</attrlabl>
    <attrdef>Internal feature number.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Sequential unique whole numbers that are automatically generated.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>Shape</attrlabl>
    <attrdef>Feature geometry.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Coordinates defining the features.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>AREA</attrlabl>
    <attrdef>Area of feature in internal units squared.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Positive real numbers that are automatically generated.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>PERIMETER</attrlabl>
    <attrdef>Perimeter of feature in internal units.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Positive real numbers that are automatically generated.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>LC#</attrlabl>
    <attrdef>Internal feature number.</attrdef>
    <attrdefs>ESRI</attrdefs>
    <attrdomv>
    <udom>Sequential unique whole numbers that are automatically generated.</udom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>LC-ID</attrlabl>
    <attrdef>User-defined feature number.</attrdef>
    <attrdefs>ESRI</attrdefs>
    </attr>
    <attr>
    <attrlabl>LCAREA.AFT_ID</attrlabl>
    </attr>
    <attr>
    <attrlabl>LCPYTYPE</attrlabl>
    <attrdef>Land cover poygon type</attrdef>
    <attrdefs>NIMA</attrdefs>
    <attrdomv>
    <edom>
    <edomv>1</edomv>
    <edomvd>Rice Field</edomvd>
    </edom>
    <edom>
    <edomv>2</edomv>
    <edomvd>Cranberry bog</edomvd>
    </edom>
    <edom>
    <edomv>3</edomv>
    <edomvd>Cultivated area, garden</edomvd>
    </edom>
    <edom>
    <edomv>4</edomv>
    <edomvd>Peat cuttings</edomvd>
    </edom>
    <edom>
    <edomv>5</edomv>
    <edomvd>Salt pan</edomvd>
    </edom>
    <edom>
    <edomv>6</edomv>
    <edomvd>Fish pond or hatchery</edomvd>
    </edom>
    <edom>
    <edomv>7</edomv>
    <edomvd>Quarry, strip mine, mine dump, blasting area</edomvd>
    </edom>
    <edom>
    <edomv>8</edomv>
    <edomvd>Oil or gas</edomvd>
    </edom>
    <edom>
    <edomv>10</edomv>
    <edomvd>Lava flow</edomvd>
    </edom>
    <edom>
    <edomv>11</edomv>
    <edomvd>Distorted surface area</edomvd>
    </edom>
    <edom>
    <edomv>12</edomv>
    <edomvd>Unconsolidated material (sand or gravel, glacial moraine)</edomvd>
    </edom>
    <edom>
    <edomv>13</edomv>
    <edomvd>Natural landmark area</edomvd>
    </edom>
    <edom>
    <edomv>14</edomv>
    <edomvd>Inundated area</edomvd>
    </edom>
    <edom>
    <edomv>15</edomv>
    <edomvd>Undifferentiated wetlands</edomvd>
    </edom>
    <edom>
    <edomv>99</edomv>
    <edomvd>None</edomvd>
    </edom>
    </attrdomv>
    </attr>
    <attr>
    <attrlabl>TILE_ID</attrlabl>
    <attrdef>VPF Format tile ID</attrdef>
    <attrdefs>NIMA</attrdefs>
    </attr>
    <attr>
    <attrlabl>FAC_ID</attrlabl>
    </attr>
    </detailed>
    <overview>
    <eaover>The DCW used a product-specific attribute coding system that is composed of TYPE and STATUS designators for area, line, and point features; and LEVEL and SYMBOL designators for text features. The TYPE attribute specifies what the feature is, while the STATUS attribute specifies the current condition of the feature. Some features require both a TYPE and STATUS code to uniquely identify their characteristics. In order to uniquely identify each geographic attribute in the DCW, the TYPE and STATUS attribute code names are preceded by the two letter coverage abbreviation and a two letter abbreviation for the type of graphic primitive present. The DCW Type/Status codes were mapped into the FACC coding scheme. A full description of FACC may be found in Digital Geographic Information Exchange Standard Edition 1.2, January 1994.</eaover>
    <eadetcit>Entities (features) and Attributes for DCW are fully described in: Department of Defense, 1992, Military Specification Digital Chart of the World (MIL-D-89009): Philadelphia, Department of Defense, Defense Printing Service Detachment Office.</eadetcit>
    </overview>
    </eainfo>
    <distinfo>
    <distrib>
    <cntinfo>
    <cntorgp>
    <cntorg>NIMA</cntorg>
    </cntorgp>
    <cntpos>ATTN: CC, MS D-16</cntpos>
    <cntaddr>
    <addrtype>mailing and physical address</addrtype>
    <address>6001 MacArthur Blvd.</address>
    <city>Bethesda</city>
    <state>MD</state>
    <postal>20816-5001</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>301-227-2495</cntvoice>
    <cntfax>301-227-2498</cntfax>
    </cntinfo>
    </distrib>
    <distliab>None</distliab>
    <stdorder>
    <digform>
    <digtinfo>
    <formname>VPF</formname>
    <formverd>19930930</formverd>
    <formspec>Military Standard Vector Product Format (MIL-STD-2407). The current version of this document is dated 28 June 1996. Edition 3 of VMap-0 conforms to a previous version of the VPF Standard as noted. Future versions of VMap-0 will conform to the current version of VPF Standard.</formspec>
    <transize>0.172</transize>
    </digtinfo>
    <digtopt>
    <offoptn>
    <offmedia>CD-ROM</offmedia>
    <recfmt>ISO 9660</recfmt>
    </offoptn>
    </digtopt>
    </digform>
    <fees>Not Applicable</fees>
    </stdorder>
    </distinfo>
    <distinfo>
    <distrib>
    <cntinfo>
    <cntorgp>
    <cntorg>USGS Map Sales</cntorg>
    </cntorgp>
    <cntaddr>
    <addrtype>mailing address</addrtype>
    <address>Box 25286</address>
    <city>Denver</city>
    <state>CO</state>
    <postal>80225</postal>
    <country>US</country>
    </cntaddr>
    <cntvoice>303-236-7477</cntvoice>
    <cntfax>303-236-1972</cntfax>
    </cntinfo>
    </distrib>
    <distliab>None</distliab>
    <stdorder>
    <digform>
    <digtinfo>
    <transize>0.172</transize>
    </digtinfo>
    </digform>
    <fees>$82.50 per four disk set</fees>
    <ordering>For General Public: Payment (check, money order, purchase order, or Government account) must accompany order.
    Make all drafts payable to Dept. of the Interior- US Geological Survey.
    To provide a general idea of content, a sample data set is available from the TMPO Home Page at:</ordering>
    </stdorder>
    </distinfo>
    <distinfo>
    <distrib>
    <cntinfo>
    <cntorgp>
    <cntorg>Geodesy Team, Harvard University</cntorg>
    <cntper>Geodesy Team</cntper>
    </cntorgp>
    <cntemail>[email protected]</cntemail>
    </cntinfo>
    </distrib>
    <resdesc>Geodesy layer</resdesc>
    <distliab>None</distliab>
    <stdorder>
    <digform>
    <digtinfo>
    <formname>SHP</formname>
    <transize>0.172</transize>
    </digtinfo>
    <digtopt>
    <onlinopt>
    <computer>
    <networka>
    <networkr>geodesy.harvard.edu</networkr>
    </networka>
    </computer>
    </onlinopt>
    </digtopt>
    </digform>
    <fees>none</fees>
    </stdorder>
    <availabl>
    <timeinfo>
    <sngdate>
    <caldate>1992</caldate>
    </sngdate>
    </timeinfo>
    </availabl>
    </distinfo>
    <metainfo>
    <metd>20010226</metd>
    <metc>
    <cntinfo>
    <cntorgp>
    <cntorg>Geodesy Team</cntorg>
    <cntper>REQUIRED: The person responsible for the metadata information.</cntper>
    </cntorgp>
    <cntvoice>REQUIRED: The telephone number by which individuals can speak to the organization or individual.</cntvoice>
    <cntemail>[email protected]</cntemail>
    </cntinfo>
    </metc>
    <metstdn>FGDC Content Standards for Digital Geospatial Metadata</metstdn>
    <metstdv>FGDC-STD-001-1998</metstdv>
    <mettc>local time</mettc>
    <metextns>
    <onlink>http://www.esri.com/metadata/esriprof80.html</onlink>
    <metprof>ESRI Metadata Profile</metprof>
    </metextns>
    </metainfo>
    </metadata>

    Have you tried the directory and bfile methods? Here is the example for that in the Oracle XML Developer's Guide:
    CREATE DIRECTORY xmldir AS 'path_to_folder_containing_XML_file';
    Example 3-3 Inserting XML Content into an XMLType Table
    INSERT INTO mytable2 VALUES (XMLType(bfilename('XMLDIR', 'purchaseOrder.xml'),
    nls_charset_id('AL32UTF8')));
    1 row created.
    The value passed to nls_charset_id() indicates that the encoding for the file to be read is UTF-8.
    ben

  • How to insert a document into an XMLType column through an HTTP POST request with Perl

    Oracle 11.2.0.3
    Windows server 2008r2
    Apache tomcat 7.0
    Oracle APEX 4.2.1
    Oracle APEX Listener 2.0
    I would like to insert an XML document into the database through an APEX RESTful web service. The POST into the web service is done with Perl. The following code will insert an empty record in a table with a column of XMLType type.
    Perl Code
    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Headers;
    use HTTP::Request;
    my $headers = HTTP::Headers->new();
    my $url = "http://host:port/apex/<application workspace>/<restful service module>/<uri template>/";
    my $sendthis = '<?xml version="1.0" encoding="utf-8"?>
    <students>
    <row>
           <name>Mark</name>
           <age>30</age>
    </row>
    </students>';
    $headers -> header('Content-Type' => 'text/xml; charset=utf-8');
    my $request = HTTP::Request->new('POST', $url, $headers, $sendthis);
    $request-> protocol('HTTP/1.1');
    my $browser = LWP::UserAgent->new();
    my $response = $browser->request($request);
    my $gotthis= $response->content();
    my $the_file_data = $response->content();
    APEX restful service
    Method: POST
    Source type: PL/SQL
    MIME Types allowed: blank
    require secure access: none
    source:
    declare
    doc varchar2(32000); -- note: doc is never assigned from the POST body, so a NULL is inserted
    begin
    insert into <table name> (<column name>)
    values (doc);
    commit;
    end;
    Table code
    create table <table name>
    (<column name> XMLType);
    The above code inserts a row with an empty XML column into the table.
    Any ideas why?

    It's a really bad idea to assemble XML using strings and string concatenation in SQL or PL/SQL. First, there is a 4K limit in SQL and a 32K limit in PL/SQL, which means you end up constructing the XML in chunks, adding unnecessary complications. Second, you cannot confirm the XML is valid or well formed using external tools.
    IMHO it makes much more sense to keep the XML content separated from the SQL / PL/SQL code.
    When the XML can be stored on a file system accessible from the database, the files can be loaded into the database using mechanisms like BFILE.
    In cases where the XML must be staged on a remote file system, the files can be loaded into the database using FTP or HTTP, and in cases where this is not an option, SQL*Loader (SQLLDR).
    -Mark
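    For the HTTP option Mark mentions, a document can be pushed straight into the XML DB repository with a plain HTTP PUT. A rough sketch, assuming the XML DB HTTP listener is enabled (port 8080 by default); the host, repository path, and credentials here are placeholders:
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class XdbHttpPut {
        // Sketch: upload a local XML file into the XDB repository over HTTP.
        public static void main(String[] args) throws Exception {
            byte[] body = Files.readAllBytes(Paths.get("purchaseorder.xml"));
            URL url = new URL("http://dbhost:8080/public/purchaseorder.xml");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml");
            String auth = Base64.getEncoder()
                .encodeToString("scott:tiger".getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + auth);
            OutputStream out = conn.getOutputStream();
            out.write(body);
            out.close();
            System.out.println("HTTP " + conn.getResponseCode()); // expect 201 Created
            conn.disconnect();
        }
    }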

  • Error while loading valid, schema-related document into xdb

    Hi Mark.
    I tried to load other XML documents (ftp://ftp.tigr.org/pub/data/a_thaliana/ath1/PSEUDOCHROMOSOMES/) into the XDB.
    The documents are related to the same schema as the documents I used until now.
    I had to change the schema a bit. Three elements (protein_sequence, cds_sequence, transcript_sequence) are now stored as CLOB instead of VARCHAR2.
    My first idea was that the new documents are not valid, but they are.
    The old documents still work. I get an ORA-00600 every time.
    I tried to get some information from the trace file, but this did not really help.
    I don't believe this will help you, but I post it anyway:
    Dump file d:\oracle\admin\pdw\udump\pdw_s000_6388.trc
    Mon Jan 02 09:21:36 2006
    ORACLE V9.2.0.7.0 - Production vsnsta=0
    vsnsql=12 vsnxtr=3
    Windows 2000 Version 5.0 Service Pack 4, CPU type 586
    Oracle9i Enterprise Edition Release 9.2.0.7.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.7.0 - Production
    Windows 2000 Version 5.0 Service Pack 4, CPU type 586
    Instance name: pdw
    Redo thread mounted by this instance: 1
    Oracle process number: 10
    Windows thread id: 6388, image: ORACLE.EXE
    *** 2006-01-02 09:21:36.218
    *** SESSION ID:(23.9078) 2006-01-02 09:21:36.187
    QMHD escaped text too long: dstlen=0 dstbuf=/
    QMHD escaped text too long: dstlen=0 dstbuf=home
    QMHD escaped text too long: dstlen=0 dstbuf=public
    QMHD escaped text too long: dstlen=0 dstbuf=sys
    QMHD escaped text too long: dstlen=0 dstbuf=xdbconfig.xml
    QMHD escaped text too long: dstlen=0 dstbuf=/
    QMHD escaped text too long: dstlen=0 dstbuf=home
    QMHD escaped text too long: dstlen=0 dstbuf=public
    QMHD escaped text too long: dstlen=0 dstbuf=sys
    QMHD escaped text too long: dstlen=0 dstbuf=xdbconfig.xml
    QMHD escaped text too long: dstlen=0 dstbuf=home
    QMHD escaped text too long: dstlen=0 dstbuf=RSCHULZE
    QMHD escaped text too long: dstlen=0 dstbuf=PDV70
    QMHD escaped text too long: dstlen=0 dstbuf=SCOTT
    QMHD escaped text too long: dstlen=0 dstbuf=SCOTT1
    QMHD escaped text too long: dstlen=0 dstbuf=SCOTT2
    QMHD escaped text too long: dstlen=0 dstbuf=hbachman
    QMHD escaped text too long: dstlen=0 dstbuf=pdw_biowh31
    QMHD escaped text too long: dstlen=0 dstbuf=pdw_stage
    QMHD escaped text too long: dstlen=0 dstbuf=pdw_tigr_chromosome
    QMHD escaped text too long: dstlen=0 dstbuf=uniprot
    QMHD escaped text too long: dstlen=0 dstbuf=pdw_tigr_chromosome
    QMHD escaped text too long: dstlen=0 dstbuf=data
    QMHD escaped text too long: dstlen=0 dstbuf=xsd
    QMHD escaped text too long: dstlen=0 dstbuf=data
    QMHD escaped text too long: dstlen=0 dstbuf=seq1.txt
    QMHD escaped text too long: dstlen=0 dstbuf=seq2.txt
    QMHD escaped text too long: dstlen=0 dstbuf=seq3.txt
    QMHD escaped text too long: dstlen=0 dstbuf=seq4.txt
    QMHD escaped text too long: dstlen=0 dstbuf=seq5.txt
    *** 2006-01-02 09:36:57.421
    ksedmp: internal or fatal error
    ORA-00600: internal error code, arguments: [kokbgcip1], [196609], [63], [], [], [], [], []
    Current SQL statement for this session:
    INSERT /*+ NO_REF_CASCADE */ INTO "PDW_TIGR_CHROMOSOME"."SYS_NT0upV7+xbRnu3KquZIaRFgQ=="("NESTED_TABLE_ID","ARRAY_INDEX","SYS_NC_ROWINFO$") VALUES(:1,:2,:3)
    ----- Call Stack Trace -----
    (The stack frames were line-wrapped beyond repair in this paste; the recoverable call chain, outermost to innermost, is roughly:
    BackgroundThreadStart -> opimai -> sou2o -> opidrv -> opirip -> opiodr -> opiino -> opitsk -> qmpsrun -> qmhProcessRequestData -> qmhput -> qmeuCreateOrUpdateRes -> qmeCreateRes -> qmeCreOrBindRes -> qmeLinkInternal -> qmeInsertRes -> qmeInsertResRow -> qmskFlushXob -> qmskStoreXob -> qmskStoreXobWithImage -> qmxtgGetOpqImageFromXob -> qmxiWriteXobToImageWithHeap -> qmxiWriteXobToImageInternal -> qmtEventFire -> qmePreSave -> qmeSaveContents -> qmskFlushXob -> qmskStoreXob -> qmskStoreXobWithImage -> qmskInsertXmlType -> kprball -> rpiswu2 -> rpidru -> skgmstack -> rpidrus -> opiodr -> opikpr -> opiall0 -> opiexe -> insexe -> insdrv -> insrow -> insbrp -> insolev -> evaopn2 -> kokbeva -> kokbint -> kprball -> rpiswu2 -> rpidru -> skgmstack -> rpidrus -> opiodr -> opikpr -> opiall0 -> opiexe -> insexe -> insdrv -> insrow -> insbrp -> insolev -> evaopn2 -> kokbeva -> kokbint -> qerocStart -> qerocImageIterStart -> kokbgcip -> kgeasnmierr -> kgerinv -> ksfdmp -> ksedmp -> ksedst.)
    (Binary stack dump omitted.)

    I have a new problem:
    Due to the changes to the schema (transcript_sequence etc. now stored as CLOB) I got an error in a view:
    ORA-00932: inconsistent datatypes
    The error occurs in V007 in the following rows:
    extractValue(value(tu),'/TU/TRANSCRIPT_SEQUENCE'),
    extractValue(value(model),'/MODEL/CDS_SEQUENCE'),
    extractValue(value(model),'/MODEL/PROTEIN_SEQUENCE')
    But in V005 there is no error, although it has the row extractValue(value(tu),'/TU/TRANSCRIPT_SEQUENCE').
    The views:
    create or replace view V007_MODEL(FEAT_NAME,
         GENE_SYNONYM,
         GENE_SYNONYM_SYN_TYPE,
         CHROMO_LINK,
         TU_DATE,
         TU_COORDSET_END5,
         TU_COORDSET_END3,
         TRANSCRIPT_SEQUENCE,
         URL,
         URL_URLNAME,
         CURATED,
         MODEL_COMMENT,
         MODEL_FEAT_NAME,
         PUB_LOCUS,
         CDNA_SUPPORT_ACCESSION,
         CDNA_SUPPORT_ACCESSION_DBXREF,
         CDNA_SUPPORT_ACCESSION_IS_FLI,
         CDNA_SA_UNIQUE_TO_ISOFORM,
         CDNA_SA_ANNOT_INCORP,
         MODEL_GENE_SYNONYM,
         MODEL_GENE_SYNONYM_SYN_TYPE,
         MODEL_CHROMO_LINK,
         MODEL_DATE,
         MODEL_COORDSET_END5,
         MODEL_COORDSET_END3,
         MA_AT_METHOD,
         MA_AT_ATT_SCORE,
         MA_AT_ATT_SCORE_DESC,
         CDS_SEQUENCE,
         PROTEIN_SEQUENCE)
    as
    select
         distinct
         extractValue(value(tu)                ,'/TU/FEAT_NAME'),
         extractValue(value(tu_gene_synonym)   ,'/GENE_SYNONYM/text()'),
         extractValue(value(tu_gene_synonym)   ,'/GENE_SYNONYM/@SYN_TYPE'),
         extractValue(value(tu_chromo_link)    ,'/CHROMO_LINK/text()'),
         extractValue(value(tu)                ,'/TU/DATE'),
         extractValue(value(tu)                ,'/TU/COORDSET/END5'),
         extractValue(value(tu)                ,'/TU/COORDSET/END3'),
         extractValue(value(tu)                ,'/TU/TRANSCRIPT_SEQUENCE'),
         extractValue(value(url)               ,'/URL/text()'),
         extractValue(value(url)               ,'/URL/@URLNAME'),
         extractValue(value(model)             ,'/MODEL/@CURATED'),
         extractValue(value(model)             ,'/MODEL/@COMMENT'),
         extractValue(value(model)             ,'/MODEL/FEAT_NAME'),
         extractValue(value(model)             ,'/MODEL/PUB_LOCUS'),
         extractValue(value(accession)         ,'/ACCESSION/text()'),
         extractValue(value(accession)         ,'/ACCESSION/@DBXREF'),
         extractValue(value(accession)         ,'/ACCESSION/@IS_FLI'),
         extractValue(value(accession)         ,'/ACCESSION/@UNIQUE_TO_ISOFORM'),
         extractValue(value(accession)         ,'/ACCESSION/@ANNOT_INCORP'),
         extractValue(value(model_gene_synonym),'/GENE_SYNONYM/text()'),
         extractValue(value(model_gene_synonym),'/GENE_SYNONYM/@SYN_TYPE'),
         extractValue(value(model_chromo_link) ,'/CHROMO_LINK/text()'),
         extractValue(value(model)             ,'/MODEL/DATE'),
         extractValue(value(model)             ,'/MODEL/COORDSET/END5'),
         extractValue(value(model)             ,'/MODEL/COORDSET/END3'),
         extractValue(value(attribute_type)    ,'/ATTRIBUTE_TYPE/@METHOD'),
         extractValue(value(att_score)         ,'/ATT_SCORE/text()'),
         extractValue(value(att_score)         ,'/ATT_SCORE/@DESC'),
         extractValue(value(model)             ,'/MODEL/CDS_SEQUENCE'),
         extractValue(value(model)             ,'/MODEL/PROTEIN_SEQUENCE')
    from TIGR t,
         table(xmlsequence(extract(value(t)    ,'/TIGR/PSEUDOCHROMOSOME'))) p,
         table(xmlsequence(extract(value(p)    ,'/PSEUDOCHROMOSOME/ASSEMBLY/GENE_LIST/PROTEIN_CODING/TU'))) tu,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_SYNONYM'))) (+) tu_gene_synonym,
         table(xmlsequence(extract(value(tu)   ,'/TU/CHROMO_LINK'))) (+) tu_chromo_link,
         table(xmlsequence(extract(value(tu)   ,'/TU/URL'))) (+) url,
         table(xmlsequence(extract(value(tu)   ,'/TU/MODEL'))) model,
         table(xmlsequence(extract(value(model),'/MODEL/CDNA_SUPPORT/ACCESSION'))) (+) accession,
         table(xmlsequence(extract(value(model),'/MODEL/GENE_SYNONYM'))) (+) model_gene_synonym,
         table(xmlsequence(extract(value(model),'/MODEL/CHROMO_LINK'))) (+) model_chromo_link,
         table(xmlsequence(extract(value(model),'/MODEL/MODEL_ATTRIBUTE/ATTRIBUTE_TYPE'))) (+) attribute_type,
         table(xmlsequence(extract(value(attribute_type),'/ATTRIBUTE_TYPE/ATT_SCORE'))) (+) att_score,
         table(xmlsequence(extract(value(model),'/MODEL/EXON'))) exon
    create or replace view V005_GENE_INFO(FEAT_NAME,
         GENE_SYNONYM,
         GENE_SYNONYM_SYN_TYPE,
         CHROMO_LINK,
         TU_DATE,
         TU_COORDSET_END5,
         TU_COORDSET_END3,
         TRANSCRIPT_SEQUENCE,
         URL,
         URL_URLNAME,
         LOCUS,
         ALT_LOCUS,
         PUB_LOCUS,
         GENE_NAME,
         GENE_NAME_IS_PRIMARY,
         COM_NAME,
         COM_NAME_CURATED,
         COM_NAME_IS_PRIMARY,
         GENE_INFO_COMMENT,
         PUB_COMMENT,
         EC_NUM,
         EC_NUM_IS_PRIMARY,
         GENE_SYM,
         GENE_SYM_IS_PRIMARY,
         IS_PSEUDOGENE,
         FUNCT_ANNOT_EVIDENCE_TYPE,
         FAE_ASSIGN_ACC,
         FAE_ASSIGN_ACC_ASSIGN_METHOD,
         GENE_INFO_DATE)
    as
    select
         extractValue(value(tu)                  ,'/TU/FEAT_NAME'),
         extractValue(value(gene_synonym)        ,'/GENE_SYNONYM/text()'),
         extractValue(value(gene_synonym)        ,'/GENE_SYNONYM/@SYN_TYPE'),
         extractValue(value(chromo_link)         ,'/CHROMO_LINK/text()'),
         extractValue(value(tu)                  ,'/TU/DATE'),
         extractValue(value(tu)                  ,'/TU/COORDSET/END5'),
         extractValue(value(tu)                  ,'/TU/COORDSET/END3'),
         extractValue(value(tu)                  ,'/TU/TRANSCRIPT_SEQUENCE'),
         extractValue(value(url)                 ,'/URL/text()'),
         extractValue(value(url)                 ,'/URL/@URLNAME'),
         extractValue(value(tu)                  ,'/TU/GENE_INFO/LOCUS'),
         extractValue(value(alt_locus)           ,'/ALT_LOCUS/text()'),
         extractValue(value(tu)                  ,'/TU/GENE_INFO/PUB_LOCUS'),
         extractValue(value(gene_name)           ,'/GENE_NAME/text()'),
         extractValue(value(gene_name)           ,'/GENE_NAME/@IS_PRIMARY'),
         extractValue(value(com_name)            ,'/COM_NAME/text()'),
         extractValue(value(com_name)            ,'/COM_NAME/@CURATED'),
         extractValue(value(com_name)            ,'/COM_NAME/@IS_PRIMARY'),
         extractValue(value(tu)                  ,'/TU/GENE_INFO/COMMENT'),
         extractValue(value(tu)                  ,'/TU/GENE_INFO/PUB_COMMENT'),
         extractValue(value(ec_num)              ,'/EC_NUM/text()'),
         extractValue(value(ec_num)              ,'/EC_NUM/@IS_PRIMARY'),
         extractValue(value(gene_sym)            ,'/GENE_SYM/text()'),
         extractValue(value(gene_sym)            ,'/GENE_SYM/@IS_PRIMARY'),
         extractValue(value(tu)                  ,'/TU/GENE_INFO/IS_PSEUDOGENE'),
         extractValue(value(funct_annot_evidence),'/FUNCT_ANNOT_EVIDENCE/@TYPE'),
         extractValue(value(assign_acc)          ,'/ASSIGN_ACC/text()'),
         extractValue(value(assign_acc)          ,'/ASSIGN_ACC/@ASSIGN_METHOD'),
         extractValue(value(tu)                  ,'/TU/GENE_INFO/DATE')
    from TIGR t,
         table(xmlsequence(extract(value(t)    ,'/TIGR/PSEUDOCHROMOSOME'))) p,
         table(xmlsequence(extract(value(p)    ,'/PSEUDOCHROMOSOME/ASSEMBLY/GENE_LIST/PROTEIN_CODING/TU'))) tu,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_SYNONYM'))) (+) gene_synonym,
         table(xmlsequence(extract(value(tu)   ,'/TU/CHROMO_LINK'))) (+) chromo_link,
         table(xmlsequence(extract(value(tu)   ,'/TU/URL'))) (+) url,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_INFO/ALT_LOCUS'))) (+) alt_locus,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_INFO/GENE_NAME'))) (+) gene_name,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_INFO/COM_NAME'))) com_name,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_INFO/EC_NUM'))) (+) ec_num,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_INFO/GENE_SYM'))) (+) gene_sym,
         table(xmlsequence(extract(value(tu)   ,'/TU/GENE_INFO/FUNCT_ANNOT_EVIDENCE'))) (+) funct_annot_evidence,
         table(xmlsequence(extract(value(funct_annot_evidence),'/FUNCT_ANNOT_EVIDENCE/ASSIGN_ACC'))) (+) assign_acc
    I have a second question.
    Usually I use WebDAV or FTP to load the XML documents.
    There are 5 documents for TIGR Arabidopsis, and loading them into the XDB works. But when I load the first document via WebDAV I get an error, even though the document is shredded anyway. Because the WebDAV error message is not meaningful, I used PL/SQL instead.
    I tried it like this:
    insert into TIGR values (xmltype(bfilename(USER,'/home/pdw_tigr_chromosome/data/CHR1.R5v01212004wos.xml'),nls_charset_id('AL32UTF8')))
    (I deleted the expressions " xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="tigrxml.xsd" " in the document beforehand.)
    But this didn't work.
    So I tried
    insert into TIGR values (xmltype(xdbURIType('/home/pdw_tigr_chromosome/data/CHR1.R5v01212004wos.xml').getClob(),'tigrxml.xsd',1,1))
    and it worked. But it seemed to need more time than via WebDAV or FTP (though perhaps I am wrong). It took 1h 48m for a 74MB file.
    When I load the documents with PL/SQL the document is not shredded. At least the document has its original size when I look at the repository via WebDAV, and not the usual 0 bytes. But the data is correctly stored in the XMLType table TIGR.

  • How to convert an XML document into an Oracle temporary table

    I am creating an XML document in Java and passing this XML into the Oracle database, and I need to fetch this XML into a result set / rowset.
    Xml Structure
    <Department deptno="100">
    <DeptName>Sports</DeptName>
    <EmployeeList>
    <Employee empno="200"><Ename>John</Ename><Salary>33333</Salary>
    </Employee>
    <Employee empno="300"><Ename>Jack</Ename><Salary>333444</Salary>
    </Employee>
    </EmployeeList>
    </Department>
    I need it in this format:
    Deptno  DeptName  Empno  Ename  Salary
    100     Sports    200    John   2500
    100     Sports    300    Jack   3000

    It does depend on your version as odie suggests.
    Here's a way that will work in 10g...
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select xmltype('<Department deptno="100">
      2  <DeptName>Sports</DeptName>
      3  <EmployeeList>
      4  <Employee empno="200"><Ename>John</Ename><Salary>33333</Salary>
      5  </Employee>
      6  <Employee empno="300"><Ename>Jack</Ename><Salary>333444</Salary>
      7  </Employee>
      8  </EmployeeList>
      9  </Department>
    10  ') as xml from dual)
    11  --
    12  -- End of test data, Use query below
    13  --
    14  select x.deptno, x.deptname
    15        ,y.empno, y.ename, y.salary
    16  from t
    17      ,xmltable('/'
    18                passing t.xml
    19                columns deptno   number       path '/Department/@deptno'
    20                       ,deptname varchar2(10) path '/Department/DeptName'
    21                       ,emps     xmltype      path '/Department/EmployeeList'
    22               ) x
    23      ,xmltable('/EmployeeList/Employee'
    24                passing x.emps
    25                columns empno    number       path '/Employee/@empno'
    26                       ,ename    varchar2(10) path '/Employee/Ename'
    27                       ,salary   number       path '/Employee/Salary'
    28*              ) y
    SQL> /
        DEPTNO DEPTNAME        EMPNO ENAME          SALARY
           100 Sports            200 John            33333
           100 Sports            300 Jack           333444
    SQL>
    If the XML is a string, e.g. a CLOB, then it can easily be converted to XMLTYPE using the XMLTYPE function.
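    As a quick sketch of that last point (the table emps_xml and CLOB column doc are invented for illustration), the CLOB is simply wrapped in XMLTYPE() inside the PASSING clause:

    -- shred a CLOB column on the fly by wrapping it in XMLTYPE()
    select y.empno, y.ename, y.salary
    from   emps_xml e
          ,xmltable('/Department/EmployeeList/Employee'
                    passing xmltype(e.doc)
                    columns empno  number       path '@empno'
                           ,ename  varchar2(10) path 'Ename'
                           ,salary number       path 'Salary'
                   ) y;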

  • Acrobat XI Pro Won't Convert 206 Page Word Document into PDF

    Hi there
    As mentioned above, Acrobat XI Pro won't convert my 206-page Word document into a PDF. The Word document was originally a PDF file that I converted to Word, and it has split all the text into sections.
    It sounds like converting a PDF into Word isn't the best way to edit, re-format and then save as a PDF again.  I would love to hear your advice on this.
    Thanks very much for your help!
    Fiona

    First, before you recreate the PDF from the Word document:
    In Word: open the document.
    Next, open a new blank document.
    Switch back to the original document and click on the ¶ button.
    Scroll or go to the very end of the document.
    Click just to the right of the period in the last sentence.
    Now go to the very beginning of the document.
    Hold down Shift and click to the right of the first letter in the document.
    Now choose Copy.
    Now switch to the blank document.
    Choose Paste Special.
    Now choose Text Only.
    If it works, all the words will be there, spaced correctly but with no ¶'s.
    Now insert returns as desired.
    Now save as a .docx file under a different name.
    If you are on a Mac, use the following directions:
    Go to File menu > Print > PDF. Hold down the PDF button until a context menu pops up.
    Choose Adobe PDF.
    Follow the steps when the first window opens.
    Save the PDF in the desired location.
    Now open the PDF in Acrobat. The document should be properly formatted and ready to go.
    As you've found, the conversion is not seamless. Acrobat doesn't distinguish between automatic end-of-line breaks and returns, so you have to put the pieces back together again. I wish Adobe and MS would get over their jealousy of each other and share how their code works so that applications could work seamlessly together. But they never will.

  • How do I merge more than one PDF document into one singular pdf using adobe viewer touch on a microsoft surface?

    How do I merge more than one PDF document into one singular pdf using adobe viewer touch on a microsoft surface?

    Or using the Adobe PDF Pack https://www.acrobat.com/

  • How can I paste a pdf of a Publisher document into Pages so I can edit?

    I have been asked to print a document that was created in Publisher on a PC. I have been sent a PDF of the document, which is A4, printed both sides, with four A6 text boxes on each side, with the same artwork and text in each box. The trouble is, the layout is wrong, so when it is cut into four the margins are not even. I have Acrobat 7.0 Standard (which I know is very old) but I haven't used it much and can't work out how to edit the PDF. At the moment all I can do is copy and paste the entire document into Pages, but then I can't extract just one A6 text box from the rest. If I could do that it would be a breeze to set it up correctly in Pages so that it works when cut into four A6 pieces. When I go to Get Info it says it is not locked and I have permission to read and write.
    Can anyone explain these things to a beginner, please?
    Thanks!

    Chris
    There are several ways we can go about this.
    *1. In Adobe Illustrator* you can open the PDF file directly, chop out the bits you don't want, and resave them individually. There may be problems with fonts not being embedded, as it comes from a PC where such things are mostly verboten.
    *2. In Acrobat* you can do some editing and avoid the fonts issue. But the procedure is a bit more involved.
    If it helps, you need 2 things:
    +Menu > Tools > Advanced Editing > Select Object Tool+ and
    +Menu > Split Document… / Crop Pages…+
    Using this you can unpick the PDF file, removing the parts you don't want. It's not as simple as it sounds, because you need to understand why some objects select with others and why some are unselectable, which has to do with grouping, masking and compounding.
    *3. In Preview* cut apart the multiple pages and parts of each page. To split the multiple pages you will have to open the set and then delete the pages you don't want and save the remaining page with a new name.
    To delete a page click on it in the sidebar and hit delete.
    To crop a page, use the +Select Tool+ at the top of the window and drag around the part of the page you want, then +Menu > Tools > Crop+ and resave with a new name. Be aware this does not actually get rid of anything outside the selection; it just crops the view of it.
    *4. Which brings us to Pages*. Having divided the pdf into individual page files you can then drag them into Shapes on your page the size of the parts you want to show.
    When you click on the shape with page in it a little bar shows up underneath with +Edit Mask+. Click on the +Edit Mask+. You will see the shape, which is the mask, gets a grey dotted outline with white resizing boxes on the corners and mid points. If you need to adjust the crop or size of the shape drag on these.
    Behind the shape outline is the image with the cropped parts outside shown greyed out. The greyed out part is the hidden part. If you click outside the shape but on the greyed out part of the image you select the image itself and can pull at its corners or sides to change its shape. Or if you just click on the greyed part of the image you can move it around so that it moves relative to the shape cropping it.
    If you have several sections to be shown in each shape, you can move them one at a time to show each one. Say you have 4: 2 top and 2 bottom, option drag the shape with the picture in it to make 4 copies. Then in each one pull the image to reveal in turn top-left, top-right, bottom-left and bottom-right.
    Do this till you have all the parts separated and rearrange them where you want.
    *A bonus tip:* Cut out a step by opening Preview, next to Pages, with the multiple pages showing in the sidebar. You can drag the pages from the sidebar directly into the shape in Pages, without having to divide the .pdf.
    Any further questions just ask.
    Peter

  • Hi there, I have a Macbook Pro and I have been using Libre Office but I am not happy with it. I need a good word processing package to produce a manuscript and don't know if I have to get Word for Mac and how to put my documents into it.

    Hi there and hello everybody.
    I am writing a book and have been using LibreOffice, and would like a better word processing package. I have heard that Pages 5 is better than the latest one. I need to be able to put my LibreOffice documents into Pages. Does anyone know how to do this, please?

    magathon wrote:
    Thanks. So would you say Word is the best processing package? And how do I choose which package 'opens' them.
    This is a basic computer skill you might want to learn. You can always control which application opens any document, and the steps below apply to any Mac or Windows computer you use:
    Method 1: Drag the document icon on top of the application (Word, Pages, LibreOffice...) icon and release. The system will open the document in the application you dragged it to.
    Method 2: Go into the application (Word, Pages, LibreOffice...) and choose File/Open. Locate and select the document, and click the Open or OK button.
    Method 3: Select the document icon and choose File/Open With, then choose the application you want to open the document with.
    The only time these don't work is when the application cannot open that type of document. For example, you usually can't make a photo editor open a text file. But Word, Pages, and LibreOffice all know how to open text files, RTF files, and Word files.
    But if it is important that the document formatting is exactly preserved when exchanging with other Word users, you really should use Word. If you can't afford to pay the retail price for a copy, Microsoft now lets you use Office for $6.99 a month or $69.99 a year.

  • How Do I combine 2 InDesign Documents into 1 (in a very particular way)

    How do I combine two InDesign Documents into one in a very specific way.
    Here's my problem: I have two separate InDesign CS5 files for playing cards. File one is the front. File two is the back.
    The company that does my final print production needs the cards sent as two separate PDFs: one for the card fronts, one for the card backs. Both of these InDesign (and the PDF export) files are arranged in the same way: page 1 is card #1, page 2 is card #2, etc. Nice and simple.
    However, I use a different company for producing prototype versions of the decks (the production printer can't do this economically). My prototype producer needs me to send a single PDF where page 1 is Card #1 Back, page 2 is Card #1 Front, Page 3 is Card #2 back, Page 4 is Card #2 front, etc.
    The decks I am building usually run 56 cards, but can be as large as 180 cards.
    So, what's the easiest way to automate this process? Is there a way to combine two separate InDesign files by "interleaving" the pages from the two original documents into a new larger document (where I could do a new PDF export)?  Or is there a better way to do this in Acrobat Pro? At the moment I have to resort to dragging and dropping each card from one PDF thumbnail palette to the other. It's really time-consuming and mind-numbingly boring.  Any ideas?

    This script is actually pretty straightforward, so it's a good first challenge, I think.
    I would script this in InDesign -- probably because I'm more familiar with InDesign scripting.
    To start learning the InDesign scripting basics, open the ExtendScript Toolkit (ESTK), which is almost certainly installed on your computer already. Go to the Help menu, and you'll find an entry there called Adobe Intro to Scripting. Read that. I think that's the best place to start.
    For the script in question, you would want to:
    1. Make sure both InDesign documents are open. They would both have the same number of pages. (No scripting involved here; just make sure they're open before running the script.)
    2. Loop backwards through the collection of document pages of one of the documents. (document.pages is the collection you're looking for.)
    3. Use the move() method of a document page to move it. (myPage.move())
    4. Learn about the LocationOptions enumerator so that when you move the page you can tell InDesign if you want to move it before or after another page. (LocationOptions.AFTER or LocationOptions.BEFORE)
    5. Get a reference to the page in the second document before (or after) which you want to move the page in the first document.
    6. That's basically it.
    I suggest that as you work your way through this (if you decide to take up the challenge), you post any questions you may have on the InDesign Scripting forum here.

  • How do I bind multiple PDF Documents into one single document?

    I am trying to bind multiple PDF documents into one single document can anyone please help me?

    You would need Adobe Acrobat to do that. Reader doesn't have the ability.

  • How do I merge multiple pdf documents into 1 with my iPad ?

    How do I merge multiple PDF documents into one with my iPad? Our customer requires one document that includes four different documents from different departments. Currently, with our PC, we can merge PDF, Excel or Word files into one PDF document. How can I do the same with our iPads and iPad minis using Adobe?

    From blogs.adobe.com: We are excited to announce that Adobe CreatePDF application is now available on iOS. With this, Adobe brings rich, high-fidelity and Acrobat-like PDF creation to the iOS devices. You can now convert all your documents on iPad, iPhone & iPod touch devices to PDF for reliable, secure sharing and viewing across PCs, tablets & Smartphones.
    Robert Monsalve

  • How do I move a document into a folder in PDF reader on an iPad

    How do I move a document into a folder in PDF reader on my iPad

    When you're looking at the list of documents, touch the Edit button at the upper-right. Then select the file you want to move by touching the check box to its left, and click the Move File icon, it's the third one from the left at the top of the screen. You'll be prompted to "Move selected document" after which the "Choose a location" dialog will appear to allow you to select a destination folder.

  • How can I [cmd]+A including text boxes? Or how to import one Pages document into another Pages document?

    Hi there,
    I am working on my thesis with several Pages documents (for the first time). And I really have searched all over the Internet to find an answer to exactly my question but I cannot find it.
    First my question was: how can I import one Pages document into another one? I found the answer to that (I think), and there is no other way than to make a new section and copy/paste the content in.
    But then another issue popped up: I cannot [cmd]+A my whole document including the textboxes to paste it into the other Pages document. The document gets pasted without the textboxes (and I have quite a lot of them in that document) and the text will not be in the right place.
    So please explain to me: How I can combine two Pages documents and keep them exactly the way they are? Is that even possible?
    I thank you so much in advance!

    Hi Fruhulda,
    I have already tried to make the textboxes inline but that did not work, and besides, the text and the boxes won't be in the right places either. Thank you for your reply.
