XSLT on very large documents w/XSU

Hi:
I need to load some very large documents, and I am looking at the example in Ch. 14 (14-4?) of the Muench book. It looks fine, but my question is: if, for instance, the element names don't match the table column names and I need to use XSLT to change them, can I "stream" this? I am afraid that since XSLT wants to work on a "tree", it will want me to transform the whole 6 GB (e.g.) file before passing it to the SAX parser. Is this correct? Can I make "mini" trees inside the multitableInsertHandler and work off of those one at a time? Can I do something like "XMLSave.applyStylesheet" (whatever it is) on a document from XMLDocumentSplitter? Complex question(s), I know. I think I am close, but missing the obvious.
Thanks.
Mike

Thanks for the clarification. I am planning on having one "rowset" element and many "row" elements, each of which would have a "table" and an "operation" attribute, because I am trying to mirror another database via its log file. Each row will be an insert/update/delete for one of several tables. Do you see a problem with this approach (i.e., doing away with most of the "rowset" elements)? I don't plan on batching at this point.
On batching: if I did batch, and the transaction fails, and I get a vector back where everything is set to (-1) or whatever it does, how do I know which records in the transaction actually succeeded?
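On the batching question: this isn't XSU-specific, but at the JDBC level (which XSU sits on top of) the spec defines how a batch reports partial success, and the update-count array can be decoded as below. The helper name here is mine for illustration, not an XSU or Oracle API:

```java
import java.sql.Statement;

public class BatchCounts {
    // Interpret the array returned by Statement.executeBatch() or, after a
    // failure, by BatchUpdateException.getUpdateCounts(). Per the JDBC spec:
    //   >= 0                      -> statement succeeded, that many rows affected
    //   Statement.SUCCESS_NO_INFO -> statement succeeded, row count unknown (-2)
    //   Statement.EXECUTE_FAILED  -> statement failed (-3)
    // A driver that stops at the first failure returns a shorter array covering
    // only the statements executed before the failure.
    public static int succeeded(int[] counts) {
        int ok = 0;
        for (int c : counts) {
            if (c >= 0 || c == Statement.SUCCESS_NO_INFO) {
                ok++;
            }
        }
        return ok;
    }

    public static void main(String[] args) {
        int[] counts = {1, 1, Statement.EXECUTE_FAILED, Statement.SUCCESS_NO_INFO};
        System.out.println(succeeded(counts)); // prints 3
    }
}
```

Whether XSU surfaces the underlying BatchUpdateException to you depends on the XSU version, so treat this as the JDBC-level picture only.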
Thanks again,
Mike
quote: Originally posted by Steven Muench ([email protected]):
Yes. The multitable insert handler transforms one "subdocument" at a time using XSLT into a multitable-insert document. The result isn't processed directly by XSU, but is processed by some code in the example that iterates over the resulting <ROWSET table="xxx"> tags and "choreographs" the use of an OracleXMLSave for each target table.
Steve Muench
Lead Product Manager for BC4J and Lead XML Evangelist, Oracle Corp
Author, Building Oracle XML Applications
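A minimal sketch of the "mini tree" idea Steve describes: feed each small subdocument to a TrAX transformer on its own, so only that fragment (never the 6 GB file) ever becomes a tree. The <row>/<EMPNO> names and the inline stylesheet are placeholders for illustration, not the book's actual example:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MiniTreeTransform {
    // Placeholder stylesheet: renames <row>/<id> to a target table's
    // <ROW>/<EMPNO> column names. Your real stylesheet would differ.
    private static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='xml' omit-xml-declaration='yes'/>"
      + "<xsl:template match='row'>"
      + "<ROW><EMPNO><xsl:value-of select='id'/></EMPNO></ROW>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    // Compile the stylesheet once, reuse it for every subdocument.
    private static final Templates TEMPLATES;
    static {
        try {
            TEMPLATES = TransformerFactory.newInstance()
                    .newTemplates(new StreamSource(new StringReader(XSL)));
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Transform ONE small subdocument; only this fragment is built as a tree.
    public static String transformRow(String rowXml) throws Exception {
        StringWriter out = new StringWriter();
        TEMPLATES.newTransformer().transform(
                new StreamSource(new StringReader(rowXml)),
                new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // In the real handler you would loop: split the big file with SAX,
        // transform each fragment, hand the result to OracleXMLSave.
        System.out.println(transformRow("<row><id>7839</id></row>"));
    }
}
```

The per-fragment memory cost is bounded by the largest subdocument, which is the whole point of the multitable-insert-handler design.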

Similar Messages

  • After closing large documents (drawings) the window closes but the process still runs in the background. I open the next document, the same procedure, and after doing this several times the RAM is full and the system becomes very slow. What can I do?

    After closing large documents (drawings) the window closes but the process still runs in the background. I open the next document, the same procedure, and after doing this several times the RAM is full and the system becomes very slow. What can I do?

    You can always shut it down manually via the Task Manager
    (Ctrl+Shift+Esc)...

  • XSLT of large documents in Oracle

    Hi,
    I'm trying to run XSLT in Oracle for large documents (say 1 MB) via a stored procedure. It works for small documents, but for large ones I'm getting an error:
    : ORA-19011: Character string buffer too small
    ORA-06512: at "SYS.XMLTYPE", line 0
    ORA-06512: at line 1
    ORA-06512: at "MAPSUBMISSION_FN", line 7
    Here's the stored procedure I'm using. How should I increase the buffer?
    thanks
    CREATE OR REPLACE FUNCTION MapSubmission_FN(outputId IN NUMBER, newtaskId IN NUMBER, mapId IN NUMBER)
    RETURN NUMBER
    IS
      newoutputId NUMBER;
    BEGIN
      SELECT task_output_seq.NEXTVAL INTO newoutputId FROM dual;
      INSERT INTO task_output_tbl VALUES (newoutputId, newtaskId, 'xml',
        (SELECT xmltransform(output_tbl.xml_output_data_lob,
                             (SELECT xml_data FROM xml_tbl_wrk WHERE temp_id = mapId)).getStringVal()
         FROM task_output_tbl output_tbl WHERE task_output_id = outputId),
        current_timestamp);
      RETURN newoutputId;
    END;

    Would you try changing getStringVal() to getClobVal()? getStringVal() returns a VARCHAR2, which is limited to 4000 bytes in SQL (hence the ORA-19011), while getClobVal() returns a CLOB and has no such limit.

  • Best parser for handling very large XML documents

    Which is the best parser to read and extract information from a very large XML document?

    Any SAX parser, since DOM would use about six times as much primary memory as the file size.
    The Xerces SAX parser is, in my experience, the fastest.
    Gil
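To illustrate Gil's memory point: a SAX parser delivers the document as a stream of callbacks and builds no tree at all, so memory use stays flat regardless of file size. A minimal sketch with the JDK's bundled SAX API (Xerces exposes the same interface):

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class ElementCounter extends DefaultHandler {
    private int elements = 0;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes atts) {
        elements++; // react to events as they stream past; no tree is ever built
    }

    public static int count(String xml) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        ElementCounter handler = new ElementCounter();
        parser.parse(new InputSource(new StringReader(xml)), handler);
        return handler.elements;
    }

    public static void main(String[] args) throws Exception {
        // A tiny inline document stands in for the "very large" file.
        System.out.println(count("<a><b/><b/><c>text</c></a>")); // prints 4
    }
}
```

For a real multi-gigabyte file you would hand the parser an InputStream instead of a StringReader; the handler code is unchanged.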

  • I have Windows XP and Adobe Reader 9 and need to send a series of large documents to clients as a matter of urgency. When I convert a 10-page MS Word file to PDF, this results in a file of 6.7 MB which can't be emailed. Do I combine them and then copy

    I have Windows XP and Adobe Reader 9 and need to send a series of large documents to clients as a matter of urgency. When I convert a 10-page MS Word file to PDF, this results in a file of 6.7 MB which can't be emailed. Do I combine them and then convert to JPEG 2000, or do I have to save each page separately, which is very time-consuming? Please advise me how to reduce the size and send 10-plus pages quickly by Adobe without the huge hassles I am enduring.

    What kind of software do you use for the conversion to PDF? Adobe Reader can't create PDF files.

  • Is there a way to split a very large 1 page pdf into letter size multiple page pdf?

    I often have very large single-page PDFs that need to be printed onto letter-size paper. Usually I don't have access to the printer where I'm working, so I have to send the file to someone for printing.
    I have Acrobat XI Pro; they don't.
    I want to make sure the job is printed as I specify, and most of the users are using Reader. So I want to give them the PDF ready to print, sized in legal. This requires manipulation of the PDF that I can't seem to figure out how to do.
    In older versions of Acrobat, I could print to a new PDF and designate the page size, and Acrobat would create the multipage PDF. The newer versions don't allow this.
    With OS X 10.8 and Acrobat XI you can't save, export, or split a one-page (68" x 16") document into a multiple-page letter-size (16-page) PDF.
    Perhaps this can be done by printing to EPS and running it through Distiller again, or something else, but I'm stumped at the moment.
    Any suggestions on how to attack this would be appreciated.
    Thanks.

    That's a tough one. Acrobat is not designed for tiling PDF files to create another PDF, and that's really what you're asking. There is the option to print to a PDF and turn on the Poster feature. If you were in Windows, where there is a real Adobe PDF printer driver, you could probably use that feature. But for various reasons (too complicated to describe here), that was withdrawn on the Macintosh.
    If you have a copy of Adobe InDesign, and if you install an Adobe PDF 9 PPD file (see description below), it can be done in a somewhat awkward way. InDesign allows you to place PDF files, so you would make a page of the proper size and place your large PDF.
    Then, after installing the Adobe PDF 9 PPD file, you could choose File > Print and print a PostScript file to the Adobe PDF 9.0 PPD. In the Setup panel, you'd choose a Letter-size page, then choose the Tile option at the bottom and set the Overlap amount.
    Then you'd save the PostScript file and process it through Distiller.
    My blog post below describes how to find and install the Adobe PDF 9.0 PPD file:
    http://indesignsecrets.com/creating-postscript-files-in-snow-leopard-for-older-print-workflows.php

  • Best technology to navigate through a very large XML file in a web page

    Hi!
    I have a very large XML file that needs to be displayed in my web page, maybe as a tree structure. Visitors should be able to drill down to nodes at any depth and access the child elements or text of those nodes.
    I thought about using a DOM parser with Java but dropped that idea, as the DOM would be stored in memory and hence is space-consuming. Neither does SAX work for me, as every time there is a click on any of the nodes, my SAX parser parses the whole document for that node, which is time-consuming.
    Could anyone please tell me the best technology and best parser to use for very large XML files?

    Thank you for your suggestion. I have a question, though. If I use a relational database and try to access it for EACH and EVERY click the user makes, wouldn't that take much time to populate the page with data? Isn't an XML store more efficient here? Please reply.

    You have the choice of reading a small number of records (10 children per element?) from a database, or parsing multiple megabytes. Reading 10 records from a database should take maybe 100 milliseconds (1/10 of a second). I have written a web application that reads several hundred records and returns them with acceptable response time, and I am no expert. To parse an XML file of many megabytes... you have already tried this, so you know how long it takes, right? If you haven't tried it then you should. It's possible to waste a lot of time considering alternatives -- the term is "analysis paralysis". Speculating on how fast something might be doesn't get you very far.

  • In a very long document such as an SO or PO, how do you search for, say, an item?

    Hi Forum,
    Sometimes documents such as quotations, sales orders, and purchase orders can be very long, say over 300 items.
    How can we search for something within that document while staying in the document - for example, a specific item, or any specific field in the document?
    I thought the column title should have had a FILTER option at least, but I don't seem to find one. Is there any other way of searching for something within a large document?
    Thank you.

    Hi,
    There is no built-in function for this. You have a few options:
    1. Drag & Relate. This may not work well.
    2. A query report. Using a query is the fastest and most reliable way to reach your goal.
    Thanks,
    Gordon

  • Large documents and contents-errors in text - for thesis

    Hi - I'll try to make it short. (Quite an experienced user on Mac and Pages - but I hate Word.)
    Large document, like a master's/PhD thesis, in Pages '09 version 4.1 (923), using EndNote X4 for the bibliography - but EndNote is not the problem! I tried it without... - more than a hundred pages. (Now with Lion 10.7.2 it seems to be even worse than with Snow Leopard.)
    Content is numbered with up to four "levels" (like: 2.3.1.2 Xxxxxx). I seem to be doing a lot of things right, because the table of contents at the beginning of the document is correct with all the points (the contents alone are 4 pages long, and everything including the page numbers is correct) -
    BUT:
    In the text most sub-/headlines or new points are correct - but some are just very wrong, like:
    it should be 8.3.2.1 - instead it is 8.3.6 or even 2.1 - and every time I try to change it, it's just wrong in another way, or the same way... - but in any case (as long as I am doing it right with tabs and using the right level) the table of contents at the beginning of the document is correct.
    That is exactly why I hate Word - it did the same crap (but Word also mixed up the footnote numbering - in Pages '09 at least all the footnotes are right).
    Any idea? The next thing I will try is to copy every paragraph into a new document - but I doubt that will help. Is there anybody at Apple who could help?
    Greetings
    simplex

    I didn't answer either, because I found your writing impenetrable, and without the file to work with it's improbable we could talk you through it.
    It sounds to me like you have simply skipped a sequence somewhere and it has lost track of the numbering, or you got it to restart somewhere in an attempt to 'fix' your error.
    Peter

  • Numbers file sizes very large

    I've noticed that file sizes in Numbers are very large. I was using Excel for a long time and all the files I created were between 20-30KB. The files consisted of an Excel workbook, with 1-2 worksheets in it. The same files in Numbers are 200-300KB and if I save them over the network from another computer to my computer, they jump up to over 1MB. Any ideas on this?

    Hello
    Nothing to do with graphics items.
    An XL file, like an AppleWorks one, is a compiled document in which many components are stored in a very compact shape. One byte was sufficient in AW to represent an operand, as there were about one hundred of them.
    In Numbers, everything is described in pure text with complementary delimiters.
    When a formula uses the operand "COUNTBLANK", it appears with all 10 of its letters.
    In XL, as well as in AW6, a date is stored as a floating-point number, while in Numbers it's stored as a string like "mercredi 23 janvier 2008 22:33:19".
    Same thing for every attribute of every cell.
    So this results in a huge file stored in XML format. To spare space, when we close a document, the XML file is packed into .gz format.
    This is the Index.xml.gz file that we can see by Ctrl-clicking a Numbers document and selecting the contextual menu item "Show Package Contents".
    Double-clicking Index.xml.gz will unpack it, giving the expanded Index.xml file.
    I assume there are applications dedicated to XML files. I don't know them, so I just drag and drop the XML file onto a free text editor named Bean, which I find really interesting. Doing that, we can examine the file's contents.
    If someone knows a good free application able to open and display XML files correctly, I'm interested.
    Yvan KOENIG (from FRANCE mercredi 23 janvier 2008 22:49:23)
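To illustrate why the .gz packing Yvan describes helps so much: XML this verbose and repetitive compresses dramatically. A hypothetical demonstration (not Numbers' actual writer) using standard gzip:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class XmlGzipDemo {
    // Gzip a byte array in memory, the same compression as Index.xml.gz.
    public static byte[] gzip(byte[] data) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(data);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Simulate verbose spreadsheet XML: long names and date strings repeated.
        StringBuilder xml = new StringBuilder("<rows>");
        for (int i = 0; i < 1000; i++) {
            xml.append("<cell formula=\"COUNTBLANK\">mercredi 23 janvier 2008</cell>");
        }
        xml.append("</rows>");
        byte[] raw = xml.toString().getBytes(StandardCharsets.UTF_8);
        byte[] packed = gzip(raw);
        System.out.println(raw.length + " -> " + packed.length);
    }
}
```

Repetitive markup like this typically shrinks by well over 90%, which is why the on-disk package stays reasonable even though the unpacked XML is huge.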

  • Quickly opening documents in a large document library

    I would like to find a solution to be able to quickly open documents that are in a document library that contains subfolders.
    I have already ruled out the file system and workspace generation because they have physical limitations; my document library is very large.
    I'm sure there is a solution.
    Thanks

    Hi,
    According to your post, my understanding is that you want to open documents quickly in a large document library.
    If you install OWA, you can open the documents directly in the browser.
    If you only install Office, you can open the documents without saving them.
    However, you need to set Browser File Handling to Permissive.
    For more information, you can refer to:
    Manage Office Web Apps (Installed on SharePoint 2010 Products)
    SharePoint 2010 – How to open files that prompt for Save or Cancel
    Best Regards,
    Linda Li
    TechNet Community Support

  • Very large PDF, can't shrink it!

    I've created a 44-page InDesign document using CS4 v6.0. The pages contain a random selection of 5 vector Illustrator EPS files throughout the document. The EPS files range between 300 KB - 700 KB. When I export to create a PDF using the smallest-file-size settings, I get an 80 MB file. I've tried to optimize this in Acrobat Pro 9 but it makes no difference.
    Any suggestions would be gratefully received.
    Thanks
    Chris

    I think this question has come up before a while ago.
    So, you have several very large vector files. Let's see what Acrobat can do to shrink a file.
    1. Subset fonts. Instead of inserting an entire 50-150K per font, just insert the (comparatively few) characters that are used in the document. Net gain: I'd say 50-100K. If you use very few characters in some fonts, it just might be worth outlining them instead.
    2. Use image compression. Some bitmap images may be stored uncompressed; the compression might not be optimal; or images are stored lossless (ZIP) while they could be stored lossy (JPEG) without significant problems. Or they are stored lossy but could be stored even lossier -- decreasing the visual quality significantly, but hopefully still recognizably. Hey, it's your call.
    3. Use image downsampling. Some bitmap images may have a larger resolution than needed (>300 dpi for color and grayscale; >1200 dpi for black-and-white) or even wanted -- Distiller, for example, has a threshold for color images at 450 dpi. Anything larger is not just plain unnecessary, or even sensible overkill; it's a pure waste of perfectly good disk bytes and CPU cycles.
    For a regular file that still should print nicely, a threshold of 300 dpi is enough. For something the client must be able to print for himself, 150 dpi is enough. For a screen preview, a mere 75 dpi should do (tell the client not to rely on the screen).
    Now, where did I mention vector art coming in? I didn't. Each and every line in vector art is as significant as every other one -- short or long, thick or thin (or invisible), filled or stroked. There are no set rules on which lines can be 'simplified' or discarded. Acrobat cannot 'downsample' vector art.
    One possible solution is to rasterize your vector images, even if only for preview purposes. If the art is all full-colour images, you might even rasterize them to 300 (or 450) dpi and use that for production. The files will still be large, but now the size only depends on the physical dimensions of the images.
    Another solution is to review the artwork. Are there lots of small details that *will* be invisible in the final print? Are there lots of paths built up from lots of points that could be replaced with a single curve? Stuff like that adds up.

  • Handling very large diagrams in Pages?

    I am writing a book that sometimes requires the use of large diagrams. These are vector-based diagrams (PDF). Originally, I planned to use iBooks Author and widgets to let the user zoom/pan/scroll and use other nice interactive stuff, but after having tried everything I have decided to give up on iBooks Author and iBooks for now because of their dismal handling of images (pixels only, low resolution, limited size only, etc.).
    I am planning to move my project over to Pages. Not having the 'interactive widget' approach means I need some way to handle large images. I have been thinking about placing very large images multiple times on different pages with different masks. Any other possible trick? Can I have documents with multiple page sizes? Do I need a trick like the one above, or can an ePub book be zoomed/panned/scrolled, maybe using something other than iBooks to read it?

    Peter, that was indeed what I expected. But it turns out iBooks Author can take PDF, while iBooks cannot, and iBooks Author renders PDFs to low-resolution images (probably PNG) when compiling the .ibook from the .iba.
    Even if you use PNG in the first place, the export function of iBooks Author (either to PDF or to iBook) creates low-resolution renders.
    The iBooks format is more of a web-based format. The problem lies not in what iBooks Author can handle, but in how it compiles it to the iBooks format. It uses the same export function for PDF, making PDF export ugly and low-res as well.
    iBooks Author has more drawbacks; for instance, if you have a picture and want to change the image inside it, you can't. You have to replace the entire picture, and that process breaks all the links to the picture.
    iBooks Author / iBooks is far from mature.

  • --single very large photo montage TIF from 15 individual TIFs--

    This is a novice question; sorry if posted in the wrong area...
    Is there any easy way to take 15 horizontal 300 ppi TIF images from (mixed) 6 MP, 8 MP, and 10 MP cameras and place them in 3 columns and 5 rows, with 1" of space between all images, each sized to 8"x12", to make a single very large TIF of about 4'x5' with a 2" border? What size TIF is this likely to be?
    This crude graphic approximates what I want single giant TIF to look like:
    (o = image area, y = white space; am using Photoshop CS)
    yyyyyyyyyyyyyyyyyyyyyyy
    yyyyyyyyyyyyyyyyyyyyyyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyyyyyyyyyyyyyyyyyyyyyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyyyyyyyyyyyyyyyyyyyyyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyyyyyyyyyyyyyyyyyyyyyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyyyyyyyyyyyyyyyyyyyyyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyooooooyoooooyooooooyy
    yyyyyyyyyyyyyyyyyyyyyyy
    yyyyyyyyyyyyyyyyyyyyyyy
    Regards

    One easy way to do it is to make a new Ps document that is the final size and resolution you need, then set up a grid of guides where you need them and drag the images into the new document, sizing each layered image to its predetermined space in the grid. The Free Transform tool will snap to the guides and your sizing will be very fast and accurate. You should be able to slap this together in half an hour or so.
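As for the "what size TIF is this likely to be" part of the question, which the reply above doesn't address: assuming a flattened, uncompressed 8-bit RGB file of roughly 4 ft x 5 ft at 300 ppi, the arithmetic works out like this:

```java
public class MontageSize {
    public static void main(String[] args) {
        int ppi = 300;
        int widthIn = 4 * 12;   // 4 ft wide, in inches
        int heightIn = 5 * 12;  // 5 ft tall, in inches
        long pixels = (long) (widthIn * ppi) * (heightIn * ppi); // 14400 x 18000
        long bytes = pixels * 3; // flattened, uncompressed, 8-bit RGB
        System.out.println(bytes / (1024 * 1024) + " MB"); // prints "741 MB"
    }
}
```

So expect on the order of three quarters of a gigabyte uncompressed; LZW compression, layers, or 16-bit channels would change that figure substantially.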

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: if the image is the same and only the text is different, why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4 GHz, 1 GB RAM, 256 MB video card), I have taken a postcard PDF (the back side), placed it in a document, and then drawn a text box. Then I select a data source and put the fields I wish to print (name, address, ZIP, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview-multiple-records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created, with four postcards per page and the mailing data on each card, I go to print. When I print the file it spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes FOREVER. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because Graphics Device Interface (GDI) does not compress raster data when the GDI processes EMF spool files and generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print-processor-based features, such as the following: N-up, watermark, booklet printing, driver collation, and scale-to-fit.
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.
