Loading Matrix Bound to UDO with Large Data Set

Hello Experts,
I have been searching the forums for the best way to efficiently load a Matrix that is bound to a User Defined Object (UDO). In short, here is what I would like to do: I have a form with a matrix bound to a UDO, and this matrix takes data stored in other UDO forms/tables and processes it to extract new information.
Unfortunately, the resulting dataset is quite large (up to 1000 rows). I realize that if this were just a "report" I could easily do it with a Grid, and that if this were a Matrix bound to a User Defined Table I could bind it to a DataTable and perform the query that way. However, since this is a Matrix bound to a DBDataSource (so that SAP handles any updates/finds), I believe my only option is to use the DBDataSource.Query method and work with Conditions.
The DBDataSource.Query method has not proven effective due to the complexity of the query and the multiple tables involved. I have read on the forum that I could load the matrix by temporarily databinding it to a DataTable and then, once it is loaded, switch the databinding back to the DBDataSource. This does not work: it comes back with an error informing me (rightly so) that there are already rows in the matrix.
One final option would be to use the User Interface (UI) API to cycle through and update each cell of the matrix with the results of a recordset, but, as I said, this can be a large dataset and that could literally take hours.
In short, I would appreciate advice on the most effective options I have:
  • Is there a way to quickly load a matrix bound to a DBDataSource?
  • Is there some way to load the matrix by binding it to a DataTable and then quickly move this information over to the DBDataSource? (I already attempted this, and the method I used was as slow as updating the Matrix through the UI.)
  • Are there effective ways to use the DBDataSource.Query method that I do not know about? (I cannot find many examples of how this functionality is really used.)
  • Should I abandon the DBDataSource (though I believe this is the SAP-preferred method) and, if so, is there another technique for updating the database properly? Others have mentioned handling the updates to the database themselves, but I am not sure what this means (perhaps SQL UPDATE/INSERT?).
  • Is there a way to flush matrix information to a DBDataSource if the DBDataSource was not used in the loading and is not currently bound to the matrix?
Sorry for the number of questions, and thanks for the advice.

        Dim oForm As SAPbouiCOM.Form
        Dim creationPackage As SAPbouiCOM.FormCreationParams
        creationPackage = sbo_application.CreateObject(SAPbouiCOM.BoCreatableObjectType.cot_FormCreationParams)
        creationPackage.UniqueID = "MyFormID"
        creationPackage.FormType = "MyFormID"
        creationPackage.ObjectType = "UDO_TEST"
        creationPackage.BorderStyle = SAPbouiCOM.BoFormBorderStyle.fbs_Fixed
        oForm = sbo_application.Forms.AddEx(creationPackage)
        oForm.Visible = True
        oForm.Width = 300
        oForm.Height = 400
        Dim oItem As SAPbouiCOM.Item
        ' Add the standard OK and Cancel buttons (item UIDs "1" and "2")
        oItem = oForm.Items.Add("1", SAPbouiCOM.BoFormItemTypes.it_BUTTON)
        oItem.Top = 336
        oItem.Left = 5
        oItem = oForm.Items.Add("2", SAPbouiCOM.BoFormItemTypes.it_BUTTON)
        oItem.Top = 336
        oItem.Left = 80
        ' Now add an Edit box bound to DocEntry
        oItem = oForm.Items.Add("3", SAPbouiCOM.BoFormItemTypes.it_EDIT)
        oItem.Top = 5
        oItem.Left = 5
        oItem.Width = 100
        Dim oEditText As SAPbouiCOM.EditText = oItem.Specific
        oForm.DataSources.DataTables.Add("oMatrixDT")
        oItem = oForm.Items.Add("oMtrx1", SAPbouiCOM.BoFormItemTypes.it_MATRIX)
        oItem.Top = 20
        oItem.Left = 20
        oItem.Width = oForm.Width - 30
        oItem.Height = oForm.Height - 100
        Dim oMatrix As SAPbouiCOM.Matrix = oItem.Specific
        Dim oColumn As SAPbouiCOM.Column = oMatrix.Columns.Add("#", SAPbouiCOM.BoFormItemTypes.it_EDIT)
        oColumn.TitleObject.Caption = "#"
        oColumn = oMatrix.Columns.Add("oClmn0", SAPbouiCOM.BoFormItemTypes.it_LINKED_BUTTON)
        oColumn.TitleObject.Caption = "BP Code"
        Dim oLinkedButton As SAPbouiCOM.LinkedButton = oColumn.ExtendedObject
        oLinkedButton.LinkedObject = SAPbouiCOM.BoLinkedObject.lf_BusinessPartner
        ' Now bind Columns to UDO Objects in Add Mode
        oEditText.DataBind.SetBound(True, "@UDO_TEST", "DocEntry")
        oMatrix.Columns.Item("oClmn0").DataBind.SetBound(True, "@UDO_TEST1", "U_CARDCODE")
        oForm.DataBrowser.BrowseBy = "3"
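
For reference, here is a minimal sketch (not part of the original post) of one approach that is sometimes suggested for this situation: run the complex query through a DI API Recordset, push the results straight into the child DBDataSource, and let the matrix load from it in a single call. It assumes a connected SAPbobsCOM.Company object named oCompany, that the child table @UDO_TEST1 has the standard LineId column in addition to U_CARDCODE, and that the SELECT statement is only a placeholder for the real multi-table query.

        ' Sketch only: fill the child DBDataSource from a recordset, then load the matrix in one call.
        ' Assumes oCompany is a connected SAPbobsCOM.Company object (not shown in the snippet above).
        Dim oDBDS As SAPbouiCOM.DBDataSource = oForm.DataSources.DBDataSources.Item("@UDO_TEST1")
        Dim oRS As SAPbobsCOM.Recordset = oCompany.GetBusinessObject(SAPbobsCOM.BoObjectTypes.BoRecordset)
        oRS.DoQuery("SELECT CardCode FROM OCRD")   ' placeholder for the real multi-table query

        oForm.Freeze(True)
        Try
            oDBDS.Clear()
            Dim row As Integer = 0
            Do While Not oRS.EoF
                oDBDS.InsertRecord(row)
                oDBDS.SetValue("LineId", row, CStr(row + 1))
                oDBDS.SetValue("U_CARDCODE", row, CStr(oRS.Fields.Item("CardCode").Value))
                oRS.MoveNext()
                row += 1
            Loop
            ' Load all rows into the matrix at once instead of updating it cell by cell through the UI
            oMatrix.Clear()
            oMatrix.LoadFromDataSource()
        Finally
            oForm.Freeze(False)
        End Try

Whether the UDO's Add/Update still behaves correctly with rows injected this way (and whether a FlushToDataSource call is needed before saving) is something that would have to be verified; this only sketches the loading side.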

Similar Messages

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit version running on a Win7 64-bit OS with Intel Xeon dual quad core processor, 16 gbyte RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 and 3-gbyte range in RAM since we now have access to all of the available RAM.  But I am having major problems - sluggish (and stoppage) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    (A diagram of the enqueue step accompanied the original post.)
    I remember being told by LabVIEW R&D years ago that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D - this is what I remember, please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded (and, I think, disk access as well to obtain memory beyond 16 gbytes), I am wondering if I need a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 mbyte range with the 64-bit LabVIEW. 
    With LabVIEW 32-bit, 100 - 200 mbyte sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM. We could have used other means such as LV2 globals. But I believe that clustering the 2-D array (image) and then having a series of those clustered arrays in an array (the final structure I showed in my diagram), rather than using a 3-D array, is what allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 gbyte. I probably need to have someone examine this code while I am explaining things to them live. This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem. In some of my applications, I use the In Place Element structure for indexing data out of arrays to minimize data copies. I expect I might have to consider this strategy here as well. Just a thought.
    What I can do is send someone (in the US) a 1.3 - 2.7 gbyte set of image data via large file transfer, and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how not to make data copies. The operations that I apply to the images are irrelevant. It is the storage, movement, and extraction that are causing the problems. I can also show screen shot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would the use of this eliminate copies?   I currently have to wait for 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don

  • Working with Large data sets Waveforms

    When collecting data at a high rate (30 kHz) and for a long period (120 seconds), I'm unable to rearrange the data due to memory errors. Is there a more efficient method?
    Attachments:
    Convert2Dto1D.vi ‏36 KB

    Some suggestions:
    • Preallocate your final data before you start your calculations. The Build Array you have in your loop will tend to fragment memory, giving you issues.
    • Use the In Place Element structure to get data to/from your waveforms. You can use it to get single waveforms from your 2D array and Y data from a waveform.
    • Do not use Transpose with autoindexing; it adds a copy of the data.
    • Use the Array palette functions (e.g. Reshape Array) to change sizes of current data in place (if possible).
    • You may want to read Managing Large Data Sets in LabVIEW.
    Your initial post is missing some information.  How many channels are you acquiring and what is the bit depth of each channel?  30kHz is a relatively slow acquisition rate for a single channel (NI sells instruments which acquire at 2GHz).  120s of data from said single channel is modestly large, but not huge.  If you have 100 channels, things change.  If you are acquiring them at 32-bit resolution, things change (although not as much).  Please post these parameters and we can help more.

  • Just in case anyone needs an Observable Collection that deals with large data sets and supports FULL EDITING...

    the VirtualizingObservableCollection does the following:
    Implements the same interfaces and methods as ObservableCollection<T> so you can use it anywhere you’d use an ObservableCollection<T> – no need to change any of your existing controls.
    Supports true multi-user read/write without resets (maximizing performance for large-scale concurrency scenarios).
    Manages memory on its own so it never runs out of memory, no matter how large the data set is (especially important for mobile devices).
    Natively works asynchronously – great for slow network connections and occasionally-connected models.
    Works great out of the box, but is flexible and extendable enough to customize for your needs.
    Has a data access performance curve so good it’s just as fast as the regular ObservableCollection – the cost of using it is negligible.
    Works in any .NET project because it’s implemented in a Portable Code Library (PCL).
    The latest package can be found on NuGet: Install-Package VirtualizingObservableCollection. The source is on GitHub.

    Good job, thank you for sharing

  • Running out of memory while using cursored stream with large data

    We are following the suggestions/recommendations for the cursored stream:
    CursoredStream cursor = null;
    try {
        Session session = getTransaction();
        int batchSize = 50;
        ReadAllQuery raq = getQuery();
        raq.useCursoredStream(batchSize, batchSize);
        int num = 0;
        ArrayList<Request> limitRequests = null;
        int totalLimitRequest = 0;
        cursor = (CursoredStream) session.executeQuery(raq);
        while (!cursor.atEnd()) {
            Request request = (Request) cursor.read();
            if (num == 0) {
                limitRequests = new ArrayList<Request>(batchSize);
            }
            limitRequests.add(request);
            totalLimitRequest++;
            num++;
            if (num >= batchSize) {
                log.warn("Migrating batch of " + batchSize + " Requests.");
                updateLimitRequestFillPriceForBatch(limitRequests);
                num = 0;
                cursor.releasePrevious();
            }
        }
        if (num > 0) {
            updateLimitRequestFillPriceForBatch(limitRequests);
        }
    } finally {
        // make sure the cursor is always released
        if (cursor != null) {
            cursor.close();
        }
    }
    We are committing every 50 records in the unit of work. If we set dontMaintainCache on the ReadAllQuery, we get PrimaryKeyExceptions intermittently, and we do not see much difference in the IdentityMap size.
    Any suggestions/ideas for dealing with large data sets? Thanks

    Hi,
    If I use read-only classes with CursoredStream and execute the query within UOW, should I be saving any memory?
    I had to use UOW because when I use Session to execute the query I get
    6115: ISOLATED_QUERY_EXECUTED_ON_SERVER_SESSION
    Cause: An isolated query was executed on a server session: queries on isolated classes, or queries set to use exclusive connections, must not be executed on a ServerSession or in CMP outside of a transaction.
    I assume marking the descriptor as read-only will avoid registering in UOW, but I want to make sure that this is the case while using CursoredStream.
    We are running in OC4J(OAS10.1.3.4) with BeanManagedTransaction.
    Please suggest.
    Thanks
    -Raam

  • Large data sets and key terms

    Hello, I'm looking for some guidance on how BI can help me. I am a business analyst in a health solutions firm, but not proficient in SQL. However, I have to work with large data sets that just exceed the capabilities of Excel.
    Basically, I'm having to use Excel to manually search for key terms and apply values to those results. For instance, I have a medical claims file with Provider Names, Tax ID, Charges, etc. It's 300,000 records long and 15-25 columns wide. I need to search for key terms in the provider name like Ambulance, Fire Dept, Rescue, EMT, EMS, etc. - anything that resembles an ambulance service. I also need to include abbreviations such as AMB and FD, or variations like EMT, E M T, EMS, E M S, etc. Each time I do a search, I have to filter and apply an "N/A" flag.
    That's just one key term. I also have things like Dentists or DDS, Vision, Optometry, and a dozen other Provider Types that need to be flagged as "N/A".
    Is this something that can be handled using BI? I have access to a BI group, but I need to understand more about the capabilities of what can be done. As an analyst, I'm having to deal with poor data integrity, so just cleaning up the file can be extremely taxing and cumbersome.
    Some insight would be very helpful. Thanks.

    I am not sure if you are looking for an explanation of different BI products; if so, this forum may not be the place to get a straight answer.
    But the Information Discovery product suite might be useful in your case. Regarding the "large data set" you mentioned, searching and analyzing 300,000 records may not be considered a large data set, at least by Endeca standards :).
    All your other requirements could also be implemented very easily using Endeca's product suite. Please reach out to Oracle's Endeca product team and they can guide you on how this product suite would help you.

  • Need to load large data set from Oracle table onto desktop using ODBC

    I don't have TOAD or any other tool for querying the database. I'm wondering how I can load a large data set from an Oracle table onto my desktop using Excel, Access, or some other tool, with or without ODBC if that's possible. I need the results to be in a .csv file or something similar. Speed is what is important here. I'm looking to load more than 1 million but fewer than 10 million records at once. Thanks.

    Use Oracle's free SQL Developer:
    http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
    You can just issue a query like this
    SELECT /*csv*/ * FROM SCOTT.EMP
    Then just save the results to a file
    See this article by Jeff Smith for other options
    http://www.thatjeffsmith.com/archive/2012/05/formatting-query-results-to-csv-in-oracle-sql-developer/

  • XML Solutions for Large Data Sets

    Hi,
    I'm working with a large data set (9 million records comprising 36 gigabytes) and am exploring the use of XML with it.
    I've experimented with a JDBC app (taken straight from Steve Muench's excellent <i>Oracle_XML_Applications</i>) for writing to CLOBS, but achieve throughputs of much less than 40k/s (the minimum speed required to process the data in < 10 days).
    What kind of throughputs are possible loading XML records from CLOBs into multiple tables (using server-side Java apps)?
    Could anyone comment whether XML is a feasible possibility for this size data set?
    Regards,
    Mike

    Just would like to identify myself (I'm the submitter):
    Michael Driscoll <[email protected]>.

  • How to handle large data sets?

    Hello All,
    I am working on an editable form document. It uses a flowing subform with a table. The table may contain up to 50k rows, and the generated PDF may take up to 2-4 GB of memory; in some cases Adobe Reader fails and "gives up" opening these large data sets.
    Any suggestions? 

    On 25.04.2012 01:10, Alan McMorran wrote:
    > How large are you talking about? I've found QVTo scales pretty well as
    > the dataset size increases but we're using at most maybe 3-4 million
    > objects as the input and maybe 1-2 million on the output. They can be
    > pretty complex models though so we're seeing 8GB heap spaces in some
    > cases to accomodate the full transformation process.
    Ok, that is good to know. We will be working in roughly the same order
    of magnitude. The final application will run on a well equipped server,
    unfortunately my development machine is not as powerful so I can't
    really test that.
    > The big challenges we've had to overcome is that our model is
    > essentially flat with no containment in it so there are parts of the
    We have a very hierarchical model. I still wonder to what extent EMF and
    QVTo at least try to let go of objects which are not needed anymore and
    allow them to be garbage collected?
    > Is the GC overhead limit not tied to the heap space limits of the JVM?
    Apparently not, quoting
    http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html:
    "The concurrent collector will throw an OutOfMemoryError if too much
    time is being spent in garbage collection: if more than 98% of the total
    time is spent in garbage collection and less than 2% of the heap is
    recovered, an OutOfMemoryError will be thrown. This feature is designed
    to prevent applications from running for an extended period of time
    while making little or no progress because the heap is too small. If
    necessary, this feature can be disabled by adding the option
    -XX:-UseGCOverheadLimit to the command line."
    I will experiment a little bit with different GC's, namely the parallel GC.
    Regards
    Marius

  • How do I add an image path with a Spry data set

    Hi,
    how do I add an image path with a Spry data set? I made an XML file and then created a data set in HTML, but the image won't load.
    this is my XML
    <?xml version="1.0" encoding="UTF-8"?>
    <banner width = "185" height = "400">
        <item>
            <image scr = "nui-panforte-recipe_01.jpg" ></image>
            <description>CHOC-COCONUT PANFORTE</description>      
            <text1>Try this delicious GLUTEN FREE Christmas treat</text1>
            <text2>CHOC-COCONUT PANFORTE</text2>
        </item>
    </banner>
    this is my HTML
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xmlns:spry="http://ns.adobe.com/spry">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Untitled Document</title>
    <script src="../../SpryAssets/xpath.js" type="text/javascript"></script>
    <script src="../../SpryAssets/SpryData.js" type="text/javascript"></script>
    <script type="text/javascript">
    <!--
    var ds1 = new Spry.Data.XMLDataSet("recipe_banner.xml", "banner/item");
    //-->
    </script>
    </head>
    <body>
    <div spry:region="ds1">
      <table>
        <tr spry:repeat="ds1">
          <td>{image}</td>
          <td>{description}</td>
          <td>{text1}</td>
          <td>{text2}</td>
          <td>{text3}</td>
          <td>{text4}</td>
          <td>{link}</td>
          <td>{url}</td>
          <td>{target}</td>
        </tr>
      </table>
    </div>
    </body>
    </html>

    It would be helpful if you actually created an <img> tag to start with:
    <img src="{image/@src}" />
    would work.

  • Target Spry RowID on page with Multiple data sets from another page

    Hi all,
    I am trying to target a specific data item, on a page with multiple data sets, from a link on another page. (I also have to pass the link through Flash, but let's start with the simple part...)
    You can take a look at the site in progress here:
    http://www.3andband.com/TestSite/iframeTest3.html
    From the Home page I want to link to specific news or concert items on the News page.
    I have been trying to get SpryURLUtils to do it, but I can't seem to get it working.
    Any help would be greatly appreciated.
    Thanks!
    Ben

    Did you try whether it even passes the row value, with a simple alert? alert(params.row)
    Also, maybe you need to reorder the scripts to this:
    <script src="../SpryAssets/SpryURLUtils.js" type="text/javascript"></script>
    <script src="../SpryAssets/xpath.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryData.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryCollapsiblePanel.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryEffects.js" type="text/javascript"></script>
    <script src="../SpryAssets/SpryAccordion.js" type="text/javascript"></script>
    and your js script:
    var params = Spry.Utils.getLocationParamsAsObject();
    var dsConcerts = new Spry.Data.XMLDataSet("includes/concerts.xml", "Concerts/concert");
    dsConcerts.setColumnType("image", "image");
    var dsNews = new Spry.Data.XMLDataSet("includes/news.xml", "News/item");
    // Set an observer so that when the data is loaded, we update the current row to the url param value
    dsNews.addObserver({ onPostLoad: function(ds, type) {
        dsNews.setCurrentRow(params.row);
    } });
    function MM_effectBlind(targetElement, duration, from, to, toggle)
    {
        Spry.Effect.DoBlind(targetElement, {duration: duration, from: from, to: to, toggle: toggle});
    }
    so that the URL params get loaded before the data.

  • Problem with a data set: DIAdem crashes

    Hi,
    I've got a problem with a data set. When I want to zoom in DIAdem-View, DIAdem crashes with the following message (translated from German ;-):
    error type: FLOAT INEXACT RESULT or FLOAT INVALID OPERATION or FLOAT STACK CHECK
    error address: 00016CB8
    module name: gfsview.DLL
    I've got some similar data sets that don't show such problems. I also scanned the data a bit, but in the 59,000 points I didn't see anything special. I did try to delete the "NOVALUE"s as well, but after that there are still "NOVALUE"s.
    Does anyone have an idea what to look for?
    Thanks,
    Carsten

    Carsten,
    Could you please upload you Citadel database to the following FTP site:
    ftp.ni.com/incoming
    If you want to compress (ZIP) and/or put a password on the data, that's fine. Please send me a private email at [email protected] (with the file name and password if you put one on the file) once you have uploaded the file and I will check it out.
    Otmar
    Otmar D. Foehner
    Business Development Manager
    DIAdem and Test Data Management
    National Instruments
    Austin, TX - USA
    "For an optimist the glass is half full, for a pessimist it's half empty, and for an engineer is twice bigger than necessary."

  • Problem with large data report

    I tried to run a template I got from Release 12 using data from the release we are using (11i). The XML file is about 13,500 KB. When I run it from my desktop, I get the following error (mostly no output is generated; sometimes it is generated after a long time):
    Font Dir: C:\Program Files\Oracle\BI Publisher\BI Publisher Desktop\Template Builder for Word\fonts
    Run XDO Start
    RTFProcessor setLocale: en-us
    FOProcessor setData: C:\Documents and Settings\skiran\Desktop\working\2648119.xml
    FOProcessor setLocale: en-us
    I assumed there may be compatibility issues between 12i and 11i, so I tried to write my own template and ran into the same issue when I added the third nested loop.
    I also noticed javaws.exe runs in the background, hogging a lot of memory. I am using BI Publisher version 5.6.3.
    I tried to run the template through template viewer. The process never completes.
    The log file is
    [010109_121009828][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setData(InputStream) is called.
    [010109_121014796][][STATEMENT] Logger.init(): *** DEBUG MODE IS OFF. ***
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setTemplate(InputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutput(OutputStream)is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setOutputFormat(byte)is called with ID=1.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.setLocale is called with 'en-US'.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.process() is called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] FOProcessor.generate() called.
    [010109_121014796][oracle.apps.xdo.template.FOProcessor][STATEMENT] createFO(Object, Object) is called.
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] oracle.xdo Developers Kit 10.1.0.5.0 - Production
    [010109_121318828][oracle.apps.xdo.common.xml.XSLT10gR1][STATEMENT] Scalable Feature Disabled
    End of Process.
    Time: 436.906 sec.
    FO Formatting failed.
    I can't seem to figure out whether this is a looping issue, a large data issue, or a BI version issue. Please advise.
    Thank you

    The report will probably fail in a production environment if you don't have enough heap. 13 MB is a big XML file for the parsers to handle; it will probably crush the OPP (Output Post Processor). The whole document has to be loaded into memory, and preserving the relationships in the document is probably what's killing your performance. The OPP/FOProcessor does not use the SAX parser the way the bursting engine does. I would suggest setting a maximum range on the number of documents that can be created and submitting in a set of batches. That will reduce your XML file size, and performance will increase.
    An alternative to the previous approach would be to write a concurrent program that merges the PDFs using the document merger API. This would allow you to burst the document into a temp directory and then re-assemble it into one document. One disadvantage of this approach is that the PDF is going to be huge. Also, if you have to send it to the printer, you're going to have problems too: when you convert the PDF to PS, the files are going to be massive because of the loss of compression, and it gets even worse if the PDF has images. Then you'll have more problems with disk space on the server and/or running out of memory on PS printers.
    I have done all of the things discussed here in some fashion. Speaking from experience, a 13 MB XML file is just a really bad idea. I would go with option one.
    Ike Wiggins
    http://bipublisher.blogspot.com

  • Conditional format with large data fails and show error as "Selection is too large" in Excel 2007

    I am facing an issue with the Paste Special operation using conditional formats for large data in Excel 2007.
    I have uploaded a file at below given location. 
    http://sdrv.ms/1fYC9qE
    The file contains two sheets: sheet "Data" contains the data to which the formats are to be applied, and sheet "FormatTables" contains the format tables which contain conditional formatting.
    There are two tables in the "FormatTables" sheet. Both have some conditional formats applied to them.
    Case 1: 
    1. Select the table range of Table1 i.e $A$2:$AV$2
    2. Copy it
    3. Goto Sheet "Data" 
    4. Select data area i.e $A$1:$AV$20664
    5. Perform a paste special operation on full range and select "Formats" option while performing paste special.
    Result:
    It throws error as "Selection is too large"
    Case 2:
    1. Select the table range of Table2 i.e $A$5:$AV$5
    2. Copy it
    3. Goto Sheet "Data" 
    4. Select data area i.e $A$1:$AV$20664
    5. Perform a paste special operation on full range and select "Formats" option while performing paste special.
    Result:
    Formats get applied successfully.
    Both are the same format tables, with the same number of columns, applied to the same data range ($A$1:$AV$20664), yet one case works and the other fails.
    The only difference is that Table1 has an appliesTo range ($A$2:$T$2) that covers only part of its total table range ($A$2:$AV$2), whereas Table2's appliesTo range ($A$5:$AV$5) is the same as its total table range ($A$5:$AV$5).
    NOTE : This issue is only in Excel 2007

    Excel 2007 does not support taking formatting from another table for more than 16,000 rows. If you want to apply it to more rows than that, you have to insert one more row in your format table so that it has three rows, like A1:AV3, then try to copy that formatting and apply it.
    Solution Case 1: 
    1. Select the table range of Table1 (i.e. $A$2:$AV$2) and drag it down one row
    2. Select the table range of Table1 i.e $A$2:$AV$3
    3. Copy it
    4. Goto Sheet "Data" 
    5. Select data area i.e $A$1:$AV$20664
    6. Perform a paste special operation on full range and select "Formats" option while performing paste special

  • How to improve performance(insert,delete and search) of table with large data.

    Hi,
    I have a table which is used for maintaining history; it holds a large amount of data that keeps increasing or decreasing based on the business rules.
    I am getting performance issues with this table when searching for records or inserting new data into it. I have already added an index to this table, but I am still facing a lot of performance issues.
    Also, we insert data into this table in bulk.
    Is there any solution to achieve this? Any suggestions are greatly appreciated.
    Thanks in Advance!

    Please do not duplicate your posts across forums.  It's considered bad practice and rude, as people will not know what answers you've already received and may end up duplicating the effort.
    Locking this thread - answer on other thread please
