SQL Adapter Crashes with large XML set returned by SQL stored procedure

Hello everyone. I'm running BizTalk Server 2009 32 bit on Windows Server 2008 R2 with 8 GB of memory.
I have a Receive Port with the Transport Type being SQL and the Receive Pipeline being XML Receive.
I have a Send Port which processes the XML from this Receive Port and creates a HIPAA 834 file.
Once a large file has been created successfully (approximately 1.6 GB in XML format, 32 MB in EDI form), a second file of about 1.7 GB fails to create.
I get the following error in the Event Viewer:
Event Type: Warning
Event Source: BizTalk Server 2009
Event Category: (1)
Event ID: 5740
Date:  10/28/2014
Time:  7:15:31 PM
User:  N/A
The adapter "SQL" raised an error message. Details "HRESULT="0x80004005" Description="Unspecified error"
Is there a way to change some BizTalk server settings to help in the processing of this large XML set without the SQL adapter crashing?
Paul

Could you run a SQL Profiler trace to determine whether you are facing a deadlock?
Is your adapter running under 64-bit?
Have you considered the possibility of using the SqlBulkInsert adapter?
http://blogs.objectsharp.com/post/2005/10/23/Processing-a-Large-Flat-File-Message-with-BizTalk-and-the-SqlBulkInsert-Adapter.aspx

Similar Messages

  • How to handle large result set of a SQL query

    Hi,
    I have a question about how to handle a large result set of a SQL query.
    My query returns more than a million records. However, the Query Template has a "row count" parameter. If I don't specify it, by default it returns only 100 records in the query result. If I specify it, then the result is limited to that specific number.
    Is there any way to get around this row count issue? I don't want any restriction on the number of records returned by a query.
    Thanks a lot!

    No human can manage that much data...in a grid, a chart, or a direct-connected link to the brain. 
    What you want to implement (much like other customers with similar requirements) is a drill-in and filtering model that helps the user identify and zoom in on data of relevance, not forcing them to scroll through thousands or millions of records.
    You can also use a time-based paging model so that you only deal with a time "slice" at one request (e.g. an hour, day, etc...) and provide a scrolling window.  This is commonly how large datasets are also dealt with in applications.
    I would suggest describing your application in more detail, and we can offer design recommendations and ideas.
    - Rick
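
    A sketch of the time-slice paging model Rick describes, in JDBC; the table, columns, and slice boundaries are illustrative, not from the original thread:

    import java.sql.*;

    public class TimeSlicePager {
        // Fetch one time "slice" per request instead of the whole result set,
        // so the client only ever holds a bounded window of rows.
        public static void fetchSlice(Connection con, Timestamp from, Timestamp to)
                throws SQLException {
            String sql = "SELECT event_time, payload FROM events " +
                         "WHERE event_time >= ? AND event_time < ? ORDER BY event_time";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setTimestamp(1, from);
                ps.setTimestamp(2, to);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // render or aggregate this hour/day slice, then let the user scroll
                        System.out.println(rs.getTimestamp(1) + " " + rs.getString(2));
                    }
                }
            }
        }
    }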

  • Asc 2.0 crash with large embedded video file

    Hello,
    ASC 2.0 crashes with large embedded video files:
    package {
      import flash.display.Sprite;
      public class Main extends Sprite {
        [Embed(source="video/myVideo.mp4", mimeType="application/octet-stream")]
        public var video:Class;
        public function Main() {
        }
      }
    }
    On Windows with a small video it compiles fine; with a large video (525 MB) the compiler crashes:
    internal error: java.lang.NullPointerException
            at com.adobe.flash.swf.SWF.addFrame(SWF.java:91)
            at com.adobe.flash.compiler.internal.targets.SWFTarget$FramesInformation.createFrames(SWFTarget.java:876)
            at com.adobe.flash.compiler.internal.targets.AppSWFTarget$AppFramesInformation.createFrames(AppSWFTarget.java:386)
            at com.adobe.flash.compiler.internal.targets.SWFTarget.build(SWFTarget.java:243)
            at com.adobe.flash.compiler.clients.MXMLC.buildSWFModel(MXMLC.java:674)
            at com.adobe.flash.compiler.clients.MXMLC.buildArtifact(MXMLC.java:660)
            at com.adobe.flash.compiler.clients.MXMLC.compile(MXMLC.java:541)
            at com.adobe.flash.compiler.clients.MXMLC.mainNoExit(MXMLC.java:230)
            at com.adobe.flash.compiler.clients.MXMLC.mainNoExit(MXMLC.java:184)
            at com.adobe.flash.compiler.clients.MXMLC.staticMainNoExit(MXMLC.java:156)
            at com.adobe.flash.compiler.clients.MXMLC.main(MXMLC.java:143)
    Thanks

    java.lang.NullPointerException
    I hit this error every time I compile a "BIG" project (with 1 GB+ of image resources). The same thing happens if you import 500+ images into Flash CS... you know, its prefix is "Adobe".
    I guess it's because somebody doesn't like "BIG" things.
    So, on the 32-bit Windows platform I upgraded my JVM to 1.7 and modified the -Xss value. That helps, but doesn't resolve it 100%.
    Finally, I upgraded to a 64-bit platform and this problem seems to be gone.
    I think it's because of the poor RAM management of the JVM: on a 32-bit Windows machine it can only manage about 1024 MB of RAM (such a stupid VM). On a 64-bit platform the JVM seems to behave normally.
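
    If you are stuck on 32-bit for a while, the usual knob is the compiler JVM's settings in the SDK's bin/jvm.config. A sketch only: the file location and safe values depend on your SDK version and machine:

    # bin/jvm.config (location assumed; adjust for your SDK install)
    java.args=-Xms512m -Xmx1024m -Xss2m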

  • OCI8: returning cursors from stored procedures

    The short version of my question is:
    In OCI8, how do I open a cursor from a database stored procedure, return it to my C++ program, and fetch from it, given that in OCI8 cursors and the cursor functions are being obsoleted?
    The long version of the same question is:
    I am converting my C++ code from the Oracle 7.3 OCI driver to the Oracle8 OCI driver. One thing I did very frequently in Oracle 7.3 OCI code is open a multi-row select cursor within a stored procedure and return that cursor to my program. In the program, I would then do the fetching with the returned cursor. This was very useful, as it allows me to change the queries in the stored procedure (for example, to append information to certain columns or make some data in all uppercase) without recompiling the application due to a changed SQL string.
    My 7.3 pseudocode is as follows:
    stored procedure def:
    TYPE refCurTyp IS REF CURSOR;
    FUNCTION LoadEmployeeData RETURN refCurTyp;
    stored procedure body:
    FUNCTION LoadEmployeeData RETURN refCurTyp IS
      aCur refCurTyp;
    BEGIN
      OPEN aCur FOR
        SELECT emp_id, emp_name
        FROM employee_table
        ORDER BY emp_name;
      RETURN aCur;
    END;
    OCI code: // all functions are simplified, not actual parameter listing
    // declare main cursor variable #1 and return cursor variable #2
    Cda_Def m_CDAstmt, m_CDAfunction;
    // open both cursors
    oopen(m_CDAstmt, ...);
    oopen(m_CDAfunction, ...);
    // bind cursor variable to cursor #2
    oparse(&m_CDAstmt, "BEGIN :CUR := MYPACKAGE.LoadEmployeeData; END;");
    obindps(&m_CDAstmt, SQLT_CUR, ":CUR", &m_CDAfunction);
    // run cursor #1
    oexn(&m_CDAstmt);
    // bind variables from cursor #2, and fetch
    odefineps(&m_CDAfunction, 1, SQLT_INT, &m_iEmpId);
    odefineps(&m_CDAfunction, 2, SQLT_CHAR, &m_pEmpName);
    while (!ofen(&m_CDAfunction))
    // loop: do something with fetch
    // values placed in m_iEmpID and m_pEmpName
    This works perfectly, and has really helped to make my code more maintainable. Problem is, in Oracle 8 OCI both cursors and the cursor functions (such as oopen()) are becoming obsoleted. Now it uses statement and environment handles. I know I can still use Cda_Def and cursors--for a while--within OCI8, but I need to know the official up-to-date method of returning a cursor from the database and fetching within my C++ code. Any code fragment, or explanation of what I need to do in OCI8 would be appreciated (perhaps I need to bind to a statement handle instead? But the stored procedure still returns a cursor.)
    The Oracle8 OCI has a new SQLT_ type, SQLT_RSET, which the header file defines as "result set type". Unfortunately, it's almost completely undocumented in the official documentation. Am I supposed to use this instead of the obsolete SQLT_CUR?
    Thanks,
    Glen Mazza

    Email me directly and I will get you some code that might help. I fail to see the relevance of posting this type of information in the JDeveloper forum.
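
    For comparison, the same "function returning a REF CURSOR" pattern in Oracle's JDBC driver binds the result as a cursor and walks it as a ResultSet. This is a JDBC sketch of the idea, not the OCI8 answer the poster asked for (in OCI8 the analogous step is binding the output to a statement handle using SQLT_RSET); the connect string and credentials are placeholders:

    import java.sql.*;
    import oracle.jdbc.OracleTypes;

    public class RefCursorDemo {
        public static void main(String[] args) throws SQLException {
            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger"); // placeholders
            try (CallableStatement cs = con.prepareCall("{ ? = call MYPACKAGE.LoadEmployeeData }")) {
                cs.registerOutParameter(1, OracleTypes.CURSOR); // bind the REF CURSOR result
                cs.execute();
                try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                    while (rs.next()) { // same fetch loop as the OCI version
                        int empId = rs.getInt(1);
                        String empName = rs.getString(2);
                        System.out.println(empId + " " + empName);
                    }
                }
            }
            con.close();
        }
    }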

  • Applying sort to a query returned by a stored procedure

    I am looking for some advice on the best approach for applying a dynamic sort to a query returned by a stored procedure.
    We have a stored procedure that has 3 inputs fields which are used to specify sort columns and it has an additional 3 fields to indicate if the corresponding input column is to be sorted in ascending or descending order. We presently accomplish this by using dynamic SQL in the procedure but this approach has some drawbacks. Ideally we would like these queries to compile just like any other cursor. We have tried using decodes but this does not seem practical or easy to maintain.
    This procedure is used by a web application that allows the user to click on a column header to specify their sort preference. The previous sort selection becomes the second sort field and the one before that the third.
    Your advice is much appreciated!

    I see, so you want to be able to sort by "name desc, age asc, salary asc", for example.
    there is no built in option. it's either dynamic sql, or decodes.
    the decodes could still work, with only 6 lines, but the problem is handling mixed data types. I'll stick with sort col of 1=name (char), 2=number_col, 3=date_col
    order by
      decode(sort_order1, 'A', decode (sort_col1, 1,name_col, 2,to_char(number_col,'9999999999.00000'), 3,to_char(date_col,'yyyymmddhh24miss'), 'X'),'X'),
      decode(sort_order1, 'D', decode (sort_col1, 1,name_col, 2,to_char(number_col,'9999999999.00000'), 3,to_char(date_col,'yyyymmddhh24miss'), 'X'),'X') DESC,
      decode(sort_order2, 'A', decode (sort_col2, 1,name_col, 2,to_char(number_col,'9999999999.00000'), 3,to_char(date_col,'yyyymmddhh24miss'), 'X'),'X'),
      decode(sort_order2, 'D', decode (sort_col2, 1,name_col, 2,to_char(number_col,'9999999999.00000'), 3,to_char(date_col,'yyyymmddhh24miss'), 'X'),'X') DESC,
      decode(sort_order3, 'A', decode (sort_col3, 1,name_col, 2,to_char(number_col,'9999999999.00000'), 3,to_char(date_col,'yyyymmddhh24miss'), 'X'),'X'),
      decode(sort_order3, 'D', decode (sort_col3, 1,name_col, 2,to_char(number_col,'9999999999.00000'), 3,to_char(date_col,'yyyymmddhh24miss'), 'X'),'X') DESC
    Message was edited by: shoblock - forgot to make 3 asc/desc variables

  • Updatable ADO recordset returned by a stored procedure

    I am trying to have an updatable ADO recordset returned by a stored procedure.
    However, LockType for this recordset is always adLockReadOnly.
    What needs to be done to have LockType changed?
    I am using the following simplified example from Oracle doc. According to Oracle® Provider for OLE DB Developer's Guide 10g Release 2,
    the following ADO code sample sets the Updatability property on a command object to allow insert, delete, and update operations on the rowset object.
    Dim Cmd As New ADODB.Command
    Dim Rst As New ADODB.Recordset
    Dim Con As New ADODB.Connection
    Cmd.ActiveConnection = Con
    Cmd.CommandText = "SELECT * FROM emp"
    Cmd.CommandType = adCmdText
    Cmd.Properties("IRowsetChange") = TRUE
    Cmd.Properties("Updatability") = 7
    ' creates an updatable rowset
    Set Rst = Cmd.Execute
    However, the result is not updatable. Can you please advise?

    Returning a REF CURSOR is certainly the easiest of the options, particularly if you're trying to use ADO. Without doing something really klunky, all your options are going to result in read-only result sets.
    Assuming you have a procedure-based interface to your data, the easiest option is generally to do your own updates by explicitly calling the appropriate stored procedures.
    As a bit of an aside, in order to offer updatable result sets, the ODBC/ OLE DB/ etc provider generally has to do something along the lines of
    1) Take the SQL statement you pass in
    2) Modify it to select the ROWID in addition to the other columns you're selecting
    3) Store the ROWID internally and use that as a key to figure out which row to update
    Once you eliminate the ability of the client to manipulate the query, you've pretty well eliminated the ability of the driver to implement generic APIs for updates. The client at that point has no idea which row(s) in which table(s) a particular value is coming from, so it has no idea how to do an update. You generally have to provide that knowledge by coding explicit updates.
    Justin
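
    A sketch of the "explicit updates through stored procedures" approach Justin describes, shown in JDBC for brevity; the procedure name and signature are hypothetical:

    import java.sql.*;

    public class ExplicitUpdate {
        // Update through a procedure-based interface instead of relying on an
        // updatable result set. update_emp_salary(p_empno IN NUMBER, p_sal IN NUMBER)
        // is a hypothetical procedure.
        public static void updateSalary(Connection con, int empNo, double newSal)
                throws SQLException {
            try (CallableStatement cs = con.prepareCall("{ call update_emp_salary(?, ?) }")) {
                cs.setInt(1, empNo);
                cs.setDouble(2, newSal);
                cs.execute();
            }
        }
    }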

  • In Production Order, the Remarks field is set as mandatory using a stored procedure - how to remove it?

    In Production Order, the Remarks field has been set as mandatory using a stored procedure. How can I remove this?

    Hi,
    Please try to simplify the subject of your posting. Your subject and the body of the discussion do not need to be identical.
    Yes, it is possible to remove it in SQL Server Management Studio, provided you have authorization to access it.
    Thanks & Regards,
    Nagarajan

  • Accessing XML API's from Java Stored Procedures in DB

    I am working in an environment that does not contain any Oracle applications and we have been looking at XML publisher as a stand alone service. I have successfully configured the UI and created some command line java programs to produce documents and deliver these documents.
    How do I install (do I need to install?) the XML Publisher Java libraries in the database in order to access the XML Publisher APIs from stored procedures? Any clues or help would be gratefully appreciated.
    George

    Hello Chris,
    I have been able to create a PDF from the database. I loaded the following jar files and removed any java class that could not compile.
    activation.jar, axis-ant.jar, axis.jar, axis-schema.jar, bicmn.jar, bipres.jar, collections.jar,
    commons-beanutils.jar, commons-collections-3.1.jar, commons-collections.jar, commons-dbcp-1.1.jar commons-digester.jar, commons-discovery.jar, commons-el.jar, commons-fileupload.jar, commons-logging-api.jar commons-logging.jar, commons-pool-1.1.jar, http_client.jar, i18nAPI_v3.jar, javamail.jar, jaxrpc.jar,
    jewt4.jar, jsp-el-api.jar, log4j-1.2.8.jar, logkit-1.2.jar, ojpse.jar, oracle-el.jar, oraclepki.jar,
    orai18n.jar, quartz-1.5.1.jar, quartz-oracle-1.5.1.jar, regexp.jar, saaj.jar, service-gateway.jar, share.jar, uix2.jar, uix2tags.jar, versioninfo.jar, wsdl4j.jar, xdocore.jar, xdoparser.jar, xdo-server-delivery-1.0-SNAPSHOT.jar, xdo-server-kernel-0.1.jar, xdo-server-kernel-impl-0.1.jar, xdo-server-scheduling-1.0-SNAPSHOT.jar, xercesImpl.jar, xmlparserv2-904.jar, xmlpserver.jar, xsu12.jar
    I needed to copy the XML Publisher fonts to the database server and ran the following Java grants; note my $ORACLE_HOME is /opt/app/oracle/product/10.1.0/
    dbms_java.grant_permission('XMLP', 'java.util.PropertyPermission', '*', 'read,write');
    dbms_java.grant_permission('XMLP', 'java.net.SocketPermission', '*', 'connect, resolve');
    dbms_java.grant_permission('XMLP', 'java.io.FilePermission', '/tmp/*', 'read, write, delete');
    dbms_java.grant_permission('XMLP', 'java.io.FilePermission', '/opt/app/oracle/product/10.1.0/javavm/lib/*', 'read');
    dbms_java.grant_permission('XMLP', 'java.io.FilePermission', '/opt/app/oracle/product/10.1.0/javavm/lib/fonts/*', 'read');
    dbms_java.grant_permission('XMLP', 'java.lang.RuntimePermission', 'setFactory', '');
    George
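
    For reference, the document generation itself boils down to a call into the XML Publisher core API along these lines; a sketch assuming the FOProcessor class from xdocore.jar, with illustrative file paths:

    import oracle.apps.xdo.template.FOProcessor;

    public class XmlpPdfDemo {
        // Hypothetical entry point to publish as a Java stored procedure.
        public static void generatePdf() throws Exception {
            FOProcessor processor = new FOProcessor();
            processor.setTemplate("/tmp/template.xsl"); // illustrative path
            processor.setData("/tmp/data.xml");         // illustrative path
            processor.setOutput("/tmp/output.pdf");     // illustrative path
            processor.setOutputFormat(FOProcessor.FORMAT_PDF);
            processor.generate();
        }
    }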

  • Web Services with Large Result Sets

    Hi,
    We have an application wherein a call to a web service could potentially yield a large result set. For the sake of argument, let's say that we cannot limit the result set size, i.e., by criteria narrowing or some other means.
    Have any of you handled paging when using Web Services? If so can you please share your experiences considering Web Services are stateless? Any patterns that have worked? I am aware of the Value List pattern but am looking for previous experiences here.
    Thanks

    Joseph Weinstein wrote:
    Aswin Dinakar wrote:
    I ran the test again and I removed the ResultSet.Fetch_Forward and it
    still gave me the same error OutOfMemory.
    The problem to me is similar to what Slava has described. I am parsing
    the result set in memory storing the results in a hash map and then
    emptying the post processed results into a table.
    The hash map turns out to be very big and jvm throws a OutOfMemory
    Exception.
    I am not sure how I can turn this around -
    I can partition my query so that it returns smaller chunks or "blocks"
    of data each time(say a page of data or two pages of data). Then I can
    store a page of data in the table. The problem with this approach is
    that it is not exactly transactional. Recovery would be very difficult
    in this approach.
    I could do this in a try catch block page by page and then the catch
    could go ahead and delete the rows that got committed. The question then
    becomes: what if that transaction fails?
    It sounds like you're committing the 'cardinal performance sin of DBMS processing',
    of shovelling lots of raw data out of the DBMS, processing it in some small way,
    and sending it (or some of it) back. You should instead do this processing in
    a stored procedure or procedures, so the data is manipulated where it is. The
    DBMS was written from the ground up to be a fast efficient set-based processor.
    Using clever SQL will pay off greatly. Build your saw-mills where the trees are.
    Joe

    Yes, we did think of stored procedures. Like I mentioned yesterday, some of the post
    processing depends on unicode and specific character sets. Java seemed ideally suited
    to this since it handles these unicode characters very well and has all these libraries
    we can use. Moving this to DBMS would mean we would make that proprietary (not that we
    wont do it if it became absolutely essential) but its one of the reasons why the post
    processing happens in java. Now that you mention it stored procedures seem the best
    option.
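
    A sketch of the bounded-memory chunking discussed above, in JDBC; table names, columns, and the page size of 500 are illustrative, and the page-by-page commits trade strict all-or-nothing transactionality for bounded memory, exactly the tradeoff raised in the thread:

    import java.sql.*;

    public class ChunkedProcessor {
        public static void process(Connection con) throws SQLException {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement();
                 PreparedStatement ins = con.prepareStatement(
                     "INSERT INTO result_table (id, data) VALUES (?, ?)")) {
                st.setFetchSize(500); // stream rows from the driver instead of buffering them all
                try (ResultSet rs = st.executeQuery(
                        "SELECT id, payload FROM source_table ORDER BY id")) {
                    int inPage = 0;
                    while (rs.next()) {
                        ins.setLong(1, rs.getLong("id"));
                        ins.setString(2, postProcess(rs.getString("payload")));
                        ins.executeUpdate();
                        if (++inPage == 500) { con.commit(); inPage = 0; } // commit page by page
                    }
                    con.commit();
                }
            }
        }
        // Placeholder for the unicode-aware post-processing kept in Java.
        private static String postProcess(String s) { return s.trim(); }
    }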

  • ES 2 and JDBC: Query with a multi set return

    I know that ES does not have a Query that returns multiple result sets; is there one in ES2?
    If not, what are people using for this functionality?
    Cheers

    I'm looking for this functionality as well.
    While using scripting is a solution, it's not exactly a viable long-term solution.  I'm sure the scripting option was intended to cover potential one-off situations that required a bit more programmatic control than the default toolbox supplied.  We are going to have to create a custom script for every process because of the multi-query limitation.  We don't even have the option of a JDBC component that returns results from a stored procedure, as LC doesn't have that.  If that were available we could at least work around the limited result sets.
    Limiting the number of result sets for the multi query to one is extremely short-sighted.  Not to mention the inability of the multi query to return a hierarchical XML payload.
    In general the LC database connectivity options are puzzling.  Why is there no JDBC connector for stored procedures that returns data? Why is the multi query limited to a flat structure and one result set?  Surely there are PDFs that need to be populated with complex data structures?
    Perhaps even more frustrating is that it isn't possible to reuse existing legacy data stores/stored procedures in a new process without creating a new approach to data access to comply with LC's deliberately limited JDBC options.  Why have the data integration capabilities of JBoss (in our case) been handicapped?
    Overall, LC has been a great help in providing a shared business workflow option for our enterprise but the data access needs to be improved to enable LC to operate with existing systems and to provide more options for transforming business process on paper into business processes in LC.

  • PostMethod with large XML passed in setRequestEntity is truncated

    Hi,
    I use PostMethod to transfer large XML to a servlet.
    CharArrayWriter chWriter = new CharArrayWriter();
    post.marshal(chWriter);
    // The chWriter length is 120KB
    HttpClient httpClient = new HttpClient();
    PostMethod postMethod = new PostMethod(urlStr);
    postMethod.setRequestEntity(new StringRequestEntity(chWriter.toString(), null, null));
    postMethod.setRequestHeader("Content-type", "text/xml");
    int responseCode = httpClient.executeMethod(postMethod);
    String responseBody = postMethod.getResponseBodyAsString();
    postMethod.releaseConnection();
    When I open the request in the doPost method in the servlet:
    Reader inpReader = request.getReader();
    char[] chars = MiscUtils.ReaderToChars(inpReader);
    inpReader = new CharArrayReader(chars);
    // The data is truncated (in chars[]) ???
    static public char[] ReaderToChars(Reader r) throws IOException, InterruptedException {
        // IlientConf.logger.debug("Start reading from stream");
        BufferedReader br = new BufferedReader(r);
        StringBuffer fileData = new StringBuffer();
        char[] buf = new char[1024];
        int numRead = 0;
        while ((numRead = br.read(buf)) != -1) {
            String readData = String.valueOf(buf, 0, numRead);
            fileData.append(readData);
        }
        return fileData.toString().toCharArray();
    }
    Any ideas what the problem could be?
    Lior.

    Hi,
    I use the same code and have 2 tomcats running with the same servlet on each of them.
    One running on Apache/2.0.52 (CentOS) Server and the second running on Apache/2.0.52 (windows XP) Server.
    I managed to post large XML from (CentOS) Server to (windows XP) Server successfully and failed to post large XML
    from (windows XP) Server to (CentOS) Server.
    I saw something called mod_isapi that might be blocking the posting of large XML files.
    Can anyone help me on going over that limitation?
    Thanks,
    Lior.
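
    One thing worth ruling out is character encoding: StringRequestEntity with a null charset falls back to HttpClient's default, and the two servers may disagree on it. A sketch that pins UTF-8 on both sides, reusing the names from the post above:

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.methods.PostMethod;
    import org.apache.commons.httpclient.methods.StringRequestEntity;

    public class PostXmlUtf8 {
        // Client side: declare the charset explicitly instead of passing null,
        // so both servers agree on the byte length of the body.
        public static int postXml(String urlStr, String xml) throws Exception {
            HttpClient httpClient = new HttpClient();
            PostMethod postMethod = new PostMethod(urlStr);
            postMethod.setRequestEntity(new StringRequestEntity(xml, "text/xml", "UTF-8"));
            try {
                return httpClient.executeMethod(postMethod);
            } finally {
                postMethod.releaseConnection();
            }
        }
    }

    // Servlet side, before calling request.getReader():
    //     request.setCharacterEncoding("UTF-8");
    //     Reader inpReader = request.getReader();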

  • SQL Toolkit crashing with multiple threads

    Hello everyone and Happy New Year!
    I was hoping someone might be able to shed some light on this problem. I am updating an older application to use multiple threads. Actually the thread that is causing a problem right now is created by using an Asynchronous timer.
    I am using CVI 2010, and I think the SQL toolkit is version 2.2.
    If I execute an SQL statement from the main thread, there is no problem.
    stat = DBInit (DB_INIT_MULTITHREADED);
    hdbc = DBConnect( "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=sample.mdb;Mode=ReadWrite|Share Deny None" );
    hstmt = DBActivateSQL( hdbc, "SELECT * FROM SAMPLES" );
    DBDeactivateSQL(hstmt);
    DBDisconnect(hdbc);
    If I add code to do the same functions in a timer callback, it causes a stack overflow error.
    .. start main thread
    stat = DBInit (DB_INIT_MULTITHREADED);
    NewAsyncTimer (5.0, -1, 1, &gfn_quicktest, 0);
     .. end main thread
    .. and then the timer callback
    int CVICALLBACK gfn_quicktest (int reserved, int timerId, int event, void *callbackData, int eventData1, int eventData2)
    {
        int hdbc = DBConnect( "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=params\\sample.mdb;Mode=ReadWrite|Share Deny None" );
        int hstmt = DBActivateSQL( hdbc, "SELECT * FROM SAMPLES" );
        DBDeactivateSQL(hstmt);
        DBDisconnect(hdbc);
        return 0;
    }
    The program crashes with a stack overflow error when the DBActivateSQL statement is called.
    I understand that the ODBC driver for Access may not support multithreading, but I am only connecting to that database from the same thread with those two statements, so it should be fine?
    Any insight would be appreciated,
    Thanks,
    Ed.

    I just tried this using the sample Access database that comes with CVI. It uses a DSN instead of an .mdb file. It worked fine though. I don't see any reason multithreading would be a problem here if you are opening and closing the connection in the same code segment. I do notice that you are using params in the async callback connection string. Where does this come from? Maybe try using the sample database and see if that works.
    National Instruments
    Product Support Engineer

  • Adobe Illustrator crashes with large files

    When I open Illustrator to view or edit a larger file I get an error dialog, and once I hit OK, Illustrator crashes (the two screenshots did not come through).
    This particular file is 588 MB and it is an .ai file.
    The computer I'm running this on is specced out to the following:
    Win7 Ult 64bit
    I7-3770K cpu
    32 gb ram
    WD 500 GB Veloci Raptor
    samsung ssd 840 evo 120 gb scratch drive
    paging file set up as  min 33068 max 98304
    I tried with and without the extra scratch disk, with the paging file set manually and with the paging file set to system managed.
    I have uninstalled AI and reinstalled; same response.

    A 588 MB AI file? From where? CS5 is 32-bit, and expanding such a large file may simply go way beyond the measly 3 GB you have in AI. Plus, if it was generated in another program, there may be all sorts of other issues at play. This may be a hopeless cause and require you to at least install CS6 or CC 64-bit as a trial...
    Mylenium

  • ArrayIndexOutOfBoundsException with large XML

    Hello,
    I have some Java code that queries the DB and displays the XML in a browser. I am using the oracle.jbo.ViewObject object with the writeXML() method. Everything works well until I try to process a large XML file, then I get the following error:
    java.lang.ArrayIndexOutOfBoundsException: 16388
         at oracle.xml.io.XMLObjectOutput.writeUTF(XMLObjectOutput.java:215)
         at oracle.xml.parser.v2.XMLText.writeExternal(XMLText.java:354)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1459)
         at oracle.xml.parser.v2.XMLElement.writeExternal(XMLElement.java:1414)
         at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1267)
         at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1245)
         at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1052)
         at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:278)
         at com.evermind.server.ejb.EJBUtils.cloneSerialize(EJBUtils.java:409)
         at com.evermind.server.ejb.EJBUtils.cloneObject(EJBUtils.java:396)
    etc...
    I can put in the query to only allow a specific size to be displayed, but the users need to be able to access the larger XML files also. Has anyone else run into this issue?
    Oracle 10g
    Any help or pointers are greatly appreciated.
    Thank you.
    S

    No. We are not limiting the size in our code. Here is a snip of the offending code. The exception occurs on the " results = batchInterfaceView.writeXML(0, 0);" line, but only with larger files.
    try {
        // Request and response helper classes
        XMLHelper request = new XMLHelper(inputXML);
        response = new ResponseXMLHelper();
        if (request.doesValueExist(APP_ERROR_ID)) {
            // get input parameter
            strAppErrorId = request.getValue(APP_ERROR_ID);
            appErrorId = NumberConverter.toBigDecimal(strAppErrorId);
            // get Pos location view
            ViewObject batchInterfaceView =
                findViewObject(GET_ERROR_VIEW, PACKAGE_NAME);
            // get data for selected BatchInterface
            batchInterfaceView.setWhereClauseParam(0, appErrorId);
            batchInterfaceView.executeQuery();
            results = batchInterfaceView.writeXML(0, 0);
            response.addView(results);
        }
    } catch (JboException e) {
        // handle / log the exception
    }
    Thank you again for any help.
    S
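
    The stack trace shows the failure happening while the container serializes (clones) the XML node across the EJB boundary, not in writeXML() itself. One way to sidestep that is to flatten the node to a String before it leaves the EJB; a sketch using the standard JAXP transformer, assuming results is an org.w3c.dom.Node:

    import java.io.StringWriter;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Node;

    public class XmlFlatten {
        // Serialize the DOM node to a String so the EJB layer clones a String,
        // not the whole XML node tree.
        public static String toXmlString(Node node) throws Exception {
            Transformer t = TransformerFactory.newInstance().newTransformer();
            StringWriter out = new StringWriter();
            t.transform(new DOMSource(node), new StreamResult(out));
            return out.toString();
        }
    }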

  • 64-bit LabVIEW - still major problems with large data sets

    Hi Folks -
    I have LabVIEW 2009 64-bit version running on a Win7 64-bit OS with Intel Xeon dual quad core processor, 16 gbyte RAM.  With the release of this 64-bit version of LabVIEW, I expected to easily be able to handle x-ray computed tomography data sets in the 2 and 3-gbyte range in RAM since we now have access to all of the available RAM.  But I am having major problems - sluggish (and stoppage) operation of the program, inability to perform certain operations, etc.
    Here is how I store the 3-D data that consists of a series of images. I store each of my 2d images in a cluster, and then have the entire image series as an array of these clusters.  I then store this entire array of clusters in a queue which I regularly access using 'Preview Queue' and then operate on the image set, subsets of the images, or single images.
    Then enqueue: [block diagram screenshot not included]
    I remember talking to LabVIEW R&D years ago and hearing that this was a good way to do things because it allowed non-contiguous access to memory (versus the contiguous access that would be required if I stored my image series as a 3-D array without the clusters) (R&D - this is what I remember, please correct me if wrong).
    Because I am experiencing tremendous slowness in the program after these large data sets are loaded - and, I think, disk access as well to obtain memory beyond 16 gbytes - I am wondering if I need to use a different storage strategy that will allow seamless program operation while still using RAM storage (I do not want to have to recall images from disk).
    I have other CT imaging programs that are running very well with these large data sets.
    This is a critical issue for me as I move forward with LabVIEW in this application.   I would like to work with LabVIEW R&D to solve this issue.  I am wondering if I should be thinking about establishing say, 10 queues, instead of 1, to address this.  It would mean a major program rewrite.
    Sincerely,
    Don

    First, I want to add that this strategy works reasonably well for data sets in the 600 - 700 mbyte range with the 64-bit LabVIEW. 
    With LabVIEW 32-bit, 100 - 200 mbyte sets were about the limit before I experienced problems.
    So I definitely noticed an improvement.
    I use the queuing strategy to move this large amount of data in RAM.   We could have used other means such as LV2 globals.  But the idea of clustering the 2-d array (image) and then having a series of those clustered arrays in an array (to see the final structure I showed in my diagram) versus using a 3-D array I believe even allowed me to get this far using RAM instead of recalling the images from disk.
    I am sure data copies are being made - yes, the memory is ballooning to 15 gbyte.  I probably need to have someone examine this code while I am explaining things to them live.  This is a very large application, and a significant amount of time would be required to simplify it, and that might not allow us to duplicate the problem.  In some of my applications, I use the in-place structure for indexing data out of arrays to minimize data copies.  I expect I might have to consider this strategy now here as well.  Just a thought.
    What I can do is send someone (in the US) via large file transfer a 1.3 - 2.7 gbyte set of image data, and see how they would best advise on storing and extracting the images using RAM, how best to optimize the RAM usage, and how not to make data copies.  The operations that I apply to the images are irrelevant.  It is the storage, movement, and extractions that are causing the problems.  I can also show a screen shot(s) of how I extract the images (but I have major problems even before I get to that point).
    Can someone else comment on how data value references may help here, or how they have helped in one of their applications?  Would the use of this eliminate copies?   I currently have to wait for 64-bit version of the Advanced Signal Processing Toolkit for LabVIEW 2010 before I can move to LabVIEW 2010.
    Don
