OutOfMemoryError when retrieving large resultset

In my application I have one type of object (call it O for now) that has
a lot of data in the database (50000+ rows). Now I need to create an
object of class T that has an m-n binding to 40000 instances of O. In the
database this is mapped to a link table between the tables for O and T.
Now I get an OutOfMemoryError when I perform the following code to
associate one T with the aforementioned O's:
    PersistenceManager pm = VRFUtils.getPM(getServlet());
    T t = new T();
    // Fill arbitrary fields
    t.setToelichting(mtForm.getToelichting());
    // Add a set of O's to T
    Set os = new HashSet();
    Query q = pm.newQuery(O.class, "aangemaakt == parmaanmaak");
    q.declareParameters("java.util.Date parmaanmaak");
    // execute() returns an Object that is in fact a Collection of O's
    os.addAll((Collection) q.execute(field));
    t.setOs(os);
    // Make T persistent
    pm.currentTransaction().begin();
    pm.makePersistent(t);
    pm.currentTransaction().commit();
    pm.close();
After debugging I've found that the OutOfMemoryError occurs even when I
don't make anything persistent, but simply run the query that retrieves
40000 records and do a c.size() on the result.
I'm running Kodo against MySQL using Optimistic transactions.
What I do appreciate is that the OutOfMemoryError does not upset the
rest of the Kodo system.
Please advise,
Martin van Dijken
PS: The c.size() issue I've been able to resolve using KodoQuery's
setResult("count(this)"), but I can't think of anything similar that
would apply to this situation.

As you may know, Kodo has several settings that allow you to use large
result sets without running out of memory (assuming your driver supports
advanced features):
http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_dbsetup_lrs.html
Kodo also allows you to apply these settings to persistent fields, so
that the entire collection or map field does not reside in memory:
http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_pc_scos.html#ref_guide_pc_scos_proxy_lrs
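For reference, the configuration involved looks something like the following sketch; the property names are from memory of the linked guides, so check them against the documentation for your Kodo version:

    # kodo.properties: control how query results are held in memory
    kodo.FetchBatchSize: 100
    kodo.jdbc.ResultSetType: scroll-insensitive
    kodo.jdbc.LRSSize: last

And the collection field itself can be marked as a large result set proxy in the JDO metadata:

    <field name="os">
        <collection element-type="O"/>
        <extension vendor-name="kodo" key="lrs" value="true"/>
    </field>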
Unfortunately, it appears you are trying to create a relation containing
more objects than your system can handle, and when creating a relation
you currently have to load all the related objects into memory at once.
In the next beta release of 3.1, there is a solution to this problem.
Our next release no longer holds hard references to objects that have
been flushed. So by using the proper large result set settings as noted
above, and by making your relation a large result set relation also as
noted above, and by adding only a few hundred objects at a time from the
query result to the relation and then manually flushing, you should be
able to perform a transaction in which all the queried objects are
transferred from the query result to the large result set relation
without exhausting memory.
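In outline, that approach would look something like the sketch below. It assumes the upcoming 3.1 behavior, Kodo's KodoPersistenceManager.flush() extension, and a getOs() accessor on T; the class and field names are taken from your code, and the batch size of 500 is arbitrary:

    PersistenceManager pm = VRFUtils.getPM(getServlet());
    pm.currentTransaction().begin();
    T t = new T();
    t.setToelichting(mtForm.getToelichting());
    pm.makePersistent(t); // persist T first so flushed state can be released
    Query q = pm.newQuery(O.class, "aangemaakt == parmaanmaak");
    q.declareParameters("java.util.Date parmaanmaak");
    Collection results = (Collection) q.execute(field);
    int count = 0;
    for (Iterator it = results.iterator(); it.hasNext();) {
        t.getOs().add(it.next());
        if (++count % 500 == 0) {
            // writing the pending changes lets Kodo drop its hard
            // references to the already-flushed instances
            ((KodoPersistenceManager) pm).flush();
        }
    }
    pm.currentTransaction().commit();
    pm.close();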

Similar Messages

  • IOS packager throws java.lang.OutOfMemoryError when packaging large projects

    Crosspost from stackoverflow, I figured this forum might have some insights too!
    I've been porting a Flex 4 codebase to iOS using the adobe packager, but have run into a snag when trying to package our whole codebase. The packager runs for a while and then throws an OutOfMemoryError - even if I increase the java heap size to 4GB.
    No single piece of code seems to be causing the problem, as it compiles successfully if I cut out large chunks of code, and I can change which chunks I'm omitting. It might be related to the size of the code itself.
    I've logged a very detailed bug report with adobe here: http://bugs.adobe.com/jira/browse/FB-32192 . It includes an AIRI file that you can package to reproduce the issue, a ruby script that generates actionscript code to generate that AIRI file, and a summary of all of the things I tried before logging the bug.
    Has anyone else tried compiling large projects with the iOS packager? Are there any known workarounds?

    Thanks for reporting the issue, we are working on it; hopefully a fix will be available in the next major version of AIR.
    To know more about it, you might want to nominate yourself for our prerelease program at this link.
    http://labs.adobe.com/technologies/flashplatformruntimes/air3/
    Thanks,
    Amish.

  • OutOfMemoryError When Sending Large Image

    Hi All,
    We are developing a midlet that is essentially a photo blogging app. We are using an HTTP POST to send the image to the web server. The code is working properly as we are able to send images up to about 80KB.
    However, when sending images larger than ~80KB, the midlet gets an OutOfMemoryError.
    The image is being sent in chunked data packets, so shouldn't this mean we could send any size file since it would just keep sending more data chunks until it has reached the end of the file?...
    Has anyone else out there encountered this or perhaps know of a work around?
    Any help would be greatly appreciated.
    Thanks!
    Jim

    We are loading the entire image into memory at the moment, which is
    probably causing the OutOfMemory exceptions. Would you know how native
    phone applications send images through HTTP which are many times the
    size of available memory?
    The only way I could think of is to somehow connect the input stream
    which gets the image from the phone's memory and the output stream
    which writes out the HTTP data. By doing this no new byte array will get
    declared (explicitly anyway) to temporarily hold the entire image in the phone's
    memory. I'm wondering whether I could accomplish something like that by somehow
    collapsing these two chunks of code into one that declares no arrays to hold
    the entire image in memory?
    Here's our current code for reference.
    Getting the image from the phone (theFile is a FileConnection object which references the image file we want):
        InputStream fileInputStream = theFile.openInputStream();
        fileContent = new byte[(int) filesize]; // whole image buffered at once
        fileInputStream.read(fileContent);
        fileInputStream.close();
    Writing the image to the HTTP output stream (data is a byte[] array holding the entire image in memory, httpOut is a DataOutputStream):
        SuperViewerMidlet.httpOut.write(data);
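    For what it's worth, a minimal sketch of that stream-to-stream idea, reusing the names from the code above (theFile, SuperViewerMidlet.httpOut) with an arbitrary 4KB buffer, so only one small chunk is ever held in memory:
        InputStream fileInputStream = theFile.openInputStream();
        byte[] buffer = new byte[4096];
        int bytesRead;
        // copy the file chunk by chunk; no array ever holds the whole image
        while ((bytesRead = fileInputStream.read(buffer)) != -1) {
            SuperViewerMidlet.httpOut.write(buffer, 0, bytesRead);
        }
        fileInputStream.close();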

  • Error "java.lang.OutOfMemoryError" When Returning Large Number of Docs

    In our SES implementation, we have a custom search interface that allows users to search for documents and then add them to a "shopping cart". Users add them to their shopping cart from search results, by adding docs one-by-one or with an Add All option. Once they are done "shopping" they create a report.
    Here is the scenario...
    Users are searching for documents and seeing that on page 1, there are 1 - 10 of about 300 results. They clicked Add All and want all 300 docs added to their cart.
    What we do under the covers is execute another search with the docs requested set to 200. We get the array of docs, iterate over them and add the keys to a list. We found 200 docs at a time to be a safe number. However, there are still 100 docs that were not added to their cart, and users want all 300 added. In other words, when they click Add All, they want to add all docs, whether 300, 500, 5000, etc.
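    In outline, the batched Add All looks something like the sketch below, where fetchDocKeys() stands in for our doOracleSearch call and cart for the shopping-cart list (both names are illustrative, not the real APIs):
        int pageSize = 200; // the batch size we found to be safe
        int start = 1;
        while (true) {
            // hypothetical wrapper around the search call, returning the
            // keys of at most pageSize docs starting at index start
            String[] keys = fetchDocKeys(query, start, pageSize);
            if (keys == null || keys.length == 0) {
                break;
            }
            for (int i = 0; i < keys.length; i++) {
                cart.add(keys[i]); // add each doc key to the cart
            }
            if (keys.length < pageSize) {
                break; // last page reached
            }
            start += pageSize;
        }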
    I set the "Maximum Number of Results" to 500 and found that I can safely add up to ~ 350 docs at one time. However, going past this point throws the following error:
    [SOAPException: faultCode=SOAP-ENV:Server; msg= [java.lang.OutOfMemoryError]]
         at oracle.search.query.webservice.client.OracleSearchService.makeSOAPCallRPC(OracleSearchService.java:941)
         at oracle.search.query.webservice.client.OracleSearchService.doOracleSearch(OracleSearchService.java:469)
    After this error is thrown, SES was unable to recover and searching would not work anymore, even when returning 10 docs at a time. We had to restart SES to resolve the issue.
    1. What is throwing this error? Is it the amount of XML being returned?
    2. What is the maximum number of results we can get back at any one time? Is it based on the amount of data being returned or the number of attributes?
    We are running 10.1.8 with plans to upgrade soon.
    Thanks in advance.

  • Clients Time-Out When Retrieving Large Files From Our PureFTPd Server

    Hi, not sure if this is the right place to get help, but I didn't know where else to look. And let me begin by saying I'm not a server admin by trade, but that is the role I've fallen into at a design firm. So please bear with my lack of knowledge on some of the more technical details.
    We have OS X Server 10.4.11 running here with PureFTPd installed and managed through PureFTPd Manager application. It has been fairly reliable for us for the last 2-3 years once the initial setup craziness was complete.
    But lately we've had 2 clients say that they've had difficulty downloading larger files from us. They get a time-out or error message on their end. They are most likely using Windows Explorer on a PC to do the transfer, which has always been the easiest method for our clients.
    One client had no problem getting files up to 14MB or so. But when trying to download a certain file that was about 80MB, it would not work. I tried different things here, and she tried multiple times, but no luck. I tried zipping the file to see if that would help, but no luck.
    Then another client had a similar problem today with a file that was only 25MB.
    We don't send large files very often, so I'm not sure if this is a recent thing or not. It seems like we've received some large files recently though.
    In PureFTPd manager, I usually leave all of the fields in the Transfers tab empty for all users. I didn't see anywhere else that would seem to impart a file-size limitation.
    • Any ideas at all?
    • If not, any other forums that I could search for help on? It looks like the developers site for PureFTPd Manager hasn't been updated in a year or two.
    Thanks in advance!

    Some years back I had trouble at a customer that had an ADSL connection to their ISP using PPPoE.
    Their router/modem (Speedtouch) couldn't cope with the LAN MTU when they sent files out, so I had to lower it on their computers' (running Panther?) ethernet interface from 1500 to 1492 (the PPPoE overhead is 8 bytes). The communication used to stall at about 25MB but the MTU change helped resolve that.
    My guess is that the router had to fragment (split in two) all outgoing packets it received from the LAN computers and it just couldn't cope after a while.
    In your case it could be other network-related things too, like maybe needing traffic prioritizing if you're receiving a lot of traffic from the Internet and trying to send at the same time.

  • Problem retrieving large amount of data!

    Hi,
    I'm currently working with a database application accessing a very large amount of data. One query could result in 500 000+ hits!
    I would like to present this data in a JTable.
    When the query is executed, I create a two-dimensional array and store the data in it. The problem is that I get an OutOfMemoryError when I reach the 150 000th row and add it to the array.
    I've looked into the design pattern "Value List Handler" and it seems it could be of use. But still I need to populate an array with all the data, and then I get the error.
    Is there some way I could query the database, populate part of the data in a smaller array, and use the "Value List Handler" pattern to access small portions of the complete resultset?
    Another problem is that the user wants the ability to sort asc/desc by clicking column headers in the JTable. Then I need to access all the data in that table for it to be sorted correctly. Could I re-query the database with an "ORDER BY <column> ASC" and use a modification of the Value List Handler pattern?
    I'm a bit confused, please help!
    Kind regards, Andreas

    The only chance that you have: only select as many rows as you display on the screen. When the user hits "next page", retrieve the next rows.
    You might be able to do this with a scrollable resultset, but with that you are left to the mercy of the JDBC driver concerning memory management.
    So you need to think about a solution where you issue a new SELECT narrowing down the result set depending on the first/last row displayed (a rough sketch follows below).
    Search this forum for pagewise navigation; this question gets asked about 5 times a week (mostly in conjunction with web applications, but the logic behind it should be the same).
    Tailoring your TableModel might be tricky as well.
    Thomas
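    A bare-bones sketch of that pagewise SELECT, assuming a database that supports LIMIT/OFFSET and a purely illustrative documents table; given a JDBC Connection con, the page index would come from your next/previous buttons, and column-header sorting is just a different ORDER BY:
        private static final int PAGE_SIZE = 100;

        // fetch one screenful of rows; orderColumn must be checked against a
        // whitelist of column names, since it is concatenated into the SQL
        List fetchPage(Connection con, int page, String orderColumn) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT id, name FROM documents ORDER BY " + orderColumn
                + " LIMIT ? OFFSET ?");
            ps.setInt(1, PAGE_SIZE);
            ps.setInt(2, page * PAGE_SIZE);
            ResultSet rs = ps.executeQuery();
            List rows = new ArrayList();
            while (rs.next()) {
                rows.add(new Object[] { rs.getObject(1), rs.getObject(2) });
            }
            rs.close();
            ps.close();
            return rows; // the TableModel then only ever holds PAGE_SIZE rows
        }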

  • Unable to fetch large resultset

    I got this error when I was trying to retrieve records from a SQL Server table which consists of around 2 lakh (200,000) records:
    com.microsoft.sqlserver.jdbc.SQLServerException: The system is out of memory. Use server side cursors for large result sets:Java heap space. Result set size:78,585,571. JVM total memory size:66,650,112.
    How can I fetch the above-said records?

    3 choices:
    1) make your java heap size big enough to hold the entire ResultSet. If you're using Sun's Java (almost everyone does), then you increase the maximum heap with the -Xmx parameter when the JVM is started. It looks like you're using the default maximum, which is pretty small. See:
    http://java.sun.com/j2se/1.3/docs/tooldocs/solaris/java.html
    2) retrieve the ResultSet from the database incrementally, and process it incrementally (see the sketch after this list). Statement.setFetchSize() suggests to the driver how much of the ResultSet to keep in memory (and to retrieve in one chunk from the database), but it's common for scrollable ResultSets to ignore this hint and try to keep everything in memory. A ForwardOnly ResultSet (the default) is more likely to work incrementally, but it depends totally on your driver.
    3) break the data into multiple queries....
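    A small sketch of option 2, given a Connection con and using a plain forward-only statement; whether the fetch-size hint is honored is driver-dependent, and for the Microsoft driver the error message itself points at server-side cursors (the selectMethod=cursor connection property, if I remember correctly). processRow() stands in for whatever per-row handling you need:
        Statement stmt = con.createStatement(); // forward-only, read-only by default
        stmt.setFetchSize(1000); // hint: stream roughly 1000 rows at a time
        ResultSet rs = stmt.executeQuery("SELECT * FROM mytable");
        while (rs.next()) {
            processRow(rs); // handle each row as it arrives instead of buffering all
        }
        rs.close();
        stmt.close();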

  • How do I handle large resultsets in CRXI without a performance issue?

    Hello -
    Problem Definition
    I have a performance problem displaying large/huge resultset of data on a crystal report.  The report takes about 4 minutes or more depending on the resultset size.
    How do you handle large resultsets in Crystal Reports without a performance issue?
    Environment
    Crystal Reports XI
    Apache WebSvr 2.X, Jboss 4.2.3, Struts
    Java Reporting Component (JRC),Crystal Report Viewer (CRV)
    Firefox
    DETAILS
    I use the CRXI thick client to build my report (.rpt) and then use it in my web application (webapp) under JBoss.
    User specifies the filter criteria to generate a report (date range etc.) and submits the request to the webapp.  The webapp queries the database and gets a "resultset".
    I initialize the JRC and CRV according to all the specifications and finally call the "processHttpRequest" method of Crystal Report Viewer to display the report on browser.
    So.....
    - Request received to generate a report with a filter criteria
    - Query DB to get resultset
    - Initialize JRC and CRV
    - finally display the report by calling
        reportViewer.processHttpRequest(request, response, request.getSession().getServletContext(), null);
    The performance problem is within the last step.  I put logs everywhere and noticed that the database query doesn't take too long to return the resultset.  Everything processes pretty quickly till I call the processHttpRequest of CRV.  This method just hangs for a long time before displaying the report on the browser.
    CRV runs pretty fast when the resultset is smaller, but for a large resultset it takes a long, long time.
    I do have subreports and use Crystal Report formulas on the reports.  Some of them are used for grouping also.  But I don't think subreports are the real culprit here, because I have some other reports that don't have any subreports, and they too get really slow displaying large resultsets.
    Solutions?
    So obviously I need a good solution to this generic problem of "How do you handle large resultsets in Crystal Reports?"
    I have thought of some half-baked ideas.
    A) Use external pagination and fetch data only for the current page being displayed.  But for this, CRXI must allow me to create my own buttons (previous, next, last), so I can control the click event and fetch data accordingly.  I tried capturing events by registering the event handler "addToolbarCommandEventListener" of CRV.  But my listener gets invoked "after" the processHttpRequest method completes, which doesn't help.
    Somehow I need to be able to control the UI by adding my own previous page, next page, last page buttons and controlling their click events.
    B) Automagically have CRXI use javascript functionality to allow browser-side page navigation.  So maybe the first time it'll take 5 mins to display the report, but once it's displayed, the user can go to any page without sending the request back to the server.
    C) Try using Crystal Reports 2008.  I'm open to using this version, but I couldn't figure out if it has any features that can help me do external pagination or anything else that can handle large resultsets.
    D) Will using the Crystal Reports Servers like cache server/application server etc. help in any way?  I read a little on the Crystal Page Viewer, Interactive Viewer, Part Viewer etc....but I'm not sure if any of these things are going to solve the issue.
    I'd appreciate it if someone can point me in the right direction.

    Essentially the answer is to use smaller resultsets, or to pull from the database directly instead of passing resultsets to the report.

  • Error: java.lang.OutOfMemoryError when uploading CSV files to web server

    Hi experts,
    I have made a JSP page from which clients load CSV files to the web server. I am using Tomcat 4.1 as my web server and JDK 1.3.1_09.
    The system works fine when uploading small CSV files, but it crashes when uploading large CSV files.
    It gives me the following error:
    java.lang.OutOfMemoryError
         <<no stack trace available>>
    This is the code that I used to load files:
    <%
    String saveFile = "";
    String contentType = request.getContentType();
    if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) {
         DataInputStream in = new DataInputStream(request.getInputStream());
         int formDataLength = request.getContentLength();
         // the entire request body is buffered in memory here, which is
         // what blows up for large files
         byte dataBytes[] = new byte[formDataLength];
         int byteRead = 0;
         int totalBytesRead = 0;
         while (totalBytesRead < formDataLength) {
              byteRead = in.read(dataBytes, totalBytesRead, formDataLength - totalBytesRead);
              totalBytesRead += byteRead;
         }
         String file = new String(dataBytes);
         saveFile = file.substring(file.indexOf("filename=\"") + 10);
         saveFile = saveFile.substring(0, saveFile.indexOf("\n"));
         saveFile = saveFile.substring(saveFile.lastIndexOf("\\") + 1, saveFile.indexOf("\""));
         int lastIndex = contentType.lastIndexOf("=");
         String boundary = contentType.substring(lastIndex + 1, contentType.length());
         int pos;
         pos = file.indexOf("filename=\"");
         pos = file.indexOf("\n", pos) + 1;
         pos = file.indexOf("\n", pos) + 1;
         pos = file.indexOf("\n", pos) + 1;
         int boundaryLocation = file.indexOf(boundary, pos) - 4;
         int startPos = ((file.substring(0, pos)).getBytes()).length;
         int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length;
         String folder = "f:/Program Files/Apache Group/Tomcat 4.1/webapps/broadcast/file/";
         //String folder = "10.28.12.58/bulksms/";
         FileOutputStream fileOut = new FileOutputStream(folder + saveFile);
         //out.print("Saved here: " + saveFile);
         //fileOut.write(dataBytes);
         // write only the file part of the multipart body
         fileOut.write(dataBytes, startPos, (endPos - startPos));
         fileOut.flush();
         fileOut.close();
         out.println("File loaded successfully");
    }
    //f:/Program Files/Apache Group/Tomcat 4.1/webapps/sms/file/
    %>
    Please can anyone help me solve this problem for me...
    Thanx...
    Deepak

    I know it may be hard to throw away all this code, but consider using the jakarta fileupload component.
    I think it would simplify your code down to
    // Create a factory for disk-based file items
    FileItemFactory factory = new DiskFileItemFactory();
    // Create a new file upload handler
    ServletFileUpload upload = new ServletFileUpload(factory);
    // Parse the request
    List /* FileItem */ items = upload.parseRequest(request);
    // Process the uploaded items
    Iterator iter = items.iterator();
    while (iter.hasNext()) {
        FileItem item = (FileItem) iter.next();
        if (item.isFormField()) {
            processFormField(item);
        } else {
            // item is a file.  write it
            // (getRealPath returns a String, so wrap it in a File)
            File saveFolder = new File(application.getRealPath("/file"));
            File uploadedFile = new File(saveFolder, item.getName());
            item.write(uploadedFile);
        }
    }
    Most of this code was hijacked from http://jakarta.apache.org/commons/fileupload/using.html
    Check it out. It will solve your memory problem by writing the file to disk temporarily if necessary.
    Cheers,
    evnafets

  • PL/SQL Dev query session lost on large resultsets after db update

    We have a problem with our PL/SQL Developer tool (www.allroundautomations.nl) since updating our Database.
    So far we had Oracle DB 10.1.0.5 Patch 2 on W2k3 and XP clients using Instant Client 10.1.0.5 and PL/SQL Developer 5.16 or 6.05 to query our DB. This scenario worked well.
    Now we upgraded to ORACLE 10G 10.1.0.5 PATCH 25 and now our PL/SQL Developer 5.16 or 6.05 (on IC 10.1.0.5) can log on to the db and also query small tables. But as soon as the resultset reaches a certain size, the query on a table won't come to an end and keeps showing "Executing...". We can only press "BREAK", which then results in an "ORA-12152: TNS: unable to send break message" and "ORA-03114: not connected to ORACLE".
    If I narrow the resultset down on the same table it works like before.
    If I watch the sessions on small resultset queries, I see the corresponding session, but on large resultset queries the session seems to close immediately.
    To solve this issue I already tried installing the newest PL/SQL Developer 7.1.5 (trial) and/or a newer Instant Client version (10.2.0.4), neither of which solved the problem.
    Is there a new option in 10.1.0.5 Patch 25 (or before) which closes sessions if the resultset gets too large over a slower internet connection?
    Btw. using sqlplus in the instantclient directory, or even Excel over ODBC on the same client, returns the full resultset without problems. Could this be some kind of timeout problem?
    Edit:
    Here is a snippet of the tracefile on the client right after executing the select statement. Some data seems to be retrieved and then it ends with these lines:
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 2D 20 49 6E 74 72 61 6E |-.Intran|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 74 2D 47 72 75 6E 64 |et-Grund|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 6C 61 67 65 6E 02 C1 04 |lagen...|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 02 C1 03 02 C1 0B 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 51 00 02 C1 03 02 C1 2D |Q......-|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 05 48 4B 4F 50 50 01 80 |.HKOPP..|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 03 3E 64 66 01 80 07 78 |.>df...x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 07 78 |.......x|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 65 0B 0F 01 01 01 07 76 |e......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 09 01 01 07 76 |.......v|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: C7 01 01 18 01 01 02 C1 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 3B 02 C1 02 01 80 00 00 |;.......|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 00 00 00 00 00 00 |........|
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: 00 00 01 80 15 0C 00 |....... |
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: got NSPTDA packet
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: NSPTDA flags: 0x0
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: what=1, bl=2001
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: exit
    (1992) [20-AUG-2008 17:13:00:953] nioqrc: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: entry
    (1992) [20-AUG-2008 17:13:00:953] nsdo: cid=0, opcode=85, bl=0, what=0, uflgs=0x0, cflgs=0x3
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: rank=64, nsctxrnk=0
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: nsctx: state=8, flg=0x100400d, mvd=0
    (1992) [20-AUG-2008 17:13:00:953] nsdo: gtn=127, gtc=127, ptn=10, ptc=2011
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: acquired the bit
    (1992) [20-AUG-2008 17:13:00:953] snsbitts_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: entry
    (1992) [20-AUG-2008 17:13:00:953] snsbitcl_ts: normal exit
    (1992) [20-AUG-2008 17:13:00:953] nsdo: switching to application buffer
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: entry
    (1992) [20-AUG-2008 17:13:00:953] nsrdr: recving a packet
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: entry
    (1992) [20-AUG-2008 17:13:00:953] nsprecv: reading from transport...
    (1992) [20-AUG-2008 17:13:00:968] nttrd: entry

    Found nothing in the \bdump alert.log or \bdump trace files. I only have the DEFAULT profile and everything is set to UNLIMITED there.
    But the \udump generates a trace file the moment I execute the query:
    Dump file <path>\udump\<sid>ora4148.trc
    Fri Aug 22 09:12:18 2008
    ORACLE V10.1.0.5.0 - Production vsnsta=0
    vsnsql=13 vsnxtr=3
    Oracle Database 10g Release 10.1.0.5.0 - Production
    With the OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 2
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:898M/3071M, Ph+PgF:2675M/4967M, VA:812M/2047M
    Instance name: <SID>
    Redo thread mounted by this instance: 1
    Oracle process number: 33
    Windows thread id: 4148, image: ORACLE.EXE (SHAD)
    *** 2008-08-22 09:12:18.731
    *** ACTION NAME:(SQL Window - select * from stude) 2008-08-22 09:12:18.731
    *** MODULE NAME:(PL/SQL Developer) 2008-08-22 09:12:18.731
    *** SERVICE NAME:(<service-name>) 2008-08-22 09:12:18.731
    *** SESSION ID:(145.23131) 2008-08-22 09:12:18.731
    opitsk: network error occurred while two-task session server trying to send break; error code = 12152
    This trace is only generated when a query with an expected large resultset fails. If I narrow down the resultset, no trace is written, and the query then works, of course.

  • ADF how to display a processing page when executing large queries

    The ADF application that I have written currently has the following structure:
    DataPage (search.jsp) that contains a form where the user enters their search criteria --> forward action (doSearch) --> DataAction (validate) that validates the inputted values --> forward action (success) --> DataAction (performSearch) that has a refresh method dragged on it, and an action that manually sets the iterator for the collection to -1 --> forward action (success) --> DataPage (results.jsp) that displays the results of the then (hopefully) populated collection.
    I am not using a database, I am using a java collection to hold the data and the refresh method executes a query against an Autonomy Server that retrieves results in XML format.
    The problem that I am experiencing is that sometimes a user may submit a query that is very large and this creates problems because the browser times out whilst waiting for results to be displayed, and as a result a JBO-29000 null pointer error is displayed.
    I have previously got round this using Java Servlets, whereby when a processing servlet is called, it automatically redirects the browser to a processing page with an animation on it so that the user knows something is being processed. The processing page then re-calls the servlet every 3 seconds to see if the processing has been completed, and if it has, forwards to the appropriate results page.
    Unfortunately I cannot stop users entering large queries, as the system requires users to be able to search in excess of 5 million documents on a regular basis.
    I'd appreciate any help/suggestions that you may have regarding this matter as soon as possible so I can make the necessary amendments to the application prior to its pilot in a few weeks time.

    Hi Steve,
    After a few attempts - yes, I have hit a few snags.
    I'll send you a copy of the example application that I am working on but this is what I have done so far.
    I've taken a standard application that populates a simple java collection (not database driven) with the following structure:
    DataPage --> DataAction (refresh Collection) -->DataPage
    I have then added this code to the (refreshCollectionAction) DataAction
        protected void invokeCustomMethod(DataActionContext ctx) {
            super.invokeCustomMethod(ctx);
            HttpSession session = ctx.getHttpServletRequest().getSession();
            Thread nominalSearch = (Thread) session.getAttribute("nominalSearch");
            if (nominalSearch == null) {
                synchronized (this) {
                    //create new instance of the thread
                    nominalSearch = new ns(ctx);
                } //end of synchronized wrapper
                session.setAttribute("nominalSearch", nominalSearch);
                session.setAttribute("action", "nominalSearch");
                nominalSearch.start();
                System.err.println("started thread calling loading page");
                ctx.setActionForward("loading.jsp");
            } else {
                if (nominalSearch.isAlive()) {
                    System.err.println("trying to call loading page");
                    ctx.setActionForward("loading.jsp");
                } else {
                    System.err.println("trying to call results page");
                    ctx.setActionForward("success");
                }
            }
        }
    Created another class called ns.java:
    package view;
    import oracle.adf.controller.struts.actions.DataActionContext;
    import oracle.adf.model.binding.DCIteratorBinding;
    import oracle.adf.model.generic.DCRowSetIteratorImpl;
        public class ns extends Thread {
            private DataActionContext ctx;

            public ns(DataActionContext ctx) {
                this.ctx = ctx;
            }

            public void run() {
                System.err.println("START");
                DCIteratorBinding b = ctx.getBindingContainer().findIteratorBinding("currentNominalCollectionIterator");
                ((DCRowSetIteratorImpl) b.getRowSetIterator()).rebuildIteratorUpto(-1);
                //b.executeQuery();
                System.err.println("END");
            }
        }
    and added a loading.jsp page that calls a new dataAction called processing every second. The processing dataAction has the following code within it:
    package view;
    import javax.servlet.http.HttpSession;
    import oracle.adf.controller.struts.actions.DataForwardAction;
    import oracle.adf.controller.struts.actions.DataActionContext;
        public class ProcessingAction extends DataForwardAction {
            protected void invokeCustomMethod(DataActionContext actionContext) {
                // TODO: Override this oracle.adf.controller.struts.actions.DataAction method
                super.invokeCustomMethod(actionContext);
                HttpSession session = actionContext.getHttpServletRequest().getSession();
                String action = (String) session.getAttribute("action");
                if (action.equalsIgnoreCase("nominalSearch")) {
                    actionContext.setActionForward("refreshCollection.do");
                }
            }
        }
    I'd appreciate any help or guidance that you may have on this as I really need to implement a generic loading page that can be called by a number of actions within my application as soon as possible.
    Thanks in advance for your help
    David.

  • Intermittent errors when retrieving through Smart View (11.1.2.5.210) in Oracle 11.1.2.3.500

    We are getting the below errors when retrieving from huge Smart View sheets for a couple of our applications. We have increased the timeout settings in the local registry, server registry, mod_wl_ohs file, WebLogic console, http.conf, and Essbase.properties. Also, we increased the JVM settings in opmn.xml & setCustomParamsAnalyticProviderServices.bat. However, we still face the error.
    Note: this is an intermittent error and does not happen all the time.
    Cannot connect to the provider. Make sure it is running in the specified host/port. Error(12152).
    Cannot connect to the provider. Make sure it is running in the specified host/port. Error(12031).
    Also, we are facing retrieval performance issues after we upgraded from 11.1.1.3.500 to 11.1.2.3.500. Has anyone faced this issue? If so, any suggestions for improving the retrieval performance (especially with MDX member formulas in ASO applications) in 11.1.2.3.500?
    Smart view version: 11.1.2.5.210

    Is there a load balancer being used?
    Does the issue occur on both Shared and Private connections?
    On the Smartview client:
         A)    add an essbase.cfg to the client machine in smartview\bin
               the file will have 3 entries:
                NETDELAY 2000
                NETRETRYCOUNT 2000
                NETTCPCONNECTRETRYCOUNT 200
    You can give it a try on a single client machine and check if the result is improved.

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items at a rate of 50ms, another about 40 items with a 100ms publishing rate.
    I send a command to a subprogram (125ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250ms. But this data is not seen on my main GUI window that reads the DSS URL.
    My questions are
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS be unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Profesional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated with VISA serial comm. It looks so neat and it's
    > fantastic what it's supposed to do for a developer, but sometimes one
    > runs into trouble very deep.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have some issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Finder issues when copying large amount of files to external drive

    When copying a large amount of data over FireWire 800, Finder gives me an error that a file is in use and locks the drive up. I have to force eject. When I reopen the drive, there are a bunch of 0KB files sitting in the directory that did not get copied over. This happens on multiple drives. I've attached a screen shot of what things look like when I reopen the drive after forcing an eject. Sometimes I have to relaunch Finder to get back up and running correctly. I've repaired permissions, for what it's worth.
    10.6.8, by the way, 2.93 12-core, 48GB of RAM, fully up to date. This has been happening for a long time; I'm just now trying to find a solution.

    Scott Oliphant wrote:
    iomega, lacie, 500GB, 1TB, etc, seems to be drive independent. I've formatted and started over with several of the drives and the same thing happens. If I copy the files over in smaller chunks (say, 70GB) as opposed to 600GB, the problem does not happen. It's like Finder is holding on to some of the info when it puts its "ghost" on the destination drive before it's copied over, and keeping the file locked when it tries to write over it.
    This may be a stretch since I have no experience with iomega and no recent experience with LaCie drives, but the different results if transfers are large or small may be a tip-off.
    I ran into something similar with Seagate GoFlex drives and the problem was heat. Virtually none of these drives are ventilated properly (i.e., no fans and not much, if any, air flow) and with extended use, they get really hot and start to generate errors. Seagate's solution is to shut the drive down when not actually in use, which doesn't always play nice with Macs. Your drives may use a different technique for temperature control, or maybe none at all. Relatively small data transfers will allow the drives to recover; very large transfers won't, and to make things worse, as the drive heats up, the transfer rate will often slow down because of the errors. That can be seen if you leave Activity Monitor open and watch the transfer rate over time (a method which Seagate tech support said was worthless because Activity Monitor was unreliable and GoFlex drives had no heat problem).
    If that's what's wrong, there really isn't any solution except using the smaller chunks of data which you've found works.

  • Media Encoder CS4 hangs when opening large XDCAM HD MXF files

    I am having a problem with Media Encoder CS4 hanging consistently when importing large (18GB) XDCAM HD (25Mbps, MPEG-2 HD) MXF files.  Smaller files are not a problem.  This is on a souped-up system, HP XW8600 Workstation / Windows Vista Pro.  The same files work fine in After Effects CS4.
    Any idea what could cause this?
    Thanks.
    T.

    Yes, I did.  Thanks for the tip though.
    Thierry Humeau, DoP
    Télécam Films, LLC
    4400 MacArthur Blvd. NW
    suite 201
    Washington DC, 20007
    o: +1 202.298.0030
    c: +1 202-255-8696
    www.telecamfilms.com
