Out Of Memory issue while downloading information from DB to XML

Hi,
I am converting database tables into XML files using Java IO and SAX. I keep all the XML files in a download folder, but during the download process I get an Out Of Memory error.
When I try to download the real data, the download folder grows to about 50 MB and the Out Of Memory error appears.
Is this related to JVM memory or system memory? What would be the solution for this?
Awaiting your answers

By default the JVM appropriates 96 MB of memory for its heap. You can increase the allocation by putting, say, -Xmx128m on the Java command line, though perhaps a better solution is to avoid loading all your data at one time and write the XML out while reading from the database.
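A minimal sketch of that streaming approach, assuming a StAX implementation is available (it ships with Java 6+) and using hypothetical table, column, and file names: each row is written to the file as it is fetched, so no more than one row is held in memory at a time.

    import java.io.FileOutputStream;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class TableToXml {
        public static void export(Connection con) throws Exception {
            XMLStreamWriter xml = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(
                            new FileOutputStream("download/table.xml"), "UTF-8");
            xml.writeStartDocument("UTF-8", "1.0");
            xml.writeStartElement("rows");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id, name FROM mytable");
            while (rs.next()) {
                // each row is written immediately and then discarded,
                // so heap use stays flat regardless of table size
                xml.writeStartElement("row");
                xml.writeAttribute("id", rs.getString("id"));
                xml.writeCharacters(rs.getString("name"));
                xml.writeEndElement();
            }
            rs.close();
            stmt.close();
            xml.writeEndElement();
            xml.writeEndDocument();
            xml.close();
        }
    }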

Similar Messages

  • Getting an 'Out of memory' error while opening the file. I have tried several versions of Adobe (7.0, 9.0, XI). It is creating an issue converting PDF into TIFF. Please provide a solution ASAP

    Hello All,
    I am getting an 'Out of memory' error while opening the file. I have tried several versions of Adobe (7.0, 9.0, XI).
    It is also creating an issue converting PDF into TIFF. Please provide a solution ASAP.

    I am using Adobe Reader XI. When I open a PDF it gives an "Out of memory" error; after scrolling, the PDF gives another alert, "Insufficient data for an image". After clicking through both alerts it loads the full PDF content. It does not happen with all PDFs; a couple of PDFs show this issue. Because of this error my software is not able to print these PDFs to TIFF. My OS is Windows 7 x64; I tried it on Windows 2012 R2 and XP, and the same issue occurs there.
    It has become a critical issue for my production.

  • Memory issues while fetching content

    Hi all,
    I am using the SAP KM APIs, but I am facing some memory issues while using them.
    I call getContent() on a resource and then getInputStream() on the IContent object, writing the data into a ByteArrayOutputStream. Here is the code snippet:

        byte[] buf = new byte[4096];
        int count;
        InputStream inputStream = content.getInputStream();
        ByteArrayOutputStream os = new ByteArrayOutputStream(1024);
        while ((count = inputStream.read(buf)) != -1) {
            os.write(buf, 0, count);
        }
        byte[] arr = os.toByteArray();

    This code consumes a lot of memory, and we need to optimize it. We also tried removing the ByteArrayOutputStream and reading the bytes in chunks directly:

        int n;
        do {
            // note: passing the full content length as the read length is
            // unsafe when it exceeds buf.length
            n = inputStream.read(buf, 0, (int) content.getContentLength());
        } while (n != -1);

    However, this approach has not helped us.
    Please suggest an approach where memory consumption is lower and the entire content is still delivered.
    thanks

    Hi
    This code worked for me for reading from a KM resource:

        BufferedReader reader = new BufferedReader(new InputStreamReader(is));
        StringBuilder sb = new StringBuilder();
        String line = null;
        try {
            while ((line = reader.readLine()) != null) {
                sb.append(line).append("\n");
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                is.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return sb.toString();

    Regards
    BP
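    Note that both snippets above still accumulate the whole document in memory. A minimal fully streaming alternative, assuming the destination can be any OutputStream (a file, or a servlet response), never holds more than one buffer's worth of data:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public final class StreamCopy {
            // copies in fixed-size chunks; memory use is bounded by buf.length
            public static void copy(InputStream in, OutputStream out)
                    throws IOException {
                byte[] buf = new byte[4096];
                int count;
                while ((count = in.read(buf)) != -1) {
                    out.write(buf, 0, count);
                }
                out.flush();
            }
        }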

  • Hash Table Infrastructure ran out of memory Issue

    I am getting an "ORA-32690: Hash Table Infrastructure ran out of memory" error while executing an Informatica mapping against an Oracle database (test environment).
    The partition creation is as shown below:

        TABLESPACE MAIN_LARGE_DATA1
        PARTITION BY LIST (MKTCD)
        (
            PARTITION AAM VALUES ('AAM') TABLESPACE MAIN_LARGE_DATA1,
            PARTITION AHT VALUES ('AHT') TABLESPACE MAIN_LARGE_DATA1,
            PARTITION GIM VALUES ('GIM') TABLESPACE MAIN_LARGE_DATA1,
            PARTITION CNS VALUES ('CNS') TABLESPACE MAIN_LARGE_DATA1,
            PARTITION AOBE VALUES ('AOBE') TABLESPACE MAIN_LARGE_DATA1,
            PARTITION DBM VALUES ('DBM') TABLESPACE MAIN_LARGE_DATA1
        )

    Could you please provide me with a solution to this problem ASAP?

    SQL statement and execution plan? Is there a server-side trace file created for the session?
    From the brief description, it sounds like bug 6471770; see Metalink for details. The workaround for this particular bug is to disable hash group-by, either by setting "_gby_hash_aggregation_enabled" to FALSE (using an ALTER SESSION statement) or by using a NO_USE_HASH_AGGREGATION hint.
    Suggest you research this problem on Metalink (aka MyOracleSupport at https://support.oracle.com)
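    For reference, the two workarounds would look roughly like this (a sketch only; the table in the hint example is hypothetical, and hidden-parameter changes should be tested before use in production):

        -- disable hash group-by for the current session only
        ALTER SESSION SET "_gby_hash_aggregation_enabled" = FALSE;

        -- or hint the individual statement
        SELECT /*+ NO_USE_HASH_AGGREGATION */ mktcd, COUNT(*)
        FROM some_table
        GROUP BY mktcd;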

  • Out of memory Issues

    Hi,
    WebLogic version is 10.3, DB is Oracle.
    Our environment has four servers: one hosts the Admin server plus four managed servers, and each of the remaining three servers hosts four managed servers.
    Each managed server has 2 GB of memory.
    Connection pools are set up with initial capacity 0 and maximum capacity 15.
    Our applications are developed on Pega. Currently we are getting Out of Memory issues. The F5 node sends alerts like:
    SEVERITY: Error
    Alert(432526): Trap received from ttnny-cse-f5node1: bigipServiceDown -- Bindings: sysUpTimeInstance = 1589988172, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status down., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 15:01 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    SEVERITY: Error
    Alert(432524): Trap received from ttnny-cse-f5node2: bigipServiceDown -- Bindings: sysUpTimeInstance = 1589982333, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status down., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 14:59 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    SEVERITY: Error
    Alert(432527): Trap received from ttnny-cse-f5node1: bigipServiceUp -- Bindings: sysUpTimeInstance = 1589988572, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status up., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 15:01 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    SEVERITY: Error
    Alert(432525): Trap received from ttnny-cse-f5node2: bigipServiceUp -- Bindings: sysUpTimeInstance = 1589982733, bigipNotifyObjMsg = Pool member 172.22.110.45:8002 monitor status up., bigipNotifyObjNode = 172.22.110.45, bigipNotifyObjPort = 8002 (Fri. 02/12/2010 14:59 America/New_York - Sat. 02/13/2010 15:59 America/New_York)
    When we checked at that time, the server was up and running with some Pega exceptions; the JVM shows 10% heap usage, and after some time it goes to 30%.
    The alert below confirms the JVM is down, so at this point we restart the server:
    SEVERITY: Alert
    Alert(432565): Threshold triggered -- ttappapp01's 8003's Port Availability: 0.00 Percent < 100 Percent averaged over 1.00 minutes (Fri. 02/12/2010 17:15 America/New_York - Fri. 02/12/2010 17:15 America/New_York)
    SEVERITY: Alert
    Alert(432564): Threshold triggered -- ttappapp01's 8003's Port Availability: 0.00 Percent != 100 Percent averaged over 1.00 minutes (Fri. 02/12/2010 17:15 America/New_York - Fri. 02/12/2010 17:15 America/New_York)
    We took a thread dump and a heap dump at that time. Can anyone please give some suggestions on why the server is going out of memory?
    1. Is there any issue with the connection pools?
    2. Please give suggestions on the design.
    Thanks,
    Raj.

    Hi Raj,
    Did you check System.out and the WebLogic managed server logs?
    You should also check the GC logs, to see whether there is a memory problem or not.
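    If GC logging is not already enabled, a typical set of HotSpot options for a JVM of that era would be the following (flag names changed in later JDKs, so treat this as a starting point):

        -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
        -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp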

  • Lightroom 3.2 out of memory issues

    I had been using the beta version of Lightroom 3 without issues.  Once I installed the shipping version I get out of memory messages all the time.  I first noticed this when I went to export some images.  I can get this message when I export just one image or partway through a set of images (this weekend it made it through 4 of 30 images before it died).  If I restart Lightroom it's hit or miss whether I can proceed or not. I've even tried restarting the box with only Lightroom running and still get the out of memory issue.
    I've also had problems printing.  I go to print an image and it looks like it will print, but nothing does.  This does not generate an error message; it just doesn't do anything.  So far restarting Lightroom seems to fix this problem.
    When I'm in the Develop module and click on an image to see it 1:1, at times the image is out of focus.  If I click on another image and then go back to the original it might be in focus.
    I have no idea if any of this is related, but I thought I'd throw it out there.  I've been using Lightroom since version 1.0 and have had very good luck with the program.  It is getting very frustrating trying to get anything done.  I searched through the forum, but the memory issues I found were with older versions. I'd be very grateful if anyone could point me in the right direction.
    Ken
    System:
    i7 860
    4 GB memory
    XP SP3

    Hi,
    You can get the Heap Dump Analyzer for analyzing IBM AIX heap dumps from the links below:
    http://www.alphaworks.ibm.com/tech/heapanalyzer
    http://www-1.ibm.com/support/docview.wss?uid=swg21190608
    Prerequisites for obtaining a heap dump:
    1. Add -XX:+HeapDumpOnOutOfMemoryError to the Java options of the server (see notes 710146 and 1053604) to get a heap dump automatically when the error occurs.
    2. You can also generate heap dumps on request: add -XX:+HeapDumpOnCtrlBreak to the Java options of the server (see note 710146), then send signal SIGQUIT to the jlaunch process representing the server, e.g. using kill -3 <jlaunch pid> (see note 710154).
    The heap dump will be written to the output file java_pid<pid>.hprof.<millitime> in the /usr/sap/<SID>/<instance>/j2ee/cluster/server<N> directory.
    Both parameters can also be set together, to get the benefit of both approaches.
    Regards,
    Sandeep.
    Edited by: Sandeep Sehgal on Mar 25, 2008 6:51 PM

  • Getting an Out of memory exception while validating XML against XSD

    Hello friends,
    I am getting an Out Of Memory exception while validating my XML against a given XSD, which is huge.

        SAXParserFactory saxParserFactory = SAXParserFactory.newInstance();
        saxParserFactory.setValidating(true);
        SAXParser saxParser = saxParserFactory.newSAXParser();
        saxParser.setProperty("http://java.sun.com/xml/jaxp/properties/schemaLanguage",
                "http://www.w3.org/2001/XMLSchema");
        saxParser.setProperty("http://java.sun.com/xml/jaxp/properties/schemaSource",
                new File("C:/todelxsd.xsd"));

    As you can see, the highlighted code loads the XSD into memory, and the JVM throws an Out Of Memory exception. Is there another way of validating an XML against an XSD where I don't have to load my XSD? If not, kindly let me know the solution for the above problem.
    Thanks.

    Yes, but increasing the heap size is a temporary solution. Isn't there a way the XML can be validated against an XSD without having to load the XSD into memory?
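    The schema itself does have to be parsed at least once, but a sketch using the javax.xml.validation API (available since Java 5) compiles the XSD once into a reusable Schema object and then streams the instance document, so the XML being validated is never held in memory as a whole (the input file name here is hypothetical):

        import java.io.File;
        import javax.xml.XMLConstants;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.Schema;
        import javax.xml.validation.SchemaFactory;
        import javax.xml.validation.Validator;

        public class StreamingValidation {
            public static void main(String[] args) throws Exception {
                // Compile the schema once; the Schema object is reusable,
                // so the XSD is parsed a single time for any number of
                // documents.
                SchemaFactory factory = SchemaFactory
                        .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                Schema schema = factory.newSchema(new File("C:/todelxsd.xsd"));

                // The Validator streams the instance document; no DOM of
                // the XML being validated is built.
                Validator validator = schema.newValidator();
                validator.validate(new StreamSource(new File("C:/input.xml")));
            }
        }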

  • CC 2014 programs leave me plagued with out of memory issues which the previous versions don't exhibit

    The new CC 2014 suite of programs seems rather memory hungry. I am plagued with out of memory issues trying to use them, while the old CC programs work just fine. Is there a new minimum memory spec now? For now I am forced to use the old versions, as the new ones are just unusable... some 'upgrade'!
    Phil

    Me too!  It seems whenever I run more than one CC app I get out of memory errors.  I have Windows 7 with 32 GB RAM.  I only have this problem with CC 2014, not CS6.

  • Special character issue while loading data from SAP HR through VDS

    Hello,
    We have a special-character issue while loading data from SAP HR into IdM, using a VDS and following the standard documentation: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09fa547-f7c9-2b10-3d9e-da93fd15dca1?quicklink=index&overridelayout=true
    French accented characters (é, à, è, ù) are loaded correctly, but Turkish special ones (like Ş, İ, ł) are transformed into “#” in IdM.
    The question is: does anyone know of a special setting in the VDS or in IdM for special characters that would solve this issue?
    Our SAP HR version is ECC 6.0 (ABA/BASIS 7.0 SP21, SAP_HR 6.0 SP54), and we are using VDS 7.1 SP5 and SAP NW IdM 7.1 SP5 Patch 1 on Oracle 10.2.
    Thanks

    We are importing directly into the HR staging area, using the transactions/programs "HRLDAP_MAP", "LDAP" and "/RPLDAP_EXTRACT"; then we have a job which extracts data from the staging area to a CSV file.
    So before the import the character appears correctly in SAP HR, but by the time it comes through the VDS to IdM's temporary table, it becomes "#".
    Yes, our data is coming from a Unicode system.
    So, could it be a Java parameter to change or add in the VDS?
    Regards.
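    One parameter worth checking (an assumption, not a confirmed fix for this case) is the default character set of the JVM running the VDS: if it is not UTF-8, characters outside that charset are typically replaced by "#" or "?". It can be forced in the VDS Java options:

        # assumption: forces the JVM default charset; read once at startup
        -Dfile.encoding=UTF-8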

  • Out of Memory error while building an HTML string from a large HashMap

    Hi,
    I am building an HTML string from a large map object that consists of about 32,000 objects, using the Transformer class in Java. As this HTML string needs to be displayed in a JSP page, the response time is too high, and sometimes it throws an Out of Memory error.
    Please let me know how I can build the library tree (folder structure) HTML string for a first set of, say, 1000 entries, display that in the web page, then detect an onScroll event, handle it in JavaScript functions, come back and build the tree for the next set of entries in the map, and append that string to the previous one for display.
    Please let me know:
    1. whether the suggested solution is advisable;
    2. how to build the tree (HTML string) for a subset of entries while iterating over the map;
    3. how to detect an onScroll event and handle it.
    Note: handling the events in JavaScript functions and displaying the tree is currently done using AJAX.
    Thanks for help in advance,
    Kartheek
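    For point 2, a minimal server-side paging sketch (class and parameter names are hypothetical; assumes Java 5+): each AJAX request materializes only one window of map entries instead of the whole 32,000-entry tree. For stable paging across requests, the map should have a deterministic iteration order (e.g. LinkedHashMap or TreeMap).

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;

        public class TreePager {
            // Returns entries [offset, offset + limit) of the map's
            // iteration order, so each request renders one page of the tree.
            public static <K, V> List<Map.Entry<K, V>> page(
                    Map<K, V> map, int offset, int limit) {
                List<Map.Entry<K, V>> slice =
                        new ArrayList<Map.Entry<K, V>>(limit);
                int i = 0;
                for (Map.Entry<K, V> e : map.entrySet()) {
                    if (i >= offset + limit) {
                        break;              // past the requested window
                    }
                    if (i >= offset) {
                        slice.add(e);       // inside the window
                    }
                    i++;
                }
                return slice;
            }
        }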

    Hi
    Sorry, I haven't seen any error in the browser, as this may be an Out of Memory error which was not handled. I got the following error from the WebLogic console:
    org.apache.struts.actions.DispatchAction: Dispatch[serviceCenterHome] to method 'getUserLibraryTree' returned an exception java.lang.reflect.InvocationTargetException
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:276)
         at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:196)
         at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:421)
         at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:226)
         at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1164)
         at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:415)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:760)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:853)
         at weblogic.servlet.internal.ServletStubImpl$ServletInvocationAction.run(ServletStubImpl.java:996)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:419)
         at weblogic.servlet.internal.ServletStubImpl.invokeServlet(ServletStubImpl.java:315)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:6452)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:118)
         at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:3661)
         at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:2630)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.lang.OutOfMemoryError
    </L_MSG>
    <L_MSG MN="ILHD-1109" PID="adminserver" TID="ExecuteThread: '14' for queue: 'weblogic.kernel.Default'" DT="2012/04/18 7:56:17:146" PT="WARN" AP="" DN="" SN="" SR="org.apache.struts.action.RequestProcessor">Unhandled Exception thrown: class javax.servlet.ServletException</L_MSG>
    <Apr 18, 2012 7:56:17 AM CDT> <Error> <HTTP> <BEA-101017> <[ServletContext(id=26367546,name=fcsi,context-path=/fcsi)] Root cause of ServletException.
    java.lang.OutOfMemoryError
    Please advise.
    Thanks for your help in advance,
    Kartheek

  • Hyperion IR: Getting out of memory error while fetching data for a whole year through the web client (workspace)

    Hi,
    While fetching data through the IR web client from the workspace for a year (all 12 months), I get the error "Out of Memory. Advice: Close other applications or windows and try again".
    If I try the same through IR Studio, it does not give any output and shows me the same reporting front page.
    If I select periods up to 8 months, it gives the required data in both the IR web client and IR Studio.
    Could you please suggest how we can resolve this issue?
    Thanks,
    D.N.Rana

    Issue cause:
    Sometimes this is due to excessive data, which brings the size of the BQY file up to around one gigabyte uncompressed (processing may take twice that in actual RAM, plus the memory space for the plugin, and the typical memory limit on a 32-bit system is 2 gigabytes).
    Solution:
    To avoid an excessive BQY size exceeding memory availability:
    Ensure that your computer has at least 2 GB of free RAM before running IR Studio.
    Put a limit on the number of rows that can be pulled down: right-click the Request label of the Query section and put a value in "Return First xxx Rows" (and tick the check box).
    Do not pull down more than 750 MB of data (remember it may be duplicated while processing).
    Place limits or aggregations in the Query section (as opposed to the Results section) to limit the data entering the BQY.

  • Result Set Causing out of memory issue

    Hi,
    I am having trouble fixing a memory issue caused by a result set. I am using JDK 1.5 and SQL Server 2000 as the backend. When I execute a statement, the result set returns a minimum of 400,000 records; I have to go through every record one by one, apply some business logic, and update the rows, and after updating around 1000 rows my application goes out of memory. Here is the original code:

        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
        while (rs != null && rs.next()) {
            System.out.println("doing some logic here");
        }
        rs.close();
        stmt.close();

    I am planning to fix the code this way:

        Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_UPDATABLE);
        stmt.setFetchSize(50);
        ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
        while (rs != null && rs.next()) {
            System.out.println("doing some logic here");
        }
        rs.close();
        stmt.close();

    But one of my colleagues told me that the setFetchSize() method does not work with the SQL Server 2000 driver.
    So please suggest how to fix this issue. I am sure there will be a way to do this, but I am just not aware of it.
    Thanks for your help in advance.

    Here is the full-fledged code. Team Connect and the TopLink API are being used. The code is already developed and works for 2-3 hours, and then it fails. I just have to fix the memory issue. Please suggest something:

        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
        while (rs != null && rs.next()) {
            // vo is the value object obtained from the rs row by row
            if (updateInfo(vo, user)) {
                logger.info("updated : " + rs.getString("number_string"));
                projCount++;
            }
        }
        rs.close();
        stmt.close();
        private boolean updateInfo(CostCenter vo, YNUser tcUser) {
            boolean updated = false;
            UnitOfWork unitOfWork = null;
            List projList_m = null;
            try {
                logger.info("Before vo.getId() HERE i AM" + vo.getId());
                unitOfWork = FNClientSessionManager.acquireUnitOfWork(tcUser);
                ExpressionBuilder expressionBuilder = new ExpressionBuilder();
                Expression ex1 = expressionBuilder.get("application")
                        .get("projObjectDefinition").get("uniqueCode")
                        .equal("TABLE-NAME");
                Expression ex2 = expressionBuilder.get("primaryKey")
                        .equal(vo.getPrimaryKey()); // primary key
                Expression finalExpression = ex1.and(ex2);
                ReadAllQuery projectQuery = new ReadAllQuery(
                        FQUtility.classForEntityName("EntryTable"), finalExpression);
                List projList = (List) unitOfWork.executeQuery(projectQuery);
                logger.info("list value1" + projList.size());
                TNProject project_hist = (TNProject) projList.get(0); // primary key value
                logger.info("vo.getId1()" + vo.getId());
                BNDetail detail = project_hist.getDetailForKey("TABLE-NAME");
                project_hist.setNumberString(project_hist.getNumberString());
                project_hist.setName(project_hist.getName());
                String strNumberString = project_hist.getNumberString();
                TNHistory history = FNHistFactory.createHistory(project_hist,
                        "Proj Update");
                history.addDetail("HIST_TABLE-NAME");
                history.setDefaultCategory("HIST_TABLE-NAME");
                BNDetail histDetail = history.getDetailForKey("HIST_TABLE-NAME");
                String strName = project_hist.getName();
                unitOfWork.registerNewObject(histDetail);
                setDetailCCGSHistFields(strNumberString, strName, detail,
                        histDetail);
                logger.info("No Issue");
                TNProject project = (TNProject) projList.get(0);
                project.setName(vo.getName());
                logger.info("vo.getName()" + vo.getName());
                project.setNumberString(vo.getId());
                BNDetail detailObj = project.getDetailForKey("TABLE-NAME"); // required
                // this method gets the values from vo and sets them on the detail object
                setDetailFields(vo, detailObj);
                FNClientSessionManager.commit(unitOfWork);
                updated = true;
                unitOfWork.release();
            } catch (Exception e) {
                logger.warn("update: caused exception, " + e.getMessage());
                unitOfWork.release();
            }
            return updated;
        }

    Now I have tried to change the code a little bit, and I added the following lines:
        updated = true;
        FNClientSessionManager.release(unitOfWork);
        project_hist = null;
        detail = null;
        history = null;
        project = null;
        detailObj = null;
        unitOfWork.release();
        unitOfWork = null;
        expressionBuilder = null;
        ex1 = null;
        ex2 = null;
        finalExpression = null;
    I also added code to request the garbage collector after every 5th update:

        if (updateInfo(vo, user)) {
            logger.info("project update : " + rs.getString("number_string"));
            projCount++;
            // call the garbage collector on every 5th record update
            if (projCount % 5 == 0) {
                System.gc();
                logger.debug("Called garbage collector on " + projCount + "th update");
            }
        }

    But now the code won't even update a single record. So please look into the code and suggest me something so that I can stop banging my head against the wall.
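    Since setFetchSize() reportedly has no effect with the SQL Server 2000 driver, one common workaround is key-based paging: pull a bounded slice per round trip so the driver never materializes the full 400,000-row result in memory. A sketch under those assumptions (the page size is arbitrary, and the paging column must be unique; column and table names are taken from the snippets above):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class PagedUpdate {
            public static void process(Connection con) throws SQLException {
                final int PAGE = 1000;
                String lastKey = "";            // resume point between pages
                String sql = "SELECT TOP " + PAGE + " number_string"
                        + " FROM tablename WHERE field = 'done'"
                        + " AND number_string > ? ORDER BY number_string";
                PreparedStatement ps = con.prepareStatement(sql);
                while (true) {
                    ps.setString(1, lastKey);
                    ResultSet rs = ps.executeQuery();
                    int rows = 0;
                    while (rs.next()) {
                        rows++;
                        lastKey = rs.getString("number_string");
                        // ... apply the business logic / update here ...
                    }
                    rs.close();
                    if (rows < PAGE) {
                        break;                  // last page reached
                    }
                }
                ps.close();
            }
        }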

  • Out of Memory issue in Crystal Reports 2008 SP1

    Hi all,
    I am facing a serious issue in Crystal Reports 2008 SP1.
    When I click Page Setup in Crystal Reports 2008, it prompts "Out of Memory".
    Because of this I am not able to see the default printer in the Page Setup.
    Please give suggestions to resolve this issue.
    Thanks and regards,
    Vinay

    Hi Ed,
    What printer are you using as your default printer?
    What happens if you change your default printer to Microsoft's generic print driver? Only as a test, to rule the printer out as the cause.
    Also, go into Page Setup, check "Dissociate Printer...", and see if that fixes the issue.
    Also include your OS version and patch level and the status of DEP, turn off anti-virus (disconnect from your network while doing this test) and Windows Firewall or any third-party firewall, and close all other running software.
    Thanks again
    Don

  • TopLink causes out of memory issue when millions of records need to be updated

    Hello everyone,
    I am using TopLink 9.0.4 in a batch process. The batch process reads from a temp table (the temp table has millions of records, one month's worth of data, which need to be updated). The database being used is SQL Server 2005. Below is a snippet of the code. It works for 6-7 hours and then crashes due to out of memory:

        ExpressionBuilder expressionBuilder = new ExpressionBuilder();
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("Select * from database tablename where field= 'done'");
        while (rs != null && rs.next()) {
            // vo is the value object obtained from the rs row by row
            if (updateInfo(vo, user, expressionBuilder)) {
                logger.info("updated : " + rs.getString("col_name"));
                projCount++;
            }
        }
        rs.close();
        stmt.close();

        private boolean updateInfo(ProjectVO vo, YNUser tcUser,
                ExpressionBuilder expressionBuilder) {
            boolean updated = false;
            try {
                updated = true;
            } catch (Exception e) {
                logger.warn("update: caused exception, " + e.getMessage());
            }
            return updated;
        }

    Edited by: user8981696 on Jan 14, 2010 1:00 PM

    Thanks for your reply.
    Please find below the answers to your suggestions/concerns:
    "You seem to be using raw JDBC to select all of the records in a single result set; not sure if this may be causing a memory issue. You could try paging through the results instead."
    Ans: I have modified the code to fetch 1000 records each time, and I am getting the ResultSet by using a PreparedStatement instead of a regular Statement object.
    "What type of caching are you using?"
    Ans: No caching is being used. If you have some thoughts on caching, please suggest or post some sample code. Again, no app server is being used; it's just a regular Java batch process, so I don't know how to do caching in a simple Java process.
    "You may also wish to try the latest 9.0.4 patch release, or try the 10.1.3 version, or the latest EclipseLink 2.0 release."
    Ans: Where can I find the latest 9.0.4 patch release?
    Any help/suggestion is really appreciated!
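    On the caching question: TopLink maintains a session-level identity map by default, so every object read by the batch stays referenced by the session cache, which alone can exhaust the heap over millions of rows. A rough sketch of two mitigations, reusing the classes from the code above (hedged: exact method availability should be checked against the 9.0.4 docs, and FNClientSessionManager is this application's own helper):

        // 1) Tell the query not to cache the objects it reads
        //    (dontMaintainCache() is a standard TopLink query option).
        ReadAllQuery query = new ReadAllQuery(
                FQUtility.classForEntityName("EntryTable"), finalExpression);
        query.dontMaintainCache();

        // 2) Acquire and release a fresh UnitOfWork per page of work
        //    instead of holding one across the whole batch, so registered
        //    objects become garbage-collectable between pages.
        UnitOfWork uow = FNClientSessionManager.acquireUnitOfWork(tcUser);
        try {
            // ... process one page of records ...
            FNClientSessionManager.commit(uow);
        } finally {
            uow.release();
        }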
