Out Of Memory Problem With Creator 2

Dear Community,
I'm having a strange problem with JSC2. I have a fairly small project (15 pages, more or less). I added a new page, and it raised a NullPointerException. Then the IDE stopped showing the outline window when selecting the Design View, and started to show Out of Memory errors (Java heap).
Now the IDE refuses to start. It gets stuck at the splash screen when the "opening main window" message appears. Then the "java.exe" process starts to eat RAM up to the limit set by the -Xmx startup switch. No matter how high I set this value, it gets consumed, and the "Out of Memory (java heap)" error dialog always appears. I have 1 GB of RAM.
It seems like a memory leak, but I suspect corruption of either the userdir or the project.
Any hint will be welcome.
Thank you in advance.
Antonio.

I got this before; it could be something to do with a dead data source.
How I fixed the problem was to delete the following folder:
C:\Documents and Settings\Administrator\.Creator\2_0
Make sure you delete the .Creator folder, not Creator.
Before you delete this folder, make a backup of content.xml and the jdbc-drivers folder, so you can paste them back in once Creator has recreated the folder.
So basically: delete the folder, start Creator and let it rebuild its settings, close it, and paste the backed-up files back into the folder.
Obviously this is a drastic measure and your defaults will be reset to factory settings, but by keeping the backed-up files as above you will not need to set up your data sources again. Perform at your own risk; it worked for me but may not for you.
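For anyone who prefers to script that backup-and-delete step, here is a rough Java sketch. The userdir path is the default quoted above and the backup location is an arbitrary choice; the exact locations of content.xml and jdbc-drivers inside 2_0 can vary, so locate them first and adjust the paths before running anything like this.

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.Comparator;
    import java.util.stream.Stream;

    public class ResetCreatorUserdir {

        public static void main(String[] args) throws IOException {
            // Default userdir from the post; adjust the account name for your machine.
            Path userdir = Paths.get("C:\\Documents and Settings\\Administrator\\.Creator\\2_0");
            Path backup  = Paths.get("C:\\creator-userdir-backup");

            // 1. Back up the files worth keeping (adjust these relative paths to
            //    wherever content.xml and jdbc-drivers actually live under 2_0).
            copyRecursively(userdir.resolve("content.xml"), backup.resolve("content.xml"));
            copyRecursively(userdir.resolve("jdbc-drivers"), backup.resolve("jdbc-drivers"));

            // 2. Delete the whole 2_0 folder, deepest entries first, so Creator rebuilds it.
            try (Stream<Path> walk = Files.walk(userdir)) {
                walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
            // 3. Start Creator, let it rebuild the userdir, close it, and copy the
            //    backed-up files into the recreated folder.
        }

        private static void copyRecursively(Path source, Path target) throws IOException {
            if (!Files.exists(source)) {
                return; // nothing to back up at this location
            }
            try (Stream<Path> walk = Files.walk(source)) {
                walk.forEach(p -> {
                    try {
                        Path dest = target.resolve(source.relativize(p));
                        if (Files.isDirectory(p)) {
                            Files.createDirectories(dest);
                        } else {
                            Files.createDirectories(dest.getParent());
                            Files.copy(p, dest, StandardCopyOption.REPLACE_EXISTING);
                        }
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });
            }
        }
    }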

Similar Messages

  • JDBC ResultSet out of memory problem with Scrollable one

    Hey guys,
    I'm facing the following problem when accessing an Oracle 10g database over the Oracle JDBC driver 1.4.
    I need to access the rows of a ResultSet (millions of rows) at least twice. Forward-only doesn't need much memory, but then I can't do rs.beforeFirst().
    Switching to ResultSet.TYPE_SCROLL_SENSITIVE always gives me a java.lang.OutOfMemoryError: Java heap space after rs.next(), and I can see the used memory constantly increasing.
    Here is my test code:
    Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@host:1521:ORCL", "user", "pass");
    Statement my_stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
            ResultSet.CONCUR_READ_ONLY);
    ResultSet result = my_stmt.executeQuery("select * from stats_big");
    System.out.println("Query back :)");
    while (result.next()) {   // the error happens here after about 200 000 rows
        // make statistics
    }
    // ask user what to do
    result.beforeFirst();
    while (result.next()) {
        // apply function and deliver new values
    }
    conn.close();
    Is the implementation caching all the already read rows in memory?
    Any help would be great,
    Alex

    Regarding "if and how there are ways to read a ResultSet 2 or more times forward-only when the data is not completely cached on the client": if the client reads a million rows, how can it read them a second time without either storing them somewhere on the client (as in scrollable cursors) or rereading them from the server (as in regular cursors)? To me this looks like the laws of physics, rather than something that can be changed in a software release.
    In order of preference I would:
    1. Not process a million rows on the client.
    2. Use a regular (forward-only) cursor and fetch it twice; the second fetch will be faster, as the database already caches as much of it as it can (see the sketch below).
    Fetching that much data anywhere is going to take some time anyway, which is why I prefer the first option.
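    A rough sketch of option 2, assuming two forward-only passes over the same query are acceptable; the connection details and fetch size are placeholders, and try-with-resources is used for brevity (with a JDK 1.4-era driver you would close everything in finally blocks instead).

    import java.sql.*;

    public class TwoPassQuery {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@host:1521:ORCL", "user", "pass")) {

                // First pass: gather statistics without caching rows on the client.
                try (Statement stmt = conn.createStatement()) {
                    stmt.setFetchSize(1000);   // stream rows in batches
                    try (ResultSet rs = stmt.executeQuery("select * from stats_big")) {
                        while (rs.next()) {
                            // make statistics from the current row
                        }
                    }
                }

                // Second pass: simply re-execute the query instead of calling beforeFirst().
                try (Statement stmt = conn.createStatement()) {
                    stmt.setFetchSize(1000);
                    try (ResultSet rs = stmt.executeQuery("select * from stats_big")) {
                        while (rs.next()) {
                            // apply the function and deliver new values
                        }
                    }
                }
            }
        }
    }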

  • Memory problem with jdk/jre 1.1.8

    My name is BERGMANN Yannick.
    I'm working for IRM in Liège and we developed an application (a user interface for an industrial measurement system) in Java (JDK/JRE version: 1.1.8).
    We have a big memory problem with this application:
    - The user interface is running on a Windows NT PC with 128 MB of RAM.
    - This is the command used to launch our application:
    "C:\Program Files\JavaSoft\JRE\1.1\bin\jrew.exe" -ms32m -mx32m -cp "\Program Files\HMI\HMI.zip;\Velocis\Add_On\Jdbc\raima.jar;\Program Files\Swing-1.1.1\swingall.jar" be.irm.hmi.kernel.HMI -t15 -d"Velocis rdstcp" -newdb -mf1m -mr20
    - When our application is running, everything seems to be OK in memory: the garbage collector seems to work properly and our application always has at least 5 MB of free memory (we use Runtime.getRuntime().freeMemory() to check this).
    - But when we look at the "jrew" process in the Windows NT Task Manager, its memory ALWAYS increases.
    - After 5 days our application is completely frozen and blocked.
    - Here is a memory map of our Windows NT PC:
    - here is a memory map of our Windows NT PC :
         "jre.exe"     "commit total"     "commit limit"     "commit peak"     "physical total"     "physical available"     "physical file cache"     
    Monday     92264     109256     194944     109424     130484     19492     6216     
    Thuesday     106196     123072     194944     123348     130484     6072     5840     
    Wednesday     110836     132288     194944     132416     130484     4408     5140     
    Thursday     108200     144980     194944     145140     130484     4888     5148     
    Friday     109440     158319     194944     161334     130484     4911     4992     
    Monday     111600     209060     228548     209148     130484     5184     3484     
    Do you have any idea what is happening to "jrew" in memory?
    We have had this problem for six months and we are totally out of ideas.
    If you can give us any idea, we'll appreciate it a lot.
    Thanks in advance,
    BERGMANN Yannick
    IRM SA - Software Engineer
    Tel. (32)4/239.90.10
    Tel. (32)4/239.90.74 (direct)
    Fax (32)4/263.40.97
    E-mail [email protected]

    We had a memory problem with a Swing applet in our company. The main reason was that we added new components to a JTree and later removed them again, and the components we removed were never garbage collected. This was because we had registered various listeners on these components and never removed those listeners once we no longer needed the components. After we corrected this, the components were garbage collected.
    Perhaps you have a similar problem. Check that you remove ActionListeners, MouseListeners, etc. from components you want to be garbage collected.
    You could also test your application with OptimizeIt to see what objects you create and how many you get of them over time: http://www.vmgear.com
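    A minimal sketch of the kind of cleanup described above, assuming a button added to a JTree node needs to be released; the class, method, and listener names are illustrative, not from the original applet.

    import java.awt.event.ActionListener;
    import javax.swing.JButton;
    import javax.swing.JTree;
    import javax.swing.tree.DefaultMutableTreeNode;
    import javax.swing.tree.DefaultTreeModel;

    public class NodeCleanup {

        // Detach the listeners registered on a node's component before removing the
        // node, so nothing keeps the component reachable and it can be collected.
        static void removeNode(JTree tree, DefaultMutableTreeNode node,
                               JButton button, ActionListener listener) {
            button.removeActionListener(listener);   // undo the earlier addActionListener(...)

            DefaultTreeModel model = (DefaultTreeModel) tree.getModel();
            model.removeNodeFromParent(node);        // drop the node from the tree

            // With no listener references left, the node and its component are now
            // eligible for garbage collection.
        }
    }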

  • Out of memory problem using the API

    Hi all,
    I need your assistance. We are working with CDB 10.2, making searches and retrieving documents with all their attributes.
    In our actual scenario we have a single user (which represents an application) accessing CDB. This user uses several persistent sessions simultaneously. I mean, several thousand end users connect to an application that uses one CDB user, and that user connects to CDB with several persistent sessions.
    To simulate this scenario we wrote Java code that opens five threads and makes several searches (requesting all the attributes) using the same user on CDB.
    Retrieving a considerable amount of data from the search (~5000 documents), we hit an "Out of memory" problem in these tests:
    - 5 threads obtaining 100 documents (and all their attributes) per search
    - 1 thread obtaining 500 documents (and all their attributes) per search
    - We also have the same problem if we make several searches with fewer results
    We suppose it's a configuration or code issue, so we ask for your assistance and experience in solving it.
    Thanks for your help,
    Dani
    import java.sql.Connection;
    import oracle.ifs.examples.api.constants.AttributeRequests;
    import oracle.ifs.examples.api.util.CommonUtils;
    import oracle.ifs.fdk.Attributes;
    import oracle.ifs.fdk.ClientUtils;
    import oracle.ifs.fdk.FdkConstants;
    import oracle.ifs.fdk.FdkCredential;
    import oracle.ifs.fdk.ManagersFactory;
    import oracle.ifs.fdk.NamedValue;
    import oracle.ifs.fdk.Options;
    import oracle.ifs.fdk.SearchExpression;
    import oracle.ifs.fdk.SearchManager;
    import oracle.ifs.fdk.SimpleFdkCredential;
    import oracle.jdbc.pool.OracleDataSource;

    public class Prueba {
        public static void main(String[] args) {
            // Five threads sharing the same CDB user, as in the scenario described above.
            Thread thread = new BasicThread1();
            Thread thread1 = new BasicThread1();
            Thread thread2 = new BasicThread1();
            Thread thread3 = new BasicThread1();
            Thread thread4 = new BasicThread1();
            thread.start();
            thread1.start();
            thread2.start();
            thread3.start();
            thread4.start();
        }
    }

    class BasicThread1 extends Thread {
        public void run() {
            ManagersFactory session = null;
            try {
                System.out.println(this.getName() + "-->init");
                session = getSession();
                SearchManager sManager = session.getSearchManager();
                SearchExpression srchExpr = new SearchExpression(Attributes.SIZE,
                        new Integer(20000000), FdkConstants.OPERATOR_LESS_THAN);
                NamedValue[] res = null;
                for (int i = 0; i < 100000; i++) {
                    res = sManager.search(srchExpr, basicSearchOptions2,
                            AttributeRequests.DOCUMENT_CATEGORY_ATTRIBUTES);
                    System.out.println(this.getName() + " --> fin sin error: " + res.length);
                }
            } catch (Throwable t) {
                t.printStackTrace();
                System.out.println("<--" + this.getName());
            } finally {
                CommonUtils.bestEffortLogout(session);
            }
        }

        static NamedValue[] basicSearchOptions2 = new NamedValue[] {
                ClientUtils.newNamedValue(Options.MULTILEVEL_FOLDER_RESTRICTION,
                        Boolean.TRUE),
                ClientUtils.newNamedValue(Options.SEARCH_FOR_DOCUMENTS,
                        Boolean.TRUE),
                ClientUtils.newNamedValue(Options.SEARCH_FOR_FOLDERS,
                        Boolean.FALSE),
                ClientUtils.newNamedValue(Options.RETURN_COUNT,
                        new Integer(500)) // maximum number of elements per search
        };

        private static ManagersFactory getSession() throws Exception {
            OracleDataSource ods = new OracleDataSource();
            ods.setURL("URL");
            Connection conn = ods.getConnection(); // obtained as in the original test, but not used below
            FdkCredential credential = new SimpleFdkCredential("USER", "PSW");
            ManagersFactory session = ManagersFactory.login(credential, "SERVER");
            return session;
        }
    }


  • Flex 4 RichEditableText out of memory problems

    Hello, we're conducting performance testing on the UI of a productivity application we're developing using Adobe AIR. We are using Flash Builder 4 public beta. One part of the performance test is updating the textFlow property of a RichEditableText control every 5 seconds with random data that contains several paragraphs and several images (to mimic a typical news article). We used TextFlowUtil.importFromString(str) to convert the raw data to a textFlow object.
    We found that the above test with the RichEditableText control would quickly crash the AIR runtime with what appear to be out-of-memory problems. If we switched to the (read-only) RichText control, the problem went away. The only material difference between the two controls we could find in the documentation is that the RichEditableText control supports unlimited undo/redo as long as it retains focus. Could this be the source of the memory problem? Regardless, can its behavior be modified to avoid the out-of-memory problems?
    The use case we are trying to simulate is having the productivity app open for weeks at a time, with constant editing and going back and forth between different workflows.
    Thanks for your help, and please let me know if you need additional information.

    Try a more recent build.
    Alex Harui
    Flex SDK Developer
    Adobe Systems Inc.
    Blog: http://blogs.adobe.com/aharui

  • Memory problems with PreparedStatements

    Driver: 9.0.1 JDBC Thin
    I am having memory problems using PreparedStatement via JDBC.
    After profiling our application, we found that a large number of oracle.jdbc.ttc7.TTCItem objects were being created but not released, even though we were closing the ResultSets of our prepared statements.
    Tracing through the application, it appears that most of these TTCItem objects are created when the statement is executed (not when it is prepared), so I would have assumed that they would be released when the ResultSet is closed, but this does not seem to be the case.
    We tend to have a large number of PreparedStatement objects in use (over 100, most with closed ResultSets) and find that our application uses huge amounts of memory compared to the same code with each PreparedStatement closed at the same time as its ResultSet.
    Has anyone else found similar problems? If so, does anyone have a work-around or know if this is something that Oracle is looking at fixing?
    Thanks
    Bruce Crosgrove
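    A minimal sketch of that workaround: close the PreparedStatement together with its ResultSet, so the driver can release its per-statement buffers, instead of keeping 100+ statements open. The table, column, and class names here are made up for illustration, and try-with-resources is modern syntax; on a driver/JDK of that era both objects would be closed in a finally block.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class LookupDao {

        public String findName(Connection conn, int id) throws SQLException {
            try (PreparedStatement ps =
                         conn.prepareStatement("select name from customers where id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            } // both the ResultSet and the PreparedStatement are closed here
        }
    }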

    From your mail, it is not very clear:
    a) whether your session is an HTTPSession or an application-defined session.
    b) what is meant by saying the JSP/Servlet process is growing.
    However, some pointers:
    a) Are there any timeouts associated with the session? (A sketch follows below.)
    b) Try to profile your code to see what is causing the memory leak.
    c) Are there references to stale data in your application code?
    Marilla Bax wrote:
    hi,
    we have some memory problems with the WebLogic Application Server 4.5.1 on Sun Solaris.
    In our customer projects we are working with EJBs. For each customer transaction we create a session to the WebLogic application server.
    Now there are some urgent problems with the Java process on the server: 200-500 KB of memory are allocated for each session, the process on our server keeps growing day after day, and the memory reserved for old sessions is never released. As a workaround we now restart the server every night.
    How can we solve this problem? Is it a problem with the operating system, the application server, or the EJBs? Have you seen problems like this before?
    Greetings from Germany,
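    Illustrating pointer (a) above, a minimal sketch assuming the per-transaction session is an HttpSession managed by a servlet (the servlet class and the timeout value are illustrative, not taken from the original application): give the session a timeout so the container can reclaim it, and invalidate it explicitly once the customer transaction is finished, rather than waiting for a nightly restart.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class TransactionServlet extends HttpServlet {

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);
            session.setMaxInactiveInterval(15 * 60);   // expire idle sessions after 15 minutes

            try {
                // ... perform the customer transaction using the EJBs ...
            } finally {
                session.invalidate();                  // release the memory held per session
            }
        }
    }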

  • Memory problem with 5330 XpressMusic

    Hi,
    I seem to have a memory problem with my 5330. I have a 1 GB memory card installed. My son sent me a photo from his phone, and when it arrived I got a warning on the screen that "there is not enough memory to receive messages". I have tried to delete as much as I can (a lot of it tells me I can't delete it), but I still get the same warning. This warning shows up each time I switch on the phone. How can I get myself more space? Can I move things from the phone memory to the memory card?
    Gerald

    Are you sure you're using a 5330 XpressMusic? Or is it a 5320 XpressMusic? The former is yet to be released, AFAIK.
    Assuming that you are using a 5320, try these steps:
    -> Move all photos, music, videos stored in the phone memory to the memory card. You can use the built-in File Manager to accomplish this.
    -> Delete all files you received via Bluetooth that are sitting in your Inbox. You can save them to the mass storage first.
    -> Clear the browser cache.
    -> Clear the Sent Items folder.
    -> Make sure you don't have too many messages in your inbox.
    Hope this helps
    Cheers,
    DeepestBlue
    5800 XpressMusic (Rock Stable) | N73 Music Edition (Never Say Die) | 1108 (Old and faithful)

  • Memory problem with ITS

    Hello friends,
    We are having a memory problem with the integrated ITS. On one application server all of the ITS memory is getting exhausted. We think a few users take a lot of memory and it never gets released.
    I checked Note 742048 (Integrated ITS, memory requirement in application server) but the parameters look OK. Sometimes, when I kill a user session in SM04, some of the memory is released. However, I am not able to find which sessions are taking the most memory, as I do not see any work process active in SM50.
    In SITSPMON:
    Memory Consumption: Overview
    Sessions:     27      24,710,431 Bytes       915,201 Bytes/Session       2,433,8
    Templates:            14,793,306 Bytes
        Sess. & Templ.    39,503,737 Bytes     Currently available to ITS: 81.92MB o
    ITS Session     memory type     Peak          Memory     Total     Current
    USER 001 2463     Session Memory     1,955,757     938,597     7,574     723
    USER 001 2503     Session Memory     2,008,621     965,245     9,181     930
    USER 001 2523     Session Memory     2,412,477     856,925     11,327     82
    thanks
    ashish

    And what is 2463 in the line "USER (user ID) 001 (client) 2463 (??) Session Memory 1,955,757 938,597 7,574 723"? It is not a work process ID.
    Basically I am trying to correlate the ITS session with the user's SM04 session.

  • Memory problem with my Nokia 3220

    Hi, I am new here and I have a memory problem with my Nokia 3220. I deleted all of my stuff in the Gallery except the BlueSquare theme and the Nokia Tune, but it says that I only have 483 KB of free memory and the Gallery has 1.7 MB taken. What should I do? I hope you understood my problem and I hope you can help me!
    Shibuy
    Thank You!!

    I have had the same problem recently. I bought my Nokia 3220 three years ago and everything was OK, but then I downloaded some free themes from a web site and suddenly realized that my phone has less memory. The phone options say that I have 2.2 MB in my Gallery when actually I have only 800 KB there. I don't know what happened! Maybe it's a virus? What is the solution? Can I reset the phone?

  • Memory Problem With 4gb Crucial Ballistix and Asus M4A785TD-V EVO

    Motherboard: ASUS M4A785TD-V EVO
    BIOS version: 2005
    Video: AMD HD 5750
    Processor: AMD Phenom II X2 550, 3.10 GHz
    Memory: Crucial
    Memory model: blt4g3d1608dt1tx0
    Capacity: 4 GB
    Greetings to all.
    I have an ASUS M4A785TD-V EVO mobo and I bought 4 GB of Crucial Ballistix RAM today. The product says it is 1600 MHz and CL8, but my PC shows it as 1333 MHz and CL9. When I was searching for this problem I came across this topic: http://forum.crucial.com/t5/Crucial-Ballistix-gaming-memory/Memory-Probleme-With-8GO-Crucial-Ballistix-and-Asus-M4A785TD-V/td-p/9464 Can I apply the same settings? How can I fix this? Is it possible via BIOS settings? Thanks in advance, and sorry for my bad English.

    I am not good at RAM settings, but the 'Timings Table' shows there is an XMP-1600 profile available, so I believe it would be enough to turn on that memory profile in the BIOS.

  • Has anyone out there experienced problems with launching apps after downloading the iOS 5.1?

    Has anyone out there experienced problems with launching apps after downloading the iOS 5.1?

    I tried calling support and they want to sell me $79 worth of support in order to get it straightened out!

  • For everyone having memory problems with this board

    I have been working on my memory problems with this board. Memtest failed every pass, yet the memory was fine and the board was fine. I was using Kingston ValueRAM.
    Set the BIOS version back to 1.7.
    Set performance to Fast.
    Set the memory timings to 2.5-4-4-8.
    Set the voltage to 2.7; if that does not work, go to 2.75.
    It is now working great: no more blue screens or the OS getting corrupted. It seems to run like a charm.
    Thanks to the people who have posted useful information, even though you have to dig deep to find it.

    I was having memory problems with this board too, but I set BIOS v2.2, RAM voltage 2.75, performance mode Slow, and RAM timing by SPD.
    That seems to work great for me, but my video card is messed up, so my computer dies every once in a while.

  • Advice needed: The way to solve out of memory problem (or the way to work with big csv files)

    Hello:)
    I'm in trouble: I have a big CSV file (over 5 GB of web-analytics data), 64-bit Excel, and 6 GB of RAM.
    I can't load the file into the data model because of its size; Power Query gives an "out of memory" error.
    This is the first time I have encountered such a problem.
    What options do I have for working with such a file? Increase the memory in my computer? Would that solve the problem? How much would I need to work with a 6 GB CSV?
    Or maybe I can upload my data somewhere on Azure and work with it there?
    So the question is: is there any way to deal with big files using Power Query? Or do I need to become a developer and learn SQL or other languages?
    Thanks in advance.
    Max

    Hi Miguel!
    Thanks for your answer.
    I've tried to load this file on a virtual PC in the Azure cloud, and I have increased the memory limit in the Power Query settings, but the problem is still the same (the original post included screenshots of the VM configuration, the Power Query setting, and the error).
    What am I doing wrong?
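    As a rough sketch of the "become a developer" option raised in the question, the file can be streamed line by line and aggregated on the fly, so that only running totals stay in memory instead of the whole multi-gigabyte CSV. The file name, column positions, and the metric being summed below are made-up placeholders.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.stream.Stream;

    public class CsvAggregate {
        public static void main(String[] args) throws IOException {
            Map<String, Long> visitsPerPage = new HashMap<>();

            try (Stream<String> lines = Files.lines(Paths.get("web-analytics.csv"))) {
                lines.skip(1)                                   // skip the header row
                     .map(line -> line.split(",", -1))
                     .forEach(cols -> {
                         String page = cols[0];                 // assumed: first column is the page
                         long visits = Long.parseLong(cols[3]); // assumed: fourth column is a count
                         visitsPerPage.merge(page, visits, Long::sum);
                     });
            }

            visitsPerPage.forEach((page, total) ->
                    System.out.println(page + " -> " + total));
        }
    }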

  • Memory Problems with Adobe PDF iFilter for 64-bit

    In preparation for rebuilding my Windows Search index, I installed the Adobe PDF iFilter for 64-bit on my system (Vista Business 64). When I finally rebuilt the index, I wasn't too surprised by what I saw happen: the SearchFilter.exe process would kick in whenever I wasn't using the system and just eat RAM. One time I turned the machine on and it had allocated over 4,000 MB (and my system only has 4,030 MB available), so of course it was forcing all the other processes to hard fault (i.e. everything was moving like molasses; for example, it took 20 minutes to put the thing to sleep). But I just let it do its work, figuring that perhaps this was to be expected relative to the small library of PDFs that I've accumulated on my computer, ranging from LaTeX-generated text files to containers for hi-res scans. So, after a day and a half of basically not using my laptop, everything finally calmed down and I enjoyed the benefits of searching the content of my library from the Windows Start menu, for a short while.
    However, to my dismay, the freezing of my computer now occurred every time I downloaded a new PDF (in this particular case they were Google Books scans) and then left the computer to idle. Again, SearchFilter.exe would allocate all of my RAM for itself and push everything else into virtual memory, which means the slowest possible fetching you can get. I had to uninstall the iFilter, as this was making my computer unusable for 15-30 minutes after each idle period. Everything is back in working order without the iFilter, but I would like to know if anyone has reported such problems on x64 systems. Obviously, I will also report the problem to Microsoft, since the search engine should certainly have the precaution to handle such memory problems. However, it is a problem created by the Adobe PDF iFilter interacting with the Windows Search engine.

    Hello,
    We believe we have figured this out.  It looks like it has to do with the length of the default folder location for the Adobe iFilter.
    I was able to reproduce the issue and the following resolved it for me.  See if this resolves it for you all as well.
    Here is how to get the Adobe version 11 PDF iFilter to work.
    1. If you haven't already, run the following in SQL Server:
    sp_fulltext_service 'load_os_resources', 1
    GO
    -- you might also need to run:
    -- sp_fulltext_service 'verify_signature', 0  -- this is used to validate trusted iFilters; 0 disables it, so use with caution
    -- GO
    2. Stop SQL Server. (Make sure FDHost.exe stops.)
    3. Uninstall the Adobe iFilter (because it defaulted to a folder name that contains spaces or is too long).
    4. Reinstall the Adobe iFilter, and when it prompts for where to install it, change the location to: C:\Program Files\Adobe\PDFiFilter
    5. Once the installation finishes, go to the computer's Environment Variables and add the following to the PATH:
    C:\Program Files\Adobe\PDFiFilter\BIN
    NOTE: it must include the BIN folder.
    NOTE: if you had the OLD location that included spaces, remove it from the PATH environment variable.
    6. Start SQL Server.
    7. If you had an existing full-text index on PDFs, drop the full-text index and recreate it.
    8. You should now get results when you run sys.dm_fts_index_keywords('db','tblname')  -- Note: change db to the actual database name and tblname to the actual table name.
    Give this a try and see if this fixes it for you.
    Sincerely,
    Rob Beene, MSFT

  • Memory problems with my a215?

    Hi, a little over a year ago I bought (unfortunately) an A215-4817, and since that day I have had nothing but problems. Now I am having problems with the memory, I think. At first my laptop froze, the keyboard stopped responding, or those famous blue screens came up. In all cases I had to shut down the computer by holding the power button for a few seconds; after that, when I tried to start it again it didn't do anything. I could only hear the fan working and some disk-reading noise, but the screen stayed off and never even showed the Toshiba logo. The only way I could fix this was by first taking the battery out and pressing the power button for more than 10 seconds. Later, I had to take both memory modules out as well as the battery. At that point I went to the Windows Event Viewer and saw some "WHEA-Logger" warnings that talk about "TLB errors", "bus or interconnect errors" and "memory hierarchy errors". Today the only way I can boot up my computer is with only one memory module placed in the first slot, which is why I am using it with only 1 GB. I am really lost with this; I have run a lot of memory tests (Microsoft Vista has one, and Memtest86+ comes on the Ubuntu CD) and none of them threw any errors.
    Could the memory controller for slot 2 have broken? Or maybe something with the dual-channel controller? Could it be a problem with the BIOS? I have version 2.0; it never gave me problems, but could it be corrupt? I have tried to reinstall it, but the installer doesn't offer the option to reinstall. Where can I download the old version so I can reinstall version 2.0? My last question: does somebody know where at Toshiba I should write to tell them how mad I am at them? This laptop was the worst thing I have ever bought, and their customer service in Latin America (I am from Argentina) is terrible.
    So if somebody has any idea of what the problem could be, I will be grateful. Sorry about my English.
    Thanks, and greetings from Argentina!

    Your specs: AMD Turion 64 X2 Dual-Core Mobile Technology TL-58 • AMD M690V chipset • Memory: configured with 2048 MB DDR2 SDRAM
    You have a really good unit, motherboard-wise. With those initial problems you should have taken it to an authorized service provider and had the unit checked out. Be that as it may, why you took out the memory modules is beyond me, and yes, it is feasible that damage to the board housing or a module may have occurred. Look at the module(s) and see if you observe cracks in the gold contacts or the board, especially where the split notch is. I suggest that you recheck the seating of the memory module until you hear a click. Check within the housing area for a small single cable and, if it is there, ensure it is connected.
    Regarding the WHEA-Logger situation, the response is as follows.
    A corrected hardware error occurred.
    Error Source: Corrected Machine Check
    Error Type: Memory Hierarchy Error
    TLB = translation lookaside buffer (part of the MMU used to perform virtual address mapping). Machine check errors are hardware errors; this error is telling you that the CPU detected a hardware error but was able to correct it. This is not a good sign; check temperatures and voltages. Disabling Windows reporting of machine check errors will simply ignore the problem, which will still exist (and it is highly likely that if you are getting correctable errors you are also sometimes getting uncorrectable ones).
    To resolve: locate the Recovery disc and reset to factory specs, after backing up all items you wish to save.
