[SOLVED] ffmpeg 1:1.0.1-1 memory leak? it was the file nevermind

Running devede with ffmpeg 1:1.0.1-1, my memory usage went from about 1.29 GB to 18 GB while encoding a film. I thought it might be my setup, so I downgraded ffmpeg to run a test, and sure enough no large amount of memory was needed (about 2.19 GB at max). Back to normal. Can anyone else verify this? I searched the bug tracker and didn't see anything posted. Is there something new ffmpeg does that requires more space?
Edit: Nope. It was the file I was working on. I should have tried another file, I guess. It is a little disturbing that a video encode could do this, but nonetheless it was an issue on my end.
Mod: Please throw this post in the trash if you want.
Last edited by dodo3773 (2013-01-28 00:56:55)

hjorthboggild wrote:Sunnemer, one more question for you: Can you have the Subversion package existing on your system when using Eclipse and SVN (with the JavaSVN option)?
I've installed subversion and eclipse via pacman, and the subclipse plugin via the Eclipse update manager. I set up a local repository and was able to connect to my running svnserve without any problem. If I remember correctly, JavaSVN doesn't support accessing the repository with file://path/to/repository/project.
Set-up steps:
1. edit /etc/conf.d/svnserve to set the repository root (-r /path/to/repository)
2a. svnadmin create /path/to/my/repository/root/project
2b. edit /path/to/my/repository/root/project/conf/*
3. svn co svn://localhost/project
4. create some files and add them to your repository
svn add filenames
5. svn commit
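Step 2b above (editing the new repository's conf/* files) usually amounts to enabling access control in conf/svnserve.conf. A minimal sketch - the values here are examples only, not taken from this thread:

```
[general]
# anonymous users may read, authenticated users may write
anon-access = read
auth-access = write
# user/password pairs live in conf/passwd, one "name = password" per line
password-db = passwd
realm = My Repository
```

With password-db enabled, `svn co svn://localhost/project` will prompt for credentials on commit.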
How did you set up your Subversion repository? Have you ever tried to check out an existing project with:
svn co svn://localhost/my/repository
Maybe there is something wrong with your server configuration. Have a look at http://svnbook.red-bean.com/ .
Greets
Sunnemer

Similar Messages

  • Not enough available memory. Cannot upload the file. [Android]

    Why do I get an error message that says "Not enough available memory. Cannot upload the file." when trying to save a PDF to the Document Cloud? The PDF is 105 MB and I am using a Samsung Galaxy Note 3.

    I tried un-syncing the books from iTunes as well as deleting the app; after reinstalling the app and re-syncing the books, it appears to be OK now.

  • Getting 'Out of memory' error while opening the file. I have tried several versions of Adobe (7.0, 9.0, XI). It is causing an issue converting PDF into TIFF. Please provide the solution ASAP

    Hello All,
    I am getting an 'Out of memory' error while opening the file. I have tried several versions of Adobe (7.0, 9.0, XI).
    Also, it is causing an issue converting PDF into TIFF. Please provide a solution ASAP.

    I am using Adobe Reader XI. When I open the PDF it gives an "Out of memory" error; after scrolling, the PDF gives another alert, "Insufficient data for an image". After clicking through both alerts it loads the full data of the PDF. It is not happening with all PDFs - only a couple of PDFs show this issue. Because of this error my software is not able to print these PDFs to TIFF. My OS is Windows 7 x64. I tried it on Win2012R2 and XP; the same issue occurs there.
    It has become a critical issue for my production.

  • Detect memory leak in JNI so files for linux and Solaris

    I have to find the memory leaks in JNI .so files for Solaris and Linux. I have solved the leaks using Purify on Windows but am not getting appropriate support for Linux. Any pointers to tools will help. I tried Valgrind on Linux, but it is not giving me the exact location of the leak as Purify does, and Purify's support is for 32-bit only. Valgrind is not showing any functions in the .so files. Is JNI not supported in Purify for Solaris? Please help.

    amol28 wrote:
    I have to find the memory leaks in JNI .so files for Solaris and Linux. I have solved the leaks using Purify on Windows but am not getting appropriate support for Linux. Any pointers to tools will help. I tried Valgrind on Linux, but it is not giving me the exact location of the leak as Purify does, and Purify's support is for 32-bit only. Valgrind is not showing any functions in the .so files. Is JNI not supported in Purify for Solaris? Please help.

    If you have written the JNI itself (the Java calls, methods, etc.) to be OS-agnostic, then it shouldn't matter. In that case you check the Windows code (not JNI), the Linux code (not JNI) and the JNI code itself, independently of each other.
    If you haven't made the JNI OS-agnostic, the question would be: why not?
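    One practical note on the Valgrind symptom described above: Valgrind reports the allocation call stack, so it can only name functions inside a .so that was built with debug symbols (-g) and not stripped; otherwise the frames appear as bare addresses, which matches "not showing any functions". A minimal standalone sketch (plain C, not the poster's JNI code) of the kind of leak that `valgrind --leak-check=full` reports as "definitely lost" - in real JNI code the analogous mistake is calling GetStringUTFChars without a matching ReleaseStringUTFChars:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: returns a heap-allocated copy of a string.
 * If the caller never calls free() on the result, Valgrind's
 * --leak-check=full flags the malloc() below as "definitely lost",
 * pointing at this file and line when compiled with -g. */
char *copy_name(const char *s)
{
    char *buf = malloc(strlen(s) + 1);  /* allocation site Valgrind reports */
    if (buf != NULL)
        strcpy(buf, s);
    return buf;
}
```

    Build the native library with -g and without stripping, then run the Java test harness under `valgrind --leak-check=full` so the leak report can resolve symbols inside the .so.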

  • Editing in Aperture is doing crazy things. I've read that it is memory leaks. What's the cure?

    Aperture is doing crazy things while editing, like leaving spots on faces when using the skin smoothing tool, among other things. Closing and reopening fixes it, but how annoying. I've read that this is caused by memory leaks. What do I do to fix it?

    What is your Aperture version?
    And your MacOS X version?
    With Aperture 3 make sure you have the latest versions, Aperture 3.5.1 and OS 10.9.3.
    The memory leaks you are referring to occurred with MacOS 10.9.1 and onscreen proofing enabled.
    If your versions are current, check your plug-ins for compatibility.

  • Mdworker memory leak when indexing m4a files?

    Hello!
    I just upgraded to ML from SL and I really like it so far. In fact I have only one problem that I can't seem to find any solution to, and I'm almost certain it's a bug. But I also can't really find any other information on it anywhere, which seems strange if it were a real bug.
    Anyway, the problem is with Spotlight. After I installed a clean copy of ML it did its first indexing with no problems. Then I started copying data from my Time Machine backup, and all was well until I reached my iTunes library. At this point my Mac almost froze; I could barely move the mouse or issue any command. I had a Terminal window open, so I could force a reboot without forcing a shutdown.
    I started investigating, and as far as I can tell indexing has no problems with any file except m4a files. Normally during indexing both CPU and memory increase slightly for a brief period and then go back down. When indexing an m4a file (seen on the Open Files tab in Activity Monitor), one mdworker almost immediately eats all free memory (around 3 GB usually), and then the kernel starts paging gigabytes of data for around 30 minutes per file, during which the OS is almost frozen. Then this mdworker disappears and mds takes over for a while, until another mdworker appears and repeats the cycle.
    This wouldn't be a problem except my iTunes library is pretty large and it would take a looooong time to index everything. For now I simply added my library to the Privacy tab in Spotlight, which solved the problem until I tried to do a Time Machine backup, which seems to use Spotlight in some fashion and triggers indexing again.
    In the meantime, system.log is full of messages like these:
    Aug 28 22:27:44 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10130]): Exited: Killed: 9
    Aug 28 22:27:44 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10130 [SleepServicesD]
    Aug 28 22:27:56 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10131]): Exited: Killed: 9
    Aug 28 22:27:57 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10131 [SleepServicesD]
    Aug 28 22:28:10 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10132]): Exited: Killed: 9
    Aug 28 22:28:11 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10132 [SleepServicesD]
    Aug 28 22:28:24 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10133]): Exited: Killed: 9
    Aug 28 22:28:25 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10133 [SleepServicesD]
    Aug 28 22:28:36 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10135]): Exited: Killed: 9
    Aug 28 22:28:37 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10135 [SleepServicesD]
    Aug 28 22:28:49 macbookpro kernel[0]: (default pager): [KERNEL]: ps_select_segment - send HI_WAT_ALERT
    Aug 28 22:28:49 macbookpro kernel[0]: (default pager): [KERNEL]: ps_vstruct_transfer_from_segment - ABORTED
    Aug 28 22:28:49 macbookpro kernel[0]: macx_swapoff FAILED - 35
    Aug 28 22:29:04 macbookpro WindowServer[82]: CGXDisableUpdate: UI updates were forcibly disabled by application "Finder" for over 1.00 seconds. Server has re-enabled them.
    Aug 28 22:29:04 macbookpro WindowServer[82]: reenable_update_for_connection: UI updates were finally reenabled by application "Finder" after 9.48 seconds (server forcibly re-enabled them after 9.29 seconds)
    Aug 28 22:29:07 macbookpro kernel[0]: macx_swapon SUCCESS
    Aug 28 22:29:18 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10136]): Exited: Killed: 9
    Aug 28 22:29:18 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10136 [SleepServicesD]
    Aug 28 22:29:40 macbookpro kernel[0]: (default pager): [KERNEL]: default_pager_backing_store_monitor - send LO_WAT_ALERT
    Aug 28 22:29:40 macbookpro kernel[0]: macx_swapoff SUCCESS
    Aug 28 22:29:50 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10139]): Exited: Killed: 9
    Aug 28 22:29:51 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10139 [SleepServicesD]
    Aug 28 22:30:09 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10140]): Exited: Killed: 9
    Aug 28 22:30:09 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10140 [SleepServicesD]
    Aug 28 22:30:13 macbookpro kernel[0]: (default pager): [KERNEL]: default_pager_backing_store_monitor - send LO_WAT_ALERT
    Aug 28 22:30:11 macbookpro netbiosd[81]: name servers down?
    Aug 28 22:30:13 macbookpro WindowServer[82]: CGXDisableUpdate: UI updates were forcibly disabled by application "Finder" for over 1.00 seconds. Server has re-enabled them.
    Aug 28 22:30:25 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10141]): Exited: Killed: 9
    Aug 28 22:30:25 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10141 [SleepServicesD]
    Aug 28 22:30:39 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10144]): Exited: Killed: 9
    Aug 28 22:30:39 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10144 [SleepServicesD]
    Aug 28 22:30:51 macbookpro WindowServer[82]: disable_update_likely_unbalanced: UI updates still disabled by application "Finder" after 41.72 seconds (server forcibly re-enabled them after 1.22 seconds). Likely an unbalanced disableUpdate call.
    Aug 28 22:30:52 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10146]): Exited: Killed: 9
    Aug 28 22:30:52 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10146 [SleepServicesD]
    Aug 28 22:31:06 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10147]): Exited: Killed: 9
    Aug 28 22:31:07 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10147 [SleepServicesD]
    Aug 28 22:31:20 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10148]): Exited: Killed: 9
    Aug 28 22:31:21 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10148 [SleepServicesD]
    Aug 28 22:31:26 macbookpro WindowServer[82]: reenable_update_for_connection: UI updates were finally reenabled by application "Finder" after 76.64 seconds (server forcibly re-enabled them after 1.22 seconds)
    Aug 28 22:31:29 macbookpro com.apple.launchd[1] (com.apple.cfprefsd.xpc.daemon[10150]): Exited: Killed: 9
    Aug 28 22:31:30 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10150 [cfprefsd]
    Aug 28 22:31:30 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10149 [cfprefsd]
    Aug 28 22:31:31 macbookpro com.apple.launchd.peruser.501[133] (com.apple.cfprefsd.xpc.agent[10149]): Exited: Killed: 9
    Aug 28 22:31:34 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10151]): Exited: Killed: 9
    Aug 28 22:31:35 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10151 [SleepServicesD]
    Aug 28 22:31:46 macbookpro com.apple.launchd[1] (com.apple.sleepservicesd[10154]): Exited: Killed: 9
    Aug 28 22:31:47 macbookpro kernel[0]: memorystatus_thread: idle exiting pid 10154 [SleepServicesD]
    Aug 28 22:31:53 macbookpro mdworker[10116]: (Normal) Import: Spotlight giving up on importing file after 304.275 seconds, (300.442 seconds in Spotlight importer plugin) - diagnostic:0 - find suspect file using: sudo mdutil -t 817999
    Any ideas or just wait until 10.8.2?

    I have this problem too, exactly same thing but on a fresh install. Copied over my music from a time machine backup and suddenly machine is no longer usable, Activity Monitor shows mdworker using >3GB of real memory. I force quit the process and it just starts again a few seconds later.
    I added Music to my privacy list in the Spotlight system preferences and it's indexing fine now.
    BTW - bug reported to Apple. For others with this problem, let Apple know about it: http://www.apple.com/feedback/macosx.html

  • Adobe Version 9: Memory Leak Issue causing the application to crash.

    We are facing an issue with Adobe Acrobat Reader version 9. The application crashes with an unhandled system exception (Access violation reading location 0x1003a24e).
    Our application is a .NET Windows-based application, in which we use a WebBrowser control to view PDF files.
    The application works fine with Adobe version 8 and 10, but there is an issue specific to version 9, as described in the Adobe forum.
    We tried using the CoFreeUnusedLibraries function on the form close event of the document viewer as a workaround, but it didn't help.
    The client is not willing to move to Adobe Reader version 10. Can anyone facing a similar issue with version 9 please assist?
    Thanks,
    Dev

    We are having the same issue. Have you found a way to fix this or to work around it?
    Thank you,
    Susan

  • I am trying to save files to an external memory device, but when I drop a file on the device's icon it does not allow me to do it. Is this normal?

    Do all external memory devices work on the MacBook Air?

    Your problem may be that the external drive has not been formatted for your MBA. Open Disk Utility > Erase and format it to Mac OS Extended (Journaled). Note that this will ERASE ALL DATA on that device.
    Ciao.

  • Oracle JDBC Thin Driver Memory leak in scrollable result set

    Hi,
    I am using Oracle 8.1.7 with the Oracle thin JDBC driver (classes12.zip) and JRE 1.2.2. When I use a scrollable result set and fetch records with the default fetch size, I run into memory leaks. When the number of records fetched is large (10,000 records), over a period of use I get an "OutOfMemory" error because of the leak. There is no use increasing the heap size, as the leak is there anyhow.
    I tried using OptimizeIt and found there is a huge amount of memory leaked for each execution of a scrollable result set, and this leak is proportional to the number of records fetched. The memory is not released even when I set the result set and statement objects to null. Also, when I use methods like ScrollableResultSet.last(), the leak increases.
    So is this a problem with the driver, or am I doing something wrong?
    If some of you could help me with a solution, it would be appreciated. If needed, I can provide some statistics on these memory leaks from OptimizeIt and share the code.
    Thanks
    Rajesh

    This thread is ancient and the original was about the 8.1.7 drivers. Please start a new thread. Be sure to include driver and database versions, stack traces, sample code and why you think there is a memory leak.
    Douglas
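    For anyone landing on this old thread with the same symptom: memory held even after the result set reference is set to null is usually addressed by closing the ResultSet and Statement explicitly, because the driver can buffer scrollable-cursor rows until close() is called, and an unreachable reference only helps once the driver itself lets go. A general sketch of the close-in-finally pattern (the class and helper names below are mine, not from this thread; modern code would use try-with-resources):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcCleanup {
    /** Close JDBC resources in order, ignoring errors raised during close. */
    public static void closeQuietly(AutoCloseable... resources) {
        for (AutoCloseable r : resources) {
            if (r != null) {
                try { r.close(); } catch (Exception ignored) { }
            }
        }
    }

    /** Sketch: iterate a result set and guarantee release of driver buffers. */
    public static int countRows(Connection con, String sql) throws Exception {
        Statement st = null;
        ResultSet rs = null;
        try {
            st = con.createStatement();
            rs = st.executeQuery(sql);
            int n = 0;
            while (rs.next()) n++;
            return n;
        } finally {
            closeQuietly(rs, st);  // always runs, even when executeQuery throws
        }
    }
}
```

    closeQuietly tolerates nulls, so it can be called unconditionally from a finally block even when an exception fired before every resource was opened.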

  • Marshalling Web Service Response Memory Leak

    I believe I have found a memory leak when using the configuration below.
    The memory leak occurs when calling a web service. When the web service function is marshalling the response of the function call, a "500 Internal Server Error ... java.lang.OutOfMemoryError" is returned from OC4J. This error may be seen via the TCP Packet Monitor in JDeveloper.
    Unfortunately, no exception dump is output to the OC4J log.
    Configuration:
    Windows 2000 with 1 GB RAM
    JDeveloper 9.0.5.2 with JAX/RPC extension installed
    OC4J 10.0.3
    Sun JVM version 1.4.2_03-b02
    To demonstrate the error I created a simple web service and client; the client and the web service function are shown below.
    The web service is made up of a single function called "queryTestOutput".
    It returns an object of class "TestOutputQueryResult", which contains an int and an array.
    The function call accepts one int input parameter, which is used to vary the size of the array in the returned object.
    For a small int (less than 100), the web service function returns successfully.
    For a larger int, depending on the size of the memory configuration when OC4J is launched, the OutOfMemoryError is returned.
    The package "ws_issue.service" contains the web service.
    I used the Generate JAX-RPC proxy to build the client (found in package "ws_issue.client"). Package "types" was also created by the Generate JAX-RPC proxy.
    To test the web service call, execute the class runClient. Vary the int "atestValue" until the error is returned.
    I have tried this with all three encodings: RPC/Encoded, RPC/Literal, Document/Literal. They all have the same issue.
    The OutOfMemoryError is raised fairly consistently, using the Java settings -Xms386m -Xmx386m for OC4J, when 750 is specified for the input parameter.
    I also noticed that when 600 is specified, the client seems to hang. According to the TCP Packet Monitor, the response is returned, but the client seems unable to unmarshal the message.
    ** file runClient.java
    // -- this client is using Document/Literal
    package ws_issue.client;

    public class runClient
    {
        public runClient() { }

        /**
         * Test out the web service.
         * Play with the atestValue variable until the exception appears.
         * @param args unused
         */
        public static void main(String[] args)
        {
            //runClient runClient = new runClient();
            long startTime;
            int atestValue = 1;
            atestValue = 2;
            //atestValue = 105; // last value to work with default memory settings in OC4J
            //atestValue = 106; // out-of-memory error as seen in TCP Packet Monitor; fails with default memory settings in OC4J
            //atestValue = 600; // hangs client (TCP Packet Monitor shows response) when OC4J memory settings are -Xms386m -Xmx386m
            atestValue = 750;   // out-of-memory error as seen in TCP Packet Monitor when OC4J memory settings are -Xms386m -Xmx386m
            try
            {
                startTime = System.currentTimeMillis();
                Ws_issueInterface ws = (Ws_issueInterface) (new Ws_issue_Impl().getWs_issueInterfacePort());
                System.out.println("Time to obtain port: " + (System.currentTimeMillis() - startTime));

                // call the web service function
                startTime = System.currentTimeMillis();
                types.QueryTestOutputResponse qr = ws.queryTestOutput(new types.QueryTestOutput(atestValue));
                System.out.println("Time to call queryTestOutput: " + (System.currentTimeMillis() - startTime));

                startTime = System.currentTimeMillis();
                types.TestOutputQueryResult r = qr.getResult();
                System.out.println("Time to call getResult: " + (System.currentTimeMillis() - startTime));

                System.out.println("records returned: " + r.getRecordsReturned());
                for (int i = 0; i < atestValue; i++)
                {
                    types.TestOutput t = r.getTestOutputResults()[i];
                    System.out.println(t.getTestGroup() + ", " + t.getUnitNumber());
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }
    ** file wsmain.java
    package ws_issue.service;

    import java.rmi.RemoteException;
    import javax.xml.rpc.ServiceException;
    import javax.xml.rpc.server.ServiceLifecycle;

    public class wsmain implements ServiceLifecycle, ws_issueInterface
    {
        public wsmain() { }

        public void init(Object p0) throws ServiceException { }

        public void destroy()
        {
            System.out.println("inside ws destroy");
        }

        /** Create an element of the array with some hardcoded values. */
        private TestOutput createTestOutput(int cnt)
        {
            TestOutput t = new TestOutput();
            t.setComments("here are some comments");
            t.setConfigRevisionNo("1");
            t.setItemNumber("123123123");
            t.setItemRevision("arev" + cnt);
            t.setTestGroup(cnt);
            t.setTestedItemNumber("123123123");
            t.setTestedItemRevision("arev" + cnt);
            t.setTestResult("testResult");
            t.setSoftwareVersion("version");
            t.setTestConditions("conditions");
            t.setStageName("world's a stage");
            t.setTestMode("Test");
            t.setTestName("test name");
            t.setUnitNumber("UnitNumber" + cnt);
            return t;
        }

        /**
         * Web service function that is called.
         * Creates recCnt "records" to be returned.
         */
        public TestOutputQueryResult queryTestOutput(int recCnt) throws RemoteException
        {
            System.out.println("Inside web service function queryTestOutput");
            TestOutputQueryResult r = new TestOutputQueryResult();
            TestOutput[] TOArray = new TestOutput[recCnt];
            for (int i = 0; i < recCnt; i++)
            {
                TOArray[i] = createTestOutput(i);
            }
            r.setRecordsReturned(recCnt);
            r.setTestOutputResults(TOArray);
            System.out.println("End of web service function call");
            return r;
        }

        public static void main(String[] args)
        {
            wsmain wsmain = new wsmain();
            int aval = 5;
            try
            {
                TestOutputQueryResult r = wsmain.queryTestOutput(aval);
                for (int i = 0; i < aval; i++)
                {
                    TestOutput t = r.getTestOutputResults()[i];
                    System.out.println(t.getTestGroup() + ", " + t.getUnitNumber());
                }
            }
            catch (Exception e)
            {
                e.printStackTrace();
            }
        }
    }
    ** file ws_issueInterface.java
    package ws_issue.service;

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    public interface ws_issueInterface extends Remote
    {
        public TestOutputQueryResult queryTestOutput(int recCnt) throws RemoteException;
    }
    ** file TestOutputQueryResult.java
    package ws_issue.service;

    public class TestOutputQueryResult
    {
        private long recordsReturned;
        private TestOutput[] testOutputResults;

        public TestOutputQueryResult() { }

        public long getRecordsReturned() { return recordsReturned; }
        public void setRecordsReturned(long recordsReturned) { this.recordsReturned = recordsReturned; }
        public TestOutput[] getTestOutputResults() { return testOutputResults; }
        public void setTestOutputResults(TestOutput[] testOutputResults) { this.testOutputResults = testOutputResults; }
    }
    ** file TestOutput.java
    package ws_issue.service;

    public class TestOutput
    {
        private String itemNumber;
        private String itemRevision;
        private String configRevisionNo;
        private String testName;
        private String testConditions;
        private String stageName;
        private String testedItemNumber;
        private String testedItemRevision;
        private String unitNumber;
        private String testStation;
        private String testResult;
        private String softwareVersion;
        private String operatorID;
        private String testDate; // to be datetime
        private String comments;
        private int testGroup;
        private String testMode;

        public TestOutput() { }

        public String getComments() { return comments; }
        public void setComments(String comments) { this.comments = comments; }
        public String getConfigRevisionNo() { return configRevisionNo; }
        public void setConfigRevisionNo(String configRevisionNo) { this.configRevisionNo = configRevisionNo; }
        public String getItemNumber() { return itemNumber; }
        public void setItemNumber(String itemNumber) { this.itemNumber = itemNumber; }
        public String getItemRevision() { return itemRevision; }
        public void setItemRevision(String itemRevision) { this.itemRevision = itemRevision; }
        public String getOperatorID() { return operatorID; }
        public void setOperatorID(String operatorID) { this.operatorID = operatorID; }
        public String getSoftwareVersion() { return softwareVersion; }
        public void setSoftwareVersion(String softwareVersion) { this.softwareVersion = softwareVersion; }
        public String getStageName() { return stageName; }
        public void setStageName(String stageName) { this.stageName = stageName; }
        public String getTestConditions() { return testConditions; }
        public void setTestConditions(String testConditions) { this.testConditions = testConditions; }
        public String getTestDate() { return testDate; }
        public void setTestDate(String testDate) { this.testDate = testDate; }
        public String getTestName() { return testName; }
        public void setTestName(String testName) { this.testName = testName; }
        public String getTestResult() { return testResult; }
        public void setTestResult(String testResult) { this.testResult = testResult; }
        public String getTestStation() { return testStation; }
        public void setTestStation(String testStation) { this.testStation = testStation; }
        public String getTestedItemNumber() { return testedItemNumber; }
        public void setTestedItemNumber(String testedItemNumber) { this.testedItemNumber = testedItemNumber; }
        public String getTestedItemRevision() { return testedItemRevision; }
        public void setTestedItemRevision(String testedItemRevision) { this.testedItemRevision = testedItemRevision; }
        public String getUnitNumber() { return unitNumber; }
        public void setUnitNumber(String unitNumber) { this.unitNumber = unitNumber; }
        public int getTestGroup() { return testGroup; }
        public void setTestGroup(int testGroup) { this.testGroup = testGroup; }
        public String getTestMode() { return testMode; }
        public void setTestMode(String testMode) { this.testMode = testMode; }
    }

    Many thanks for your help. This solved the issue for our .NET code; however, the leak is still present in the report designer. I was also wondering if you could help further: because of the limits on the Java memory process, is there a way to ensure that a separate Java process is started for each report that is loaded in my report viewer collection? Essentially, the desktop application I have created uses a tab control to display each report, so each tab goes through the following code when displaying a report and closing a tab.
    Is there a way to ensure that a different Java process is kicked off each time I display a different report? My current code in C# always uses the same Java process, so the memory ramps up. The code to load the report, and then dispose of it by closing the tab (and now the Java process), looks like this:
        private void LoadCrystalReport(string FullReportName)
        {
            ReportDocument reportDoc = new ReportDocument();
            reportDoc.Load(FullReportName, OpenReportMethod.OpenReportByTempCopy);
            this.crystalReportViewer1.ReportSource = reportDoc;
        }

        private void DisposeCrystalReportObject()
        {
            if (crystalReportViewer1.ReportSource != null)
            {
                ReportDocument report = (ReportDocument)crystalReportViewer1.ReportSource;
                foreach (Table table in report.Database.Tables)
                {
                    table.Dispose();
                }
                report.Database.Dispose();
                report.Close();
                report.Dispose();
                GC.Collect();
            }
        }
    Thanks

  • Memory Leak issues

    Hello. I am having an issue with Flash 9 and wasn't sure if this needed to be posted here or in General. I figured that I would need some kind of code to fix it, so I posted it here.
    This is my issue: the typical dreaded memory leak problem.
    The project looks very similar to this:
    We have, say, 100 individual images in the library. Each image is an 800x600 JPG file. We create a layer and put each image on a frame. Every so many frames (let's say frame 33 or frame 66) we stop, ask a question, and wait for some kind of response from the user. If it's the correct response, we continue on; if not, we just repeat the question until they get the answer right. There are also fscommands that send out to an external application whether the user clicked the correct or incorrect answer at these points.
    However, this is causing a rather massive memory leak. I was wondering if there is a solution, either in code or in something we are doing wrong, to help fix our memory leak issue. Some files in our projects are small enough not to notice it, but we have one particular file that is around 500 frames of animation, and you can imagine how hungry Flash got with that one.
    Any suggestions?

    Lots of ideas:
    First thing? Check the File Free at Options > Status, what is the number at File Free? Now, remove the battery of your device, hold a minute, replace and reboot. What is the File Free now?
    Read this: http://www.blackberryforums.com/general-blackberry-discussion/116396-managing-your-bb-memory-lost-ca...
    And this: http://www.blackberryforums.com/general-blackberry-discussion/112029-losing-call-logs-sms-emails-opt...
    Additional links to read:
    http://www.blackberry.com/btsc/search.do?cmd=displayKC&docType=kc&externalId=KB15345&sliceId=SAL_Pub...
    http://www.blackberry.com/btsc/search.do?cmd=displayKC&docType=kc&externalId=KB14320&sliceId=SAL_Pub...
    http://www.blackberry.com/btsc/dynamickc.do?externalId=KB14213&sliceId=SAL_Public&command=show&forwa...

  • Memory leak in Real-Time caused by VISA Read and Timed Loop data nodes? Doesn't make sense.

    Working with LabVIEW 8.2.1 Real-Time to develop a host of applications that monitor or emulate computers on RS-422 busses. The following screenshots were taken from an application that monitors a 200 Hz transmission. After a few hours, the PXI station would crash with an awesome array of angry messages, most implying some kind of memory loss. After much hair pulling and passing of the buck, my associate was able to discover, while watching the available memory on the controller, that memory loss was occurring with every loop containing a VISA Read and error propagation using the Timed Loop data nodes (see Memory Leak.jpg). He found that if he switched the error propagation to regular old-fashioned shift registers, the available memory was rock solid (a la No Memory Leak.jpg).
    Any ideas what could be causing this? Do you see any problems with the way we code these sorts of loops? We are always attempting to optimize the way we use memory in our time-critical applications, and VISA Reads and DAQmx Reads give us the most heartache, as we are never able to preallocate memory for these VIs. Any tips?
    Dan Marlow
    GDLS
    Solved!
    Go to Solution.
    Attachments:
    Memory Leak.JPG ‏136 KB
    No Memory Leak.JPG ‏137 KB

    Hi thisisnotadream,
    This problem has been reported, and you seem to be reproducing exactly the conditions required to see it. It has been reported to R&D (# 134314) for further investigation. There are multiple possible workarounds, one of which you have already found: wiring the error directly into the loop. Other situations that result in no memory leak are:
    1. The Bytes at Port property node is removed, a read simply happens on every iteration, and any resulting timeouts are ignored.
    2. The case structure is removed and the code blindly checks Bytes at Port and reads on every iteration.
    3. The Timed Loop is turned into a While Loop.
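    LabVIEW block diagrams can't be pasted into a forum post, but the working pattern described above (check Bytes at Port, read when data is pending, and carry the error across iterations the way a shift register does) can be sketched in plain Python. This is purely illustrative: `SimulatedPort` is a hypothetical stand-in for a VISA serial session, not a real driver API.

```python
class SimulatedPort:
    """Hypothetical stand-in for a VISA serial session; not a real driver."""

    def __init__(self, frames):
        self._buffer = b"".join(frames)

    @property
    def bytes_at_port(self):
        # Analogue of the "Bytes at Port" property node.
        return len(self._buffer)

    def read(self, count):
        data, self._buffer = self._buffer[:count], self._buffer[count:]
        return data


def monitor(port, iterations):
    """Poll the port for a fixed number of loop iterations."""
    received = []
    error = None  # carried across iterations, like a shift register
    for _ in range(iterations):
        if error is not None:
            break  # stop work once an error has been latched
        pending = port.bytes_at_port  # only read when data is waiting
        if pending:
            try:
                received.append(port.read(pending))
            except IOError as exc:
                error = exc  # latch the error for the next iteration
    return received, error
```

    The point of the sketch is only the control flow: the error value lives in an ordinary loop-scoped variable rather than being routed through the Timed Loop's data nodes, which is the change that stopped the leak in the original post.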
    Thanks for the feedback!
    Regards,
    Stephen S.
    National Instruments
    Applications Engineering

  • TestStand 3.1 Report Memory Leak

    Hi everyone,
    I've been looking at the memory usage of my TestStand code lately. I noticed I have a memory leak with On The Fly Reporting using an XML report file. I'm currently using LabVIEW 7.1.1 and TestStand 3.1f for development. I came upon this problem after seeing a test station slow to a snail's crawl. The program starts out around 100 MB and keeps growing, using up all the system's memory; the largest I've seen was 500 MB. What is happening is that the testers like to keep the on-the-fly report screen up during the test and not view the execution window. At the end of each run the program grows 10 MB to 30 MB. I know the problem can be solved by turning off on-the-fly reporting, but I would like to find a programming option first.
    I have looked at the other forum entries about:
    Turning off database collection
    Editing model options: ModelOptions.DiscardUnusedResults = True and Parameters.ModelOptions.NumTestSockets = 1
    Adjusting the array size of the report to 300
    None of the options seem to have worked for me; most of the posts say to move to TestStand 3.1 because it's a 3.0 issue.
    I was wondering if there are any other options to try in TestStand, or any way to turn off/close IE to release memory between tests.
    I'm currently running the code on Windows XP SP2,
    IE6/IE7 depending on the test station,
    P4 or Centrino with 512 MB to 1 GB of RAM.
    The GUI window is a modified example of the basic VI executable that calls TestStand.
    The code is installed using the LabVIEW and TestStand deployment builders, so the test machines are using the LabVIEW 7.1.1 and TestStand 3.1 run-time engines.
    Thanks to anyone who can offer a little extra help on the topic.
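    For reference, the model-option tweaks mentioned above are typically applied with a Statement step inside a process-model callback. A minimal sketch, using only the property names already quoted in the question (whether these settings actually help in 3.1 is exactly what's in question here):

```
// Hypothetical Statement-step expressions in a ModelOptions model callback:
Parameters.ModelOptions.DiscardUnusedResults = True,
Parameters.ModelOptions.NumTestSockets = 1
```

    Overriding the callback in the client sequence file keeps the change local to one test program instead of editing the station-wide model options.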

    Actually the reports are XML, not HTML.
    There are a lot of passing steps; usually after a couple of failed steps, the sequence will terminate.
    The memory leak seems to be exaggerated because the technicians like to view the Reports tab instead of the Execution tab when running the program. If the operator keeps the view on the Execution tab during the program, the report screen is not being updated, so less memory is taken up. Since the techs like to keep the report screen up during the entire test, the XML file is constantly being updated and the program grows by 50 MB or more after the first run. With on-the-fly reporting turned off, they cannot view the report during the test, but the memory usage stays low.
    I do not see a leak at all if I do not click on the Reports tab, and the leak seems to happen only with XML files; TXT and HTML reports do not cause a memory leak, even when viewed on the fly for the entire test. I wanted to keep XML-expandable because it's easier to see which sequences failed than scanning through the entire report.
    I'm using Test UUT to call up the test. The executable terminates if you press its exit button. After a test is finished, the standard Pass/Fail banner comes up, then the standard enter-UUT-serial-number prompt.
    I'm including the report options INI file I use.
    Thanks for your help in this matter; sorry for being a little late with replying.
    Attachments:
    report.ini ‏4 KB

  • Firefox memory leak, how to fix?

    I've noticed I've been having memory leaks since quite a few versions before the latest one (33.1.1), and the problem still continues. I notice this more when I leave Firefox open for a while, sometimes with only YouTube and a couple of other "static" pages open, but I've gotten the same results without YouTube.
    Today I decided to ask the question, since I haven't seen any memory-leak bug fixes in the recent changelogs, and it becomes quite a problem for me because I leave the browser open all day, and when it happens it forces me to restart (not really forces, but I don't like having a 1.3 GB process when I could have the same for around 500 MB).
    I don't believe any of my extras are the reason for it; I've tested them over the past versions, removed a few, installed a few, always the same problem.
    All of this is running on Windows 7 Ultimate 64-bit.
    I've researched and found it's a "common" problem, but with no real solution or expected bug fix coming soon. I have yet to test it without any third-party extensions or plugins, but I don't think I have any extensions that could cause this because, like I said, I've changed them quite a few times.
    Here is my extension list: http://i.imgur.com/Rk80ajU.png
    Here is my plugin list: http://i.imgur.com/GBA0qgY.png
    And here is a "Measure" report from about:memory: http://privatepaste.com/bcd17c1093
    I just want to know why it leaks so badly, almost double the RAM it would normally use.
    Currently I'm at about 3 tabs with 1.25 GB (private set) used; when I close a few tabs, it stays stable or even rises by a few MB.
    I hope there's something useful in that report / my extensions and plugins.
    Thanks for reading.

    I heard about that extension "problem," but when I look in about:addons-memory (a third-party extension whose name I can't remember, but you can see it in the link in the OP), the suspect extension only uses a few megabytes (~20). So either it's a memory leak in the extension (if that's really the cause), a problem with the add-on I use to view extension memory, or a Firefox one.
    But I seriously don't expect that extension to eat up all of this memory; I will test it in a couple of days.

  • My mac air does not have enough memory to hold all the music I want. Can I delete songs from my library before downloading new ones and still have the old songs there when I sync my iPod?

    I have a MacBook Air with 60 GB of storage. For work, I have used up all the space and constantly use the files, so transferring them to a different location is not a possibility right now. I have about 5 GB of music on it, but would like to put more on my iPod classic. I would like to know whether there is a way for me to delete old music, get new music in its place, and then sync the iPod without losing the old music from it.
    Hope I was clear enough.

    See this older post from another forum member Zevoneer covering the different methods and software available to assist you with the task of copying content from your iPod back to your PC and into iTunes.
    https://discussions.apple.com/thread/2452022?start=0&tstart=0
    B-rock
