JMap heap dump against gcore file

Hi,
We are running Java 1.5.0_06 on Solaris 9 on a Sun V245, and are trying to analyze some memory issues we have been experiencing.
We have used the Solaris gcore utility to take a complete core dump of the process. This dump file is ~1.8 GB.
We are now trying to create a heap dump from the core file using jmap, so that we can use tools such as the Eclipse Memory Analyzer (http://www.eclipse.org/mat) to examine our heap for memory issues.
However, we are having trouble creating the heap dump, as jmap appears to be running out of system memory (swap). The command we are running is "jmap -heap:format=b /usr/bin/java ms04Core.29918". We can run some other jmap commands, such as "jmap -histo", against the core without encountering these issues. The server we are running on has 8 GB of physical memory, but jmap seems to crash when its swap usage reaches ~3.8 GB (according to the Solaris prstat command). The error returned by jmap is:
Exception in thread "main" java.lang.OutOfMemoryError: requested 8192 bytes for jbyte in /BUILD_AREA/jdk1.5.0_06/hotspot/src/share/vm/prims/jni.cpp. Out of swap space?
Would anyone have any ideas as to why we are seeing this?
Also, can anyone comment on whether this approach makes sense as a way to analyze memory issues? Other suggestions would be welcome!
Thanks very much,
Adrian

Hi, we have solved this issue now - apparently it is a memory bug in the jmap that ships with 1.5.0_06. We installed a newer JRE (5u15) and were able to complete the heap dump without issues (see the command sketch below).
Cheers,
Adrian
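
For reference, a minimal sketch of the whole pipeline, assuming a 5u15 (or later) jmap; the paths, PID, and core-file name are taken from the post above:
gcore -o ms04Core 29918
# Solaris: writes ms04Core.29918 without stopping the process
jmap -heap:format=b /usr/bin/java ms04Core.29918
# extracts a binary heap dump from the core; output is typically written as heap.bin in the current directory
The resulting heap.bin can then be opened in the Eclipse Memory Analyzer.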

Similar Messages

  • EM Agent gets restarted on heap dump generation with the jmap command

    Hi all,
    To find the cause of an OutOfMemoryError, we are generating a heap dump of a particular JVM process using the command "jmap -dump:format=b,file=heap.bin <pid>".
    But when we execute this command, all our other JVM processes crash, and it also crashes the emagent process.
    I would like to know why these processes, and especially the emagent process, crash.
    Regards,
    $

  • Heap dump file size vs heap size

    Hi,
    I'd like to clear up some doubts.
    At the moment we're analyzing Sun JVM heap dumps from the Solaris platform.
    The observation is that the heap dump file is around 1.1 GB, while after loading it into SAP Memory Analyzer the statistics show "Heap: 193,656,968", which as I understand it is the size of the heap.
    After I run:
    jmap -heap <PID>
    I get the following information:
    using thread-local object allocation
    Parallel GC with 8 thread(s)
    Heap Configuration:
       MinHeapFreeRatio = 40
       MaxHeapFreeRatio = 70
       MaxHeapSize      = 3221225472 (3072.0MB)
       NewSize          = 2228224 (2.125MB)
       MaxNewSize       = 4294901760 (4095.9375MB)
       OldSize          = 1441792 (1.375MB)
       NewRatio         = 2
       SurvivorRatio    = 32
       PermSize         = 16777216 (16.0MB)
       MaxPermSize      = 67108864 (64.0MB)
    Heap Usage:
    PS Young Generation
    Eden Space:
       capacity = 288620544 (275.25MB)
       used     = 26593352 (25.36139678955078MB)
       free     = 262027192 (249.88860321044922MB)
       9.213949787302736% used
    From Space:
       capacity = 2555904 (2.4375MB)
       used     = 467176 (0.44553375244140625MB)
       free     = 2088728 (1.9919662475585938MB)
       18.27830779246795% used
    To Space:
       capacity = 2490368 (2.375MB)
       used     = 0 (0.0MB)
       free     = 2490368 (2.375MB)
       0.0% used
    PS Old Generation
       capacity = 1568669696 (1496.0MB)
       used     = 1101274224 (1050.2569427490234MB)
       free     = 467395472 (445.74305725097656MB)
       70.20434109284916% used
    PS Perm Generation
       capacity = 67108864 (64.0MB)
       used     = 40103200 (38.245391845703125MB)
       free     = 27005664 (25.754608154296875MB)
       59.75842475891113% used
    So I'm just wondering what this "Heap" figure in the Statistic Information field of SAP Memory Analyzer actually is.
    When I go to the Dominator Tree view and look at the Retained Heap column, the values roughly sum up to 193,656,968.
    Could someone shed some more light on this?
    Thanks
    Michal

    Hi Michal,
    that looks indeed very odd. First, let me ask which version you are using? We had a problem in the past where classes loaded by the system class loader were not marked as garbage collection roots and hence were removed. This problem is fixed in the current version (1.1). If it is version 1.1, then I would love to have a look at the heap dump and find out whether the problem is on our side.
    Having said that, this is what we do: after parsing the heap dump, we remove objects which are not reachable from garbage collection roots. This is necessary because the heap dump can contain garbage. For example, the mark-sweep-compact collector of the old/perm generation leaves some dead space in the form of int arrays or java.lang.Object instances to save time during the compacting phase: by leaving behind dead objects, not every live object has to be moved, which means not every object needs a new address. This is the kind of garbage we remove.
    Of course, we do not remove objects kept alive only by weak or soft references. To see what memory is kept alive only through weak or soft references, one can run the "Soft Reference Statistics" from the menu.
    Kind regards,
       - Andreas.
    Edited by: Andreas Buchen on Feb 14, 2008 6:23 PM
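
    To illustrate the point about garbage in dumps, here is a minimal, hypothetical Java sketch (class name and sizes are invented for illustration): an object can become garbage before the dump is taken yet still occupy space in the hprof file, while MAT's "Heap" statistic excludes everything unreachable from GC roots.
    public class DeadSpaceDemo {
        public static void main(String[] args) throws InterruptedException {
            byte[] big = new byte[100 * 1024 * 1024]; // ~100 MB, briefly reachable
            big = null; // now garbage, but not necessarily collected yet
            // A dump taken during the sleep below may still contain the dead array,
            // so the file can be much larger than the "Heap" figure MAT reports
            // after discarding objects unreachable from GC roots.
            Thread.sleep(60000); // window in which to take the dump externally
        }
    }
    Dead space of this kind is one way a 1.1 GB dump file can shrink to a ~193 MB "Heap" figure in the tool.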

  • Heap dump file - Generate to a different folder

    Hello,
    When the AS Java is generating the heap dump file, is it possible to generate it to a different folder rather than the standard one: /usr/sap// ?
    Best regards,
    Gonçalo  Mouro Vaz

    Hello Gonçalo
    I don't think this is possible.
    As per SAP Note 1004255, on the first occurrence (only) of an OutOfMemoryError the JVM will write a heap dump in the /usr/sap/ directory.
    Can I ask why you would like it in a different folder?
    Is it a space issue?
    Thanks
    Kenny
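
    For reference: standard HotSpot VMs accept a -XX:HeapDumpPath option that redirects where -XX:+HeapDumpOnOutOfMemoryError writes its dump; whether the JVM underneath this AS Java release honors it is an assumption worth verifying:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/alternate/dump/dir
    If it is a space issue and the option is not available, a symbolic link from the standard directory to a larger filesystem may serve as a workaround.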

  • Heap Dump file generation problem

    Hi,
    I've configured configtool to have these 2 parameters:
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:+HeapDumpOnCtrlBreak
    In my understanding, with these 2 parameters heap dump files will only be generated in 2 situations, i.e. when an out-of-memory error occurs, or when a user manually presses CTRL + BREAK in the MMC.
    1) Unfortunately, many heap dump files (9 in total) were generated when neither of the above situations occurred. I couldn't find "OutOfMemoryError" in the default trace, nor is the shallow heap size of those heap dump files anywhere near the memory limit. As a consequence, our server ran out of disk space.
    My question is: what else could cause a heap dump file to be generated?
    2) In the Memory Consumption graph (NWA (http://host:port/nwa) -> System Management -> Monitoring -> Java Systems Reports), the out-of-memory error occurred when memory usage reached about 80% of the allocated memory. What are the remaining 20% or so reserved for?
    Any help would be much appreciated.
    Thanks.

    Hi,
    Having the -XX:+HeapDumpOnCtrlBreak option makes the VM trigger a heap dump whenever a CTRL_BREAK event arrives. The same event is also used to trigger a thread dump, an action you can perform manually from the SAP Management Console; I think it is called "Dump stacks". So if someone was triggering thread dumps to analyze other types of problems, this had the side effect of also writing a heap dump.
    Additionally, the server itself may trigger a thread dump (and with it a heap dump, if the option is present). It does this, for example, when a timeout occurs during the start or stop of the server. A thread dump from such a moment allows us to see, for example, which service is unable to start.
    Therefore, I would recommend that you leave only -XX:+HeapDumpOnOutOfMemoryError, as long as you don't plan to trigger any heap dumps on your own. That option will cause the VM to write a heap dump only once - on the first appearance of an OutOfMemoryError.
    In case you need to trigger heap dumps manually, leave the -XX:+HeapDumpOnCtrlBreak option on for the troubleshooting period, but consider whether you want to keep it afterwards.
    If heap dumps were written because of an OutOfMemoryError, you should be able to see this in the dev_server file in /usr/sap/<SID>/<inst>/work/. There you should also be able to see whether thread dumps were triggered (just search for "Full Thread ").
    I hope this helps.
    Regards,
    Krum
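
    A minimal sketch of the recommended configuration, assuming the flags live in the same configtool parameter list as before:
    -XX:+HeapDumpOnOutOfMemoryError
    with -XX:+HeapDumpOnCtrlBreak added back only for the duration of a troubleshooting session.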

  • Hprof heap dump not being written to specified file

    I am running with the following
    -Xrunhprof:heap=all,format=b,file=/tmp/englog.txt (java 1.2.2_10)
    When I start the appserver, the file /tmp/englog1.txt gets created, but when I do a kill -3 <pid> on the .kjs process, nothing else is written to /tmp/englog1.txt. In the kjs log I do see the "Dumping java heap..." message, and a core file is generated.
    Any ideas why nothing else is being written to /tmp/englog1.txt?
    Thanks.

    Hi
    It seems that the option you are using is correct. I might modify it to something like
    java -Xrunhprof:heap=all,format=a,cpu=samples,file=/tmp/englog.txt,doe=n ClassFile
    This works on 1.3.1_02, so the problem may be specific to the JDK version you are using. Try a later version just to make sure.
    -Manish

  • Analyse large heap dump file

    Hi,
    I have to analyse a large heap dump file (3.6 GB) from a production environment. However, when I open it in Eclipse MAT, it gives an OutOfMemoryError. I tried increasing the Eclipse workbench Java heap size as well, but it doesn't help. I also tried VisualVM. Can we split the heap dump file into smaller pieces? Or is there any JVM option to cap the heap dump file size so that we collect dumps of a reasonable size?
    Thanks,
    Prasad

    Hi Prasad,
    Have you tried opening it in a 64-bit MAT on a 64-bit platform, with a large heap size and the CMS GC policy set in the MemoryAnalyzer.ini file? MAT is a good toolkit for analysing Java heap dump files. If it can't cope, you can try Memory Dump Diagnostic for Java (MDD4J) in the 64-bit IBM Support Assistant, again with a large heap size.
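
    A minimal sketch of that MemoryAnalyzer.ini tuning (the -Xmx value is illustrative and assumes enough physical RAM; it should comfortably exceed the 3.6 GB dump):
    -vmargs
    -Xmx6g
    -XX:+UseConcMarkSweepGC
    Everything after -vmargs in MemoryAnalyzer.ini is passed straight to the JVM that runs MAT.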

  • JVMDG315: JVM Requesting Heap dump file

    Hi experts,
    We are using a 10.2.0.4 DB with 11.5.10.2 on 64-bit AIX.
    While running an audit report that generates PDF output, we got the following error:
    JVMDG217: Dump Handler is Processing OutOfMemory - Please Wait.
    JVMDG315: JVM Requesting Heap dump file
    ..............................JVMDG318: Heap dump file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/heapdump7545250.1301289300.phd
    JVMDG303: JVM Requesting Java core file
    JVMDG304: Java core file written to /d04_testdbx/appltop/testcomn/admin/log/TEST_tajorn3/javacore7545250.1301289344.txt
    JVMDG274: Dump Handler has Processed OutOfMemory.
    JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java(Compiled Code))
         at java.io.OutputStream.write(OutputStream.java(Compiled Code))
         at oracle.apps.xdo.common.log.LogOutputStream.write(LogOutputStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFStream.output(PDFStream.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.write(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator$ObjectManager.writeWritable(PDFGenerator.java(Inlined Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.closePage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.pdf.PDFGenerator.newPage(PDFGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.genNewPageWithSize(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommand(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.processCommandsFromDataStream(ProxyGenerator.java(Compiled Code))
         at oracle.apps.xdo.generator.ProxyGenerator.close(ProxyGenerator.java:1230)
         at oracle.apps.xdo.template.fo.FOProcessingEngine.process(FOProcessingEngine.java:336)
         at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:1043)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5888)
         at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3593)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.processForm(PayPDFGen.java:243)
         at oracle.apps.pay.pdfgen.server.PayPDFGen.runProgram(PayPDFGen.java:795)
         at oracle.apps.fnd.cp.request.Run.main(Run.java:161)
    oracle.apps.pay.pdfgen.server.PayPDFGen
    Program exited with status 1
    There are no errors in the DB alert log.
    Please suggest where the problem is.
    Thanks in advance

    Hello,
    * JVMST109: Insufficient space in Javaheap to satisfy allocation request
    Exception in thread "main" java.lang.OutOfMemoryError
    This is your problem. There is not enough memory. Increase the Java heap settings.
    Regards,
    Shomoos
    Edited by: shomoos.aldujaily on Mar 29, 2011 12:39 AM
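
    As a generic illustration of "increase the Java heap settings" (where these flags are set for an EBS concurrent program depends on the configuration, so the placement shown here is an assumption; only the class name is taken from the stack trace above):
    java -Xms512m -Xmx1024m ... oracle.apps.fnd.cp.request.Run ...
    The ellipses stand for the existing classpath and arguments; only the -Xms/-Xmx values change, and they must fit within the process limits of the 64-bit AIX environment.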

  • How to take regular heap dumps using HPROF

    Hi Folks,
    I am using Oracle App Server as my application server. I found that the memory grows gradually and maxes out within 1 hour. I am using 1 GB of heap.
    I definitely feel this is a memory leak issue. Once the heap usage reaches 100%, I start getting full GCs, the whole server hangs, and nothing works. Sometimes the JVM even crashes and restarts.
    I didn't find an OutOfMemory exception in any of my logs either.
    I came to know that we can use HPROF to deal with this.
    I use the following as my JVM args:
    -agentlib:hprof=heap=all,format=b,depth=10,file=$ORACLE_HOME\hprof\Data.hprof
    I ran my load test for 10 minutes, by which point heap usage had grown to some extent.
    My Questions:
    1. Why are there 2 files generated, one named Data.hprof and another named Data.hprof.tmp? Which is which?
    2. How do I take dumps at 2 different points in time, so that I can compare them and see which objects are growing?
    I downloaded HAT, and if I open this Data.hprof file in HAT, I get the error below. This error appears if I open the file without stopping the JVM process.
    java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:152)
    at hat.parser.HprofReader.read(HprofReader.java:202)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    If I stop the JVM process and then open the file through HAT, I get this error:
    Started HTTP server on port 7000
    Reading from hprofData.hprof...
    Dump file created Wed Dec 13 02:35:03 MST 2006
    Warning: Weird stack frame line number: -688113664
    java.io.IOException: Bad record length of -1551478782 at byte 0x0008ffab of file.
    at hat.parser.HprofReader.read(HprofReader.java:193)
    at hat.parser.Reader.readFile(Reader.java:90)
    at hat.Main.main(Main.java:149)
    The JVM version I am using is Sun JVM 1.5.0_06.
    I am seriously fed up with this memory leak issue... please help me out, folks. I need this resolved as early as possible.
    I hope to get early replies...
    Thanks in advance...

    First, the suggestion of using jmap is an excellent one, and you should try it. On large applications you have to restart your VM to use the hprof agent, and hprof can disturb your JVM process, so you may not be able to see the problem as quickly. With jmap, you can get a heap snapshot of a running JVM when it is in the state you want to understand more of, and it's really fast compared to using the hprof agent. The hprof dump file you get from jmap will not have the stack traces of where objects were allocated, which was a concern of mine a while back, but all indications are that these stack traces are not critical to finding memory leak problems. The allocation sites can usually be found with a good IDE or search tool, like the NetBeans 'Find Usages' feature.
    On hprof, there is a temp file created during the heap dump creation, ignore the tmp file.
    The HAT utility has been added to JDK6 (as jhat) and many problems have been fixed. But most importantly, this JDK6 jhat can read ANY hprof dump file, from JDK5 or even JDK1.4.2. So even though the JDK6 jhat is using JDK6 itself, the hprof dump file it is given could have come from pretty much anywhere, including jmap. As long as it's a valid hprof binary dump file.
    So even if it's just to have jhat handy, you should get JDK6.
    Also, the Netbeans profiler (http://www.netbeans.org) might be helpful too. But it will require a restart of the VM.
    -kto
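
    A sketch of the snapshot-comparison workflow described above, assuming a JDK 6 jmap and jhat (the option syntax differs on JDK 5):
    jmap -dump:format=b,file=snap1.hprof <pid>
    (apply load for a while, then take a second snapshot)
    jmap -dump:format=b,file=snap2.hprof <pid>
    jhat -baseline snap1.hprof snap2.hprof
    jhat's -baseline option marks objects that also appear in the first dump as old, so the objects created between the two snapshots stand out.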

  • JHAT unable to read heap dump

    I have a heap dump taken on a 64-bit Linux 1.5.0_16 JVM.
    I'm unable to read the file using jhat (latest 1.6.0 JDK); I get the following error: WARNING: Ignoring unrecognized record type 32
    I have tried on both Win32 and 64-bit Linux platforms.
    Regards,

    Hi Mike,
    your heap dump is probably broken. Taking a heap dump with jmap under JDK 1.5 is better suited to recovering a heap dump from a core dump file than to taking the heap dump of a running Java application: jmap can observe the heap in an inconsistent state, and hence the dump file is not always usable. You can try to create a core dump of the Java application (if you can) and then use jmap to get a heap dump from the core dump. Note that there is no such problem when you are running on JDK/JRE 6.
    Bye,
    Tomas Hurka <mailto:[email protected]>
    NetBeans Profiler http://profiler.netbeans.org
    VisualVM http://visualvm.dev.java.net
    Software Engineer, Developer Platforms Group
    Sun Microsystems, Praha Czech Republic

  • Difficulty generating a heap dump

    Hi,
    We are on NW04s SPS 12 on JAVA with Solaris and Oracle.
    We get OutOfMemory errors on an intermittent basis, and I want to take a heap dump.
    Taking the dump takes forever and consumes lots of memory on the server.
    Last time it consumed almost 6 GB and the dump was not complete: it ran for 30 minutes and dumped only 50 MB. It starts writing to a file, but eventually I kill it because it takes too long and uses too much memory.
    I am taking the dump using this jmap command:
    /usr/sap/j2se/j2sdk1.4.2_13/bin/jmap -d64 -heap:format=b <pid>
    I am a little afraid to put the parameter in Visual Admin to get the dump, because it may crash the box or the dump may never complete.
    Please let me know if I am doing anything wrong.
    I will appreciate your help.
    Regards.
    SC

    Hi,
    you should create the dump using jcmon.
    Go to your SYS/profile directory:
    e.g. /usr/sap/<SID>/SYS/profile, or X:\usr\sap\... on Windows
    Start jcmon using:
    jcmon pf=<PROFILE>
    The profile to use is <SID>_JC<InstanceNo>_<hostname>
    This will bring up jcmon:
    ============================================================
    JControl Monitor Program - Main Menu
    ============================================================
    0 : exit
    10 : Cluster Administration Menu
    20 : Local Administration Menu
    30 : Shared Memory Menu (Solid Rock, experimental)
    Choose 20 for Local Administration Menu
    You see a list of processes:
    Idx  Name        PID    State    Error  Restart
    0    dispatcher  16670  Running  0      yes
    1    server0     16671  Running  0      yes
    2    SDM         16672  Running  0      yes
    ============================================================
    JControl Monitor Program - Administration Menu  (Local)
    Instance : JC_<hostname>_<SID>_<InstanceNo>
    ============================================================
    0  : exit
    1  : Refresh list
    2  : Shutdown instance
    3  : Enable process
    4  : Disable process
    5  : Restart process
    6  : Enable bootstrapping on restart
    7  : Disable bootstrapping on restart
    8  : Enable debugging
    9  : Disable debugging
    10 : Dump stacktrace
    11 : Process list
    12 : Port list
    13 : Activate debug session
    14 : Deactivate debug session
    15 : Increment trace level
    16 : Decrement trace level
    17 : Enable process restart
    18 : Disable process restart
    19 : Restart instance
    40 : Enable bootstrapping for all processes with specified process type
    41 : Enable bootstrapping for all processes excluding specified process type
    99 : Extended process list on/off
    Now use option 10 to dump the stack trace.
    Hope this helps (reward points for helpful answers are appreciated).
    Cheers

  • Java 1.4.2 - full heap dump?

    Hello,
    Is there any way to generate a full heap dump (core dump) in Java 1.4.2 on demand?
    Best regards,
    Peter

    If you are on a Unix platform, you can use the script below to take a thread dump.
    I am not sure whether you can generate a core dump without an application crash.
    kill -3 <java pid> will write a full stack trace of the Java process (a thread dump).
    Note: kill -3 will not terminate the Java process; it only writes the full stack trace to your log file and is safe to use while the process is running.
    You can get the Java process id using the Unix command "ps -ef | grep java".
    #!/bin/ksh
    [ $# -le 0 ] && echo "USAGE: $0 <pid>" && exit
    for i in 1 2
    do
      DT=`date +%Y%m%d_%H%M`
      # ${i} and ${DT} are braced so the shell does not parse $i_ as a variable name
      prstat -Lmp $1 1 1 >> prstat_Lmp_${i}_${DT}.dmp
      pstack $1 >> pstack_${i}_${DT}.dmp
      kill -3 $1
      echo "prstat, pstack, and thread dump done. #" $i
      sleep 1
      echo "Done sleeping."
    done
    Please go through some of these links; they explain how to debug the issue using the logs generated by the script:
    http://support.bea.com/application_content/product_portlets/support_patterns/wls/UnexpectedHighCPUUsageWithWLSPattern.html
    http://www.unixville.com/~moazam/stories/2004/05/18/debuggingHangsInTheJvm.html

  • AD4J: Unable to start Heap Dump due to OS Error

    When a Java 'Heap Dump' is requested, the following message is shown:
    Unable to start Heap Dump due to OS Error.
    Details of the monitored JDK and application server are given below:
    JDK: Sun JDK 1.4.2_08 on Solaris 9
    Application Server: WebLogic 8.1 SP4
    1. What could be the possible cause? No errors are logged in jamserv/logs/error_log. Is there any way to enable detailed logging?
    2. Each time the heap dump is requested, a file of the form heapdump<n> gets created in /tmp (e.g. /tmp/heapdump12.txt). Looking at the file, it contains the following:
    a) a line containing summary of the heap usage, and
    b) stack traces of all the threads
    Thanks!

    Wrong Forum?

  • Internal error occurred while parsing heap dump

    Hello,
    I have a huge heap dump (.hprof file, 800 MB in size) and I tried to open it with Memory Analyzer in Eclipse. After parsing 4% of the file, I get an internal error: Java heap space. My system has 2 GB of memory. Below is the command I used to launch Eclipse. Can someone help me with this? Many thanks for your time.
    C:\eclipse\eclipse.exe -vmargs -Xms512M -Xmx512M -XX:PermSize=256M -XX:MaxPermSize=256M
    I have JRE 1.6.0_05-b13 installed on the system.
    Thanks,
    Hari

    Hello Hari,
    I would recommend that you first try to give more memory to Eclipse and see if this helps; try with 1200m, for example.
    I can't give you a precise estimate of how much memory will be needed, as the limiting factor is the number of objects in the heap dump (not the size of the file). This number varies from case to case: in 800 MB you may have only a few huge objects, but there may also be more than 20,000,000 very small objects.
    So, please try with more memory, and let me know if you still encounter the problem.
    Regards,
    Krum
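
    A sketch of that suggestion applied to the original command line (whether a 32-bit JVM on a 2 GB machine can actually reserve 1200 MB of contiguous address space is machine-dependent):
    C:\eclipse\eclipse.exe -vmargs -Xms1200M -Xmx1200M -XX:PermSize=256M -XX:MaxPermSize=256M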

  • Full heap dump - weblogic 8.1

    Hi to everyone,
    Is there any way to trigger a full heap dump (core dump) in WebLogic 8.1 manually?
    Best regards,
    Peter

    In JDK 6 there are the jhat and jmap utilities, which are described here:
    http://weblogs.java.net/blog/kellyohair/archive/2005/09/heap_dump_snaps.html
    Prior to that (JDK 5 and earlier), you have to use the HAT utility, which can be found at
    http://hat.dev.java.net/
    If you are using JRockit, you can use Mission Control for this, I believe. There's an intro to the tool at
    http://dev2dev.bea.com/pub/a/2005/12/jrockit-mission-control.html?page=1
