Why are object alloc delays occurring outside of GC?

Hi
I'm currently tuning an application with a low-latency transaction requirement (sub-5 ms ideally), hence we're using JRockit RT with an 8 ms deterministic strategy (we're not achieving an 8 ms GC pause, but some time ago we experimented and found this was a good setting for us).
I'm currently focused on addressing latency spikes, which we're hoping to bring down from many hundreds of ms to the order of 10-20 ms. Many of the spikes occur during GC activity, as one would expect, but I'm currently looking at some spikes outside of GC whose cause I'm struggling to understand. What I seem to be seeing is object alloc delays which occur and are then resolved, but outside of any GC activity.
For example, see the following snippet from the log. Cmd line params include:
-Xverbose:memory,memdbg=debug,gcpause=debug,compaction,gcreport,refobj
-Xgcprio:deterministic
-Xpausetarget=8ms
Version=
java version "1.6.0_05"
Java(TM) SE Runtime Environment (build 1.6.0_05-b13)
BEA JRockit(R) (build R27.6.0-50_o-100423-1.6.0_05-20080626-2104-linux-x86_64, compiled mode)
[memory ][Wed May 26 12:31:38 2010][23134] Pause 'OC:Cleanup' took 0.179 ms (ended at 1545.738649 s).
[memory ][Wed May 26 12:31:38 2010][23134] 1539.900-1545.738: GC 13279297K->11726889K (14680064K), sum of pauses 386.637 ms
[memdbg ][Wed May 26 12:31:38 2010][23134] Page faults before GC: 1, page faults after GC: 1, pages in heap: 3670016
[finaliz][Wed May 26 12:31:38 2010][23134] (OC) Pending finalizers 0->0
[memdbg ][Wed May 26 12:31:38 2010][23134] Restarting of javathreads took 0.257 ms
*[gcpause][Wed May 26 12:31:38 2010][23134] Thread "pool-1-thread-1" id=217 idx=0x36c tid=25388 was in object alloc 75.397 ms from 1546.387 s*
[memdbg ][Wed May 26 12:34:04 2010][23134] GC reason: Other, cause: Memleak Request Data
[memdbg ][Wed May 26 12:34:04 2010][23134] Stopping of javathreads took 0.817 ms
[memdbg ][Wed May 26 12:34:04 2010][23134] old collection 38 started
[memdbg ][Wed May 26 12:34:04 2010][23134] Alloc Queue size before GC: 0, tlas: 0, oldest: 0
[memdbg ][Wed May 26 12:34:04 2010][23134] Compacting 1 heap parts at index 44 (type internal) (exceptional false)
The above log segment shows the tail of a GC which has just completed, then an object alloc delay of 75 ms for thread "pool-1-thread-1", and then a subsequent GC kicking in much later, which I believe is unrelated. I have a JRA trace from the test which shows that the object alloc latency event occurred almost a full second after the completion of the GC activity - something you can't see from the above log, which has one-second resolution in the timestamps.
What I'm unsure of is exactly how the object alloc delay can occur and then be rectified by the JVM outside of a GC. My understanding was that the main cause of an object alloc latency event is that memory isn't available (i.e. the TLA does not have enough free space and a new TLA cannot be allocated), and that a GC would be required to free up memory, either to allow a new TLA to be allocated or to free enough space in an existing one. We see an "object alloc" latency delay following the above pattern after every GC, so I'm guessing it must somehow be related to the GC which has just finished, but I can't figure out how. My hope is to eliminate this kind of latency delay completely, or at least bring it down to single-figure ms delays.
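For what it's worth, the TLA mechanism described above can be modeled in a few lines. This is a deliberately simplified sketch, not JRockit's actual implementation, and all names are made up: each thread bump-allocates inside its own buffer and only hits a slow path when that buffer is exhausted; an "object alloc" stall corresponds to the slow path failing to obtain a new TLA.

```java
// Simplified model of thread-local area (TLA) allocation: each thread
// bump-allocates inside its own buffer and only takes the slow path
// (grabbing a new TLA from the shared heap) when the buffer is full.
public class TlaModel {
    static final int TLA_SIZE = 16 * 1024;   // 16 KB buffer per thread
    long heapFree = 64 * 1024;               // free space left in the shared heap
    int tlaUsed = TLA_SIZE;                  // start "full" to force the first refill
    int slowPathCount = 0;                   // how many times we fetched a new TLA

    /** Returns true if the allocation succeeded, false if a GC would be needed. */
    boolean allocate(int size) {
        if (tlaUsed + size > TLA_SIZE) {     // TLA exhausted: slow path
            if (heapFree < TLA_SIZE) {
                return false;                // no room for a new TLA -> alloc stall / GC
            }
            heapFree -= TLA_SIZE;            // carve a fresh TLA out of the heap
            tlaUsed = 0;
            slowPathCount++;
        }
        tlaUsed += size;                     // fast path: bump the pointer
        return true;
    }

    public static void main(String[] args) {
        TlaModel t = new TlaModel();
        int allocated = 0;
        while (t.allocate(1024)) {
            allocated++;
        }
        // 64 KB of heap = 4 TLAs of 16 KB = 64 allocations of 1 KB each
        System.out.println("allocations=" + allocated + " slowPaths=" + t.slowPathCount);
    }
}
```

In this toy model a stall can only happen on the slow path; it doesn't capture whatever post-GC work (compaction, queue draining) caused the real 75 ms delay, but it shows where the "needs a new TLA" hand-off sits.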
Perhaps the deterministic GC means some memory reclaim is happening in parallel outside of the reported GC activity, but I'd be very surprised if this were the reason, as surely the collector would report its activity more fully. It's been a while since I've looked this closely at GC, so I may have forgotten something.
Any help you can provide in working out how to address such latency spikes would be very helpful!
Thanks
Stuart

I think the extra time is due to the compaction of the heap (which occurs after a garbage collection).
Note also that a very low pause target can push the garbage collector into undesirable behavior when the application and the hardware are not appropriate. As a guideline, about 30% live data or less at collection time is something your application should strive for. Running with more live data might break the deterministic behavior (depending on the hardware, of course).
You can tune compaction with the deterministic collector as follows:
MEASUREMENT_ARGS="-Xverbose:gc,gcpause,memdbg"
export MEASUREMENT_ARGS
USER_MEM_ARGS="${MEASUREMENT_ARGS} -Xms512m -Xmx1000m -Xns256m -Xgcprio:deterministic -XpauseTarget=30ms -XXgcThreads:2 -XXtlaSize:min=2k,preferred=16k -XXcompactSetLimitPerObject:500 -XXinitialPointerVectorSize:40 -XXmaxPooledPointerVectorSize:8000 -XXpointerMatrixLinearSeekDistance:5"
export USER_MEM_ARGS
The last four options are used to tune compaction for the deterministic collector. When optimizing compaction, try to reduce object references, because when an object is moved during compaction its references must be updated. So moving an object with many references is more costly than moving an object with few references.
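As a toy illustration of that cost model (this is not JRockit internals, just a sketch), moving an object during compaction means patching every slot that referenced its old address, so the cost of a move grows with the object's inbound reference count:

```java
// Toy model of compaction: memory[i] holds an "address"; moving an object
// from oldAddr to newAddr costs one write per slot that referenced it,
// so heavily-referenced objects are the expensive ones to move.
public class CompactionCost {
    static int move(int[] memory, int oldAddr, int newAddr) {
        int patched = 0;
        for (int i = 0; i < memory.length; i++) {
            if (memory[i] == oldAddr) {   // slot points at the old address
                memory[i] = newAddr;      // rewrite it
                patched++;
            }
        }
        return patched;                   // cost = number of inbound references
    }

    public static void main(String[] args) {
        // an object at address 100 referenced from 3 slots, another at 200 from 1
        int[] memory = {100, 100, 200, 100};
        System.out.println("popular object cost: " + move(memory, 100, 50)); // 3 writes
        System.out.println("lonely object cost:  " + move(memory, 200, 60)); // 1 write
    }
}
```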
Refer to http://otndnld.oracle.co.jp/document/products/jrockit/jrdocs/pdf/refman.pdf for more information.

Similar Messages

  • Why are my images scrolling from outside the Page width?

    Why are my images scrolling from outside the Page width? I am positive I have all the settings correct to have an image parallax-scroll from the edge of the page into the page and out again. But the images are flying in from the browser width - not the page width. This is not the effect I am after.
    Any suggestions on how to fix this? Given the key position is only linked to the height, I can't for the life of me work out how to ensure the images start moving from a certain position on the left and right of the page.
    Thanks

    I think the effect you are seeing is a result of how the lineHeight is set up. By default, the lineHeight will be calculated as 120% of the size of the largest thing on the line -- in your case, this is the inline graphic. This is designed to leave space between the line before and the line with the graphic. So it is taking the height of the graphic, multiplying by 120%, and making this the distance between the previous line and the line with the graphic. So the bigger the graphic, the bigger the space above the graphic.
    Normally this works well with text, but in this case you may want to get closer to "set solid". You can do this by setting the lineHeight to 100%. Or you may wish to leave a couple of percent for the descenders of the previous line. Or, if you really know exactly where you want the line set, you could measure the graphic, add on the number of extra pixels to leave for the descenders, and make the lineHeight a constant. So you could do something like this:
    <img lineHeight="104%" height="411"/>
    or this:
    <img lineHeight="423" height="411"/>
    Obviously you would still need to specify the source and any other parameters you want on the image.
    Note that if you are trying to fit a 200 pixel high graphic into a 200 pixel high container, you will hit the same problem -- in order to fit the graphic, the container will have to be slightly bigger than the graphic in order to fit the descenders on the last line. This is true even if the last line contains only the graphic, and no descenders (or text) at all.
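The arithmetic behind those suggestions can be checked directly. This little sketch is illustrative only; the 120% default is as described above, and the 411-pixel graphic is the one from the example:

```java
// Line-height arithmetic from the explanation above: by default a line gets
// 120% of the tallest thing on it, so a large inline graphic inflates the
// gap above it; "set solid" (100%) or a fixed pixel value removes that gap.
public class LineHeightMath {
    static double lineHeight(double tallestOnLine, double percent) {
        return tallestOnLine * percent / 100.0;
    }

    public static void main(String[] args) {
        double graphic = 411;  // inline image height in px, as in the example
        System.out.println("default 120%: " + lineHeight(graphic, 120)); // 493.2 px
        System.out.println("set solid:    " + lineHeight(graphic, 100)); // 411.0 px
        System.out.println("104% example: " + lineHeight(graphic, 104)); // 427.44 px
    }
}
```

The 104% case lands near the fixed value of 423 suggested above, which is just the graphic height plus a little room for descenders.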
    Hope this helps,
    - robin

  • Why are all notifications delayed on my iPhone 4S on iOS 7

    Hi! I have an iPhone 4S, iOS 7.0.3, and all notifications such as iMessage, WhatsApp, Twitter, Facebook, etc. are delayed 5-20 minutes, until I manually check the app... but it does not stop here... when I open iMessage, a new badge appears on the icon, and I say oh! finally someone wrote to me!! But I can't read the message because, I don't know why, it doesn't load for another 5 to 10 minutes... this happens over 3G at 5 Mbit/s or even on Wi-Fi 802.11n Extreme at 120 Mbit/s, so it doesn't appear to be my connection, because it is a high-quality one...
    And the strangest thing is that when receiving an SMS or MMS it is delayed too! So it's not the internet connection...
    I checked the notification settings but everything seems to be OK.
    Please help... Thanks!

    Same exact issue - I have additionally changed my Apple ID and reset/restored multiple times. You are not alone.

  • Why are objects becoming distorted when zoomed in?

    Let's say for example that I create a target shape at a relatively large size. I then shrink it down to a more appropriate size and proceed to zoom in on the object. When I zoom in, the target becomes completely distorted. This is a problem I came across recently, as I have just upgraded from CS4 to CS5.1. Since Illustrator is vector-based, objects should be crisp and look the same whether they are scaled to the size of the full artboard or down to just 1% of it, correct?
    I have posted a screen shot of the issue that I am running into below. The larger shape on the left is normal, and the shape on the right is that same shape just scaled down. Note: this shape does not have strokes, it is a solid fill. Thanks for any help in advance!

    This is what I had originally thought was the issue, so I created a new document, making sure to uncheck the box for the pixel grid. I then dragged the above shape into the new document and the problem persisted. However, when I recreated a similar shape in the new document, there was no distortion (as expected). It seems that for some reason an object created in a document with the pixel grid option turned on keeps those settings when carried into the new document.
    Is there a way around this or am I going to have to recreate everything in the new document?
    Thanks for the response, btw!

  • Why are objects unselected when unhiding in InDesign?

    In the past, if I hid multiple objects and then chose "show all", the objects would all be selected or active selections. Now the objects unhide but are not selected. Is this a preference? Am I mistaken that maybe this is only an Illustrator feature?

    To get the best quality files in InDesign, import them, don't drag them over. Use File > Place.
    Make sure send image data is set to all when printing (Print > Settings > Graphics, then under Images set "Send Data" to "All").
    Also: place your TIFF in the frame. You can always use Fitting > Fit Content to Frame.
    -janelle

  • Why are Objects that big in memory?

    Hello!
    I have written an application that does calendar stuff, and for every day I use a day object with the following member variables:
      public short state;
      public char fSymbol;
      public byte signSymbol;
      public byte workMod;
      public boolean tipAvail;
      public boolean remindAvail;
      public boolean hourTableAvail;
      public boolean toDoAvail;
      public boolean isEditable;
      public int julianDay;
    In theory such an object should need only 15 bytes + 2 bytes of HotSpot per-object overhead.
    In reality one object needs 67 bytes with an interpreted 1.1 JVM and 66 bytes with the HotSpot client 1.4!
    Where does all the memory go? I know that the JVM does some aligning, but what does it do here?
    Does it align each variable to the next machine word length?
    This does not really bother me on the desktop (except MSJVM, which is god damn slow), but on J2ME-powered mobiles this is really ugly!
    Is there any other way than to pack everything into ints and mask the fields out?
    Thanks in advance, lg Clemens
    PS: I am using a P4 under Linux 2.6, so the machine word length is 4 bytes.
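A rough way to see where some of the memory goes is to model alignment explicitly. The sketch below assumes a simple layout (a fixed header, each field aligned to its own size, the total rounded up to the word size); real VMs differ - HotSpot reorders fields, for instance - and this does not account for handles or heap bookkeeping, which likely explain the rest of the measured 66 bytes:

```java
// Rough model of object layout: a header plus fields, where each field is
// aligned to its own size and the total is rounded up to the word size.
// This is a sketch, not any real JVM's layout algorithm; it just shows how
// alignment inflates an object with "15 bytes" of raw field data.
public class LayoutEstimate {
    static int align(int offset, int alignment) {
        return (offset + alignment - 1) / alignment * alignment;
    }

    /** fieldSizes in declaration order; header and word size in bytes. */
    static int estimate(int[] fieldSizes, int header, int word) {
        int offset = header;
        for (int size : fieldSizes) {
            offset = align(offset, size);  // pad so the field is naturally aligned
            offset += size;
        }
        return align(offset, word);        // whole object rounded to word size
    }

    public static void main(String[] args) {
        // short, char, byte, byte, 5 booleans, int -> 15 bytes of raw data
        int[] fields = {2, 2, 1, 1, 1, 1, 1, 1, 1, 4};
        System.out.println("raw field data: 15 bytes");
        System.out.println("with 8-byte header and 8-byte words: "
                + estimate(fields, 8, 8) + " bytes");
    }
}
```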

    Well, thanks a lot for your thoughts.
    1.) No - Unsafe is only available in J2SE 1.4 and not exposed as a public API. I won't use such dangerous stuff...
    2.) Well, to be honest I only know that the 1.1 JVM Netscape was powered by allocated 67 bytes for each object (the Linux version without a JIT).
    But HotSpot needs nearly the same amount of memory for the same number of objects.
    Well, happy masking and bit-shifting :-(
    lg Clemens

  • Why are objects in tables removed when opening a document (created in pages 2009) in the new pages?

    I have a problem: when documents I created in Pages 2009 are opened in the new Pages, I get "objects in tables were removed".
    This makes all my documents useless !
    Is there any way to get the objects (images and tables mostly) to load into the tables across all my documents ?

    No, there isn't. Use Pages 09 with your old documents and Pages 5.0 for creating new ones. Do you really need Pages 5.0 is the question you need to answer. If not, don't.

  • Why are String objects immutable?

    From the JLS->
    A String object is immutable, that is, its contents never change, while an array of char has mutable elements. The method toCharArray in class String returns an array of characters containing the same character sequence as a String. The class StringBuffer implements useful methods on mutable arrays of characters.
    Why are String objects immutable?

    I find these answers quite satisfying ...
    Here's a concrete example: part of the applet sandbox's safety is ensuring that an Applet can contact the server it was downloaded from (to download images, data files, etc.) and not other machines (so that once you've downloaded it to your browser behind a firewall, it can't connect to your company's internal database server and suck out all your financial records). Imagine that Strings are mutable. A rogue applet might ask for a connection to "evilserver.com", passing that server name in a String object. The JVM could check that this server name was OK, and get ready to connect to it. The applet, in another thread, could now change the contents of that String object to "databaseserver.yourcompany.com" at just the right moment; the JVM would then return a connection to the database!
    You can think of hundreds of scenarios just like that if Strings are mutable; if they're immutable, all the problems go away. Immutable Strings also result in a substantial performance improvement (no copying Strings, ever!) and memory savings (can reuse them whenever you want.)
    So immutable Strings are a good thing.
    The main reason why String was made immutable was security. Look at this example: we have a file open method with a login check. We pass a String to this method to process authentication, which is necessary before the call is passed to the OS. If String were mutable, it would be possible to modify its contents after the authentication check but before the OS gets the request from the program, and then it would be possible to request any file. So if you have the right to open a text file in a user directory, but on the fly you somehow manage to change the file name, you can request to open the "passwd" file or any other. Then a file can be modified, and it would be possible to log in directly to the OS.
    The JVM internally maintains a String pool. To achieve memory efficiency, the JVM reuses String objects from the pool rather than creating new ones. So whenever you create a new string literal, the JVM checks whether it already exists in the pool. If it is already present, it just hands out a reference to the same object; otherwise it creates a new object in the pool. There may be many references pointing to the same String object, and if someone could change the value, it would affect all the references. So Sun decided to make it immutable.
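Both points are easy to demonstrate with standard Java behavior (the literal value here is arbitrary):

```java
// Two consequences of String immutability discussed above: literals can be
// shared from the pool, and "modifying" a String always yields a new object.
public class ImmutableStrings {
    public static void main(String[] args) {
        String a = "server.example.com";
        String b = "server.example.com";
        System.out.println(a == b);      // true: both literals share one pooled object

        String upper = a.toUpperCase();  // "modification" produces a NEW object
        System.out.println(upper);       // SERVER.EXAMPLE.COM
        System.out.println(a);           // server.example.com -- original untouched
        System.out.println(a == upper);  // false: distinct objects
    }
}
```

This is exactly why a security check on a String cannot be invalidated afterwards: no API exists that can change the checked object's contents.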

  • Why are pre-order downloads delayed?

    I pre-ordered Madonna's MDNA to get the single nearly two months ago.  The album went on regular sale this morning, but it is still not possible to download the pre-order.  Why are pre-order customers forced to wait when regular customers can download straight away?  When am I going to be able to download the album?  Really unimpressed with this iTunes.

    Same problem!!! I have sent them an e-mail... really disappointed!! Grr!!
    Good thing I also bought the CD i guess....
    Still really cheesed off!!

  • One problem using TI to catch Objects allocation

    To get the objects allocated during runtime, I use bytecode instrumentation, JNI function interception, and the VM Object Allocation event of JVMTI. But I still miss the allocation of many objects, about 15% of all the heap objects. Why does this happen? The objects missed are "[I", "[B", "[C", "[S", "Ljava/lang/Class", and "Ljava/lang/Object".
    My experiments are based on Eclipse 3.2 and JDK 1.5.9, catching object allocations during the process from the beginning of Eclipse's start to the successful run of a simple app developed in Eclipse.
    Please help by giving some hints. Thanks in advance.
    Message was edited by:
            danvor

    You don't say much about your instrumentation. Do you insert code after each newarray bytecode so that you get a notification when arrays are allocated? If not, that should help explain the array objects you are missing. There were a few issues with the VMObjectAlloc event in 5.0 that were resolved in 6.0. You might want to check that release to see if you get more events for objects that can't be detected with instrumentation.

  • Why are SMB connections so much slower than AFP on my Mac Servers?

    I work in a school with a mix of Mac and PC clients and servers.  My Mac servers all serve out file sharing to both AFP and SMB.  When my Mac clients connect to the Mac servers via SMB, it takes several seconds, sometimes up to a minute before the login screen appears.  When I connect via AFP, the login window appears instantaneously.  I have also noticed that when the Macs connect to Windows servers, which are obviously SMB, the same long delay occurs.  It's manageable, but because it's so quick with AFP, I wonder why the delay? 
    My biggest concern is that Yosemite connects via SMB by default.  Right now everyone is running Mavericks, which will still assume you're asking it to connect via AFP when you don't specify the afp:// in the address.  My Yosemite beta machine does the opposite, it assumes SMB when the address doesn't specify.  When all my clients make the switch next year, I'm expecting much longer than usual connection times for my mac users who are not technically inclined and who are used to just typing a word or phrase in the "Connect to:" dialog box, rather than an actual address (such as afp://server, they would currently just type "server" and it fills in the rest for them).
    Thanks for any help with this.  I would love to find out how to speed up the SMB connections.

    Performance Tuning the Network Stack
    SMB server browsing woefully slow
    mac os x slow copy file from Samba Server
    Fix slow network file transfers across Mac OSX Lion

  • How much space will be allocated for Occurs 10

    Hello,
    Kindly clarify my below queries?
    1) If OCCURS 0 takes a minimum of 8 KB of space,
    then how much will OCCURS 10 take?
    I.e., when should one use OCCURS 0, OCCURS 10, or OCCURS 100?
    2) what is the maximum number of lines an internal table can contain ?
    Regards,
    Rachel

    Hi,
    Regarding OCCURS <n>: the information below answers both questions.
    This size does not belong to the data type of the internal table, and does not affect the type check. You can use the above addition to reserve memory space for <n> table lines when you declare the table object.
    When this initial area is full, the system makes twice as much extra space available up to a limit of 8KB. Further memory areas of 12KB each are then allocated.
    You can usually leave it to the system to work out the initial memory requirement. The first time you fill the table, little memory is used. The space occupied, depending on the line width, is 16 <= <n> <= 100.
    It only makes sense to specify a concrete value of <n> if you can specify a precise number of table entries when you create the table and need to allocate exactly that amount of memory (exception: Appending table lines to ranked lists). This can be particularly important for deep-structured internal tables where the inner table only has a few entries (less than 5, for example).
    To avoid excessive requests for memory, large values of <n> are treated as follows: the largest possible value of <n> is 8 KB divided by the length of the line. If you specify a larger value of <n>, the system calculates a new value so that n times the line width is around 12 KB.
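Expressed as arithmetic, the cap described above looks like this (a sketch of the documented rule only, with a hypothetical 200-byte line width, written in Java purely for illustration rather than ABAP):

```java
// The OCCURS <n> cap described above: the largest honored <n> is roughly
// 8 KB divided by the line width; larger requests are recalculated so that
// n * lineWidth is around 12 KB. (A sketch of the rule, not SAP kernel code.)
public class OccursCap {
    static int maxInitialN(int lineWidthBytes) {
        return 8 * 1024 / lineWidthBytes;
    }

    static int recalculatedN(int requestedN, int lineWidthBytes) {
        if (requestedN <= maxInitialN(lineWidthBytes)) {
            return requestedN;             // small enough: honored as-is
        }
        return 12 * 1024 / lineWidthBytes; // capped so n * width is ~12 KB
    }

    public static void main(String[] args) {
        int width = 200;                   // hypothetical 200-byte table line
        System.out.println("max initial n: " + maxInitialN(width));         // 40
        System.out.println("occurs 10   -> " + recalculatedN(10, width));   // 10, honored
        System.out.println("occurs 1000 -> " + recalculatedN(1000, width)); // 61, capped
    }
}
```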
    Thanks
    Venkat.O

  • How Hprof track object allocation?

    I'm learning to use JVMTI.
    I BCI the classes via java_crw_demo.dll to track allocations of objects and arrays,
    and I use the ClassLoad callback to track class objects' allocation.
    But about 4000 allocations of [Ljava/lang/Object are missed when comparing to Hprof; apart from this, the tracked allocations are exactly the same as Hprof's.
    Why?
    Any help is greatly appreciated.

    Do you have ClassLoad enabled in the start phase?

  • Finalization thread does not keep up with Object allocation

    Hi,
    I'm trying to track down a memory leak in my program. When I use OptimizeIT I find 11,690 instances of java.lang.ref.Finalizer hanging about. I also see lots of JDBC Connection, CallableStatement, and PreparedCall objects hanging around. (They are actually instances of my vendor-specific classes.) Those objects include a finalize() method.
    Why are all these java.lang.ref.Finalizer instances hanging about? Why doesn't the finalization thread actually let go of them or finalize them in a timely manner? Eventually my process runs out of memory, and the finalization thread never seems to finalize these objects. The rate at which I am newing up connections seems to be faster than the rate at which the finalization thread can destroy the objects.
    charlie
    p.s.
    When I go into the reference graph, there are lots of Connection objects that don't have a link back to the root set. But they are never GC'ed when I push the GC button. While OptimizeIT doesn't show the finalization thread holding a reference to them, I believe OptimizeIT 4.2 hides the java.lang.ref.Finalizer reference that points to your object. The reference inside java.lang.ref.Finalizer is actually inside a superclass called java.lang.ref.Reference. Why, I don't know. But someone with OptimizeIT 4.1 said that they see more information than I do in OptimizeIT 4.2. Also, when I do a Show Outbound References I don't see the instance members for the superclass (namely java.lang.ref.Reference.referent). Is anyone familiar enough with OptimizeIT to say why this is occurring?

    > Well thanks for the advice, jschell, but I am cleaning up my connections by calling close(). But, keep the great advice coming. I didn't think this forum had quite enough worthless, cynical advice. For the benefit of developers who read this forum for some worthwhile advice, I'll post my findings.
    Hmmmm... others seem to think my advice is worthwhile.
    Some have even come to think that my advice is worthwhile after posting remarks like the one above.
    > What's really interesting about this problem is that this has nothing to do with whether you call close() or not. In fact, you could've called close() and cleaned up properly, and your program could still leak memory! It hinges on objects that implement the finalize() method.
    Your program, or third party libraries, or even the JVM can leak memory.
    > My JDBC vendor, like many, includes a finalize() method in the implementation classes of Connection, ResultSet, PreparedCall, etc. Naturally, this is to clean up a connection when a programmer has lost a reference to a connection without calling close, but it has a serious side effect.
    Could be.
    And your JDBC vendor would be?
    I certainly know of JDBC vendors who do use finalize. Seems reasonable since the JDBC spec suggests it.
    And I have used JDBC drivers that use it in 24x7 server operations and I have not traced a leak to those drivers yet.
    So to my mind that suggests that finalize by itself is not a problem.
    > I've tested this with a class that simply does a wait() call in its finalize() method. Then I allocate lots of other objects that contain a finalize() method, and watch the free memory fall.
    Just as a guess I would say that that is a really bad thing to do. My first guess would be that it would just lock up the JVM. Since it doesn't, that would suggest that the JVM is creating one or more threads to call finalize.
    Threads take memory.
    And when you call wait() you suspend those threads indefinitely.
    So I would expect to see memory fall.
    > It seems to me that it's better to leak memory than to try to clean up for the programmer. While researching this problem I ran across a long post on the .NET mailing list on finalization in .NET, and they seem to say the same thing that Sun says, which is that the use of a finalize() method that the system will call implicitly is a bad thing. You should always use an explicit call like close().
    I would guess that memory collection in .Net is going to be less robust than in java. It simply hasn't been around long enough.
    > The fix for my problem is quite simple: remove the finalize() method from the wrapper class. But I haven't run across any documentation on Sun's site or the internet describing a memory leak quite like this.
    I however, have read, and participated in quite a few threads on this site where many people have suggested that you should never use finalize.
    I think there are even a couple examples that show exactly how using finalize can generate memory leaks.
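For reference, the explicit-cleanup approach both sides of this thread end up recommending is what try-with-resources later formalized (Java 7 and up, so newer than this discussion). A minimal sketch with a made-up resource class, in place of a real JDBC connection:

```java
// Deterministic cleanup without finalize(): implement AutoCloseable and let
// try-with-resources call close() immediately, instead of leaving the work
// to the finalizer thread, which may not keep up with the allocation rate.
public class ExplicitClose {
    static int openCount = 0;                      // tracks leaked "connections"

    static class FakeConnection implements AutoCloseable {
        FakeConnection() { openCount++; }
        @Override public void close() { openCount--; }  // runs at end of the try block
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            try (FakeConnection c = new FakeConnection()) {
                // use the connection...
            }                                      // closed here, every iteration
        }
        System.out.println("still open: " + openCount);  // 0: nothing for a finalizer
    }
}
```

Because close() runs on the allocating thread at a known point, the pile-up of java.lang.ref.Finalizer instances described above never forms.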

  • Disc Burner. Why are some files not readable or writeable?

    Hello,
    I am trying to back up my friend's computer files on a beige G3 (233 MHz). I am burning about 6 DVDs of AIF music files. Most of the files are about 30-40 MB in size. Often I receive error messages on particular files that read:
    "The operation cannot be completed because some data cannot be read or written.
    (Error code - 36)"
    Followed by the choices: "Stop" or "Continue".
    I have a bad feeling that my startup disk is too small but I am not sure.
    Most of the files work fine. However some of the files cause disc burner to not be able to use them in the disc image before I burn. Why would this be?
    Here are the specifications:
    Computer: beige G3 with two hard drives (one SCSI and one ATA) and a Pioneer DVR-110D DVD writer which is Apple-supported (Apple System Profiler indicates Apple supported for this DVD writer). My blank DVDs are Sony Verbatim DVD-R (1-16x speed support), although I think my beige G3 only writes them at about 2x or 4x.
    I am using OS 10.1.5. My startup disk, which has OS 10.1.5 on it, is a 4 GB SCSI disk. Is this big enough for a startup disk (for creating 4.5 GB DVDs)? My files are on the second hard drive, which is a 40 GB ATA disk.
    I have a bad feeling that my OS X startup drive is too small for disc burning - it is only 4 GB in total, and OS X Disc Burner (Disk Copy) first makes a disk image on my startup volume before it burns it. Perhaps that disk image is too large for the startup volume. Is there any way I could ask OS X to put this disk image on the second, larger hard drive instead (not the startup drive)?
    Here is my process (I hope I am doing this right - I am new at this):
    1. I first insert a blank DVD-R
    2. A message pops up asking me if I would like to create a blank disc image for this disk (I think this is the typical Disk Copy utility window). I say "yes", give it a name, and choose the "DVD-R or DVD-RAM" option. (I am not sure if there are other important settings to choose here or better settings to use, but I guessed at the options that seemed obvious.) A blank disk image is created with an icon that looks like a DVD disc.
    3. Then I drag my chosen files to this blank disc image. It takes about 30 minutes to copy over. That's when I receive the error message that some files couldn't be "read or written". The other files work fine, but it would be nice if they all worked.
    Why would some files not be readable or writeable?
    4. Then I choose File > Burn in the top menu and the DVD is created. This takes another 20-30 minutes. At the very end of the process another message pops up saying:
    "Sorry, the operation could not be completed because an unexpected error occurred (error code -28)", followed by an "OK" button.
    However, all the files that made it onto the final DVD are fine - it's just that they're not all of the ones I originally chose in the first step when dragging to create the disk image.
    Am I doing this the right way?
    Why are some files not "readable or writeable" as it indicates in the error message?
    Is my 4 GB startup disk too small for this? Or are the music files possibly corrupt? Or could there be some other problem?
    Thanks

    To follow up, I have some good news. After following your advice, Kappy, it now works very well. Thanks! Instead of using the 4 GB volume for the OS, I am now using an 8 GB partition on an 80 GB drive. So now the OS has some room to operate. No disk errors occurred on my first DVD.
    Now there was one problem: the second DVD burned gave me an error. I am not sure why, but I am going to guess that because I installed the OS onto an 8 GB partition, the OS may need to be rebooted between disc burns; although 8 GB is certainly more than the 4 GB I gave it last time, it still isn't a lot of space - maybe just enough to do one DVD at a time. That's only a guess. So I rebooted to see if that clears the system out ready for the next DVD, and I am trying to burn the second DVD again. If I remember I will report back. In any case, yes, this seems to be working. I hope this second DVD burns well too.
    Thanks Kappy
