What is the optimal database buffer size?

Good evening SAP/Oracle experts,
I am interested to know how you have chosen to size your Oracle data buffer (db_cache_size). There seem to be two trains of thought about this:
- one that says to size the data buffer as large as you can, because buffer accesses are thousands of times quicker than disk accesses
- another that says that "excessive buffer visits are the morbid obesity of the database. Extra logical I/O can degrade the performance of virtually every subsystem in an Oracle application" (quote: Cary Millsap)
As I tend to prefer the second argument, I originally sized the data buffer on our ECC5 Production system (with CI/DB running on a server with 8 GB of RAM) at 500 MB. A few months after go-live, I increased the buffer to 750 MB, and recently I increased it again to 1 GB. I found a slight performance improvement with each increase, but database time remained at around 50% of total response time (from ST03N) throughout. Our average response time is currently around 700-750 ms, with database time around 300-400 ms. Our database is currently around 450 GB.
I therefore ask you, the SDN community, to tell me how large you tend to size your data buffer in comparison to the total amount of RAM on your database server, and what % of your average response time is spent down at the database.
Thanks in advance for any help you can provide.
Arwel Owen
SAP Infrastructure Manager,
Princes Ltd.

Dear Arwel,
To find an optimal cache size, you can use the dynamic DB_CACHE_ADVICE parameter: with it enabled, Oracle gathers statistics that predict the behaviour of different cache sizes, exposed through the V$DB_CACHE_ADVICE performance view.
When configuring a new instance, it is impossible to know the correct size for the buffer cache in advance. Typically, a database administrator makes a first estimate for the cache size, then runs a representative workload on the instance and examines the relevant statistics to see whether the cache is under- or over-configured.
For more details about sizing the buffer cache and increasing the memory allocated to it, see the Oracle documentation:
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96533/memory.htm#30301
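For example, once the advisory is enabled and a representative workload has run, a query like the following (a minimal sketch against the standard advisory view) shows the predicted physical I/O at each candidate cache size:

  -- Turn the advisory on (small CPU/memory overhead while it is ON).
  ALTER SYSTEM SET db_cache_advice = ON;

  -- After the workload has run: predicted physical reads per cache size.
  -- Look for the size where ESTD_PHYSICAL_READ_FACTOR stops improving.
  SELECT size_for_estimate AS cache_mb,
         size_factor,
         estd_physical_read_factor,
         estd_physical_reads
    FROM v$db_cache_advice
   WHERE name = 'DEFAULT'
   ORDER BY size_for_estimate;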
In general, don't waste RAM; use it fully, but leave a minimum of 20% for the OS and allocate the rest.
Regards
Vinod

Similar Messages

  • Buffer Sizes Are Not Equal

    Okay what does this term mean:
    Buffer Sizes Are Not Equal
    I am playing with transitions. I have two clips with enough handles.
    I'm using the gradient wipe and putting a Photoshop file in the "well".
    That message pops up in the Timeline when I'm stopped at that point (similar to the media-offline message). When I try to render the effect, the same message appears and it doesn't even attempt to render.
    FCP 5.1.4

    The image is RGB...
    The image itself is a blue circle over a transparent background. That's the only layer.
    I did change it to a jpg file, and I stopped getting the error message...but obviously I also lost the alpha channel.
    John

  • Buffer size while taking export of database

    Hi,
    I am taking an export backup of my Oracle database. When running exp, the default buffer size is shown as 4096. How do I determine the optimum buffer size so that I can reduce the export time?
    Thanks in advance..
    Regards,
    Jibu

    Jibu  wrote:
    Hi,
    I am taking an export backup of my Oracle database. When running exp, the default buffer size is shown as 4096. How do I determine the optimum buffer size so that I can reduce the export time?
    Thanks in advance..
    Regards,
    Jibu
    In addition to Sybrand's comments about alternatives, I'd like to add that, as a general class of problem, this is not the kind of thing I'd waste a lot of time on trying to find some magic optimal number. With exp and imp, I generally make the buffer about 10 times the default and forget it. This is the kind of thing where you very quickly reach a point of diminishing returns in terms of time spent "optimizing" the process versus the actual worthwhile gain in run time.
    By "worthwhile" gain, I mean this . . .
    In terms of a batch process like exp,
    -- is a 50% reduction in run time worthwhile when your starting point is a 1 hour run time?
    -- is a 50% reduction in run time worthwhile when your starting point is a 5 minute run time?
    -- is a 50% reduction in run time worthwhile when your starting point is a 30 second run time?
    -- how about if the run is scheduled for 2:00 am when there are no other processes and no customers on the database?
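    To make the "about 10 times the default" suggestion concrete: with the 4096-byte default, that is roughly BUFFER=40960 on the exp command line. If you would rather ground the number in your own data, a dictionary query along these lines works (a sketch: the owner name is a placeholder, and AVG_ROW_LEN is only populated once statistics have been gathered):
    -- Size BUFFER to hold a batch of ~100 rows of the widest table in
    -- the schema being exported (the owner name is hypothetical).
    SELECT MAX(avg_row_len) * 100 AS suggested_buffer_bytes
      FROM dba_tables
     WHERE owner = 'JIBU';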

  • What's the optimum buffer size?

    Hi everyone,
    I'm having trouble with my unzipping method. The thing is, when I unzip a smaller file, say around 200 KB, it unzips fine. But when it comes to larger files, say around 10,000 KB, it doesn't unzip at all!
    I'm guessing it has something to do with the buffer size... or does it? Could someone please explain what is wrong?
    Here's my code:
    import java.io.*;
    import java.util.zip.*;

    /**
     * Utility class with methods to zip/unzip and gzip/gunzip files.
     */
    public class ZipGzipper {

      public static final int BUF_SIZE = 8192;

      public static final int STATUS_OK          = 0;
      public static final int STATUS_OUT_FAIL    = 1; // No output stream.
      public static final int STATUS_ZIP_FAIL    = 2; // No zipped file
      public static final int STATUS_GZIP_FAIL   = 3; // No gzipped file
      public static final int STATUS_IN_FAIL     = 4; // No input stream.
      public static final int STATUS_UNZIP_FAIL  = 5; // No decompressed zip file
      public static final int STATUS_GUNZIP_FAIL = 6; // No decompressed gzip file

      private static String fMessages [] = {
        "Operation succeeded",
        "Failed to create output stream",
        "Failed to create zipped file",
        "Failed to create gzipped file",
        "Failed to open input stream",
        "Failed to decompress zip file",
        "Failed to decompress gzip file"
      };

      /**
       *  Unzip the files from a zip archive into the given output directory.
       *  It is assumed the archive file ends in ".zip".
       */
      public static int unzipFile (File file_input, File dir_output) {
        // Create a buffered zip stream to the archive file input.
        ZipInputStream zip_in_stream;
        try {
          FileInputStream in = new FileInputStream (file_input);
          BufferedInputStream source = new BufferedInputStream (in);
          zip_in_stream = new ZipInputStream (source);
        }
        catch (IOException e) {
          return STATUS_IN_FAIL;
        }
        // Need a buffer for reading from the input file.
        byte[] input_buffer = new byte[BUF_SIZE];
        int len = 0;
        // Loop through the entries in the ZIP archive and read
        // each compressed file.
        do {
          try {
            // Need to read the ZipEntry for each file in the archive.
            ZipEntry zip_entry = zip_in_stream.getNextEntry ();
            if (zip_entry == null) break;
            // Use the ZipEntry name as that of the decompressed file.
            File output_file = new File (dir_output, zip_entry.getName ());
            // Entries can be nested in subdirectories that do not exist
            // yet; create them first (and skip directory entries), or
            // FileOutputStream throws FileNotFoundException.
            if (zip_entry.isDirectory ()) {
              output_file.mkdirs ();
              continue;
            }
            File parent_dir = output_file.getParentFile ();
            if (parent_dir != null) parent_dir.mkdirs ();
            // Create a buffered output stream.
            FileOutputStream out = new FileOutputStream (output_file);
            BufferedOutputStream destination =
              new BufferedOutputStream (out, BUF_SIZE);
            // Reading from the zip input stream will decompress the data,
            // which is then written to the output file.
            while ((len = zip_in_stream.read (input_buffer, 0, BUF_SIZE)) != -1)
              destination.write (input_buffer, 0, len);
            destination.flush (); // Ensure all the data is output.
            out.close ();
          }
          catch (IOException e) {
            return STATUS_UNZIP_FAIL;
          }
        } while (true); // Continue reading files from the archive.
        try {
          zip_in_stream.close ();
        }
        catch (IOException e) {}
        return STATUS_OK;
      } // unzipFile
    } // ZipGzipper
    Thanks!!!!

    Any more hints on how to fix it? I've been fiddling around with it for an hour..... and throwing more exceptions. But I'm still no closer to debugging it!
    Thanks
    Did you add:
    e.printStackTrace();
    to your catch blocks?
    Didn't you in that case get an exception which says something similar to:
    java.io.FileNotFoundException: C:\TEMP\test\com\blah\icon.gif (The system cannot find the path specified)
         at java.io.FileOutputStream.open(Native Method)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
         at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
         at Test.unzipFile(Test.java:68)
         at Test.main(Test.java:10)
    Which says that the error is thrown here:
         // Create a buffered output stream.
         FileOutputStream out = new FileOutputStream(output_file);
    Kaj

  • What else is stored in the database buffer cache?

    What else is stored in the database buffer cache besides the data blocks read from datafiles?

    That is a good idea.
    SQL> desc v$BH;
     Name                Null?    Type
     ------------------- -------- ------------
     FILE#                        NUMBER
     BLOCK#                       NUMBER
     CLASS#                       NUMBER
     STATUS                       VARCHAR2(10)
     XNC                          NUMBER
     FORCED_READS                 NUMBER
     FORCED_WRITES                NUMBER
     LOCK_ELEMENT_ADDR            RAW(4)
     LOCK_ELEMENT_NAME            NUMBER
     LOCK_ELEMENT_CLASS           NUMBER
     DIRTY                        VARCHAR2(1)
     TEMP                         VARCHAR2(1)
     PING                         VARCHAR2(1)
     STALE                        VARCHAR2(1)
     DIRECT                       VARCHAR2(1)
     NEW                          CHAR(1)
     OBJD                         NUMBER
     TS#                          NUMBER
    And from the column reference:
     TEMP      VARCHAR2(1)      Y - temporary block
     PING      VARCHAR2(1)      Y - block pinged
     STALE     VARCHAR2(1)      Y - block is stale
     DIRECT    VARCHAR2(1)      Y - direct block
    My question is: what are temporary blocks and direct blocks?
    Is it true that some blocks from the temp tablespace are stored in the data buffer?
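    Incidentally, V$BH itself can answer the temp-block question (a minimal sketch using the TEMP and DIRECT flags described above):
    -- Count cached buffers by their TEMP/DIRECT flags; any rows with
    -- TEMP = 'Y' are temporary (sort/temp tablespace) blocks currently
    -- sitting in the buffer cache.
    SELECT temp, direct, COUNT(*) AS buffers
      FROM v$bh
     GROUP BY temp, direct;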

  • Doing Buffered Event count by using Count Buffered Edges.vi, what is the max buffer size allowed?

    I'm currently using Count Buffered Edges.vi to do Buffered Event count with the following settings,
    Source: internal timebase, 100 kHz, i.e. 10 µs per count.
    Gate: a function generator sending in a 50 Hz signal (for testing purposes only), period 0.02 s.
    The max internal buffer size that I can allocate is only about 100-300. Whenever I change both the internal buffer size and "counts to read" to a higher value, this VI doesn't seem to function well. I need a buffer size of at least 2000.
    1. Is it possible to have a buffer size of 2000? What is the problem causing the wrong counter value?
    2. Also note that the max internal buffer size varies with the frequency of the signal sent to the gate. Why is this so? E.g. the buffer size gets smaller as the frequency decreases.
    3. I get a funny response and counter value when the internal buffer size and "counts to read" are not set to the same value. Why is this so? Is it a must to set both values the same?
    Thanks and best regards
    lyn

    Hi,
    I have tried the same example, and used a 100Hz signal on the gate. I increased the buffer size to 2000 and I did not get any errors. The buffer size does not get smaller when increasing the frequency of the gate signal; simply, the number of counts gets smaller when the gate frequency becomes larger. The buffer size must be able to contain the number of counts you want to read, otherwise, the VI might not function correctly.
    Regards,
    RamziH.

  • What information is brought into the database buffer cache?

    Hi,
    What information is brought into the database buffer cache when a user performs operations such as INSERT, UPDATE, DELETE, or SELECT?
    Is only the data block to be modified brought into the cache, or are all of a table's data blocks brought into the cache during the operations I mentioned above?
    What is the purpose of the SQL area? What information is brought into the SQL area?
    Please explain the logic behind the questions I asked above.
    thanks in advance,
    nvseenu

    Documentation is your friend. Why not start by reading the [url=http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/memory.htm]Memory Architecture[/url] chapter.
    Message was edited by:
    orafad
    Hi orafad,
    I have read the Memory Architecture chapter.
    That documentation gives the following explanation:
    "The database buffer cache is the portion of the SGA that holds copies of data blocks read from datafiles."
    But I would like to know whether all of a table's data blocks, or only a few, are brought into the cache.
    thanks in advance,
    nvseenu
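    One hedged way to see the answer on a live system (the owner and table names below are placeholders): count how many blocks of a single table are actually cached and compare that with the table's total block count; after typical activity you will usually find only part of the table in the cache. The second query peeks at the shared SQL area, which holds parsed statements and execution plans rather than data blocks.
    -- Blocks of one table currently in the buffer cache
    -- (compare with DBA_TABLES.BLOCKS for the same table).
    SELECT COUNT(DISTINCT b.block#) AS cached_blocks
      FROM v$bh b, dba_objects o
     WHERE b.objd = o.data_object_id
       AND o.owner = 'SCOTT'
       AND o.object_name = 'EMP';
    -- The SQL area: cached statements, not data blocks.
    SELECT *
      FROM (SELECT sql_text, executions, buffer_gets
              FROM v$sqlarea
             ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 5;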

  • Buffer Size error on Oracle 10.2.0 Database

    Hi Experts,
    I am getting the following error when my VB application runs against Oracle 10.2.0:
    ORA-01044: Size 5000000 of buffer bound to variable exceeds maximum SYSMAN.EMD_NOTIFICATION
    The Oracle documentation suggests reducing the buffer size.
    As far as I know, Oracle 10g has only one such parameter, LOG_BUFFER.
    Which parameter needs to be changed in init.ora?
    Any additional recommendations that could prevent this error?
    Thanks and Regards

    Hi,
    The same application works with the same schema and the same database objects on Oracle 9i, which is pretty surprising. I took a backup of the schema objects from Oracle 10g and loaded it into Oracle 9i; the application works fine without a single modification.
    Next, I sat with my developer and checked the parameters, as suggested. Where the procedure was being called, I found that it is called through the command line (not with a record set), and the parameter was set to 10000, which we modified to 5000. This procedure-calling object is used globally across the application. The application then started running fine on Oracle 10g, where it was earlier giving errors.
    Now the point is: should we go with the change of the parameter from 10000 to 5000?
    This value is very sensitive: if the number of rows returned is more than 5000 (even 5001), the application will give an error, and it is obvious that the number of rows returned for most of the queries will definitely exceed 5000.
    I feel this is a temporary solution; can we find a permanent one? I am sure that in this forum we have plenty of brains who can troubleshoot this issue.
    Waiting for a real solution.
    Edited by: user3299261 on Aug 31, 2009 2:14 PM

  • Swapping and Database Buffer Cache size

    I've read that setting the database buffer cache size too large can cause swapping and paging. Why is this the case? More memory for SQL data would not seem to be a problem, unless it is the proportion of the database buffer to the rest of the SGA that matters.

    Well, I am always a defender of the large DB buffer cache. Setting a bigger DB buffer cache will not, by itself, hurt Oracle performance; the swapping and paging appear when the cache pushes the total SGA beyond the physical RAM available, at which point the OS starts paging parts of the SGA out to disk.
    However, as the buffer cache grows, the time to determine which blocks need to be cleaned also increases. Therefore, at a certain point the benefit of a larger cache is offset by the time needed to keep it synchronized with the disk. Past that point, increasing the buffer cache size can actually hurt performance. That is the reason Oracle has checkpoints.
    A checkpoint performs the following three operations:
    1. Every dirty block in the buffer cache is written to the data files; that is, it synchronizes the data blocks in the buffer cache with the datafiles on disk. It is the DBWR process that writes the modified database blocks back to the datafiles.
    2. The latest SCN is written (updated) into the datafile header.
    3. The latest SCN is also written to the controlfiles.
    The following events trigger a checkpoint.
    1. Redo log switch
    2. LOG_CHECKPOINT_TIMEOUT has expired
    3. LOG_CHECKPOINT_INTERVAL has been reached
    4. The DBA requests one (ALTER SYSTEM CHECKPOINT)
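    For reference, triggers 2-4 above can be inspected and exercised with standard commands (a quick sketch):
    -- Request an immediate checkpoint (trigger 4 in the list above).
    ALTER SYSTEM CHECKPOINT;
    -- Inspect the timeout/interval settings behind triggers 2 and 3.
    SELECT name, value
      FROM v$parameter
     WHERE name IN ('log_checkpoint_timeout', 'log_checkpoint_interval');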

  • Optimal Buffer Size?

    Folks,
    In [this thread|http://forums.sun.com/thread.jspa?messageID=10635136#10635136] I posted code to "copy" a file via a socket... Yeah, not very original, I know... but I started experimenting with buffer sizes between 4k and 132k, trying to make it run as fast as possible.
    All I managed to prove was that your mileage may vary, lots... and therefore I conclude that my testing strategy must be fundamentally flawed: a "dirty" test bed, and far too few executions.
    My question is: is there a rule of thumb, or an accepted technique, to work out how big to make read/write buffers for optimal performance? I don't mean just sockets, but all places where we read and write buffers full of bytes: reading and writing local files, or whatever.
    Groping around testing every possibility you can think of (every time you hit such a situation) is tedious, to say the least.
    Cheers all. Keith.

    NIO has proven "a bit" faster, especially for a local file copy.
    I'm on Vista, which apparently auto-tunes the TCP receive buffer size. NIO made no difference to my copy-file-over-socket-back-to-localhost.
    I presume that's because the socket buffer size is the bottleneck, and I guess that is the case because auto-tuning never increases the receive window: ping returns in a constant 0 ms, leaving the auto-sizing algorithm bereft of meaningful input upon which to base a prediction for a larger receive window.
    I should also have stated that I started down this path when I was shocked that it was faster to copy a file to another machine over TCP (over a gigabit backbone) than it was to copy the same file over TCP back to localhost... even with the read and write happening on two different physical disks. Meaning that loopback performance must suck.
    ... and this is all theoretical anyway (for my edification only), because only the development environment is affected, and it's "fast enough" anyway. I just like stuff to rock, so when it doesn't, I investigate.
    Cheers all. Keith.
    PS: Sorry for the tardiness of this response. I actually forgot about this thread until I saw it again in my watch list.

  • What is the maximum PDO read buffer size using the Series 2 CAN cards?

    Does anyone know the maximum size for the PDO read buffer when using a Series 2 PCI NI-CAN card?

    Hi
    The maximum size for a single PDO does not depend on the series of your board; it depends on what else you are doing with the CANopen Library.
    The board uses a specific shared memory to transfer messages between driver and hardware. This memory fits nearly 350 messages.
    The CANopen Config takes 100 messages for different services like NMT. That means the maximum size for a single PDO would be approx. 250 messages, or, for 5 different PDOs, 50 each. But normally you can leave the buffer size at zero; the PDO Read will then always read the newest data.
    This calculation is per board: you have 350 messages per board, and the 2 ports have to share that memory.
    DirkW

  • Adobe SendNow - What is the optimal size for custom logos?

    I just tried to upload a custom logo (800x200) and it seemed to be stuck up in the top left corner, with a large amount of blank white screen.
    What is the optimal size for this please, and how do I centre a smaller logo?
    It is of course entirely possible I will need to make one in PS - not a problem - but the right sizes would be helpful info please.
    Many Thanks

    That is what I read too - but when I uploaded one it got tucked up in the top left corner of the "custom" setup thumbnail.
    Perhaps it is just a glitch in that dialogue, but it certainly looked as if there was a raft of space to the right & underneath.
    Still, probably Pilot Error so I will recheck - thank you Dave.

  • Buffer size to what value? Digital Output

    Hi,
    I am trying to write digital waveforms and constants on 14 bits of port0 [see picture attached], but I always get a buffer-too-small error.
    Actually, I don't know what numeric value I should wire to the sample clock of the DO instead of 256.
    Previously I generated digital waveforms simply by auto-indexing a For Loop's index values at the loop border, and I wired that array directly to the DO write; it worked on 8 bits. At that time I put 256 as the buffer size because 2^8 = 256.
    Now the situation is very similar: I want to output digital waveforms on 8 bits and constants on the other 6 bits. 2^14 as the buffer size doesn't work. Can somebody please advise how to calculate the correct buffer size?
    Thanks,
    Krivan
    Attachments:
    buffersize.PNG ‏35 KB

    Hey Krivan,
    You can use a property node to edit the size of the buffer, much as I have done (see the attached image). For this piece of code you should also have a 1D Boolean array in order to write to the correct channels, and also ensure that the line grouping on the Create Task is set to one channel for each line.
    Regards
    Andrew George @ NI UK
    Attachments:
    Krivan.png ‏116 KB

  • What is the optimal cel size?

    I'm working with a program called Natural Scene Designer Pro v5.0 which allows the user to create a series of still rendered ray tracings into an animated AVI. I would like to edit these AVI clips in Premiere Elements 10, but before going terribly far into this project, I was curious about a couple of things.
    What is the optimal cel size that I should instruct NSD to produce that will most satisfy Premiere Elements?
    What are the optimal Project Settings in PE10 that should be used in editing this type of "footage" ?
    Initial tests have been done at 320x240 @ 30 fps (which does not appear to be supported in PE10 amongst its presets), but eventually I'd like to make things as large as is practical and for the broadest audience.
    It has been several years since I played around with things like this... which were then on another Windows machine and Premiere Elements 2. I need to get reacquainted with Premiere Elements; version 10 looks a little more mature, and myself, more gray.
    Thank you very much for any replies.
    Kind regards,
    Kelly

    Kelly,
    Thank you for that clarification. I had mis-read, above, and thought that it WAS an MS CODEC pack.
    The problem with too many CODEC packs is that they often install "stuff," besides the CODEC's, and some of that stuff is not good for a Video editing computer. Also, many (most?) of the CODEC's installed are hacked, or reverse-engineered versions, and will often overwrite good, commercial CODEC's, that are installed. This is why Adobe hides its Adobe MainConcept CODEC's - so that other programs cannot overwrite them. However, their "priority" is still vulnerable, and many CODEC packs will change that priority in the Registry, so that its CODEC's are chosen first, by the system.
    I am a big fan of the Lagarith Lossless CODEC, when using intermediate files. The same for another one, UT Lossless. They DO compress, but at a lossless level, i.e. the quality stays the same. Two others that I like are both Apple QuickTime CODEC's, Animation and Photo-PNG, which are also lossless. However, on my PC, either the Lagarith, or the UT are my first choices. Both of those are wrapped in the AVI format, where the two QT CODEC's are wrapped in the MOV format. [I buy 3D work from several artists, who are on Mac's, and specify the MOV Animation CODEC. Those have always worked 100% on my PC.]
    Now, I see that you have FFDShow installed. One power-user here, Neale, installed that, at the direction of Adobe Technical Support, and all has been great for him. However, and especially in the PrPro Forum, many users have experienced very bad things with it, and for most, PrPro would not even run. Uninstalling that CODEC has proved very difficult, and some have had to wipe their HDD, and reinstall everything from scratch. I shudder, when mention is made of it, BUT Neale has had zero issues, and its installation DID fix a problem, that he had. I avoid it, but that is my personal choice.
    Now, back to the Lagarith Lossless CODEC. One issue with some CODEC packs is that what they install is not even close to the latest version of a particular CODEC. Some are very, very old versions. Before I did much work with the Lagarith, I would check out their site, and compare version numbers, between the one installed with the CODEC pack, and what is currently the newest version. It, like the UT Lossless, is free, so it would cost nothing to install the latest version, if you did not get that one. This article goes into a bit more info on Lagarith and UT: http://forums.adobe.com/thread/875797?tstart=0
    Hope that helps, and thank you for all of the info that you have provided, and for your patience.
    Good luck,
    Hunt

  • What is the default buffer size if we don't specify one in BufferedWriter?

    If we don't specify one, what is the default buffer size of BufferedWriter? How do we increase or decrease it?
    What is the purpose of flush()?
    If flush() is not used, only partial content is written to the file. Is that because of the default buffer size?

    This is the BufferedWriter class; it helps to look at the source. Note the marked lines below (the defaultCharBufferSize field and the one-argument constructor), which answer your question about the default buffer size:
    /*
     * @(#)BufferedWriter.java     1.26 03/12/19
     *
     * Copyright 2004 Sun Microsystems, Inc. All rights reserved.
     * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.
     */
    package java.io;

    /**
     * Write text to a character-output stream, buffering characters so as to
     * provide for the efficient writing of single characters, arrays, and strings.
     *
     * <p> The buffer size may be specified, or the default size may be accepted.
     * The default is large enough for most purposes.
     *
     * <p> A newLine() method is provided, which uses the platform's own notion of
     * line separator as defined by the system property <tt>line.separator</tt>.
     * Not all platforms use the newline character ('\n') to terminate lines.
     * Calling this method to terminate each output line is therefore preferred to
     * writing a newline character directly.
     *
     * <p> In general, a Writer sends its output immediately to the underlying
     * character or byte stream. Unless prompt output is required, it is advisable
     * to wrap a BufferedWriter around any Writer whose write() operations may be
     * costly, such as FileWriters and OutputStreamWriters. For example,
     * <pre>
     * PrintWriter out
     *   = new PrintWriter(new BufferedWriter(new FileWriter("foo.out")));
     * </pre>
     * will buffer the PrintWriter's output to the file. Without buffering, each
     * invocation of a print() method would cause characters to be converted into
     * bytes that would then be written immediately to the file, which can be very
     * inefficient.
     *
     * @see PrintWriter
     * @see FileWriter
     * @see OutputStreamWriter
     * @version 1.26, 03/12/19
     * @author  Mark Reinhold
     * @since   JDK1.1
     */
    public class BufferedWriter extends Writer {

        private Writer out;

        private char cb[];
        private int nChars, nextChar;

        private static int defaultCharBufferSize = 8192; // <-- the default buffer size

        /**
         * Line separator string. This is the value of the line.separator
         * property at the moment that the stream was created.
         */
        private String lineSeparator;

        /**
         * Create a buffered character-output stream that uses a default-sized
         * output buffer.
         *
         * @param out A Writer
         */
        public BufferedWriter(Writer out) {
            this(out, defaultCharBufferSize); // <-- the default size is applied here
        }

        /**
         * Create a new buffered character-output stream that uses an output
         * buffer of the given size.
         *
         * @param out A Writer
         * @param sz Output-buffer size, a positive integer
         * @exception IllegalArgumentException If sz is <= 0
         */
        public BufferedWriter(Writer out, int sz) {
            super(out);
            if (sz <= 0)
                throw new IllegalArgumentException("Buffer size <= 0");
            this.out = out;
            cb = new char[sz];
            nChars = sz;
            nextChar = 0;
            lineSeparator = (String) java.security.AccessController.doPrivileged(
                new sun.security.action.GetPropertyAction("line.separator"));
        }

        /** Check to make sure that the stream has not been closed */
        private void ensureOpen() throws IOException {
            if (out == null)
                throw new IOException("Stream closed");
        }

        /**
         * Flush the output buffer to the underlying character stream, without
         * flushing the stream itself. This method is non-private only so that it
         * may be invoked by PrintStream.
         */
        void flushBuffer() throws IOException {
            synchronized (lock) {
                ensureOpen();
                if (nextChar == 0)
                    return;
                out.write(cb, 0, nextChar);
                nextChar = 0;
            }
        }

        /**
         * Write a single character.
         *
         * @exception IOException If an I/O error occurs
         */
        public void write(int c) throws IOException {
            synchronized (lock) {
                ensureOpen();
                if (nextChar >= nChars)
                    flushBuffer();
                cb[nextChar++] = (char) c;
            }
        }

        /**
         * Our own little min method, to avoid loading java.lang.Math if we've run
         * out of file descriptors and we're trying to print a stack trace.
         */
        private int min(int a, int b) {
            if (a < b) return a;
            return b;
        }

        /**
         * Write a portion of an array of characters.
         *
         * <p> Ordinarily this method stores characters from the given array into
         * this stream's buffer, flushing the buffer to the underlying stream as
         * needed. If the requested length is at least as large as the buffer,
         * however, then this method will flush the buffer and write the characters
         * directly to the underlying stream. Thus redundant
         * <code>BufferedWriter</code>s will not copy data unnecessarily.
         *
         * @param cbuf A character array
         * @param off Offset from which to start reading characters
         * @param len Number of characters to write
         * @exception IOException If an I/O error occurs
         */
        public void write(char cbuf[], int off, int len) throws IOException {
            synchronized (lock) {
                ensureOpen();
                if ((off < 0) || (off > cbuf.length) || (len < 0) ||
                    ((off + len) > cbuf.length) || ((off + len) < 0)) {
                    throw new IndexOutOfBoundsException();
                } else if (len == 0) {
                    return;
                }
                if (len >= nChars) {
                    /* If the request length exceeds the size of the output buffer,
                       flush the buffer and then write the data directly. In this
                       way buffered streams will cascade harmlessly. */
                    flushBuffer();
                    out.write(cbuf, off, len);
                    return;
                }
                int b = off, t = off + len;
                while (b < t) {
                    int d = min(nChars - nextChar, t - b);
                    System.arraycopy(cbuf, b, cb, nextChar, d);
                    b += d;
                    nextChar += d;
                    if (nextChar >= nChars)
                        flushBuffer();
                }
            }
        }

        /**
         * Write a portion of a String.
         *
         * <p> If the value of the <tt>len</tt> parameter is negative then no
         * characters are written. This is contrary to the specification of this
         * method in the {@linkplain java.io.Writer#write(java.lang.String,int,int)
         * superclass}, which requires that an {@link IndexOutOfBoundsException} be
         * thrown.
         *
         * @param s String to be written
         * @param off Offset from which to start reading characters
         * @param len Number of characters to be written
         * @exception IOException If an I/O error occurs
         */
        public void write(String s, int off, int len) throws IOException {
            synchronized (lock) {
                ensureOpen();
                int b = off, t = off + len;
                while (b < t) {
                    int d = min(nChars - nextChar, t - b);
                    s.getChars(b, b + d, cb, nextChar);
                    b += d;
                    nextChar += d;
                    if (nextChar >= nChars)
                        flushBuffer();
                }
            }
        }

        /**
         * Write a line separator. The line separator string is defined by the
         * system property <tt>line.separator</tt>, and is not necessarily a single
         * newline ('\n') character.
         *
         * @exception IOException If an I/O error occurs
         */
        public void newLine() throws IOException {
            write(lineSeparator);
        }

        /**
         * Flush the stream.
         *
         * @exception IOException If an I/O error occurs
         */
        public void flush() throws IOException {
            synchronized (lock) {
                flushBuffer();
                out.flush();
            }
        }

        /**
         * Close the stream.
         *
         * @exception IOException If an I/O error occurs
         */
        public void close() throws IOException {
            synchronized (lock) {
                if (out == null)
                    return;
                flushBuffer();
                out.close();
                out = null;
                cb = null;
            }
        }
    }
    What flush() does:
    Example: you have a file called c, your Writer is b, and your BufferedWriter is a. Your program calls a, a talks to b, and b talks to c. When you call flush(), the buffered data is sent to the output file c immediately, before you even close the file. Writes do not go directly to the file; they go to the buffer, so flush() is what forces the buffer contents out to the file. Also, if you call close() on the writer without calling flush(), the buffer will still get flushed as part of closing.
    Consider: BufferedWriter c = new BufferedWriter(new PrintWriter("c:\\c"));
    You wrap the PrintWriter in a BufferedWriter. If you now close this "connection" to the file, the buffer gets flushed; note that until then all the data is sitting in the buffer and not yet in the file - and this holds as long as nothing breaks first.
