Speeding up FileIO - Double Buffered File Copy?

We are trying to speed up file copy from disk to tape, and I need a little more speed. I have tried playing with the size of the buffer, but that isn't changing much (making it slower if anything).
I'm trying to make a double-buffered file copy and I can't figure out how to do it. I figured this would be a good place to gain speed. Right now, my copy loop is very simple:
byte[] buffer = new byte[8 * 1024 * 1024];
FileInputStream in = new FileInputStream(srcFile);
while (true) {
  int amountRead = in.read(buffer);
  if (amountRead == -1) { break; }
  write(buffer, 0, amountRead);
}
So what I need is to be able to read and write at the same time. I was thinking that I could either make the write method a separate thread, or somehow use two buffers so that one is read into while the other is being written. Has anyone tackled this problem before?
If this isn't the right way to speed up File IO, can you let me know other ideas? Thanks in advance!
Andrew
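
The setup described above is a classic producer/consumer: one thread reads from disk into spare buffers while another drains filled buffers to tape, so the two devices work at the same time. A minimal sketch of that structure using java.util.concurrent (the writeToTape method and the buffer/queue sizes are illustrative stand-ins, not code from the original post):
<code>
import java.io.FileInputStream;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DoubleBufferedCopy {

    // A filled buffer plus the number of valid bytes in it.
    static final class Chunk {
        final byte[] data;
        final int length;
        Chunk(byte[] data, int length) { this.data = data; this.length = length; }
    }

    // Poison pill that tells the writer there is no more data.
    static final Chunk EOF = new Chunk(new byte[0], -1);

    public static void copy(String srcFile) throws IOException, InterruptedException {
        // Small bounded queue: the reader can stay at most two buffers ahead.
        BlockingQueue<Chunk> filled = new ArrayBlockingQueue<>(2);

        Thread reader = new Thread(() -> {
            try (FileInputStream in = new FileInputStream(srcFile)) {
                while (true) {
                    byte[] buffer = new byte[8 * 1024 * 1024];
                    int amountRead = in.read(buffer);
                    if (amountRead == -1) { break; }
                    filled.put(new Chunk(buffer, amountRead));
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            } finally {
                try { filled.put(EOF); } catch (InterruptedException ignored) { }
            }
        });
        reader.start();

        // The writer runs on the calling thread and overlaps with the reads above.
        while (true) {
            Chunk chunk = filled.take();
            if (chunk == EOF) { break; }
            writeToTape(chunk.data, 0, chunk.length);
        }
        reader.join();
    }

    // Hypothetical stand-in for the question's write(...) call to the tape device.
    static void writeToTape(byte[] b, int off, int len) throws IOException {
        // ... device-specific write goes here ...
    }
}
</code>
Allocating a fresh buffer per chunk keeps the sketch short; a production version would recycle two or three buffers through a second "free" queue instead.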

Once again: I wish I could claim credit for these classes, but they were in fact posted a year or so ago by someone else. If I had the name I would give credit.
I've used these for two heavy-duty applications with never a problem.
<code>
package pipes;

import java.io.IOException;
import java.io.InputStream;

/**
 * This class is equivalent to <code>java.io.PipedInputStream</code>. In the
 * interface it only adds a constructor which allows for specifying the buffer
 * size. Its implementation, however, is much simpler and a lot more efficient
 * than its equivalent. It doesn't rely on polling. Instead it uses proper
 * synchronization with its counterpart PipedOutputStream.
 *
 * Multiple readers can read from this stream concurrently. The block asked for
 * by a reader is delivered completely, or until the end of the stream if less
 * is available. Other readers can't come in between.
 */
public class PipedInputStream extends InputStream {

    byte[] buffer;
    boolean closed = false;
    int readLaps = 0;
    int readPosition = 0;
    PipedOutputStream source;
    int writeLaps = 0;
    int writePosition = 0;

    /**
     * Creates an unconnected PipedInputStream with a default buffer size.
     * @exception IOException
     */
    public PipedInputStream() throws IOException {
        this(null);
    }

    /**
     * Creates a PipedInputStream with a default buffer size and connects it to
     * <code>source</code>.
     * @exception IOException It was already connected.
     */
    public PipedInputStream(PipedOutputStream source) throws IOException {
        this(source, 0x10000);
    }

    /**
     * Creates a PipedInputStream with buffer size <code>bufferSize</code> and
     * connects it to <code>source</code>.
     * @exception IOException It was already connected.
     */
    public PipedInputStream(PipedOutputStream source, int bufferSize) throws IOException {
        if (source != null) {
            connect(source);
        }
        buffer = new byte[bufferSize];
    }

    /**
     * Returns the number of bytes of data available from this stream without blocking.
     */
    public int available() throws IOException {
        // The circular buffer is inspected to see where the reader and the writer
        // are located.
        return writePosition > readPosition
                // The writer is in the same lap.
                ? writePosition - readPosition
                : (writePosition < readPosition
                        // The writer is in the next lap.
                        ? buffer.length - readPosition + writePosition
                        // The writer is at the same position or a complete lap ahead.
                        : (writeLaps > readLaps ? buffer.length : 0));
    }

    /**
     * Closes the pipe.
     * @exception IOException The pipe is not connected.
     */
    public void close() throws IOException {
        if (source == null) {
            throw new IOException("Unconnected pipe");
        }
        synchronized (buffer) {
            closed = true;
            // Release any pending writers.
            buffer.notifyAll();
        }
    }

    /**
     * Connects this input stream to an output stream.
     * @exception IOException The pipe is already connected.
     */
    public void connect(PipedOutputStream source) throws IOException {
        if (this.source != null) {
            throw new IOException("Pipe already connected");
        }
        this.source = source;
        source.sink = this;
    }

    /**
     * Closes the input stream if it is open.
     */
    protected void finalize() throws Throwable {
        close();
    }

    /**
     * Unsupported - does nothing.
     */
    public void mark(int readLimit) {
    }

    /**
     * Returns whether or not mark is supported.
     */
    public boolean markSupported() {
        return false;
    }

    /**
     * Reads a byte of data from the input stream.
     * @return the byte read, or -1 if end-of-stream was reached.
     */
    public int read() throws IOException {
        byte[] b = new byte[1];
        int result = read(b);
        return result == -1 ? -1 : b[0] & 0xff;
    }

    /**
     * Reads data from the input stream into a buffer.
     * @exception IOException
     */
    public int read(byte[] b) throws IOException {
        return read(b, 0, b.length);
    }

    /**
     * Reads data from the input stream into a buffer, starting at the specified offset,
     * and for the length requested.
     * @exception IOException The pipe is not connected.
     */
    public int read(byte[] b, int off, int len) throws IOException {
        if (source == null) {
            throw new IOException("Unconnected pipe");
        }
        synchronized (buffer) {
            if (writePosition == readPosition && writeLaps == readLaps) {
                if (closed) {
                    return -1;
                }
                // Wait for any writer to put something in the circular buffer.
                try {
                    buffer.wait();
                } catch (InterruptedException e) {
                    throw new IOException(e.getMessage());
                }
                // Try again.
                return read(b, off, len);
            }
            // Don't read more than the capacity indicated by len or what's available
            // in the circular buffer.
            int amount = Math.min(len,
                    (writePosition > readPosition ? writePosition : buffer.length) - readPosition);
            System.arraycopy(buffer, readPosition, b, off, amount);
            readPosition += amount;
            if (readPosition == buffer.length) {
                // A lap was completed, so go back.
                readPosition = 0;
                ++readLaps;
            }
            // The buffer is only released when the complete desired block was
            // obtained.
            if (amount < len) {
                int second = read(b, off + amount, len - amount);
                return second == -1 ? amount : amount + second;
            } else {
                buffer.notifyAll();
                return amount;
            }
        }
    }
}
// PipedOutputStream.java
package pipes;

import java.io.IOException;
import java.io.OutputStream;

/**
 * This class is equivalent to <code>java.io.PipedOutputStream</code>. In the
 * interface it only adds a constructor which allows for specifying the buffer
 * size. Its implementation, however, is much simpler and a lot more efficient
 * than its equivalent. It doesn't rely on polling. Instead it uses proper
 * synchronization with its counterpart PipedInputStream.
 *
 * Multiple writers can write in this stream concurrently. The block written
 * by a writer is put in completely. Other writers can't come in between.
 */
public class PipedOutputStream extends OutputStream {

    PipedInputStream sink;

    /**
     * Creates an unconnected PipedOutputStream.
     * @exception IOException
     */
    public PipedOutputStream() throws IOException {
        this(null);
    }

    /**
     * Creates a PipedOutputStream with a default buffer size and connects it to
     * <code>sink</code>.
     * @exception IOException It was already connected.
     */
    public PipedOutputStream(PipedInputStream sink) throws IOException {
        this(sink, 0x10000);
    }

    /**
     * Creates a PipedOutputStream with buffer size <code>bufferSize</code> and
     * connects it to <code>sink</code>.
     * @exception IOException It was already connected.
     */
    public PipedOutputStream(PipedInputStream sink, int bufferSize) throws IOException {
        if (sink != null) {
            connect(sink);
            sink.buffer = new byte[bufferSize];
        }
    }

    /**
     * Closes the output stream.
     * @exception IOException The pipe is not connected.
     */
    public void close() throws IOException {
        if (sink == null) {
            throw new IOException("Unconnected pipe");
        }
        synchronized (sink.buffer) {
            sink.closed = true;
            flush();
        }
    }

    /**
     * Connects the output stream to an input stream.
     * @exception IOException The pipe is already connected.
     */
    public void connect(PipedInputStream sink) throws IOException {
        if (this.sink != null) {
            throw new IOException("Pipe already connected");
        }
        this.sink = sink;
        sink.source = this;
    }

    /**
     * Closes the output stream if it is open.
     */
    protected void finalize() throws Throwable {
        close();
    }

    /**
     * Forces any buffered data to be written.
     * @exception IOException
     */
    public void flush() throws IOException {
        synchronized (sink.buffer) {
            // Release all readers.
            sink.buffer.notifyAll();
        }
    }

    /**
     * Writes a byte of data to the output stream.
     * @exception IOException
     */
    public void write(int b) throws IOException {
        write(new byte[] {(byte) b});
    }

    /**
     * Writes a buffer of data to the output stream.
     * @exception IOException
     */
    public void write(byte[] b) throws IOException {
        write(b, 0, b.length);
    }

    /**
     * Writes data to the output stream from a buffer, starting at the named offset,
     * and for the named length.
     * @exception IOException The pipe is not connected or a reader has closed it.
     */
    public void write(byte[] b, int off, int len) throws IOException {
        if (sink == null) {
            throw new IOException("Unconnected pipe");
        }
        if (sink.closed) {
            throw new IOException("Broken pipe");
        }
        synchronized (sink.buffer) {
            if (sink.writePosition == sink.readPosition && sink.writeLaps > sink.readLaps) {
                // The circular buffer is full, so wait for some reader to consume
                // something.
                try {
                    sink.buffer.wait();
                } catch (InterruptedException e) {
                    throw new IOException(e.getMessage());
                }
                // Try again.
                write(b, off, len);
                return;
            }
            // Don't write more than the capacity indicated by len or the space
            // available in the circular buffer.
            int amount = Math.min(len,
                    (sink.writePosition < sink.readPosition ? sink.readPosition : sink.buffer.length)
                            - sink.writePosition);
            System.arraycopy(b, off, sink.buffer, sink.writePosition, amount);
            sink.writePosition += amount;
            if (sink.writePosition == sink.buffer.length) {
                // A lap was completed, so go back.
                sink.writePosition = 0;
                ++sink.writeLaps;
            }
            // The buffer is only released when the complete desired block was
            // written.
            if (amount < len) {
                write(b, off + amount, len - amount);
            } else {
                sink.buffer.notifyAll();
            }
        }
    }
}
</code>
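
A rough sketch of how these pipe classes could be wired into the disk-to-tape copy from the question. The reader thread fills the pipe while the calling thread drains it to the tape device, so reads and writes overlap; writeToTape is a hypothetical stand-in for the actual tape write, and the buffer sizes are only illustrative:
<code>
package pipes;

import java.io.FileInputStream;
import java.io.IOException;

public class TapeCopy {

    // Overlap disk reads and tape writes: a reader thread fills the pipe
    // while the calling thread drains it to the tape device.
    public static void copy(String srcFile) throws IOException, InterruptedException {
        PipedOutputStream pipeOut = new PipedOutputStream();
        PipedInputStream pipeIn = new PipedInputStream(pipeOut, 16 * 1024 * 1024);

        Thread reader = new Thread(() -> {
            try (FileInputStream in = new FileInputStream(srcFile)) {
                byte[] buffer = new byte[8 * 1024 * 1024];
                int amountRead;
                while ((amountRead = in.read(buffer)) != -1) {
                    pipeOut.write(buffer, 0, amountRead);
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try { pipeOut.close(); } catch (IOException ignored) { }
            }
        });
        reader.start();

        byte[] buffer = new byte[8 * 1024 * 1024];
        int amountRead;
        while ((amountRead = pipeIn.read(buffer)) != -1) {
            writeToTape(buffer, 0, amountRead);
        }
        reader.join();
    }

    // Hypothetical stand-in for the tape write in the original question.
    static void writeToTape(byte[] b, int off, int len) throws IOException {
        // ... device-specific write goes here ...
    }
}
</code>
The 16 MB pipe lets the reader stay a couple of 8 MB chunks ahead of the tape drive before it blocks.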

Similar Messages

  • Slow Files Copy File Server DFS Namespace

    I have two file servers running on VM both servers are on different physical servers.
    Both connect with dfs namespace.
    The problem is that the two servers never have the same copy speed.
    Sometimes file copy is very slow (about 1 MBps) on FS01 and fast (12 MBps) on FS02.
    Sometimes it is fast on FS01 and slow on FS02.
    Sometimes both of them are slow.
    So, as usual, I rebooted the servers. That didn't work.
    Then I rebooted DC01; that also didn't work. There is another domain controller, DC02.
    After I rebooted DC02, one of the file servers became normal and the other stayed slow.
    It alternates between FS01 and FS02 randomly. They never get good speed together.
    Users never complain about the slow FS because 1 MBps is acceptable for them to open Word, Excel, etc.
    The HUGE problem is that I don't have a backup on the days an FS is slow.
    The problem has been going on for two weeks. I'm giving up on fixing it myself and need help from you experts.
    Thanks!
    DC01, DC02, FS01, FS02 (Win 2012 and All VMs)

    Hi,
    Since the slow copy also occurs when you try a direct copy from both shared folders, you could enable the disk write cache on the destination server to check the results.
    HOW TO: Manually Enable/Disable Disk Write Caching
    http://support.microsoft.com/kb/259716
    Windows 2008 R2 - large file copy uses all available memory and then tranfer rate decreases dramatically (20x)
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3f8a80fd-914b-4fe7-8c93-b06787b03662/windows-2008-r2-large-file-copy-uses-all-available-memory-and-then-tranfer-rate-decreases?forum=winservergen
    You could also refer to the FAQ article to troubleshoot the slow copy issue:
    [Forum FAQ] Troubleshooting Network File Copy Slowness
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7bd9978c-69b4-42bf-90cd-fc7541ccb663/forum-faq-troubleshooting-network-file-copy-slowness?forum=winserverPN
    Best Regards,
    Mandy 

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed, and the second is that there are periodic outages where file open/close/save operations may appear to hang, and also where programs that rely on
    network connectivity, e.g. email, appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait, the program will respond, but the wait period can be up to 1 min.
    The downside of this is that it affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: A file server running Windows 2008 Storage
    Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PC's and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs
    (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time so the total file size for each operation was just under 2GB.
    While the file move operation was taking place a colleague was trying to open a 36k Excel file. After waiting 3mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
    I started copying more data from the Storage Server to my local drive it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time would hang when the move operation was started and it would take at least a minute for it to start responding.
    Ordinarily we work with many files
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    What have you checked for resource usage during one of these copies of a large file?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
    Hi, JohnB352
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out with a lot of hard faults (varies between several hundred to several thousand), recorded during normal usage. The Disk and CPU seem normal.
    I'm going to replace the RAM and double it up to 12GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (Mirrored RAID 1) and 4 for the data (Striped RAID 5).

  • File copy speeds to CSV vs non-CSV

    I'm working on bringing up a 2012 R2 cluster and doing a basic test.  In this cluster, I have two adapters for iSCSI traffic, one for network traffic, and one for the heartbeat.  Cluster node has all the current updates on it.  Everything
    is set up correctly as far as I can see.  I'm taking a folder with 1GB of random files in it and copying it from the C: drive of a node to an iSCSI LUN.  If I have the LUN set up as a non-CSV disk, the copy happens about three times faster than if
    I have it set up as a CSV disk.  All I'm doing is using FCM to change the disk from CSV to non-CSV (right-click, Remove from CSV, right-click, Add to CSV).  I can swap it back and forth and each time the copy process is about three times slower when
    it's a CSV.  Am I missing something here?  I've been through all the usual stuff with regard to the iSCSI adapters, MPIO, drivers, etc.  But I don't think that would have anything to do with this anyway.  The disk is accessed the same with
    regard to all that whether it's CSV or not, unless I'm missing something.  Right now, I only have a single node configured in the cluster, so it's definitely not anything to do with the CSV being in redirected mode.
    I'm not trying to establish any particular transfer speed, I know file transfers are different than actual workloads and performance tools like iometer when it comes to actual numbers.  But it seems to me like the transfers should be close
    to the same whether the disk is a CSV or not, since I'm not changing anything else. 

    Which system owns the CSV?  If the system from which you are copying does not own the CSV then all the metadata updates have to go across the network to be handled by the node that does own the CSV.  If you are copying a lot of little
    files, there is more metadata.
    Actually, metadata updates always happen in redirected IO from what I'm reading, that has been the part that I was missing.  This explains it. 
    https://technet.microsoft.com/en-us/library/jj612868.aspx?f=255&MSPPError=-2147217396 "When certain small changes occur in the file system on a CSV volume, this metadata must be synchronized on each of the physical nodes that access the
    LUN, not only on the single coordinator node... These metadata update operations occur in parallel across the cluster networks by using SMB 3.0. "
    So a file copy, even when done on a coordinator node, does the metadata updates in redirected mode.  Other articles seem to say the same thing, though not always clearly.  So it's still accurate to say that a file copy isn't the best way to measure
    CSV performance, but there doesn't seem to be a lot of pointing to the (I think) important distinction regarding how the metadata updates work.  From what I can see, that distinction is probably trumping anything else such as who is the
    coordinator node, CSV cache, etc.  For me anyway, it makes a 3X performance difference, so I think that's pretty significant.  

  • Slow file copy speeds

    Hello, I can't ul or dl files from client to me or vice-versa faster than 70kbs. Both connections are static, both are 7mbs up and 2.5+mbs down and both isps have assured me that no throttling is taking place. Ports are set correctly, encryption is off for file copy; I am only one guy managing one machine for my dear gray haired mother. The files are huge 300megs plus (.psds with necessary layers, etc.) but it doesn't seem to make any diff. if I copy 1 at a time or zip a bunch-- no faster than 70k and usually 50k. Routers on both ends say connected at 7up/2.5 down and any ftp client flies to and fro. It is just much more convenient to use ARD 3.2--
    Gracias, Dennis

    Welcome to the Discussions Dennis,
    Are you using the Copy command while also Viewing or Controlling?
    Things that I can think off of the top of my head that could be slowing things down (on one of the machines):
    bandwidth use from screen sharing
    A FAT formatted drive
    encrypted user or folder
    difference in OS (10.5 to 10.4 - sad but true at times)
    encrypted transfer (which you covered)
    3rd party firewall / security sw
    I've also seen where you may be able to do a good speed test, but due to router port forwarding using ARD to transfer files is slow, so that would be one other thing to check.
    Sorry I don't have a known cause. I hope this helps, JD

  • Speed of Swing versus double buffered AWT

    Hello
    I've noticed that drawing in a JPanel.paintComponent() takes about 4 times longer than drawing into an offscreen image in AWT Canvas.paint()
    Essential code excerpts follow
    // SWING, takes about 400 millis on my machine
    public void paintComponent(Graphics g) {
        g.setColor(Color.red);
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            g.draw3DRect((int) (Math.random() * 200), 20, 30, 40, true);
        }
        long endTime = System.currentTimeMillis();
        System.out.println("paintComponent() took " + (endTime - startTime) + " millis");
    }

    // AWT, takes about 100 millis on same machine
    public void paint(Graphics g) {
        if (offscreenGraphics == null || offscreenImage == null) {
            offscreenImage = createImage(getWidth(), getHeight());
            offscreenGraphics = offscreenImage.getGraphics();
        }
        long startTime = System.currentTimeMillis();
        if (offscreenGraphics != null) {
            offscreenGraphics.setColor(Color.red);
            for (int i = 0; i < 10000; i++) {
                offscreenGraphics.draw3DRect((int) (Math.random() * 200), 20, 30, 40, true);
            }
            g.drawImage(offscreenImage, 0, 0, this);
        }
        long endTime = System.currentTimeMillis();
        System.out.println("paint() took " + (endTime - startTime) + " millis");
    }
    Note that I also tried drawLine() instead of draw3DRect() and experienced similar results
    Can someone explain why doing this in Swing is so slow?
    I'd hoped to take advantage of Swing's double buffering, but I ended up using the same old offscreen image technique in Swing.
    Nick Didkovsky

    Silly question, but did you turn on double buffering or extend a Swing component which has it on by default?
    Not all of them do.
    : jay
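
    If the component in question doesn't have buffering on by default, it can be requested explicitly; a small sketch using the standard JComponent and RepaintManager API (the JPanel here is just a placeholder component):
    <code>
    import javax.swing.JPanel;
    import javax.swing.RepaintManager;

    public class BufferingCheck {
        public static void main(String[] args) {
            // Double buffering can be requested per component...
            JPanel panel = new JPanel();
            panel.setDoubleBuffered(true);

            // ...or inspected and toggled globally through the RepaintManager.
            RepaintManager rm = RepaintManager.currentManager(panel);
            System.out.println("Double buffering enabled: " + rm.isDoubleBufferingEnabled());
            rm.setDoubleBufferingEnabled(true);
        }
    }
    </code>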

  • File Copy Operation

    Hi,
    I am developing a desktop application that needs a file copying feature. I basically need to copy mp3 files from a PC drive to another external drive that is connected to my PC through a USB Port...this external device is USB2.0 compatible. Currently I have coded a simple filecopy thread using Buffered Reader/Writer. The operation for a 6 MB file takes about 6.3 secs, but the same operation in C took me about 1.4 seconds. I am wondering if this is the best speed I can get using Java...and if so should I switch to using C and writing a JNI wrapper around the C function.
    Please suggest a suitable course of action.

    I have been investigating the use of buffered streams
    in network transfers (bigger files than mp3s though)
    and I have noticed that the size of the buffer, as in
    byte[] buf = new byte[8096];
    makes very little difference. Benchmarking is fun: what I observe is that as we increase the array size the time taken drops, but then there comes a point (of inflexion?) at which you get the best speed, and beyond that any further increase in the array size does not make any difference... has anyone observed such a thing?
    Anyway, I equalled/beat the C code... and I am just happy for that!
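
    For plain file-to-file copies, java.nio channel transfers usually close most of the remaining gap with C, because the bulk of the copy can be handed to the operating system instead of looping over a byte[] in Java; a minimal sketch using the standard FileChannel API (file names are placeholders):
    <code>
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.channels.FileChannel;

    public class ChannelCopy {
        // Copies src to dst, letting the OS move the bytes where possible.
        public static void copy(String src, String dst) throws IOException {
            try (FileChannel in = new FileInputStream(src).getChannel();
                 FileChannel out = new FileOutputStream(dst).getChannel()) {
                long position = 0;
                long size = in.size();
                while (position < size) {
                    // transferTo may move fewer bytes than requested, so loop.
                    position += in.transferTo(position, size - position, out);
                }
            }
        }
    }
    </code>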

  • File Copy times

    My newsreader is acting funny and dropping posted messages, so I
    apologize if this shows up twice.
    My comments on the file speed are that the times posted by others just go
    to show how difficult it sometimes is to make good timing measurements.
    I suspect that the wide variations being posted are in large part due to
    disk caching. To measure this, you should either flush the caches each
    time or run the tests multiple times to make sure that the cache affects them
    all more equally.
    Here is what I'd expect. The LV file I/O is a thin layer built upon the
    OS file I/O. Any program using file I/O will see that smaller writes
    have somewhat more overhead than a few large writes. However, at some
    size, either LV or the OS will break the larger writes into smaller
    ones. The file I/O functions in general will be slower to read and
    write contents than making a file copy using the copy node or move node.
    Sorry if I can't be more specific, but if you have a task that
    seems way too slow, please send it to technical support and report a
    performance problem. If we can find a better implementation, we will
    try to integrate it.
    Greg McKaskle

    Maybe this is because of the write buffer?
    Try mounting the media using the -o sync option to have data written immediately.

  • Storage Spaces: Virtual Disk taken offline during file copy, marked as "This disk is offline because it is out of capacity", but plenty of free space

    Server 2012 RC. I'm using Storage Spaces, with two virtual disks across 23 underlying physical disks.
    * First virtual disk is fixed provisioning, parity across 23 physical disks: 10,024GB capacity
    * Second virtual disk is fixed provisioning, parity across the remaining space on 6 of the same physical disks: 652GB capacity
    These have been configured as dynamic disks, with an NTFS volume spanned across the two (larger virtual disk first). Total volume size 10,676GB. For more details of the hardware, and why the configuration is like this, see: http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/c35ff156-01a8-456a-9190-04c7bcfc048e
    I'm copying several TB from a network share to this volume. It is very slow at ~12MB/sec, but works. However, three times so far, several hours in to the file copy and with plenty of free space remaining, the 10,024GB virtual disk is suddenly taken offline.
    This obviously then fails the spanned volume and stops the file copy.
    The second time, I took screenshots, below. The disk (Disk27) is marked offline due to "This disk is offline because it is out of capacity". And the disk in the spanned volume is marked as missing (which is what you would expect when one of its member disks
    is offline).
    I can then mark the disk (Disk27) back online again, and this restores the spanned volume. I can then re-start the file copy from where it failed. There doesn't appear to be any data loss, but it does cause an outage that requires manual attention. As you
    can see, there is plenty of space left on the spanned volume.
    Each time this has happened, there are a few event 150 errors in the System event log: "Disk 27 has reached a logical block provisioning permanent resource exhaustion condition.". Source: Disk.
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider Name="disk" /> 
      <EventID Qualifiers="49156">150</EventID> 
      <Level>2</Level> 
      <Task>0</Task> 
      <Keywords>0x80000000000000</Keywords> 
      <TimeCreated SystemTime="2012-06-07T11:24:53.572101500Z" /> 
      <EventRecordID>14476</EventRecordID> 
      <Channel>System</Channel> 
      <Computer>Trounce-Server2.trounce.corp</Computer> 
      <Security /> 
      </System>
    - <EventData>
      <Data>\Device\Harddisk27\DR27</Data> 
      <Data>27</Data> 
      <Binary>000000000200300000000000960004C0000000000000000000000000000000000000000000000000</Binary> 
      </EventData>
      </Event>
    This error seems to be related to thin provisioning of disks. I found this:
    http://msdn.microsoft.com/en-us/library/windows/desktop/hh848068(v=vs.85).aspx. But both these Virtual Disks are configured as Fixed, not Thin provisioning, so it shouldn't apply.
    My thoughts: the virtual disk should not spuriously go offline during a file copy, even if it were out of space. And in any case, there is plenty of free space remaining. Also, I don't understand the stated reason for it being marked offline ("This disk is offline
    because it is out of capacity"). Why would a disk go offline because it was out of thin-provisioned capacity, rather than just returning an "out of disk space" error while keeping the disk online?

    Interesting Thread, I've been having the same issue. I had a failed hardware RAID that was impossible to recover in place, so after being forced to do a 1:1 backup, I find myself with 5 2TB hard drives to play with. Storage Spaces seemed like an interesting
    way to go until I started facing the issues we share.
    So my configuration is A VM Running Windows Server 2012 RC with 5 Virtualized Physical drives using a SCSI interface, 2TB in size that make up my storage pool. A Single Thinly provisioned Disk of 18 TB (using 1 disk for parity)
    Interestingly enough, write speed has not been an issue on this machine (30~70MB/s, up from 256k on the beta).
    Of note to me is this error in my event log 13 minutes before the drive disappeared:
    "The shadow copies of volume E: were deleted because the shadow copy storage could not grow in time.Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied."Source: volsnap, Event ID: 25, Level: Error
    followed by:
    "The system failed to flush data to the transaction log. Corruption may occur in VolumeId: E:, DeviceName: \Device\HarddiskVolume17.(The physical resources of  this disk have been exhausted.)"Source: Ntfs (Microsoft-Windows-Ntfs), Event ID: 140, Level: Warning
    I figure the amount of space available to me before I start encountering physical limits is in the vicinity of about 7TB. It dropped out for the second time at 184 GB.
    FYI, the number of columns created for me is 5
    Regards,
    Steven Blom

  • Using 6533 DIO32HS, is it possible to use double buffered output with varying pauses?

    I'm using Level-Ack handshaking to transmit data. Currently, I'm hooked up to a loop-back on the DIO32HS card.
    If I don't use double-buffering, I end up with pauses in data transmission, so I need to use double buffering. Unfortunately, I can't seem to set up a delay in the middle of a double buffered scheme.
    What I need to do is this:
    Transmit 64 packets of data (16 bits each) on group 2 / Receive 64 packets of data (16 bits each) on group 1
    Delay for .2 ms
    Transmit the same packets again / receive
    The delay in the middle will need to be varied, from .2 to 20 ms.
    I'm programming in Visual C++ 6.0 under Windows 2000, with (as suggested above) group1 configured as input (DIOA and DIOB) and group2 set up as output (DIOC and DIOD). Due to the speed of transmission (256kHz) and the small size of the data set, the program I wrote, no matter how tight I try to make it, cannot insert the proper delay and start the next send on time.
    Does anyone have any idea if such a pause is possible? Anyone know how to do it, or any suggestions on what to try?
    Thanks!

    .2 ms is a very small time delay to use in software. Windows usually isn't more accurate than about 10 or 20 ms. If you need to have small, precise delays you could either use a real time OS, like pharlap and LabVIEW RT, or use extra hardware to generate the delays correctly.
    I would recommend using a separate MIO or counter/timer board (like a 660x) to generate timing pulses for the DIO32HS. This gives you precise timing control at high speed. If the 32HS is in Level ACK Mode, it will respond to external ACK and REQ signals. This is covered in more detail on page 5-10 of the PCI-DIO32HS User Manual.

  • File copy/paste adds " - Copy" - how to change to " - Copy" + date/time?

    Is there any way I can change the default file copied name from having " - Copy" at the end to " - Copy" plus the date and time?
    Before editing files I always make a copy of the existing file so I have a backup. I do this by clicking on the file I want to change in the list of files and then pressing Ctrl C and Ctrl V, which creates a copy of the file at the end of the list of files, with " - Copy" at the end of the name in the format "index - Copy.php". Is there any way I can get Dreamweaver to add the date and time to the name, which would save me from editing the name and adding the date and time every time? It's a pain having to do this but I don't know any other way to ensure I know the date and time the file was copied. I know that there is a Modified date and time, but changing a template, for instance, changes that, and besides, the file could also be edited; having the name reflect the copy date/time lets me use the Modified date/time as a double check. I would like to have the file name in the format "index - Copy20110216 1621.php", the date being in international format and the time slotted on the end of it. This then puts the files in date/time order in the list.

    There's no way of customising a copied file name using DW that I'm aware of.
    Either rename it manually in DW or use a 3rd party file copy utility outside DW.
    Is there a reason for keeping a hard copy of the last copied date that the Last Modified date in the operating system does not record?
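
    Outside Dreamweaver, a small utility can make the timestamped backup copy automatically; a hedged sketch using the standard java.nio.file and java.time APIs, following the "index - Copy20110216 1621.php" pattern described above (class and method names are illustrative):
    <code>
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    public class TimestampedCopy {
        // Copies "index.php" to a name like "index - Copy20110216 1621.php".
        public static Path backup(Path original) throws IOException {
            String name = original.getFileName().toString();
            int dot = name.lastIndexOf('.');
            String base = dot == -1 ? name : name.substring(0, dot);
            String ext = dot == -1 ? "" : name.substring(dot);
            String stamp = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyyMMdd HHmm"));
            Path copy = original.resolveSibling(base + " - Copy" + stamp + ext);
            return Files.copy(original, copy);
        }

        public static void main(String[] args) throws IOException {
            System.out.println("Backed up to " + backup(Paths.get("index.php")));
        }
    }
    </code>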

  • Double buffering still gives flickering graphics.

    I copied code from a tutorial which is supposed to illustrate double buffering.
    After I run it, it still flickers though.
    I use applet viewer, which is part of netbeans to run my applet.
    Link to tutorial: http://www.javacooperation.gmxhome.de/TutorialStartEng.html
    My questions are:
    Is the strategy used for double buffering correct?
    Why does it flicker?
    Why does the program change the priority a couple of times?
    Can you make fast games in JApplets or is there a better way to make games? (I think C++ is too hard)
    Here is the code:
    package ballspel;
    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Image;
    import javax.swing.JApplet;
    //import java.applet.*;
    * @author Somelauw
    public class BallApplet extends /*Applet*/ JApplet implements Runnable {
    private Image dbImage;
    private Graphics dbg;
    private int radius = 20;
    private int xPos = 10;
    private int yPos = 100;
    * Initialization method that will be called after the applet is loaded
    * into the browser.
    @Override
    public void init() {
    //System.out.println(this.isDoubleBuffered()); //returns false
    // Isn't there a builtin way to force double buffering?
    // TODO start asynchronous download of heavy resources
    @Override
    public void start() {
    Thread th = new Thread(this);
    th.start();
    public void run() {
    Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
    while (true) {
    xPos++;
    repaint();
    try {
    Thread.sleep(20);
    } catch (InterruptedException ex) {
    ex.printStackTrace();
    Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
    @Override
    public void paint(Graphics g) {
    super.paint(g);
    //g.clear();//, yPos, WIDTH, WIDTH)
    g.setColor(Color.red);
    g.fillOval(xPos - radius, yPos - radius, 2 * radius, 2 * radius);
    @Override
    public void update(Graphics g) {
    super.update(g);
    // initialize buffer
    if (dbImage == null) {
    dbImage = createImage(this.getSize().width, this.getSize().height);
    dbg = dbImage.getGraphics();
    // clear screen in background
    dbg.setColor(getBackground());
    dbg.fillRect(0, 0, this.getSize().width, this.getSize().height);
    // draw elements in background
    dbg.setColor(getForeground());
    paint(dbg);
    // draw image on the screen
    g.drawImage(dbImage, 0, 0, this);
    // TODO overwrite start(), stop() and destroy() methods
    }

    Somelauw wrote:
    I copied code from a tutorial which is supposed to illustrate double buffering.
    After I run it, it still flickers though.
    I use applet viewer, which is part of netbeans.. AppletViewer is part of the JDK, not NetBeans.
    ..to run my applet.
    Link to tutorial: http://www.javacooperation.gmxhome.de/TutorialStartEng.html
    Did you specifically mean the code mentioned on this page?
    [http://www.javacooperation.gmxhome.de/BildschirmflackernEng.html]
    Don't expect people to go hunting around the site, looking for the code you happen to be referring to.
    As an aside, please use the code tags when posting code, code snippets, XML/HTML or input/output. The code tags help retain the formatting and indentation of the sample. To use the code tags, select the sample and click the CODE button.
    Here is the code you posted, as it appears in code tags.
    package ballspel;

    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Image;
    import javax.swing.JApplet;
    //import java.applet.*;

    /**
     * @author Somelauw
     */
    public class BallApplet extends /*Applet*/ JApplet implements Runnable {

        private Image dbImage;
        private Graphics dbg;
        private int radius = 20;
        private int xPos = 10;
        private int yPos = 100;

        /**
         * Initialization method that will be called after the applet is loaded
         * into the browser.
         */
        @Override
        public void init() {
            //System.out.println(this.isDoubleBuffered()); //returns false
            // Isn't there a builtin way to force double buffering?
            // TODO start asynchronous download of heavy resources
        }

        @Override
        public void start() {
            Thread th = new Thread(this);
            th.start();
        }

        public void run() {
            Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
            while (true) {
                xPos++;
                repaint();
                try {
                    Thread.sleep(20);
                } catch (InterruptedException ex) {
                    ex.printStackTrace();
                }
                Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
            }
        }

        @Override
        public void paint(Graphics g) {
            super.paint(g);
            //g.clear();//, yPos, WIDTH, WIDTH)
            g.setColor(Color.red);
            g.fillOval(xPos - radius, yPos - radius, 2 * radius, 2 * radius);
        }

        @Override
        public void update(Graphics g) {
            super.update(g);
            // initialize buffer
            if (dbImage == null) {
                dbImage = createImage(this.getSize().width, this.getSize().height);
                dbg = dbImage.getGraphics();
            }
            // clear screen in background
            dbg.setColor(getBackground());
            dbg.fillRect(0, 0, this.getSize().width, this.getSize().height);
            // draw elements in background
            dbg.setColor(getForeground());
            paint(dbg);
            // draw image on the screen
            g.drawImage(dbImage, 0, 0, this);
        }
        // TODO overwrite start(), stop() and destroy() methods
    }

    Edit 1:
    - For animation code, it would be typical to use a javax.swing.Timer for triggering updates, rather than implementing Runnable (etc.)
    - Attempting to set the thread priority will throw a SecurityException, though oddly it occurs when attempting to set the Thread priority to maximum, whereas the earlier call to set the Thread priority to minimum passed without comment (exception).
    - The paint() method of that applet is not double buffered.
    - It is generally advisable to override paintComponent(Graphics) in a JPanel that is added to the top-level applet (or JFrame, or JWindow, or JDialog..) rather than the paint(Graphics) method of the top-level container itself.
    Edited by: AndrewThompson64 on Jan 22, 2010 12:47 PM
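
    A hedged sketch of the approach recommended above: draw in paintComponent(Graphics) on a JPanel, which is double buffered by default, and drive the animation from a javax.swing.Timer so no manual thread or priority handling is needed (class and field names are illustrative, not from the tutorial):
    <code>
    import java.awt.Color;
    import java.awt.Graphics;
    import javax.swing.JApplet;
    import javax.swing.JPanel;
    import javax.swing.Timer;

    public class BallApplet2 extends JApplet {

        @Override
        public void init() {
            add(new BallPanel());
        }

        static class BallPanel extends JPanel {
            private int xPos = 10;
            private final int yPos = 100;
            private final int radius = 20;

            BallPanel() {
                // Swing components are double buffered by default; the Timer
                // fires on the event dispatch thread, so repaint() is safe here.
                new Timer(20, e -> {
                    xPos++;
                    repaint();
                }).start();
            }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);  // clears the background
                g.setColor(Color.red);
                g.fillOval(xPos - radius, yPos - radius, 2 * radius, 2 * radius);
            }
        }
    }
    </code>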

  • Double Buffering and Components

    Hello, I am wondering how I can turn off double buffering for my components. This is important for printing, as double buffering makes the print job a lot of MB in size.

      /** The speed and quality of printing suffers dramatically if
       *  any of the containers have double buffering turned on.
       *  So this turns it off globally.
       *  @see enableDoubleBuffering
       */
      public static void disableDoubleBuffering(Component c) {
        RepaintManager currentManager = RepaintManager.currentManager(c);
        currentManager.setDoubleBufferingEnabled(false);
      }

      /** Re-enables double buffering globally. */
      public static void enableDoubleBuffering(Component c) {
        RepaintManager currentManager = RepaintManager.currentManager(c);
        currentManager.setDoubleBufferingEnabled(true);
      }
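
    A sketch of how those helpers are typically used when printing: buffering is switched off just for the component's paint call and restored immediately afterwards (this assumes the two methods above are in scope):
    <code>
      public void printComponent(Component c, Graphics g) {
        // Paint straight to the printer graphics, then restore buffering
        // for normal on-screen repaints.
        disableDoubleBuffering(c);
        c.paint(g);
        enableDoubleBuffering(c);
      }
    </code>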

  • Which is better, Double Buffering 1, or Double Buffering 2??

    Hi,
    I came across a book that uses a completely different approach to double buffering. I use this method:
     private Graphics dbg;
     private Image dbImage;

     public void update(Graphics g) {
          if (dbImage == null) {
               dbImage = createImage(this.getSize().width, this.getSize().height);
               dbg = dbImage.getGraphics();
          }
          dbg.setColor(this.getBackground());
          dbg.fillRect(0, 0, this.getSize().width, this.getSize().height);
          dbg.setColor(this.getForeground());
          paint(dbg);
          g.drawImage(dbImage, 0, 0, this);
     }
     That was my method for double buffering, and this is the book's method:
     import java.awt.*;

     public class DB extends Canvas {
          private Image[] backing = new Image[2];
          private int imageToDraw = 0;
          private int imageNotDraw = 1;

          public void update(Graphics g) {
               paint(g);
          }

          public synchronized void paint(Graphics g) {
               g.drawImage(backing[imageToDraw], 0, 0, this);
          }

          public void addNotify() {
               super.addNotify();
               backing[0] = createImage(400, 400);
               backing[1] = createImage(400, 400);
               setSize(400, 400);
               new Thread(
                    new Runnable() {
                         private int direction = 1;
                         private int position = 0;

                         public void run() {
                              while (true) {
                                   try {
                                        Thread.sleep(10);
                                   } catch (InterruptedException ex) {
                                   }
                                   Graphics g = backing[imageNotDraw].getGraphics();
                                   g.clearRect(0, 0, 400, 400);
                                   g.setColor(Color.black);
                                   g.drawOval(position, 200 - position, 400 - (2 * position), 72 * position);
                                   synchronized (DB.this) {
                                        int temp = imageNotDraw;
                                        imageNotDraw = imageToDraw;
                                        imageToDraw = temp;
                                   }
                                   position += direction;
                                   if (position > 199) {
                                        direction = -1;
                                   } else if (position < 1) {
                                        direction = 1;
                                   }
                                   repaint();
                              }
                         }
                    }
               ).start();
          }

          public static void main(String args[]) {
               Frame f = new Frame("Double Buffering");
               f.add(new DB(), BorderLayout.CENTER);
               f.pack();
               f.show();
          }
     }
     Which is better? I noticed smoother animation with the latter method.
    Is there no difference? Or is it just a figment of my imagination??
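
     For an AWT Canvas like the book's example, there is also the toolkit's own java.awt.image.BufferStrategy, which manages the back buffers and the flipping for you; a rough sketch under that assumption (not taken from either listing above):
     <code>
     import java.awt.Canvas;
     import java.awt.Color;
     import java.awt.Graphics;
     import java.awt.image.BufferStrategy;

     public class StrategyCanvas extends Canvas {

          // Call once the canvas is displayable (e.g. after the frame is shown).
          public void startRendering() {
               createBufferStrategy(2);                 // two backing buffers
               BufferStrategy strategy = getBufferStrategy();
               new Thread(() -> {
                    while (true) {
                         Graphics g = strategy.getDrawGraphics();
                         try {
                              g.setColor(Color.white);
                              g.fillRect(0, 0, getWidth(), getHeight());
                              g.setColor(Color.black);
                              g.drawOval(100, 100, 200, 200);
                         } finally {
                              g.dispose();
                         }
                         strategy.show();               // flip or blit the back buffer
                         try {
                              Thread.sleep(10);
                         } catch (InterruptedException ex) {
                              return;
                         }
                    }
               }).start();
          }
     }
     </code>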

    To be fair, if you download an applet all the class files are stored in your .jpi_cache, and depending on how that game requests its graphics sometimes they are stored there too, so really if you have to download an applet game twice, blame the programmer (I've probably got that dead wrong :B ).
    But what's wrong with Jars? They offer so much more.
    No offence meant by this, Malohkan, but if you can't organize your downloaded files the internet must really be a landmine for you :)
    Personally I'd be happy if I never saw another applet again; it seems Java is tied to this legacy, and to the average computer user it seems that is all Java is capable of.
    Admittedly there are some very funky applets out there using lots of way-over-my-head funky pixel tricks, but they would look so much better running full screen and offline.

  • Large file copy to iSCSI drive fills all memory until server stalls.

    I am having the file copy issues that people have been having with various versions of Server now for years, as can be read in the forums. I am having this issue on Server 2012 Std., using Hyper-V.
    When a large file is copied to an iSCSI drive, the file is copied into memory first faster than it can be sent over the network. It fills all available GB of memory until the server, which is a VM host, pretty much stalls and also all the VMs stall. This
    continues until the file copy is finished or stopped, then the memory is gradually released as it is taken out of memory as it is sent over the network.
    This issue was happening on both send and receive. I changed the registry setting to disable the Large Cache, and now I can receive large files from the iSCSI. They now take an additional 1 GB of memory, and it sits there until the file copy is finished.
    I have tried all the NIC and disk settings as can be found in the forums around the internet that people have posted in regard to this issue.
    To describe in a little more detail, when receiving a file from iSCSI, the file copy windows shows a speed of around 60-80 MB / sec, which is wire speed. When sending a file to iSCSI, the file copy window shows a speed of 150 MB/sec, which is actually the
    speed at which it is being written to memory. The NIC counter in Task Mgr shows instead the actual network speed which is about half of that. The difference is the rate at which memory fills until it is full.
    This also happens when using Window Server Backup. It freezes up the VM Host and Guests while the host backup is running because of this issue. It does cause some software issues.
    The problem does not happen inside the Guests. I can transfer files to a different LUN on the same iSCSI, which uses the same NIC as the Host with no issue.
    Does anyone know if the fix has been found for this? All forum posts I have found for this have closed with no definite resolution found.
    Thanks for your help.
    KTSaved

    Hi,
    Sorry if it causes confusion but "by design" I mean "by design it will use memory for copying files via network".
    In Windows 2000/2003, the following keys could help control the memory usage:
    LargeSystemCache (0 or 1) in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    Size (1, 2 or 3) in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
    I have seen threads mentioning that this will not work in later systems such as Windows 2008 R2.
    For Windows 2008 R2 and Windows 2008, there is a service named Microsoft Windows Dynamic Cache Service which addressed this issue:
    https://www.microsoft.com/en-us/download/details.aspx?id=9258
    However I searched and there is no update version for Windows 2012 and 2012 R2.
    I also noticed that the following command could help control the memory usage. With value = 1, NTFS uses the default amount of paged-pool memory:
    fsutil behavior set memoryusage 1
    You need a reboot after changing the value. 
