File Copy

Hi,
I have a requirement where the user needs to copy selected files/folders from his local machine to another machine/server and vice versa. Should I use the FTP protocol for this, or just pass the IP addresses of the source and destination machines? Will that work? What kind of mechanism should I use? I have written a program to copy files on the same machine, but I am stuck on how to proceed with copying from one machine to another.
The user needs to select the file/folder, so I need to design an interface that lists all the directories and files. I did this in a Swing application using a JTree. How do I do this in JSP?
Thanks,
Thanuja.
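If the destination machine exposes a mounted network share (an SMB share, NFS mount, or mapped drive), the machine-to-machine copy reduces to an ordinary local copy. A minimal Java sketch of that approach follows; the class name, method names, and paths are illustrative, not from the original post.

```java
import java.io.IOException;
import java.nio.file.*;

public class ShareCopy {
    // Copy a file into a destination directory. If destDir is a mounted
    // network share (SMB/NFS mount or mapped drive), this is exactly the
    // same call as a purely local copy; no FTP code is needed.
    static Path copyToShare(Path source, Path destDir) throws IOException {
        Files.createDirectories(destDir);
        Path target = destDir.resolve(source.getFileName());
        return Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("demo", ".txt");
        Files.writeString(src, "hello");
        // A temp directory stands in for the mounted share in this demo.
        Path destDir = Files.createTempDirectory("share");
        Path copied = copyToShare(src, destDir);
        System.out.println(Files.readString(copied)); // prints "hello"
    }
}
```

If no share is available, FTP or SFTP via a client library is the usual alternative; the remote machine is then identified by hostname or IP plus credentials, rather than by a path passed to a plain file copy.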

Although you're using HA this does not appear related to HA at all. Really you
are taking a backup of an environment and moving it to a new machine. You
should read and follow the instructions here:
http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/transapp_hotfail.html
In particular, it is unlikely the addresses in the region files (__db*) are valid on
a new machine. Also, after copying the files, you need to run catastrophic recovery
on the new backup environment directory.
Sue LoVerso
Oracle

Similar Messages

  • Existing files on drive connected to latest Airport Extreme do not show up, however files copied over the air to the drive do. Any ideas?

    I have the latest AirPort Extreme with a Seagate drive connected. I copied files to the drive while it was connected to my MacBook Pro. The files do not show up when the drive is connected to the AirPort Extreme, but they appear when it is connected to the MacBook. Files copied over the network to the drive appear. The drive is formatted HFS+.

         After playing with this for the last few days, I found a solution. Under the "Disks" tab in AirPort Utility, the "Secure Shared Disks" drop-down needs to be set to "With device password". Mine was set to "With accounts", and even after correct authentication none of the files would appear.
         Another interesting note: the files that I could see before, the ones I copied over the network, do not appear now. I can copy over new files and they show up, but the ones copied prior to changing the "Secure Shared Disks" option do not.

  • File Copy times

    My newsreader is acting funny and dropping posted messages, so I
    apologize if this shows up twice.
    My comments on the file speeds are that the times posted by others just go to show how difficult it sometimes is to make good timing measurements. I suspect that the wide variations being posted are due in large part to disk caching. To measure this, you should either flush the caches each time or run the tests multiple times to make sure that the cache affects them more equally.
    Here is what I'd expect. The LV file I/O is a thin layer built upon the
    OS file I/O. Any program using file I/O will see that smaller writes
    have somewhat more overhead than a few large writes. However, at some
    size, either LV or the OS will break the larger writes into smaller
    ones. The file I/O functions in general will be slower to read and
    write contents than making a file copy using the copy node or move node.
    Sorry I can't be more specific, but if you have a task that seems way too slow, please send it to technical support and report a performance problem. If we can find a better implementation, we will try to integrate it.
    Greg McKaskle
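    The point above about small writes carrying more per-call overhead can be sketched in Java. The two methods below produce byte-for-byte identical files; the difference is only in how many write calls could reach the OS. The class and method names are illustrative, not from any post in this thread.

```java
import java.io.*;
import java.nio.file.*;

public class WriteGranularity {
    // Write n bytes one at a time: the worst case for per-call overhead.
    // The BufferedOutputStream amortizes that overhead; without it,
    // every write(int) could reach the OS as its own call.
    static Path writeByteAtATime(int n) throws IOException {
        Path p = Files.createTempFile("small", ".bin");
        try (OutputStream out = new BufferedOutputStream(Files.newOutputStream(p))) {
            for (int i = 0; i < n; i++) {
                out.write(i & 0xFF);
            }
        }
        return p;
    }

    // Write the same n bytes in a single large call.
    static Path writeInOneCall(int n) throws IOException {
        Path p = Files.createTempFile("large", ".bin");
        byte[] buf = new byte[n];
        for (int i = 0; i < n; i++) {
            buf[i] = (byte) (i & 0xFF);
        }
        Files.write(p, buf);
        return p;
    }
}
```

    Timing the two variants, with caches flushed or results averaged over repeated runs as suggested above, is what exposes the per-write overhead; the output files are identical either way.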

    Maybe this is because of the write buffer?
    Try mounting the media using the -o sync option to have data written immediately.

  • How to measure progress of file copy?

    One of the requirements for my AIR application is to copy
    large files from one location to another on the client computer.
    I've been using the copyToAsync() method in AIR, but unfortunately
    this does not trigger progress events. Copying gigabyte files that
    take several seconds becomes pretty uncomfortable for users, when
    they can't see the progress.
    Does anyone know of an alternative or workaround? All I need
    to do is copy a file from one location to another, and measure the
    progress while that is happening.
    Since I control the client machines (it is a closed
    environment), I have the option to set up file servers on every
    machine and use the file download control with Flash instead of
    a file copy. However, when I investigated this option it seemed
    that the file download control will always trigger user interaction
    to select the target path (which I don't want -- I want to
    programmatically select the source files and the target directory).
    If someone knows a solution to that problem I would happily take it
    and move on :)

    Hi,
    Use 2 FileStream objects in async mode (openAsync). One to
    read and another to write. When you read in async mode, you get
    progress events. When they fire, you can read bytesAvailable
    amounts of data and write that to the output filestream.
    The output filestream can listen for output progress events
    to know whether more data can safely be written to it.
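    The same read-a-chunk, write-a-chunk, report-progress pattern the answer describes can be sketched outside AIR as well. Here is a minimal Java analog; the class name, parameters, and callback shape are assumptions for illustration, not AIR APIs.

```java
import java.io.*;
import java.nio.file.*;
import java.util.function.DoubleConsumer;

public class ProgressCopy {
    // Copy src to dst in chunks, reporting the fraction complete after
    // each chunk. This mirrors the AIR pattern: read some bytes, write
    // them to the output stream, then fire a progress notification.
    static void copyWithProgress(Path src, Path dst, int chunkSize,
                                 DoubleConsumer onProgress) throws IOException {
        long total = Files.size(src);
        long done = 0;
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            byte[] buf = new byte[chunkSize];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                done += n;
                onProgress.accept(total == 0 ? 1.0 : (double) done / total);
            }
        }
    }
}
```

    A caller might pass something like `p -> System.out.printf("%.0f%%%n", p * 100)` as the callback to drive a progress bar.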

  • Network speed affected by large file copy operations. Also, why intermittent network outages?

    Hi
    I have a couple of issues on our company network.
    The first is that a single large file copy impacts the entire network and dramatically reduces network speed, and the second is that there are periodic outages where file open/close/save operations may appear to hang, and where programs that rely on
    network connectivity, e.g. email, appear to hang. It is as though the PC loses its connection to the network, but the status of the network icon does not change. For the second issue, if we wait the program will respond, but the wait can be up to 1 minute.
    The downside is that this affects Access databases on our server, so that when an 'outage' occurs the Access client cannot recover and hangs permanently.
    We have a Windows Active Directory domain that comprises Windows 2003 R2 (soon to be decommissioned), Windows Server 2008 Standard and Windows Server 2012 R2 Standard domain controllers. There are two member servers: A file server running Windows 2008 Storage
    Server and a remote access server (which also runs WSUS) running Windows Server 2012 Standard. The clients comprise about 35 Win7 PC's and 1 Vista PC.
    When I copy or move a large file from the 2008 Storage Server to my Win7 client other staff experience massive slowdowns when accessing the network. Recently I was moving several files from the Storage Server to my local drive. The files comprised pairs
    (e.g. folo76t5.pmm and folo76t5.pmi), one of which is less than 1MB and the other varies between 1.5 - 1.9GB. I was moving two files at a time so the total file size for each operation was just under 2GB.
    While the file move operation was taking place a colleague was trying to open a 36k Excel file. After waiting 3mins he asked me for help. I did some tests and noticed that when I was not copying large files he could open the Excel file immediately. When
    I started copying more data from the Storage Server to my local drive it took several minutes before his PC could open the Excel file.
    I also noticed on my Win7 client that our email client (Pegasus Mail), which was the only application I had open at the time would hang when the move operation was started and it would take at least a minute for it to start responding.
    Ordinarily we work with many files.
    Anyone have any suggestions, please? This is something that is affecting all clients. I can't carry out file maintenance on large files during normal work hours if network speed is going to be so badly impacted.
    I'm still working on the intermittent network outages (the second issue), but if anyone has any suggestions about what may be causing this I would be grateful if you could share them.
    Thanks

    What have you checked for resource usage during one of these copies of a large file?
    At a minimum I would check Task Manager>Resource Monitor.  In particular check the disk and network usage.  Also, look at RAM and CPU while the copy is taking place.
    What RAID level is there on the file server?
    There are many possible areas that could be causing your problem(s).  And it could be more than one thing.  Start by checking these things.  And go from there.
    Hi, JohnB352
    Thanks for the suggestions. I have monitored the server and can see that the memory is nearly maxed out with a lot of hard faults (varies between several hundred to several thousand), recorded during normal usage. The Disk and CPU seem normal.
    I'm going to replace the RAM and double it up to 12GB.
    Thanks! This may help with some other issues we are having. I'll post back after it has been done.
    [Edit]
    Forgot to mention: there are 6 drives in the server. 2 for the OS (RAID 1 mirror) and 4 for the data (RAID 5).

  • Does the last updated backup in Time Machine have all my files for "standard" file copy? (I want to copy and paste files to another computer that doesn't have TM)

    My personal Macbook pro died, but I did have a Time Machine backup. I have a new iMac that I would like to use as a family computer. I would like to transfer some files to the new computer (and set-up a new Time Machine backup), but archive the remainder on a non-TM drive. I was planning to:
    1. File copy my last backup folder from my old TM drive to the new computer
    2. Copy (and then delete) the files I want to archive on another HD drive
    3. Wipe clean my current TM drive
    4. Set-up TM on my new computer
    Is this a good way to proceed?
    Is copying the last backup folder (vs. all backup folders) enough to move my files over?
    Any better approaches appreciated!

    I suggest you visit MicroCenter, go to their Apple section, and ask one of their guys if they are still offering the 2-terabyte backup drive for the iMac. Mine was only $100.00.
    So the iMac has Time Machine plus the USB 2-terabyte backup for less than your $130.00.

  • File copying problems

    Copying some files using the Finder results in a dialogue box error message: "The Finder cannot complete the operation because some data in +(file name)+ could not be read or written. (Error code -36)". The same files copy without error when using the SilverKeeper (v2.0.2) backup program.
    I am also experiencing a problem when quitting SIlverKeeper. A second or so after selecting Quit, and after the program has actually quit, a system dialogue box appears saying that the program has unexpectedly quit and offering choices of Reopen, Report to Apple or Close.
    The system also seems somewhat sluggish.
    Can anyone offer suggestions?

    Hi John, At this point I think you should get Applejack...
    http://www.versiontracker.com/dyn/moreinfo/macosx/19596
    After installing, reboot holding down CMD+S, then when the DOS-like prompt shows, type in...
    applejack AUTO
    Then let it do all six of its things.
    At least it'll eliminate some questions if it doesn't fix it.
    The six things it does are...
    Correct any Disk problems.
    Repair Permissions.
    Clear out Cache Files.
    Repair/check several plist files.
    Dump the VM files for a fresh start.
    Trash old Log files.
    First reboot will be slower, sometimes 2 or 3 restarts will be required for full benefit... my guess is files relying upon other files relying upon other files!
    Disconnect the USB cable from any UPS so the system doesn't shut down in the middle of the process.

  • Is it possible to have media files copied to a new location as you import them into a project?

    I have roughly 800GB of footage I've taken over the past 2 years of traveling, and I am making a single video montage... The files are located on an external drive, making them slower to work with. I have a 1TB SSD but not enough free space to copy all the files to it for sorting. What I want to do is have the files copy over to the SSD as I add them to the project, so that only the footage I'm going to use gets copied over to the SSD. The only way I can determine whether I am going to use the footage or not is to scrub through and mark ins and outs in Premiere...
    So I guess my question, in the simplest terms I can ask, is: is there a setting in Premiere, like the Project Manager tool which allows you to consolidate used media files to one destination, but one that will consolidate files as you add them to the project? Or could I do this manually by repeatedly using the Project Manager tool to consolidate files periodically?
    Hope that makes sense...
    Oh, and yes, I have considered other methods to my madness, like reviewing the footage outside of Premiere before deciding whether I will use it or not. Frankly, I have over 4000 pieces of footage and it's just much faster to scrub through them in Premiere than to open each one in VLC to review. It's also much more convenient to mark my ins and outs as I review the footage, so Premiere is my only option.
    Thanks!

    You can easily (and better, IMHO) do what you want in Adobe Prelude.  It's part of the CC subscription, and may have been included with CS6.
    Open the Ingest panel, navigate to the external drive where you have your source clips, make the thumbnails in the Ingest panel comfortably large, click a video clip to enable scrubbing, and use the J-K-L keys to navigate playback through the clips.  Put a check mark on the clips you want and be sure to select and set up the Transfer option on the right side of the panel before ingesting.  Don't select the Transcode option.
    Cheers,
    Jeff

  • File copy speeds to CSV vs non-CSV

    I'm working on bringing up a 2012 R2 cluster and doing a basic test. In this cluster, I have two adapters for iSCSI traffic, one for network traffic, and one for the heartbeat. The cluster node has all the current updates on it. Everything is set up correctly as far as I can see. I'm taking a folder with 1GB of random files in it and copying it from the C: drive of a node to an iSCSI LUN. If I have the LUN set up as a non-CSV disk, the copy happens about three times faster than if I have it set up as a CSV disk. All I'm doing is using FCM to change the disk from CSV to non-CSV (right-click, Remove from CSV; right-click, Add to CSV). I can swap it back and forth, and each time the copy process is about three times slower when it's a CSV. Am I missing something here? I've been through all the usual stuff with regard to the iSCSI adapters, MPIO, drivers, etc., but I don't think that would have anything to do with this anyway. The disk is accessed the same with regard to all that whether it's CSV or not, unless I'm missing something. Right now, I only have a single node configured in the cluster, so it's definitely not anything to do with the CSV being in redirected mode.
    I'm not trying to establish any particular transfer speed; I know file transfers are different from actual workloads and performance tools like Iometer when it comes to actual numbers. But it seems to me that the transfers should be close to the same whether the disk is a CSV or not, since I'm not changing anything else.

    Which system owns the CSV?  If the system from which you are copying does not own the CSV then all the metadata updates have to go across the network to be handled by the node that does own the CSV.  If you are copying a lot of little
    files, there is more metadata.
    Actually, metadata updates always happen in redirected IO from what I'm reading, that has been the part that I was missing.  This explains it. 
    https://technet.microsoft.com/en-us/library/jj612868.aspx?f=255&MSPPError=-2147217396 "When certain small changes occur in the file system on a CSV volume, this metadata must be synchronized on each of the physical nodes that access the
    LUN, not only on the single coordinator node... These metadata update operations occur in parallel across the cluster networks by using SMB 3.0. "
    So a file copy, even when done on a coordinator node, does the metadata updates in redirected mode. Other articles seem to say the same thing, though not always clearly. So it's still accurate to say that a file copy isn't the best way to measure CSV performance, but there doesn't seem to be a lot pointing to the (I think) important distinction regarding how the metadata updates work. From what I can see, that distinction probably trumps anything else, such as which node is the coordinator, CSV cache, etc. For me anyway, it makes a 3x performance difference, so I think that's pretty significant.

  • Files copied (backed up) on an external hard drive can NOT be opened

    MacBook Pro (2.5GHz, intel Core i5)
    OS X Yosemite, 10.10.2
    I was running low on storage space on my MacBook Pro. I copied all the items on the Desktop to my external hard drive. After all the files were copied from the MacBook Pro to the external hard drive, "Get Info" suggested that the total amount of space taken was the same between when all the items were on the Desktop and when they were on the external hard drive. The number of items copied over was also identical. I should have been more careful, but I then deleted all the items from the Desktop. It wasn't until a few days later (today) that I noticed the majority of the files copied onto the external disk can NOT be opened now. The error message is "The alias “<filename>” can’t be opened because the original item can’t be found."
    I really don't want to lose these files, but where are they? Their file sizes are correct but they just cannot be opened. I tried copying these files back to the Desktop on my MacBook Pro; that's not helpful. I tried accessing the files on a PC; I still can't open them. I tried repairing the external hard drive's disk (under Disk Utility); still can't open the files. I tried repairing disk permissions on my MacBook Pro hard drive; still can't open the files.
    One last thing I tried in Terminal (I am not terminal savvy at all; I just googled the "The alias..." error message and copied the instruction): /usr/bin/SetFile -a a /Volumes/MAC This did not solve the problem, either.
    Thank you for your help in advance!

    Hi, eddie from pataskala ohio.
    We love pro-active thinking here at the Apple Discussions and you're off to a great start.
    Even though upgrading iTunes shouldn't affect your Library, one of the funny things about computers is that sometimes, somehow, somewhere, when you least expect it, they can do some not-so-funny things, like losing data they're not supposed to. That is why it's a good idea to keep a backup copy of any data (such as your music) that you value in reserve... just in case.
    The most straightforward way, is to go to the iTunes Edit menu > Preferences > Advanced tab > General sub-tab and make sure your iTunes Music folder is assigned to its default location: C:\Documents and Settings\User Name\My Documents\My Music\iTunes .
    Next make sure all your music is stored in that folder by using the iTunes Advanced menu > Consolidate Library option.
    Now you can copy the entire iTunes folder - the iTunes Music folder and the iTunes library files in particular - to your external drive.
    If anything does go wrong with the upgrade, you'll be all set to recreate your Library.

  • Has anyone else had Mountain Lion 10.8.2 file copying trouble?

    Has anyone had any trouble with file copying to external disks?
    I have used a third party programme called "Synchronize! Pro X" (SyncPX) for many years with OS X up to 10.5.8 and it has never given any trouble at all.  In July I bought a new MacBook Pro 15", upgraded to 10.8.1 and bought the latest update for SyncPX (6.5.1).  This worked perfectly as far as I know until I downloaded the 10.8.2 Mountain Lion upgrade about a week ago.
    The symptoms are:
    1) When files are copied in backup mode (running as a single user, updating only files that have changed) to an external disk using Firewire, a high percentage of them do not have their file information copied correctly.  The "Last modified" date is often set to the time that the copy was made rather than being replaced with the  "Last modified" date from the source file.  Also, "further information" is sometimes not copied.
    2)  When files are copied in "Full bootable backup" mode (SyncPX runs "as root", copies files from ALL users and tries to set owner and group to be same as source file), the same symptoms as in (1) occur, but also SyncPX hangs right at the end of copying.  Sometimes it says "Updating dyld cache…", sometimes "Updating aliases…".  When this happens it is impossible to halt SyncPX except by forcing a quit, and then the external disc is not released so that it cannot be ejected except by forcing a shutdown.  In that case the external disk usually needs repairing.  The errors are always "incorrect number of extended attributes".
    I have tried lots of things.
    a) Restart
    b) Reformatting the external disk and starting over.
    c) Reinstalling the system (after checking and repairing the main system disc), followed by (b)
    d) Turning off indexing for the external disc
    e) Repeating the test with a different external disk (to make sure it's not down to a faulty external disk)
    f) I have looked at Console logs but they do not indicate anything odd going on.
    None of this makes any difference.  By contrast, a straight drag and drop copy from Finder works with no errors.
    At this point you probably think the errors are SyncPX's fault.  But:
    i) The same version of SyncPX worked until 10.8.2 was released
    ii) If the destination disk for a SyncPX backup is a network disk, everything works OK.
    I started by taking the fault up with SyncPX's developer, Qdea.  One other user had noted a fault similar to (2).  Qdea maintain that there's nothing wrong with their code, and blame Apple's disc drivers.
    On balance I think Qdea could be right, or at least Apple have changed some specifications but not told the developer world.
    I haven't yet tried to take this up directly with Apple (through AppleCare) as I think they will probably wash their hands of it and blame the developer.  The point of this post is to see if anyone else has had a similar problem, perhaps with different software, and if there is anyone with sufficient experience to give advice on how to take this forward.  Is the case for an OS bug convincing or do others feel it's a fault with SyncPX alone?  If so how does one explain its perfect behaviour until 10.8.2?
    What steps can I take to pin this down further?  All advice and comments will be gratefully received.

    RobertDHarding wrote:
    At this point you probably think the errors are SyncPX's fault. 
    And you would be right.
    I started by taking the fault up with SyncPX's developer, Qdea.  One other user had noted a fault similar to (2).  Qdea maintain that there's nothing wrong with their code, and blame Apple's disc drivers.
    What steps can I take to pin this down further?  All advice and comments will be gratefully received..
    Dump Qdea for someone who knows how to write Mac software.
    1. Qdea's site still displays a pre-Mac OS X logo.
    2. Qdea quotes a review from MacFixIt.
    MacFixIt had a good reputation, if you are old enough to remember when it used to exist.
    3. Qdea provides download links to Synchronize X and to separate versions in French, German, and Japanese. Apparently they don't know that Mac software can be localized into a single executable.
    I'm sure there are any number of similar utilities available, all of them cheaper than SynchronizeX's $99.99 price.

  • Slow Files Copy File Server DFS Namespace

    I have two file servers running as VMs; the two servers are on different physical hosts.
    Both are connected with a DFS namespace.
    The problem is that the two servers never have the same copy speed.
    Sometimes file copies are very slow, about 1MB/s, on FS01 and fast, about 12MB/s, on FS02.
    Sometimes it is fast on FS01 and slow on FS02.
    Sometimes both of them are slow.
    So, as usual, I rebooted the servers. That didn't work.
    Then I rebooted DC01; that also didn't work. There is another brother DC, DC02.
    After I rebooted DC02, one of the file servers became normal and the other was still slow.
    It happens to FS01 and FS02 randomly. They never get fast speeds together.
    Users never complain about the slow FS because 1MB/s is acceptable for them to open Word, Excel, etc.
    The HUGE problem is that I have no backups on the days when the FS is slow.
    The problem has been going on for two weeks; I'm giving up on fixing it myself and need help from you expert guys.
    Thanks!
    DC01, DC02, FS01, FS02 (Windows 2012, all VMs)

    Hi,
    Since the slow copy also occurs when you try a direct copy from both shared folders, you could enable the disk write cache on the destination server to check the results.
    HOW TO: Manually Enable/Disable Disk Write Caching
    http://support.microsoft.com/kb/259716
    Windows 2008 R2 - large file copy uses all available memory and then tranfer rate decreases dramatically (20x)
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/3f8a80fd-914b-4fe7-8c93-b06787b03662/windows-2008-r2-large-file-copy-uses-all-available-memory-and-then-tranfer-rate-decreases?forum=winservergen
    You could also refer to the FAQ article to troubleshoot the slow copy issue:
    [Forum FAQ] Troubleshooting Network File Copy Slowness
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/7bd9978c-69b4-42bf-90cd-fc7541ccb663/forum-faq-troubleshooting-network-file-copy-slowness?forum=winserverPN
    Best Regards,
    Mandy 

  • Storage Spaces: Virtual Disk taken offline during file copy, marked as "This disk is offline because it is out of capacity", but plenty of free space

    Server 2012 RC. I'm using Storage Spaces, with two virtual disks across 23 underlying physical disks.
    * First virtual disk is fixed provisioning, parity across 23 physical disks: 10,024GB capacity
    * Second virtual disk is fixed provisioning, parity across the remaining space on 6 of the same physical disks: 652GB capacity
    These have been configured as dynamic disks, with an NTFS volume spanned across the two (larger virtual disk first). Total volume size 10,676GB. For more details of the hardware, and why the configuration is like this, see: http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/c35ff156-01a8-456a-9190-04c7bcfc048e
    I'm copying several TB from a network share to this volume. It is very slow at ~12MB/sec, but works. However, three times so far, several hours in to the file copy and with plenty of free space remaining, the 10,024GB virtual disk is suddenly taken offline.
    This obviously then fails the spanned volume and stops the file copy.
    The second time, I took screenshots, below. The disk (Disk27) is marked offline due to "This disk is offline because it is out of capacity". And the disk in the spanned volume is marked as missing (which is what you would expect when one of its member disks
    is offline).
    I can then mark the disk (Disk27) back online again, and this restores the spanned volume. I can then re-start the file copy from where it failed. There doesn't appear to be any data loss, but it does cause an outage that requires manual attention. As you
    can see, there is plenty of space left on the spanned volume.
    Each time this has happened, there are a few event 150 errors in the System event log: "Disk 27 has reached a logical block provisioning permanent resource exhaustion condition.". Source: Disk.
    - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
    - <System>
      <Provider Name="disk" /> 
      <EventID Qualifiers="49156">150</EventID> 
      <Level>2</Level> 
      <Task>0</Task> 
      <Keywords>0x80000000000000</Keywords> 
      <TimeCreated SystemTime="2012-06-07T11:24:53.572101500Z" /> 
      <EventRecordID>14476</EventRecordID> 
      <Channel>System</Channel> 
      <Computer>Trounce-Server2.trounce.corp</Computer> 
      <Security /> 
      </System>
    - <EventData>
      <Data>\Device\Harddisk27\DR27</Data> 
      <Data>27</Data> 
      <Binary>000000000200300000000000960004C0000000000000000000000000000000000000000000000000</Binary> 
      </EventData>
      </Event>
    This error seems to be related to thin provisioning of disks. I found this:
    http://msdn.microsoft.com/en-us/library/windows/desktop/hh848068(v=vs.85).aspx. But both these Virtual Disks are configured as Fixed, not Thin provisioning, so it shouldn't apply.
    My thoughts: the virtual disk should not spuriously go offline during a file copy, even if it really were out of space, and in any case there is plenty of free space remaining. I also don't understand the stated reason for it being marked offline ("This disk is offline
    because it is out of capacity"). Why would a disk go offline because it ran out of thin-provisioned capacity, rather than just returning an "out of disk space" error while staying online?

    Interesting thread; I've been having the same issue. I had a failed hardware RAID that was impossible to recover in place, so after being forced to do a 1:1 backup, I find myself with five 2TB hard drives to play with. Storage Spaces seemed like an interesting
    way to go until I started facing the issues we share.
    My configuration is a VM running Windows Server 2012 RC with five virtualized physical drives on a SCSI interface, 2TB each, that make up my storage pool, and a single thinly provisioned disk of 18TB (using one disk's worth of capacity for parity).
    Interestingly enough, write speed has not been an issue on this machine (30-70MB/s, up from 256K on the beta).
    Of note to me is this error in my event log 13 minutes before the drive disappeared:
    "The shadow copies of volume E: were deleted because the shadow copy storage could not grow in time.Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied."Source: volsnap, Event ID: 25, Level: Error
    followed by:
    "The system failed to flush data to the transaction log. Corruption may occur in VolumeId: E:, DeviceName: \Device\HarddiskVolume17.(The physical resources of  this disk have been exhausted.)"Source: Ntfs (Microsoft-Windows-Ntfs), Event ID: 140, Level: Warning
    I figure the amount of space available to me before I start encountering physical limits is in the vicinity of about 7TB. It dropped out for the second time at 184 GB.
    FYI, the number of columns created for me is 5
    Regards,
    Steven Blom

  • Can't get File.copy(target) to work.

    Hi,
    I'm writing a script where I want to copy a file to another location after the render has finished, but I can't get the File.copy() method to work.
    Here's a snippet of the code:
    app.project.renderQueue.render();
    var renderedFile = app.project.renderQueue.items[1].outputModules[1].file;
    renderedFile.copy("c:\\");
    The render works fine and renderedFile.fullName returns the correct path and name, but the actual copying just doesn't work.
    I have made sure that scripts are allowed to access files and network.
    thanks
    Ludde

    never mind, I solved it.
    It wasn't clear to me that I had to specify both target path AND file name.
    This works just fine now:
    renderedFile.copy("c:\\rendered.mov");
    cheers
    Ludde

  • How to use the Microsoft File Copy dialog box  in Java ??

    Can anyone tell me if it is possible to use the Microsoft file copy dialog box (the one with the animated gif of the paper flying from one folder to another) in a Java application?
    Many thanks

    And in any case, in any version of Windows that I've looked at, the file copy animation is an AVI, not a GIF.
    db

  • Problem with UTL FILE COPY

    Hi,
    I am facing one small problem with the UTL_FILE.FCOPY procedure. We have an automated, scheduled batch process. As a daily batch process it moves files from one folder to another. During the move it first copies each file from the source folder to the destination folder using FCOPY, and then removes it from the source folder using FREMOVE.
    In a few cases we have found that the copied files arrive as 0 bytes in the destination folder even though the source file was 1MB. The interesting part is that I am unable to replicate the issue: when I run the process again it works properly and the files are copied with the correct size. This kind of issue occurs very rarely, about once a month, but the question is why FCOPY sometimes does not work properly. I am unable to understand it.
    Thanks a lot for going through this. Any suggestions?
    Regards,
    Ashish

    Anyhow, make sure that the copy starts only after the files have been fully generated.
    Think of a spool file being generated with a huge amount of data: initially the spool file exists with 0 bytes, and only after the query finishes does it show its actual size.
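    That guard (waiting until the source file is fully generated before copying it) can also be sketched in the calling application. Below is a minimal Java polling heuristic; the poll interval, retry count, and names are illustrative assumptions, not part of the original batch process.

```java
import java.io.IOException;
import java.nio.file.*;

public class StableCopy {
    // Heuristic: treat a file as fully generated once its size is
    // non-zero and stops changing between polls. A 0-byte file is
    // treated as not ready, matching the symptom in the thread.
    static boolean waitUntilStable(Path file, long pollMillis, int maxPolls)
            throws IOException, InterruptedException {
        long last = -1;
        for (int i = 0; i < maxPolls; i++) {
            long size = Files.size(file);
            if (size > 0 && size == last) {
                return true; // unchanged since the previous poll
            }
            last = size;
            Thread.sleep(pollMillis);
        }
        return false; // still growing (or still empty) after maxPolls
    }

    // Copy only once the source looks complete; otherwise fail loudly
    // instead of producing a truncated destination file.
    static void copyWhenReady(Path src, Path dst)
            throws IOException, InterruptedException {
        if (!waitUntilStable(src, 500, 20)) {
            throw new IOException("source still being generated: " + src);
        }
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

    The same idea applies inside the database job: schedule the FCOPY step only after the producing process signals completion, for example by writing a marker file or renaming the finished file into the pickup folder.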
