Why one group of files takes so long to copy

Hi...  I have two different folders full of files that I occasionally back up to another location.  Each set contains approximately 3000 files.  However, what I will call set 1's 3000 files represent about 700 MBytes of data, while the 3000 files in set 2 are about half that size, at roughly 350 MBytes. 
The vast majority of those files never change.  Instead, in a given week perhaps 10-20 files and a few kilobytes of data might change.  When I copy set 2 to its backup location, the copy takes maybe 15 seconds tops.  When I copy set 1 to its backup location on the same drive as set 2, the backup takes at least a factor of 10 longer...  Maybe more...  I could imagine a factor of 2, since there are twice as many MBytes of data in set 1, but why might it be taking so, so much longer???  I've compared the file attributes between the two sets and they appear to all be the same.
Even more interesting: the set 2 backup (the faster one) that I said takes maybe 15 seconds tops drops to under a second if I immediately repeat it, meaning the system seems to know those files have not changed since the previous copy.  However, if I do the same with set 1, the second, immediate copy takes every bit as long as the first one did just moments before...  It's as if something is telling the Finder to go ahead and recopy ALL of those files every time, whether they have changed or not. 
Any input on why I might be seeing that???  I care because I also backup these files via an online backup service and that set 1 group of files always takes a ton longer on that service as well.  What might cause this???
thanks... bob..

Well, maybe more detail would help, but I was trying to keep it simple.  There are certain files that I now and then copy into encrypted, sparse disk images, and I then back up those encrypted disk images elsewhere.  Currently I open the disk images, which then show the files/folders, but those might be a few weeks old.  I then open the same folders that exist elsewhere on my computer, and that's when I drag and drop to update what will soon be locked back up into the encrypted disk images.
Sets 1 and 2 as I described them above are just natural groupings of files/folders I once chose so that these encrypted disk images wouldn't be incredibly large. 
I do have DiskWarrior, as someone asked above...
And I don't have Carbon Copy Cloner, but I do have its cousin, SuperDuper, which I use to back up the entire computer to a backup drive now and then. 
And I doubt I'm running into memory problems, as this is a new rMBP with 16 GBytes of RAM.
I could use SuperDuper to copy these files for me...  Might be worth a try...  But it's just so curious that one group of files copies in an instant the second time you request the same copy, while the other group seems to want to redo the entire copy every time...  It might just be a few funny files in that one group...  Interesting...
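
In case it helps anyone poke at this, below is a minimal sketch (Python, not the Finder's actual logic; the two folder paths are placeholders) that compares each file's size and modification time between a source set and its backup copy. Tools that skip unchanged files typically key off exactly that metadata, so anything this flags as different on every run is a good candidate for the "funny files".

#!/usr/bin/env python3
# Minimal sketch: report files whose size or modification time differs
# between a source folder and its backup copy. Not the Finder's algorithm;
# the two paths below are placeholders, substitute your own set 1 locations.
import os

SRC = "/Users/bob/Documents/Set1"   # hypothetical source folder
DST = "/Volumes/Backup/Set1"        # hypothetical backup folder

def snapshot(root):
    """Map each file's relative path to (size, modification time)."""
    info = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            st = os.stat(full)
            info[os.path.relpath(full, root)] = (st.st_size, int(st.st_mtime))
    return info

src, dst = snapshot(SRC), snapshot(DST)
for rel in sorted(src):
    if rel not in dst:
        print("only in source:", rel)
    elif src[rel] != dst[rel]:
        print("differs (size/mtime):", rel, src[rel], dst[rel])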

Similar Messages

  • Why does downloading from iCloud take so long now?

    I've been using Match since it was released. I'm used to the download times required for hour-length albums, playlists, and hi-def TV programs.
    My memory is that a year ago, downloading an hour-long music album that was bought on iTunes in iTunes Plus format (256 kbps) took about 5-10 mins. So why does it now take ONE HOUR?!?
    Likewise, my memory is that a year ago, downloading an hour-long hi-def TV episode that was bought on iTunes took about 20-30 mins. So why does it now take TWO HOURS?!?
    My infrastructure is the same: broadband connection with over 100Mbps down, iPhone 4s, iPad 3.
    WHY has downloading from Match gotten SO SLOW on iOS devices? On my new iMac it is blazing fast. And a year ago, it was rather quick (as detailed above).
    Anyone who has info on this, I would greatly appreciate seeing it posted. Thanks.


  • Why does my iPad Air take a long time to charge?

    Why does my iPad Air take a long time to charge?

    You need to provide more detail. What do you mean by "a long time?" Are you charging it plugged into a wall socket?
    Barry

  • Why does downloading OSX Lion take so long??

    I formatted the Hard Drive from Disk Utility in order to Re-install OSX Lion... The download time first shown was like 53 hours and then after like 10 minutes it went down to 10 hours... I am pretty sure that the internet connection I have is very fast, so why does downloading OSX Lion take so long?

    prik67 wrote:
    I formatted the Hard Drive from Disk Utility in order to Re-install OSX Lion... The download time first shown was like 53 hours and then after like 10 minutes it went down to 10 hours... I am pretty sure that the internet connection I have is very fast, so why does downloading OSX Lion take so long?
    It's about 4GB, from memory, and the download speed can be affected from things like site traffic or variations in speed from your own ISP. But in general, it IS a very large download. Also never put your trust in a computer telling you how much time is left to download. As you can see, it recalculates everything more often and more quickly than the Tax Department! (Oh, and don't watch it. You know the old saying)
    Cheers
    Pete
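
    For a rough sense of scale, the arithmetic is only a couple of lines (the 4 GB figure is from the reply above; the connection speeds are made-up examples, and real-world throughput is usually below the nominal rate):

    # Rough download-time estimate for a ~4 GB installer at various speeds.
    # 4 GB is the size mentioned above; the Mbps values are example figures.
    size_gb = 4
    for mbps in (5, 25, 100):
        seconds = size_gb * 8 * 1000 / mbps   # GB -> gigabits -> seconds
        print(f"{mbps:>3} Mbps: about {seconds / 60:.0f} minutes")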

  • Why does my iPhone 4 take soooo long to stream video?

    Why does my iPhone 4 take soooo long to stream a video???  I will have an iPhone 3GS next to it, and we will play a YouTube video at the same time, and the 3GS goes significantly faster.  Any advice??

    Bad cable? Bad USB port? Something corrupt in the album itself?
    We need more info as to what your configuration is and what you've tried.

  • File name too long cannot copy (cont'd)

    This is a continuation of the post started September 01, 2009, with the last post on
    October 17, 2011 named File name too long cannot copy
    Since this is an ongoing unsolved issue, and the thread was locked, it continues here.
    It is ever so easy to create a long file/path using explorer.exe as will be shown below.
    What needs to be solved is, what is an easy way (no, not by listing out all the files in a directory to a text file and counting characters), perhaps using a shell extension, to find out which files in a directory and its subdirectories will be (or were)
    too long to copy (and then perhaps even copying those over either intact or renamed)?
    Maflagulator said:
    I'm running the 7100 build...enjoying it except for one big thing:
    While attempting to copy 402gb from my main storage volume onto a spare 500gb drive (for the purpose of changing to a new RAID array) I've come across something that I would expect a Windows 98 OS to give me.
    It tells me that a file has TOO LONG of a file name, then provides with two unhelpful options: SKIP or CANCEL
    I never had XP give me an issue like this at all, so what gives? And while some specific files did have long file names (such as for songs, etc.) it had 7 issues with folders stating that their name was too long, but in fact they were not since they were
    titled '07-06-07' for the date that I dumped the audio files in them. However, they may have contained FILES with long file names though.
    Anyone else get this same situation? Perhaps the RTM version does not do this? Can anyone verify this regarding their install of the RC or the RTM?
    It made it through 400gb out of the 402gb transfer.
    I'm just happy to see that it doesn't spazz out about an issue like this until it has done all the other transfers that it can do because it saves the issues it has with files until the very end. In XP it would spazz about it the moment it came across it
    causing the transfer process to halt.
    Since long path/file names can so easily be created on Win7, it might be useful to see a typical way this happens, which might then give clues how to work with them.
    In Windows Vista, we learnt from:
    File names and file name extensions: frequently asked questions that:
    Windows usually limits file names to 260 characters. But the file name must actually be shorter than that, since the complete path (such as C:\Program Files\filename.txt) is included in this character count.
    In Windows 7, we are told here:
    File names and file name extensions: frequently asked questions that:
    It depends on the length of the complete path to the file (such as C:\Program Files\filename.txt). Windows limits a single path to 260 characters. This is why you might occasionally get an error when copying a file with a very long file name to a location
    that has a longer path than the file's original location.
    From the Windows Dev Center - Desktop, we read about Maximum Path Length Limitation here:
    Naming Files, Paths, and Namespaces
    This helps us understand why a folder can be a maximum of 244 characters, from the defined 260 length of MAX_PATH as follows:
    260 minus C:\ (3) minus <NUL> (1) = 256
    256 minus 8.3 file name (12) = 244
    We also learn there that: The Windows API has many functions that also have Unicode versions to permit an extended-length path for a maximum total path length of 32,767 characters.
    And we read the claim that: The shell and the file system have different requirements. It is possible to create a path with the Windows API that the shell user interface is not able to interpret properly.
    There is also a comment below this document that reads: In a previous iteration of this document, it is mentioned that The Unicode versions of several functions permit a maximum path length of approximately 32,000 characters composed of components up to
    255 characters in length. This information is now gone.
    So we are in a position where the file system and Windows API can create long paths/files that the shell cannot handle.
    But then we need to be able to handle it, so a little exploration might lead to a better understanding of how to do this.
    For most tasks being performed on long folder/files, Windows 7 and other Windows programs balk when the Path+Filename length > 260
    Let's create a long path/file.
    Create a folder called A at the root of a Drive.
    Create a sub-folder of A called: B
    Create a sub-folder of B called: C
    Make a FILE in sub-folder C called (no spaces or break, one long continuous string): 123456789A123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J123456789K123456789L123456789M123456789N123456789O123456789P123456789Q123456789R123456789S123456789T123456789U123456789V123456789W123456789X123456.txt
    Rename sub-folder C to the string (no spaces or break, one long continuous string) (The actual directory created will be slightly shorter than this full length): 123456789A123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J123456789K123456789L123456789M123456789N123456789O123456789P123456789Q123456789R123456789S123456789T123456789U123456789V123456789W123456789X123456789Y123456789Z
    Rename sub-folder B to the same full string above. (The actual directory created will be slightly shorter than this full length but 2 characters longer than the step above.)
    Rename folder A to that same full original string. (Again the actual directory created will be slightly shorter than this full length but 2 characters longer than the step above.)
    You now have the lovely file placed at (the breaks are just so it fits into the screen):
    C:\123456789A123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J 123456789K123456789L123456789M123456789N123456789O123456789P123456789Q123456789R123456789S123456789T 123456789U123456789V123456789W123456789X1234\ 123456789A123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J
    123456789K123456789L123456789M123456789N123456789O123456789P123456789Q123456789R123456789S123456789T 123456789U123456789V123456789W123456789X12\ 123456789A123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J 123456789K123456789L123456789M123456789N123456789O123456789P123456789Q123456789R123456789S123456789T
    123456789U123456789V123456789W123456789X\ 123456789A123456789B123456789C123456789D123456789E123456789F123456789G123456789H123456789I123456789J 123456789K123456789L123456789M123456789N123456789O123456789P123456789Q123456789R123456789S123456789T 123456789U123456789V123456789W123456789X123456.txt
    You now have a folder-path length of over 700 and a file name length of over 250, for a total of over 950.
    However, you will notice that each folder, when created, could only be a maximum of 247 characters including the path (for example C:\, then C:\A, then C:\A\B).
    This only applies backwards, that is up the path. It did not matter what was further down the path.
    Now, you can't easily access or rename the file, but you can rename the folders easily.
    For best results, start renaming from the top of the Tree, working down the subfolders, because renaming from down up will limit you, and in fact won't work if the folder lengths are too long.
    So how might knowing this help us?
    Well, to copy this long_file from the C:\ drive to the D:\ drive, and keeping the path structure, this should work:
    Note the name of the top folder. Rename it to something very short, say: A (Make sure C:\A does not exist)
    Note the name of the 2nd folder. Rename it to something very short, say: B (Make sure C:\A\B does not exist)
    Note the name of the 3rd folder. Rename it to something very short, say: C (Make sure C:\A\B\C does not exist)
    Make sure D:\A does not exist - then copy the A folder on disk C: to disk D: (which gives you D:\A\B\C\long_file)
    Rename D:\A\B\C to D:\A\B\Original_3rd_Folder_name
    Rename D:\A\B to D:\A\Original_2nd_Folder_name
    Rename D:\A to D:\Original_top_Folder_name
    Rename C:\A\B\C back to their original names, in this same reverse order starting with C, then B, then A
    Note: If using Explorer, at some points you might have to press the F5 refresh key.
    This is of course how you might copy such long path/files without using the other more "easy" techniques for the "normal" everyday user like:
    sharing a sub-folder
    using the commandline to assign a drive letter by means of SUBST
    using AddConnectionunder VB to assign a drive letter to a path
    using the "\\?\" prefix to a path string to tell the Windows APIs to disable all string parsing and to send the string that follows it straight to the file system
    and so on.
    See how simple Windows can be for Dummies!
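
    To make the last of those techniques concrete, here is a hedged sketch of the "\\?\" prefix in use (Python on Windows, assuming the destination folders already exist; both paths are placeholders, not the example tree above):

    # Sketch only: copy a file whose full path exceeds MAX_PATH by prefixing
    # both paths with \\?\ so the Unicode file APIs accept the extended length.
    import shutil

    def extended(path):
        """Return the path with the Windows extended-length prefix prepended."""
        path = path.replace("/", "\\")
        return path if path.startswith("\\\\?\\") else "\\\\?\\" + path

    src = r"C:\some\very\deep\folders\very_long_file_name.txt"    # placeholder
    dst = r"D:\backup\very\deep\folders\very_long_file_name.txt"  # placeholder
    shutil.copy2(extended(src), extended(dst))   # metadata-preserving copy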
    But then, how can we know that files to be copied exceed this MAX_PATH? Or, after a copy has taken place, know exactly which files have NOT been copied because they exceed the MAX_PATH limit, and then a procedure to copy these either by renaming them, or by copying them intact as they are?
    There have been suggestions to use
    LongPathTool, but this does not have a facility to check a series of folders and tell you which files are going to be caught by the error when copying. So once a copy has taken place using Windows 7, one does not know which files did not get copied, and
    where exactly they are located.
    Neither does the free
    Old Path Scanner do that. It can only check for overly long directory paths, but misses out when the directory path is within limits, but adding in the file name puts it out of bounds.
    So, as shown above, it is ever so easy to create a long file/path using explorer.exe
    So, what then is an easy way (no, not by listing out all the files in a directory to a text file and counting characters), perhaps using a shell extension, to find out which files in a directory and its subdirectories will be (or were) too long to copy,
    (and then perhaps even copying those over either intact or renamed)?
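
    One low-tech way to get that list without a shell extension is a short script that does the counting for you. The sketch below (Python; the source tree, destination root, and limit are placeholders to adjust) walks the source and reports every file whose full path, rewritten under the destination root, would reach the classic 260-character limit, before any copy is attempted:

    # Sketch: predict which files under SOURCE would exceed MAX_PATH once
    # copied under DEST. All three constants are placeholders.
    import os

    SOURCE = r"C:\Data"         # tree you intend to copy
    DEST = r"D:\Backup\Data"    # destination root
    MAX_PATH = 260

    too_long = []
    for dirpath, _dirs, files in os.walk(SOURCE):
        for name in files:
            src_path = os.path.join(dirpath, name)
            dst_path = os.path.join(DEST, os.path.relpath(src_path, SOURCE))
            if len(dst_path) >= MAX_PATH:
                too_long.append((len(dst_path), dst_path))

    for length, path in sorted(too_long, reverse=True):
        print(length, path)
    print(f"{len(too_long)} file(s) would reach or exceed {MAX_PATH} characters")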

    This is a not a "solution" ....but a "work around": a low tech fix....for error message "your file name is too long to be copied, deleted, renamed, moved" :
    1.   The problem is this: the "file name" has a limit on the number of characters.....the sum of characters really includes the entire path name; you gotta shorten it first (i.e., if the total number of characters in the file name + path name is over the
    limit, the error appears).  The deeper your file folder sub levels are, the more this problem will come up, especially when you copy to a subfolder of a subfolder/subfolder of another path (that adds to the character limit).
    2.  How do you know which combined file names + path names are too long if  you are in the  middle of a copy operation and this error pops up?  Some files copied but the "long files error message" says "skip" or "cancel" ... but not which
    files are the "too long" ones.  If you hit "skip" or "cancel" the "too long" files are left behind...but are mixed in with non-offender "good" "short name" files.   Sorting thru 1000s of "good" files to find a few "bad" ones manually is impractical.
    3.   Here's how you sort out the "bad" from the "good":
    4.    Let's say you want to copy a folder ..."Football" ...that has five layers of subfolders; each subfolder contains numerous files:
      C:\1 Football\2 teams\3 players\4 stats\5 injuriessidelineplayerstoolong 
           There are five levels: root "1 Football" with subfolders 2, 3, 4 and lastly "5 injuries".
    5.    Use "cut" and "paste"  (for example to backup all five levels to a new backup folder):
           select "1 football" ....cut....select target folder....paste 
           ("cut" command means as the files are copied to the target destination, the file is deleted from the source location)
          Hint: avoid "cut" and "paste" to a target folder that is itself a sub/sub/sub folder ...that compounds the "characters over the limit" problem ...because the characters in the sub/sub/sub folder are included in the "file name
    character limit"...instead "paste" to a C:/ root directory.
           Suppose in the middle of this operation...error pops up: "5 files have file names that are too long"  Skip or cancel?
           select "skip"  ...and let operation finish
    6.    Now go back and look at the source location: since the software allows only the "good" "short name" files to be copied (and because you "skipped" the "bad" "long name" files, they were neither copied nor deleted) ...all that remains
    in the source location are the "bad" "long name" files (because the "good" ones were deleted from the source location by the "cut" operation)...the bad ones stick out like a sore thumb.
    7.   You will find that all that remains in the source folders are the "bad" "too long" files (the short script after this post will list them for you); in this example the "bad" file is in level 5:
          C:\1 Football\2 teams\3 players\4 stats\5 injuriessidelineplayerstoolong
    8.   Select folder 5 injuriessidelineplayerstoolong (that's right...select the folder, not the file)...you gotta rename the folder first.
    9.  hit F2 rename folder..folder name highlighted...delete some of the letters in the folder name:
           like this:   5 injuriessidelineplayers  ....you should delete 'toolong'....from the folder name
    10.  then go into folder 5....and do the same operation ...with the too long file name:
            hit F2 rename file....file name highlighted...delete some of the letters
               like this:  injuriessidelineplayers.....you should delete 'toolong' from the file name
    11.  Now..."cut and paste"  the renamed file to the target backup folder.  
    The error message will pop up again if you missed any "bad" files....for example, it will indicate "5 files too long" ....then repeat the process 5 times until you fix all of them.
    12.     Finally, copy the target destination files back to the source location (when you are certain all source location file folder locations are empty) 
    Of course, this "makeshift" solution would not be necessary if MSFT would fix the problem...

  • Why finding replication stream matchpoint takes too long

    hi,
    I am using bdb je 5.0.58 HA(two nodes group,JVM 6G for each node).
    Sometimes, I find a bdb node takes too long to restart (about 2 hours).
    When this occurs, I catch the process stack of bdb(jvm process) by jstack.
    After analyzing the stack, I found "ReplicaFeederSyncup.findMatchpoint()" taking all the time.
    I want to know why this method takes so much time, and how I can avoid this bad case.
    Thanks.
    (Post edited by liang_mic)

    Liang,
    2 hours is indeed a huge amount of time for a node restart. It's hard to be sure without doing more detailed analysis of your log as to what may be going wrong, but I do wonder if it is related to the problem you reported in outOfMemory error presents when cleaner occurs [#21786]. Perhaps the best approach is for me to describe in more detail what happens when a replicated node is connecting with a new master, which might give you more insight into what is happening in your case.
    The members of a BDB JE HA replication group share the same logical stream of replicated records, where each record is identified with a virtual log sequence number, or VLSN. In other words, the log record described by VLSN x on any node is the same data record, although it may be stored in a physically different place in the log of each node.
    When a replica in a group connects with a master, it must find a common point, the matchpoint, in that replication stream. There are different situations in which a replica may connect with a master. For example, it may have come up and just joined the group. Another case is when the replica is up already but a new master has been elected for the group. One way or another, the replica wants to find the most recent point in its log, which it has in common with the log of the master. Only certain kinds of log entries, tagged with timestamps, are eligible to be used for such a match, and usually, these are transaction commits and aborts.
    Now, in your previous forum posting, you reported an OOME because of a very large transaction, so this syncup issue at first seems like it might be related. Perhaps your replication nodes need to traverse a great many records, in an incomplete transaction, to find the match point. But the syncup code does not blindly traverse all records, it uses the vlsn index metadata to skip to the optimal locations. In this case, even if the last transaction was very long, and incomplete, it should know where the previous transaction end was, and find that location directly, without having to do a scan.
    As a possible related note, I did wonder if something was unusual about your vlsn index metadata. I did not explain this in outOfMemory error presents when cleaner occurs but I later calculated that the transaction which caused the OOME should only have contained 1500 records. I think that you said that you figured out that you were deleting about 15 million records, and you figured out that it was the vlsn index update transaction which was holding many locks. But because the vlsn index does not record every single record, it should only take about 1,500 metadata records in the vlsn index to cover 15 million application data records. It is still a bug in our code to update that many records in a single transaction, but the OOME was surprising, because 1,500 locks shouldn't be catastrophic.
    There are a number of ways to investigate this further.
    - You may want to try using a SyncupProgress listener described at http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/rep/SyncupProgress.html to get more information on which part of the syncup process is taking a long time.
    - If that confirms that finding the matchpoint is the problem, we have an unadvertised utility, meant for debugging, to examine the vlsn index. The usage is as follows; you would use the -dumpVLSN option and run this on the replica node. But this would require our assistance to interpret the results. We would be looking for the records that mention where "sync" points are, and would correlate that to the replica's log, and that might give more information if this is indeed the problem, and why the vlsn index was not acting to optimize the search.
    $ java -jar build/lib/je.jar DbStreamVerify
    usage: java { com.sleepycat.je.rep.utilint.DbStreamVerify | -jar je-<version>.jar DbStreamVerify }
    -h <dir> # environment home directory
    -s <hex> # start file
    -e <hex> # end file
    -verifyStream # check that replication stream is ascending
    -dumpVLSN # scan log file for log entries that make up the VLSN index, don't run verify.
    -dumpRepGroup # scan log file for log entries that make up the rep group db, don't run verify.
    -i # show invisible. If true, print invisible entries when running verify mode.
    -v # verbose

  • The first binary file write operation for a new file takes progressively longer.

    I have an application in which I am acquiring analog data from multiple
    PXI-6031E DAQ boards and then writing that data to FireWire hard disks
    over an extended time period (14 days).  I am using a PXI-8145RT
    controller, a PXI-8252 FireWire interface board and compatible FireWire
    hard drive enclosures.  When I start acquiring data to an empty
    hard disk, creating files on the fly as well as the actual file I/O
    operations are both very quick.  As the number of files on the
    hard drive increases, it begins to take considerably longer to complete
    the first write to a new binary file.  After the first write,
    subsequent writes of the same data size to that same file are very
    fast.  It is only the first write operation to a new file that
    takes progressively longer.  To clarify, it currently takes 1 to 2
    milliseconds to complete the first binary write of a new file when the
    hard drive is almost empty.  After writing 32, 150 MByte files,
    the first binary write to file 33 takes about 5 seconds!  This
    behavior is repeatable and continues to get worse as the number of
    files increases.  I am using the FAT32 file system, required for
    the Real-Time controller, and 80GB laptop hard drives.   The
    system works flawlessly until asked to create a new file and write the
    first set of binary data to that file.  I am forced to buffer lots
    of data from the DAQ boards while the system hangs at this point. 
    The requirements for this data acquisition system do not allow for a
    single data file so I can not simply write to one large file.  
    Any help or suggestions as to why I am seeing this behavior would be
    greatly appreciated.
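
    The symptom is easy to quantify outside of LabVIEW. The sketch below (Python; the directory, file count, and write size are arbitrary test parameters, not the original application's values) times only the first write to each newly created file, so you can watch the latency grow with the number of files already in the directory:

    # Sketch: time only the FIRST write to each new file, to see whether that
    # latency grows with the number of files already present in the directory.
    import os
    import time

    TARGET_DIR = "first_write_test"    # create this on the drive under test
    NUM_FILES = 64                     # arbitrary test parameters
    CHUNK = b"\0" * (1024 * 1024)      # 1 MB first write per file

    os.makedirs(TARGET_DIR, exist_ok=True)
    for i in range(NUM_FILES):
        path = os.path.join(TARGET_DIR, f"data_{i:04d}.bin")
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())       # force the data out to the disk
        print(f"file {i:4d}: first write took {time.perf_counter() - start:.3f} s")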

    I am experiencing the same problem. Our program periodically monitors data and eventually saves it for post-processing. While it's searching for suitable data, it creates one file for every channel (32 in total) and starts streaming data to these files. If it finds the data is not suitable, it deletes the files and creates new ones.
    In our lab, we tested the program on Windows and then on RT and we did not find any problems.
    Unfortunately, when it was time to install the PXI in the field (an electromechanical shovel at a copper mine) and test it, we came to find that saving was taking too long and the program screwed up. Specifically when creating files (i.e., the "New File" function). It could take 5 or more seconds to create a single file.
    As you can see, the field startup failed and we will have to modify our programs to work around this problem and return next week to try again, with the additional time and cost involved. Not to mention the bad image we are giving to our customer.
    I really like LabVIEW, but I am particularly upset because of this problem. LV RT is supposed to run as if it were LV Win32, with the obvious and expected differences, but a developer cannot expect things like this to happen. I remember a few months ago I had another problem: on RT, the Time/Date function gives a wrong value as your program runs, when using timed loops. Can you expect something like that when evaluating your development platform? Fortunately, we found the problem before giving the system to our customer and there was a relatively easy workaround. Unfortunately, now we had to hit the wall to find the problem.
    On this particular problem, I also found that it gets worse when there are more files in the directory. Create a new dir every N hours? I really think that's not a solution. I would not expect this answer from NI.
    I would really appreciate someone from NI to give us a technical explanation about why this problem happens and not just "trial and error" "solutions".
    By the way, we are using a PXI RT controller with the solid-state drive option.
    Thank you.
    Daniel R.
    Message Edited by Daniel_Chile on 06-29-2006 03:05 PM

  • Copy video files then processing files takes to long

    First I have to copy the video files, which takes about 6 hours, sometimes more, sometimes less; then processing the files takes another 6 hours, more or less... Am I doing something wrong, or how can I speed this up?
    I am copying the files from my Mac HD to an external HD... is that the problem?

    I have copies on one external drive and I am copying them to another drive for permanent storage, and when I copy them over it takes about 6 hrs... After it's finished copying, it processes the files, and that takes about the same time as copying them. I even tried doing it straight from my video camera to my Mac and after that it has to process the files... I don't understand why it even does this. This never happened with iMovie 6 HD or even Final Cut Express; it only does this with iMovie 8... To me it does not make any sense. I thought things would be faster in iMovie 8 but it doesn't seem to be...

  • Read from text file takes very long after the first time

    Dear LabVIEW experts,
    I'm having a problem with Read From Text File. I'm trying to read only every nth line of a file for a preview with this sub vi (the VI image is not included here; a rough text equivalent is sketched after this post):
    It seems to work well the first time I do it. The loop takes almost no time to execute an iteration.
    Then when I load the same file again with exactly the same settings one iteration takes around 50ms.
    Subsequent attempts seem to always take the longer execution time.
    Only when I restart the calling vi it will be quick for one file.
    When executing the sub vi alone it is always quick, but I don't see how the main vi (too complex to post here) could obstruct the execution of the sub vi.
    I don't have the file opened elsewhere in the main vi, I don't use too much memory...
    Right now I don't now where to look. Does anyone have an idea?
    Regards
    Florian
    Solved!
    Go to Solution.
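
    For reference, the preview idea itself is simple when written as ordinary code. This is only a rough text equivalent of the sub VI (Python; the file name and step are placeholders), with the file opened once, read-only, which is the same point the answer below makes:

    # Sketch of the preview: read only every nth line of a text file.
    # The file is opened once, read-only; file name and step are placeholders.
    def preview_lines(path, step=100):
        """Return every step-th line (lines 0, step, 2*step, ...) of the file."""
        preview = []
        with open(path, "r", encoding="utf-8", errors="replace") as f:
            for i, line in enumerate(f):
                if i % step == 0:
                    preview.append(line.rstrip("\n"))
        return preview

    if __name__ == "__main__":
        for line in preview_lines("measurement.txt", step=100):
            print(line)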

    I don't know the LabVIEW internals here, but I would think that it is quite possible that closing a file opened for read/write access writes a new copy of the file to disk, or at least checks the file in order to make sure a new file does not have to be written.
    Therefore, if your main VI calls this subVI sequentially (you don't give any information about the place of this subVI in the main VI), you are actually looking at a close (check/write) -> open operation for any time you call it, as opposed to a simple open operation the first time. If you were to open the file for simple read access (since that's all you do), it should work fast every time because there is no need to check to see if it has changed.
    Cameron
    To err is human, but to really foul it up requires a computer.
    The optimist believes we are in the best of all possible worlds - the pessimist fears this is true.
    Profanity is the one language all programmers know best.
    An expert is someone who has made all the possible mistakes.
    To learn something about LabVIEW at no extra cost, work the online LabVIEW tutorial(s):
    LabVIEW Unit 1 - Getting Started
    Learn to Use LabVIEW with MyDAQ

  • Saved files take a long time to appear on Server or Desktop

    We have a network of nearly 40 Macs, Panther Server, mostly Panther clients, but the newer machines are on 10.4.
    All of the newer machines exhibit the problem in that files saved to the server or to the desktop take a long time to appear.
    It appears to be a refresh issue, as on the server other clients can see the file straight away and locally the files can be seen on the desktop via the terminal.
    Force quitting the finder forces the files to appear but this is not a solution.
    The clients which have the problem all run 10.4 and the Server is 10.3. The 10.3 clients do not exhibit the same issues.

    I have also seen this issue at one of my larger clients (50+ workstations).
    The issue seems isolated to 10.4.5 or higher. Our workstations with 10.4.2 did not exhibit this problem. It is not an issue with hardware (we have seen this on an iBook G3 and various G5 tower models). Nor does it appear to be server-related: we have seen the issue saving locally with no servers mounted. We have seen the issue when saving from Adobe CS apps, MS Office apps, and from system apps (like TextEdit). Deleting com.apple.finder.plist from the user's home Library and then relaunching the Finder gets relief for a short time but the problem returns soon after. plutil reports the file is OK. The volume directory is fine, the permissions check out, and caches have been purged, to no avail.
    Like you, summersault, I am able to see the saved file on the command line, and when my users Force Quit the Finder, the file appears. Other workarounds, like the Nudge contextual menu, do not work.
    I have submitted the error via apple.com/feedback and would encourage you to do the same.
    Anxiously awaiting 10.4.7...
    -B
      Mac OS X (10.4.6)  

  • Why is my exported file 2 seconds longer than the project in my timeline

    I am using a Mac, latest OS X. I need my project to run at exactly 30 minutes. In the timeline my project is marked at exactly 30 minutes from the in/out points. When I export the media with Premiere Pro I end up with a final product which is 30 minutes and 2 seconds long. Where are these two extra seconds coming from? I had my source range set for Sequence in/out points and this read as 30 minutes exactly, and the predicted output time for the final export read as 30 minutes exactly as well. This still added two extra seconds, so I then set the source range to custom and typed in 30 minutes exactly and it is STILL giving me 2 extra seconds. What is going on here?
    Export settings summary reads as follows
    1920x1080 (1.0)
    23.976 fps
    Progressive
    00:30:00:00
    VBR
    1 pass
    Target 0.19 Mbps
    Max 0.19 Mbps
    AAC
    320 kbps
    48 kHz
    Stereo

    Because within Premiere Pro, you are reading the total running time based on non-drop frame timecode.
    Depending on which version of QuickTime you are using, you can click on the counter and change how the numbers are displayed,
    where Standard is real time. If you switch it to Non-Drop, you'll see it will display the length (incorrectly) as 30 minutes.
    If you are using Quicktime 10 I believe you can only display real time.
    This is all because your video is running at 23.976 frames per second. That means it is running slightly slower than real time, which would be true 24fps. Over the period of 30 minutes, this slow frame playback is making the show run longer by two seconds.
    This why in NTSC 29.97 and 59.94 video there is drop and non-drop timecode. Drop frame timecode drops timecode numbers over time to compensate for the difference that the video rate is slightly slower than real time.
    In drop frame, when you have a duration of 30 minutes, it corresponds to real time 30 minutes.
    In non-drop timecode, when you have a duration indicating 30 minutes, it will actually take slightly longer to play out.
    There is not a standard for 24fps drop frame timecode, it is always non-drop. You can't switch to drop frame timecode if your sequence frame rate is 23.976 or 24fps.
    The math:
    If you had 30 minutes of footage running at 24 fps - 24fps X 60 = 1440 frames in a minute, 1440 x 30 = 43200 frames in the show. (So non-drop TC, which counts the frames without regard to the true playback rate, displays a duration of 30 minutes)
    But since the video, is, in fact, running slightly slower, at 23.976, those 43200 frames take longer to play out by .024 frame per second of real time.  So 43200  (your total show amount of frames) divided by 23.976 (the playout per second frame rate) = 1801.801802 seconds, which is 30 minutes, 1.8 seconds.
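
    The same arithmetic, in a form you can rerun for other durations (the 30-minute and 23.976 fps figures come from the example above):

    # Reproduce the math above: 30 minutes counted in 24 fps non-drop timecode
    # is 43,200 frames, which take slightly longer to play out at 23.976 fps.
    frames = 24 * 60 * 30            # 43,200 frames
    real_seconds = frames / 23.976   # true playout rate
    print(f"{real_seconds:.1f} s, i.e. {real_seconds - 30 * 60:.1f} s over 30 minutes")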
    Avid used to ( and maybe still does) come with a calculator to make this conversion and to help make the frame count = the desired running time.
    It should be noted that within commercial production, with durations of 30 or 60 seconds, this whole mess is less apparent. It only shows up when you work with program length material. This is why they invented drop frame timecode.
    A solution is to tighten up the edit by 1.8 seconds over 30 minutes
    MtD

  • Why does emptying the garbage take so long now? It has greatly slowed down my workflow.

    Before Mountain Lion and maybe even Lion came out, emptying the trash was instantaneous... What the **** happened? Now if my hard drive gets full and I have a **** load of stuff to remove, it takes HOURS to do so. This is ridiculous; why does it take so long to delete something?

    Have you checked the option to Empty the Trash Securely? That will slow it down significantly.

  • Previewing files takes a long time when trying to attach them

    Whenever I try to preview files while trying to attach them in Mail... once the file is selected it takes a long time to preview and hence be attached. First it was a problem with the Finder taking ages to preview... I sorted this out... but not the Mail thing. Even after running Disk Utility, repairing permissions, cleaning caches and removing the com.apple.mail.plist file, it didn't work. I started seeing the same problem in Adium when I was trying to select a picture from a folder to change my display pic. Fortunately it hasn't happened in Word or Excel.
    Can anyone help with this problem??
    Power Book G4   Mac OS X (10.4.5)   2 GB RAM, 128 MB VRAM

    It's loading the code insight. Since you have such a large number of objects you may want to turn this off.
    sqldeveloper -J-Dsdev.insight=false
    -kris

  • Why does iCloud back up take so long

    Why does iCloud back up take so long

    One huge factor is the speed of your Internet connection. Read here:
    http://support.apple.com/kb/TS3992
