Writing to a large file takes an increasingly long time.

We are acquiring a large amount of data and streaming it to disk.  We have noticed that once the file reaches a certain size, each write operation takes an increasingly long time to complete.  This means that during these slow writes our DAQ backlog grows large, and although we can normally work through any backlog quickly enough, when a write takes long enough we overwrite our buffer and the DAQ fails.  We have looked at numerous examples of high-speed DAQ and believe we are following them as given.  The behavior occurs on a variety of computers and under different programming strategies (data as 1D waveform, raw, etc.).  On one system (hardware and software) we can reach almost 1.5 GB flawlessly before write speed drops off severely and affects the DAQ, while on another (much more capable) system we can reach 20 GB before write speed starts to decline.  We have implemented a workaround: we cap the file size and open a new file when the limit is reached (multiple 10 GB files, for example), then rework the data files during postprocessing.  We would like to know why this is happening.  I do not believe it is a G issue, since the information I have says you can open a file and keep writing to it with "position current" as many bytes as you like and close it when done, and I have read that you can do this "until your disk is full".  I have searched the NI KnowledgeBase without finding anything relevant to this behavior, and the MS Knowledge Base with the same results.
Here is a little detail about our setup.  PXI chassis with 4472 cards acquiring data at 102.4 kS/s.  One system has XP Pro, a controller in slot 1 (128 MB RAM, 1.3 GHz CPU) and two 4472 cards (call it Junior); another, at the high end, has XP Pro, four 4472 cards, and an MXI-4 connection to a computer with 2 GB RAM, dual AMD Opterons at 2.0 GHz, and a 400 GB RAID (call it Senior).  All systems run XP Pro SP2, LabVIEW 7.1.1, and NI-DAQ 7.4.  The programming methodology follows the many high-speed data logger examples found in the KnowledgeBase, and it works flawlessly up until the file reaches a critical size that differs between systems of differing capability (Junior versus Senior); the rate of performance degradation differs as well.  Obviously we are using a high sample rate on a lot of channels for a long time, but we do not see an obvious increase in memory usage until we pass the "critical" file size, so I am fairly confident that our program is OK and that LabVIEW is also behaving itself.  I am suspicious of Windows XP, but I have no good information that reliably points to it as the culprit.
If you can shed some light on this issue, I would appreciate it.  I know it seems odd, even to me, that being unable to write a 50-60 GB file should be a concern; it wasn't that long ago that I thought 500 MB files were huge, but the things our engineers want to do these days would stun me if I took the time to think about them instead of solving them.  Thanks for the efforts!
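
The multi-file workaround described above does not depend on LabVIEW, so here is a minimal Python sketch of the same size-capped rollover idea. The 10 GB cap, the file naming, and the 8-channel, 102.4 kS/s chunk shape are illustrative assumptions standing in for the poster's actual implementation, not a copy of it.
============Python sketch (illustrative)============
import numpy as np

MAX_BYTES = 10 * 1024**3           # assumed 10 GB cap per file, as in the workaround

class RollingWriter:
    """Append binary blocks to disk, opening a new file once the size cap is reached."""
    def __init__(self, basename, max_bytes=MAX_BYTES):
        self.basename, self.max_bytes = basename, max_bytes
        self.index, self.written = 0, 0
        self.f = open(self._name(), "wb")

    def _name(self):
        return f"{self.basename}_{self.index:03d}.bin"

    def write(self, block):
        if self.written + block.nbytes > self.max_bytes:
            self.f.close()                            # cap reached: roll over to the next file
            self.index, self.written = self.index + 1, 0
            self.f = open(self._name(), "wb")
        self.f.write(block.tobytes())
        self.written += block.nbytes

    def close(self):
        self.f.close()

# Usage: the acquisition loop streams into the rolling writer instead of one growing file.
writer = RollingWriter("daq_run")
for _ in range(4):                                    # stand-in for the DAQ read loop
    chunk = np.zeros((8, 102400), dtype=np.float64)   # one second of 8 channels at 102.4 kS/s
    writer.write(chunk)
writer.close()
================================END
The resulting segments can then be reworked during postprocessing, as the poster already does.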

The OS is probably reallocating space for the file from time to time. Say it had space for 10 GB at location 1. When the file size exceeds 10 GB, the OS goes looking for a location 2 that is bigger and then rewrites the file. Or it may begin fragmenting the file, which adds extra overhead for keeping track of all the fragments. If you can allocate a space bigger than the largest file you expect to produce at file creation time, you may avoid the slowdown.
I have not used files this big and do not use Windows, so these comments are generic, based on things I have heard from others and observed in other systems.
Lynn
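
Lynn's pre-allocation suggestion is easy to prototype to see whether it changes the write-speed curve. Below is a minimal Python sketch; the 60 GB upper bound and file name are hypothetical, and whether extending the file up front actually reserves contiguous space (and how much it helps) depends on the OS and file system.
============Python sketch (illustrative)============
PREALLOC_BYTES = 60 * 1024**3      # assumed upper bound on the data this run will produce

def create_preallocated(path, size=PREALLOC_BYTES):
    """Create the data file at its expected final size, then rewind and stream into it."""
    f = open(path, "wb")
    f.truncate(size)               # set the end-of-file marker at the expected maximum size
    f.seek(0)                      # acquired data is then written from the start of the file
    return f

f = create_preallocated("daq_run.bin")
bytes_written = 0
for _ in range(4):                 # stand-in for the DAQ read loop
    block = b"\x00" * 4096         # stand-in for one acquired block
    f.write(block)
    bytes_written += len(block)
f.truncate(bytes_written)          # trim the file to the data actually written when acquisition ends
f.close()
================================END
If pre-allocating removes the slowdown, that points at file-system allocation and fragmentation rather than at LabVIEW or the acquisition code.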

Similar Messages

  • Large file takes long time to open

    I'm not an Illustrator user; I work in IT.  One of our Illustrator users is complaining that it takes about 10 to 15 minutes to open a 500 MB file.  She has the latest hardware and 4 GB of RAM on Windows 7.  She is on the latest version of CS5, which I installed just a month ago.  CS4 was doing the same thing, so we bought new hardware and CS5.  Is it normal for such a large file to take this long to open?

    John is probably on to something, but then again it may be normal even when the file is opened locally. AI's memory management simply isn't that great, and it doesn't progressively render partially opened files, so it waits for the whole file to be loaded before giving you even a shred of a preview, which, depending on what features are used, can by itself take forever...
    Mylenium

  • Non-self-contained movie huge & takes a very very long time to export

    This is for FCE 4.
    I have a 1.5 hour movie, and when I export it to quicktime (Not quicktime conversion) it takes WAY too long.
    I have "make self-contained movie" unchecked, so I thought creating a reference movie would be very quick. Why is it taking so long and why is the resulting file huge?
    My source file is DV, and only minor editing and effects were used. However, the entire movie was cropped and resized. Is that why?

    Ah I think I figured it out.
    I did another test project with the same source DV file, but didn't do ANY editing to it.
    It took 2 minutes to export a 70mb reference mov file from a 7 minute DV sequence.
    Then I cropped & resized the sequence by changing parameters in the "motion" tab.
    Then it took 20 minutes to export a much larger reference mov file from the same 7 minute DV sequence.
    Then I clicked "render all" after selecting every render option in the drop-down menu.
    Then it took less than 1 minute to export a 70mb reference mov file from the same 7 minute DV sequence.
    So I guess I was having problems because I didn't render "everything": I didn't select "FULL" in the render drop-down menu. I thought only RED segments in the timeline need to be rendered prior to export, but apparently the "FULL" render must be selected as well, so that the entire timeline is nice and purple (or some shade of blue).
    Also, a cropped and resized video is very large in size (GB) if it is exported as a reference file only without prior rendering. * Can someone else confirm that this is the normal behavior in FCE? *
    Oh well, lesson learned.. This movie is going to take a very long time to export, and result in a very large file. But I can't use it without cropping & resizing it!
    Message was edited by: Yongwon Lee

  • Downloads of zip and other files are taking a very long time.

    Prior to this week, downloads of zip files and other types of files were pretty fast; however, downloads are suddenly taking a very long time (i.e. 30 minutes to download an album's worth of music files). It's possible that I may have deleted something that was necessary to download files quickly, but I'm not sure. Can anyone provide suggestions on how to get my download groove back? Thanks!

    Hi beaubeck,
    Welcome to the Support Communities!
    Resetting the modem/router and trying with an ethernet cable (wired) vs. wireless would be two good troubleshooting steps.  More details are included in the article below:
    Wi-Fi: How to troubleshoot Wi-Fi connectivity
    http://support.apple.com/kb/HT4628?viewlocale=en_US
    Cheers,
    - Judy

  • Restore takes a really really long time

    Last night I replaced the hard drive in our iMac and started the process of restoring from a Time Machine backup. The restore has been working now for nearly 20 hours and estimates that it will be finished in approximately 52 more.
    Is this kind of restore time typical? I'm guessing the 320GB drive that we replaced was at least 75% full.

    Wes Plate wrote:
    Last night I replaced the hard drive in our iMac and started the process of restoring from a Time Machine backup. The restore has been working now for nearly 20 hours and estimates that it will be finished in approximately 52 more.
    Is this kind of restore time typical? I'm guessing the 320GB drive that we replaced was at least 75% full.
    First, the estimates are often quite inaccurate. Second, a lot depends on your Mac, the drive, and how they're connected.
    If you recall how full your Mac was when you did your first TM backup, and how long it took, you can make a rough estimate. In my experience, a full restore takes roughly 60% as long as a full backup.
    That's on a much smaller PPC system, but ought to be roughly similar, percentage-wise.
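    For what it's worth, the 60% rule of thumb above is easy to turn into a rough estimate. A trivial Python sketch follows; the 12-hour figure is purely an example and does not come from this thread.
============Python sketch (illustrative)============
def estimate_restore_hours(full_backup_hours, ratio=0.6):
    """Rough estimate: a full restore takes about 60% as long as the original full backup."""
    return full_backup_hours * ratio

# Example only: if the first full backup of the ~240 GB in use took 12 hours,
# the rule of thumb suggests roughly 7.2 hours for the restore.
print(f"{estimate_restore_hours(12):.1f} hours")
================================END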

  • Large file download does not start and times out while loading

    I've been trying to download movie files from a website and it does not work. When I click on the link, the download begins preliminarily, but does not seem to actually start downloading the file. It stays like this:
    The MBs continue to go up for a while, but a download progress bar never appears (just the spinning striped bar) and I never get a time remaining on the download. After a few minutes the download just fails, and it says something like "network error." I've tried it on Chrome, FireFox, and Safari and the result is always the same.
    I asked someone at the company about it, and he said they had never heard of this problem and that their downloads work for other customers. Is this a problem on my end or their end?
    Does anyone know what the problem may be? Is there anything I can do to fix this?

    I don't know what route Apple will take. Sure, they can boot it in firewire mode and try to reinstall
    the update, but in the interest of time or other factors they may take another route.
    Kj

  • It takes CS6 a really long time to load files, but it didn't use to

    Hello,
    I have CS6 64bit on a reasonably up to date windows 7 system. Just checked for CS6 upgrades 7/14/13, and have the latest version (13.0.1.2).
    Things have been working fine since I installed the product about a month ago.
    At some point something went wrong and it now takes 30 seconds or more to load a file via "file>open..."
    It also takes 30 or more seconds just for photomerge to display the file dialog from "file>automate>photomerge..."
    All other photo s/w I have loads raw files, tifs, jpegs, etc... very quickly. This is CS6 specific.
    I disabled loading of plugins, which didn't help (CS6 starts quickly enough, it's just file loading that is slow).
    Anyone out there have any suggestions?
    Is this a known issue?
    Thanks,
    Mark

    Did you install anything else on your system around that time?
    Maybe a font manager utility?
    If it is just slow to show the File Open dialog - then you need to look at the OS, since that is just an OS dialog that Photoshop calls.

  • Large backup takes more than 1 hour - time machine kicks in again!

    I have just watched my Time Machine backup while it's been going.
    It was a large backup and took over 1 hour. However, as Time Machine backs up "every hour", the original backup was still going when it stopped, and started the new backup (as the original backup had been going for 1 hour already).
    The original backup had over 1GB left - yet the new backup only backed up 14MB.
    And, there is no way of telling what happened or what was or was not backed up.
    Anyone got any suggestions?
    Snapper

    SnapperNZ wrote:
    I have just watched my Time Machine backup while it's been going.
    It was a large backup and took over 1 hour. However, as Time Machine backs up "every hour", the original backup was still going when it stopped,
    It probably finished ok. The amount to be backed-up is an estimate, usually pretty close, but sometimes not.
    and started the new backup (as the original backup had been going for 1 hour already).
    The original backup had over 1GB left - yet the new backup only backed up 14MB.
    And, there is no way of telling what happened or what was or was not backed up.
    Yes, there are.
    See #A1 in [Time Machine - Troubleshooting|http://web.me.com/pondini/Time_Machine/Troubleshooting.html] (or use the link in *User Tips* at the top of this forum) for a handy widget that will display the backup messages from your logs.
    See #A2 there for a couple of apps that will show exactly what was backed-up each time.

  • Mail sometimes takes a very, very long time to open messages

    This is similar to some other messages I've seen here, but I perhaps have some new observations.
    Often, but not always, Mail takes up to 120 seconds to open each message in my Inbox. This happens with messages that have already been downloaded from the server, and even with messages that have already been read previously. While opening a message, Mail is otherwise unresponsive and spinning the ol' beachball cursor.
    When this happens, the system.log on the Console displays these lines. Mail is process 4502 in this example.
    Jan 19 08:20:23: --- last message repeated 1 time ---
    Jan 19 08:20:23 Juliette /usr/sbin/spindump[4552]: process 4502 is being monitored
    Jan 19 08:20:28 Juliette kernel[0]: disk0s2: I/O error.
    Jan 19 08:20:28 Juliette kernel[0]:
    Jan 19 08:20:37: --- last message repeated 1 time ---
    Jan 19 08:20:37 Juliette kernel[0]: disk0s2: I/O error.
    Jan 19 08:20:37 Juliette kernel[0]:
    The last three lines are then repeated every 10 seconds until the message is finally displayed, at which time spindump reports that it is no longer monitoring process 4502 (or whatever number Mail happens to be). The time required to sort itself out varies from 30 to 120 seconds.
    I ran Verify on my disk and there were no errors of any kind. I have also checked and corrected permissions on the disk a couple of times.
    If I run Mail>Mailbox>Rebuild the problem is cleared up (for a while), but eventually returns. The problem is cleared up only on the Inbox on which I ran Rebuild. There may be a global rebuild somewhere, but I haven't found it.
    I have also noticed that after running Rebuild while the problem is occurring, a mailbox will contain one more unopened message than it did before rebuilding, i.e. if there were no unopened messages there will now be 1, and if there were 6 unopened messages there will now be 7. The new unopened message is at some seemingly random place in the message list. I can't say that this happens every time, but I have not seen it not happen.
    I'm not really qualified to make a diagnosis of this, but, nevertheless, I will venture a guess that my local mail database is somehow getting corrupted, or possibly is permanently corrupted, in some minor way. Assuming that might be true, is there a way to rebuild my entire mail database? It's fairly large, about 600 Mbytes.
    Thanks in advance for any help or advice.

    Update: I found out how to re-index my mail database (in the Mail Help, duh) and this may have fixed the problem. At least I haven't seen it since. I'll post a final report at the end of the day.

  • RMAN backup files still exist from a long time ago; how do I delete them?

    Dear sir;
    I'm using the script below to do a daily backup; however, many RMAN backup files still exist and consume disk space. How can I delete these files on a daily basis? Some of the files are dated in Feb, Mar, and Apr.
    ============Daily RMAN script=========
    rman target /<<!
    # level-0 incremental backup of the whole database as a compressed backup set
    backup incremental level=0 as compressed backupset database format '/u15/rman/full_backup_%U.rman';
    # back up any archived logs that do not yet have two backups
    backup archivelog all not backed up 2 times format '/u15/rman/arc_backup_%U.rman';
    # back up the current control file
    backup current controlfile format '/u15/rman/control_%U.rman';
    # delete archived logs older than 7 days that already have two backups on disk
    delete archivelog all backed up 2 times to device type disk completed before 'sysdate-7';
    # delete backups that the configured retention policy considers obsolete
    delete noprompt obsolete;
    ================================END
    Thanks and best regards
    Ali

    Hi;
    Our backup policy is supposed to keep 7 days; however, we have some files here from Jan, Feb, Mar, and Apr 2012, which are beyond the retention date, and these files should be deleted by executing "delete noprompt obsolete;".
    All of the files are in /u15/rman/:
    -rw-r----- 1 oracle oinstall 1151763968 Jan 21 01:36 arc_backup_7kn19h4a_1_1.rman
    -rw-r----- 1 oracle oinstall 1136882176 Jan 21 01:36 arc_backup_7ln19h4q_1_1.rman
    -rw-r----- 1 oracle oinstall 1135984640 Jan 21 01:36 arc_backup_7mn19h5a_1_1.rman
    -rw-r----- 1 oracle oinstall 1126627328 Jan 21 01:37 arc_backup_7nn19h5q_1_1.rman
    -rw-r----- 1 oracle oinstall 880606720 Mar 12 02:53 arc_backup_7nn5ldhp_1_1.rman
    -rw-r----- 1 oracle oinstall 1093043712 Jan 21 01:37 arc_backup_7on19h6a_1_1.rman
    -rw-r----- 1 oracle oinstall 9797632 Dec 15 01:04 control_04mu7tcp_1_1.rman
    -rw-r----- 1 oracle oinstall 36896768 Mar 3 02:55 control_4cn4tm9k_1_1.rman
    -rw-r----- 1 oracle oinstall 36896768 Mar 4 02:53 control_4on50ahm_1_1.rman
    -rw-r----- 1 oracle oinstall 36896768 Mar 5 02:55 control_56n52v1j_1_1.rman
    -rw-r----- 1 oracle oinstall 16252928 Jan 23 01:40 control_8tn1eq3t_1_1.rman
    -rw-r----- 1 oracle oinstall 16252928 Jan 24 01:40 control_9cn1heg0_1_1.rman
    -rw-r----- 1 oracle oinstall 202940416 Dec 15 01:04 full_backup_01mu7t50_1_1.rman
    -rw-r----- 1 oracle oinstall 1097728 Dec 15 01:04 full_backup_02mu7tcc_1_1.rman
    -rw-r----- 1 oracle oinstall 201285632 Dec 14 01:04 full_backup_0nmu58ou_1_1.rman
    -rw-r----- 1 oracle oinstall 5957304320 Feb 29 02:46 full_backup_2ln4g9l1_1_1.rman
    -rw-r----- 1 oracle oinstall 4128768 Feb 29 02:47 full_backup_2mn4gft8_1_1.rman
    -rw-r----- 1 oracle oinstall 6027075584 Mar 1 02:49 full_backup_32n4o6ov_1_1.rman
    -rw-r----- 1 oracle oinstall 4128768 Mar 1 02:49 full_backup_33n4od66_1_1.rman
    -rw-r----- 1 oracle oinstall 6187171840 Mar 2 02:51 full_backup_3gn4qr50_1_1.rman
    -rw-r----- 1 oracle oinstall 4145152 Mar 2 02:51 full_backup_3hn4r1kn_1_1.rman
    -rw-r----- 1 oracle oinstall 6115786752 Mar 3 02:51 full_backup_40n4tfgu_1_1.rman
    Above is a short list of the directory contents.
    To perform our daily backup we run the following script each day:
    ==================
    backup incremental level=0 as compressed backupset database format '/u15/rman/full_backup_%U.rman';
    backup archivelog all not backed up 2 times format '/u15/rman/arc_backup_%U.rman';
    backup current controlfile format '/u15/rman/control_%U.rman';
    delete archivelog all backed up 2 times to device type disk completed before 'sysdate-7';
    delete noprompt obsolete;
    ==================
    Thanks and best regards
    Ali

  • Backup takes hours; a bug appearing a long time after 2.0.1

    The backup was a bit long with 2.0 on my new 3G, but it was OK, and 2.0.1 solved this and was really better.
    I had to reinstall my Mac with a full Time Machine restore. The restore was successful, but now it takes almost 1 hour to back up my phone, and it does that every time!
    I know I can stop the backup process, but hey, I want it working as it did before!

    Yes, 2.0.1 also updates the modem firmware to 01.48.02. Luckily for me it actually improved my 3G/EDGE reception. At first, EDGE reception actually got worse; the next day it went back to full strength (better than before the update). That leads me to believe it was on the AT&T side: maybe new or updated protocol/negotiation settings that needed updating after the new modem firmware.
    Try turning your phone off for the night and see what happens in the morning. Network updates are done automatically when a phone is reset like that.

  • I would like to use a removable hard disk drive (portable hard disk) with the iPhone because of the amount of data, reports, and so on. I receive data by e-mail, so the iPhone holds large amounts of data for a long time, and I want to be able to use it at any time

    Nowadays, people use removable hard disk drives (portable hard disks) and USB flash drives with their computers because they are convenient and easy to carry around.
    So I would like to have the same capability on the iPhone; I want to develop a mini portable hard disk for use with the iPhone, just as with a computer.

    Not a feature of the iPhone.

  • DataBlAppend takes long time on registered data

    Greetings! I'm using DIAdem 2012 on a Win7/64-bit computer (16GB memory and solid-state hard drive).  I work with one tdms file at a time but that file can be up to 8GB so I bring it into the Data Portal via the Register Data option.  The tdms file contains about 40 channels and each channel has about 50M datapoints.  If it matters, the data type of each channel is U16 with appropriate scaling factors in the channel parameters.  I display one channel in View and my goal is to set the two cursors on either side of an "event" then copy that segment of data between the cursors to a new channel in another group.  Actually, there are about ten channels that I want to copy exactly the same segment out to ten new channels.  This is the standard technique for programmatically "copying-flagged-data-points", i.e. reading and using the X1,X2 cursor position.  I am using DataBlAppend to write these new channels (I have also tried DataBlCopy with identical results).  My VBS script works exactly as I desire.  The new channel group containing the segments will be written out as a tdms file using another script. 
    Copying out "small" segments takes a certain amount of time but copying larger segments takes an increasing amount of time, i.e. the increase is not linear.  I would like to do larger segments but I don't like waiting 20-30 minutes per segment.  The time culprit is the script line "Call DataBlAppend (CpyS, CurPosX1, CurPosX2-CurPosX1 +1, CpyT)" where CpyS and CpyT are strings containing the names of the source and target channels respectively (the empty target channels were previously created in the new group). 
    My question is, "is there a faster way to do this within DIAdem?"  The amount of data being written to the new group can range from 20-160MB but I need to be able to write up to 250MB.  TDMS files of this size can normally be loaded or written out quite quickly on this computer under normal circumstances, so what is slowing this process down?  Thanks!

    Greetings, Brad!! 
    I agree that DataBlCopy is fast when working "from channels loaded in the Data Portal" but the tdms file I am working with is only "registered" in the portal.  I do not know exactly why that makes a difference except that it must go out to the disk in order to read each channel.  The function DataBlCopy (or Append) is a black box to me so I was hoping for some insight as to why it is behaving like it is under these circumstances.  However, your suggestion to try the function DataFileLoadRed() may bear fruit!  I wrote up a little demo script to copy out a "large" segment from a 8GB file registered in the portal using DataFileLoadRed and it is much, much faster!  It was a little odd selecting "IntervalCount" as my method and the total number of intervals the same as the total number of data points between my begin and end points, and "eInterFirstValue" [in the interval] as the reduction method, but the results speak for themselves.  I will need to do some thorough checking to verify that I am getting exactly the data I want but DataFileLoadRed does look promising as an alternative.  Thanks!
    Chris

  • How do I find where my largest files are, all at once

    Like everyone, I am getting close to having a full HD. Is there a way to tell where all my large files are, all at the same time, without having to go to each folder, file, and application?
    Thx Much

    Thx much, I am aware of all that; just wondering if there is any software, freeware, or widget that could tell me where all my very large files are at the same time without having to go through all the folders/files/libraries.
    It is amazing that many large files are stored in the strangest places.
    Jack
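    If no ready-made utility turns up, the same scan is only a few lines of scripting. Below is a minimal Python sketch; the starting folder (the home directory) and the top-20 cutoff are arbitrary choices.
============Python sketch (illustrative)============
import heapq
import os

def largest_files(root, top_n=20):
    """Walk a folder tree and return the top_n largest files as (size_bytes, path), biggest first."""
    heap = []
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda err: None):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue                      # skip files we cannot stat (permissions, broken links)
            if len(heap) < top_n:
                heapq.heappush(heap, (size, path))
            else:
                heapq.heappushpop(heap, (size, path))
    return sorted(heap, reverse=True)

for size, path in largest_files(os.path.expanduser("~")):
    print(f"{size / 1024**2:10.1f} MB  {path}")
================================END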

  • 4.2.3/.4 Data load wizard - slow when loading large files

    Hi,
    I am using the data load wizard to load CSV files into an existing table. It works fine with small files of up to a few thousand rows, but when loading 20k rows or more the loading process becomes very slow. The table has a single numeric column as its primary key.
    The primary key is declared under "Shared Components" -> Logic -> "Data Load Tables" and is recognized as "pk(number)" with "case sensitive" set to "No".
    While loading data, this configuration leads to the execution of the following query for each row:
    select 1 from "KLAUS"."PD_IF_CSV_ROW" where upper("PK") = upper(:uk_1)
    which can be found in the v$sql view while loading.
    This makes the loading process slow: because of the upper function, no index can be used.
    It seems that the "case sensitive" setting is not evaluated.
    Dropping the numeric index for the primary key and using a function based index does not help.
    Explain plan shows an implicit "to_char" conversion:
    UPPER(TO_CHAR(PK))=UPPER(:UK_1)
    This is missing in the query but maybe it is necessary for the function based index to work.
    Please provide a solution or workaround for the data load wizard to work with large files in an acceptable amount of time.
    Best regards
    Klaus

    Nevertheless, a bulk loading process is what I would really like to have as part of the wizard.
    If all of the CSV files are identical:
    use the Excel2Collection plugin (Process Type Plugin - EXCEL2COLLECTIONS)
    create a VIEW on the collection (makes it easier elsewhere)
    create a procedure (in a Package) to bulk process it.
    The most important thing is to have, somewhere in the Package (ie your code that is not part of APEX), information that clearly states which columns in the Collection map to which columns in the table, view, and the variables (APEX_APPLICATION.g_fxx()) used for Tabular Forms.
    MK
