Large waveform file support?

I have read a great deal recently on how to write/read large files in LV. Unless I am missing something, there doesn't seem to be a clean "native" LV implementation of large file I/O. OpenG has some VIs that do the trick for Write/Read Bytes to/from File, but I need a way of writing/reading arrays of waveforms to/from a file of perhaps up to 50 GB. Currently, even though the record number input on the Read Waveform from File VI is an I32 (leading me to think I can have a really large number of records!), there appears to be the same 2 GB limit as with the other native LV file I/O VIs. Has anyone solved this issue (writing and reading arrays of waveforms to "large" files)?

Managing large data sets in LabVIEW has often been discussed on Info-LabVIEW, but I think this NI link could help too:
http://zone.ni.com/devzone/conceptd.nsf/webmain/6A56C174EABA7BBD86256E58005D9712?opendocument
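LabVIEW block diagrams cannot be reproduced in text here, but one common, language-agnostic workaround for a 2 GB file-offset limit is to segment the data set across several files that each stay below 2 GB, with a little bookkeeping that maps a record number to a segment and an offset. The sketch below shows that bookkeeping in Java purely for illustration; the record size, segment size, and file naming are assumptions, not anything from the linked note.

// Illustrative sketch only (not LabVIEW): spread fixed-size records across
// segment files that each stay below the 2 GB limit.
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class SegmentedRecordFile {
    private static final long SEGMENT_LIMIT = 1_800_000_000L; // keep each file well under 2 GB
    private final File dir;
    private final int recordSize;          // bytes per record, assumed constant
    private final long recordsPerSegment;

    public SegmentedRecordFile(File dir, int recordSize) {
        this.dir = dir;
        this.recordSize = recordSize;
        this.recordsPerSegment = SEGMENT_LIMIT / recordSize;
    }

    private File segmentFile(long record) {
        return new File(dir, "waveforms_" + (record / recordsPerSegment) + ".bin");
    }

    public void writeRecord(long record, byte[] data) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(segmentFile(record), "rw")) {
            raf.seek((record % recordsPerSegment) * recordSize);
            raf.write(data);
        }
    }

    public byte[] readRecord(long record) throws IOException {
        byte[] data = new byte[recordSize];
        try (RandomAccessFile raf = new RandomAccessFile(segmentFile(record), "r")) {
            raf.seek((record % recordsPerSegment) * recordSize);
            raf.readFully(data);
        }
        return data;
    }
}

The same idea carries over to LabVIEW: compute the segment and local offset from the record number (or keep a small index file), so no single file ever crosses the 2 GB boundary.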

Similar Messages

  • Cannot load large CSV files in SignalExpress ("Not enough memory to complete this operation" error)

    Hi guys,
    I'm new here and have just browsed some of the related topics regarding my problem, but I could not seem to find anything to help me fix it, so I decided to post this. I currently have a waveform saved from an oscilloscope that is quite big (around 700 MB, CSV file format) and I want to view it on my PC using SignalExpress. Unfortunately, when I try to load the file using "Load/Save Signals -> Load From ASCII", I always get the "Not enough memory to complete this operation" error. How can we view and analyze large waveform files in SignalExpress? Is there a workaround for this?
    Thanks,
    Louie
    P.S. I'm very new to SignalExpress and haven't modified any settings in it.

    Hi Louie,
    Are you encountering a read-only message when you tried to save the boot.ini file? If so, you can try this method: right-click on My Computer >> Select "Properties", and go to the "Advanced" tab. Select "Settings", and on the next screen, there is a button called Edit. If you click on Edit you should be able to modify the "/3GB" tag in boot.ini. Are you able to change it in this manner? After reboot, you can reload the file to see if it helps.
    To open a file in SignalExpress, a contiguous chunk of memory is required. If SignalExpress cannot find a contiguous memory chunk that is large enough to hold this file, then an error will be generated. This can happen when fragmentation occurs in memory, and fragmentation (memory management) is managed by Windows, so unfortunately this is a limitation that we have.
    As an alternative, have you looked at NI DIAdem before? It is a software tool that allows users to manage and analyze large volumes of data, and has some unique memory management method that lets it open and work on large amounts of data. There is an evaluation version which is available for download; you can try it out and see if it is suitable for your application. 
    Best regards,
    Victor
    NI ASEAN
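    For reference only, a boot.ini with the /3GB switch applied looks roughly like the sketch below. The ARC path and OS description differ from machine to machine, so treat everything except the trailing switch as an assumption, and note that the application also has to be built large-address-aware for the extra user address space to make a difference.

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /3GB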

  • Arbitrary waveform generation from large text file

    Hello,
    I'm trying to use a PXI 6733 card hooked up to a BNC 2110 in a PXI 1031-DC chassis to output arbitrary waveforms at a sample rate of 100 kS/s. The types of waveforms I want to generate are generally going to be sine waves at frequencies below 10 kHz, but they need to be very high quality signals, hence the high sample rate. Eventually we would like to go as high as 200 kS/s, but for now we just want to get it to work at the lower rate.
    Someone in the department has already created for me large text files (> 1 GB) with 9 columns of numbers representing the output voltages for the channels (there will be 6 channels outputting sine waves and 3 other channels with a periodic DC voltage). The reason for the large file is that we want a continuous signal for around 30 minutes to allow for equipment testing and configuration while the signals are being generated.
    I'm supposed to use this file to generate the output voltages on the 6733 card, but I keep getting numerous errors and I've been unable to get something that works. The code, as written, currently generates error -200290 immediately after the buffered data is output from the card. Nothing ever seems to get enqueued or dequeued, and although I've read the LabVIEW help on buffers, I'm still very confused about their operation, so I'm not even sure the buffer is working properly. I was hoping some of you could look at my code and give me some suggestions (or sample code too!) for the best way to achieve this goal.
    Thanks a lot,
    Chris (new LabVIEW user)

    Chris:
    For context, I've pasted in the "explain error" output from LabVIEW to refer to while we work on this. More after the code...
    Error -200290 occurred at an unidentified location
    Possible reason(s):
    The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.
    To avoid this error, you can do any of the following:
    1. Increase the size of the background buffer by configuring the buffer.
    2. Increase the number of samples you write each time you invoke a write operation.
    3. Write samples more often.
    4. Reduce the sample rate.
    5. Change the data transfer mechanism from interrupts to DMA if your device supports DMA.
    6. Reduce the number of applications your computer is executing concurrently.
    In addition, if you do not need to write every sample that is generated, you can configure the regeneration mode to allow regeneration, and then use the Position and Offset attributes to write the desired samples.
    By default, the analog output on the device does what is called regeneration. Basically, if we're outputting a repeating waveform, we can simply fill the buffer once and the DAQ device will reuse the samples, reducing load on the system. What appears to be happening is that the VI can't read samples out from the file fast enough to keep up with the DAQ card. The DAQ card is set to NOT allow regeneration, so once it empties the buffer, it stops the task since there aren't any new samples available yet.
    If we go through the options, we have a few things we can try:
    1. Increase background buffer size.
    I don't think this is the best option. Our issue is with filling the buffer, and this requires more advanced configuration.
    2. Increase the number of samples written.
    This may be a better option. If we increase how many samples we commit to the buffer, we can increase the minimum time between writes in the consumer loop.
    3. Write samples more often.
    This probably isn't as feasible. If anything, you should probably have a short "Wait" function in the consumer loop where the DAQmx write is occurring, just to regulate loop timing and give the CPU some breathing space.
    4. Reduce the sample rate.
    Definitely not a feasible option for your application, so we'll just skip that one.
    5. Use DMA instead of interrupts.
    I'm 99.99999999% sure you're already using DMA, so we'll skip this one also.
    6. Reduce the number of concurrent apps on the PC.
    This is to make sure that the CPU time required to maintain good loop rates isn't being taken by, say, an antivirus scanner or something. Generally, if you don't have anything major running other than LabVIEW, you should be fine.
    I think our best bet is to increase the "Samples to Write" quantity (to increase the minimum loop period), and possibly to delay the DAQmx Start Task and consumer loop until the producer loop has had a chance to build the queue up a little. That should reduce the chance that the DAQmx task will empty the system buffer and ensure that we can prime the queue with a large quantity of samples. The consumer loop will wait for elements to become available in the queue, so I have a feeling that the file read may be what is slowing the program down. Once the queue empties, we'll see the DAQmx error surface again. The only real solution is to load the file to memory farther ahead of time.
    Hope that helps!
    Caleb Harris
    National Instruments | Mechanical Engineer | http://www.ni.com/support
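    LabVIEW code is graphical and cannot be pasted here, but the "prime the queue before starting the consumer" idea above can be sketched in a text language. The following Java sketch is illustrative only: the chunk sizes, queue depth, and data source are assumptions, and the consumer merely stands in for the DAQmx Write loop.

    // Illustrative sketch (Java, not LabVIEW): fill the queue before the consumer starts,
    // so the stand-in for the DAQmx Write loop never starves.
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class PrimedQueueDemo {
        static final int QUEUE_DEPTH = 64;
        static final int PRIME_LEVEL = 48;   // chunks to accumulate before consuming starts

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<double[]> queue = new ArrayBlockingQueue<>(QUEUE_DEPTH);

            // Producer: stands in for the file-read loop.
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 1000; i++) {
                        queue.put(new double[10_000]); // pretend this chunk came from the big text file
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();

            // Prime the queue before starting the consumer.
            while (producer.isAlive() && queue.size() < PRIME_LEVEL) {
                Thread.sleep(10);
            }

            // Consumer: stands in for the DAQmx Write loop, taking a large chunk per iteration.
            double[] chunk;
            while ((chunk = queue.poll(1, TimeUnit.SECONDS)) != null) {
                System.out.println("wrote chunk of " + chunk.length + " samples"); // writeToDevice(chunk) in real code
            }
        }
    }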

  • (urgent) SQL*Loader Large file support in O734

    hi there,
    I have the following SQL*Loader error when trying to load data files, each 10-20 GB in size, into an Oracle 7.3.4 database on SunOS 5.6:
    >>
    SQL*Loader-500: Unable to open file (..... /tstt.dat)
    SVR4 Error: 79: Value too large for defined data type
    <<
    I know there's a bug fix for large file support in Oracle 8 -
    >>
    Oracle supports files over 2GB for the oracle executable.
    Contact Worldwide Support for information about fixes for bug 508304,
    which will add large file support for imp, exp, and sqlldr
    <<
    However, I really want to know if there is any fix for Oracle 7.3.4?
    Thanks.

    Example
    Control file
    C:\DOCUME~1\MAMOHI~1>type dept.ctl
    load data
    infile dept.dat
    into table dept
    append
    fields terminated by ',' optionally enclosed by '"'
    trailing nullcols
    (deptno integer external,
    dname char,
    loc char)
    Data file
    C:\DOCUME~1\MAMOHI~1>type dept.dat
    50,IT,VIKARABAD
    60,INVENTORY,NIZAMABAD
    C:\DOCUME~1\MAMOHI~1>
    C:\DOCUME~1\MAMOHI~1>dir dept.*
    Volume in drive C has no label.
    Volume Serial Number is 9CCC-A1AF
    Directory of C:\DOCUME~1\MAMOHI~1
    09/21/2006  08:33 AM               177 dept.ctl
    04/05/2007  12:17 PM                41 dept.dat
                   2 File(s)          8,043 bytes
                   0 Dir(s)   1,165 bytes free
    Intelligent sqlldr command
    C:\DOCUME~1\MAMOHI~1>sqlldr userid=hary/hary control=dept.ctl
    SQL*Loader: Release 10.2.0.1.0 - Production on Thu Apr 5 12:18:26 2007
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 2
    C:\DOCUME~1\MAMOHI~1>sqlplus hary/hary
    SQL*Plus: Release 10.2.0.1.0 - Production on Thu Apr 5 12:18:37 2007
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    As I am appending I got two extra rows. One department in your district and another in my district :)
    SQL> select * from dept;
        DEPTNO DNAME          LOC
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            40 OPERATIONS     BOSTON
            50 IT             VIKARABAD
            60 INVENTORY      NIZAMABAD
    6 rows selected.
    SQL>

  • Large File Support OEL 5.8 x86 64 bit

    All,
    I have downloaded some software from OTN and would like to combine the individual zip files into one large file. I assumed that I would unzip the individual files and then create a single zip file. However, my OEL zip utility fails with a "file is too large" error.
    Perhaps related, I can't ls the directories that contain huge files. For example, a huge VM image file.
    Do I need to rebuild the utilities to support huge files, or do I need to install/upgrade RPMs? If I need to relink the programs, what are the commands to do so?
    Thanks.

    Correction:  This is occurring on a 5.7 release and not a 5.8.  The command uname -a yields 2.6.32-200.13.1.2...
    The file size I'm trying to assemble is ~ 4 GB (the zip files for 12c DB).  As for using tar, as I understand it, EM 12c wants a single zip file for software deployments.
    I'm assuming that the large file support was not linked into the utilities. For example when I try to list contents of a directory which has huge/large files, I get the following message:
    ls:  V29653-01.iso:  Value too large for defined data type
    I'm assuming that I need to upgrade to a later release.
    Thanks.

  • IdcApache2Auth.so Compiled With Large File Support

    Hi, I'm installing UCM 10g on a Solaris 64-bit platform with Apache 2.0.63, and everything went fine until I updated the configuration in the httpd.conf file. When I query the server status it seems to be OK:
    ./idcserver_query
    Success checking Content Server  idc status. Status:  Running
    but in the Apache error_log I found the following error description:
    Content Server Apache filter detected a bad request_rec structure. This is possibly a problem with LFS (large file support). Bad request_rec: uri=NULL;
    Sizing information:
    sizeof(*r): 392
    [int]sizeof(r->chunked): 4
    [apr_off_t]sizeof(r->clength): 4
    [unsigned]sizeof(r->expecting_100): 4
    If the above size for r->clength is equal to 4, then this module
    was compiled without LFS, which is the default on Apache 1.3 and 2.0.
    Most likely, Apache was compiled with LFS, this has been seen with some
    stock builds of Apache. Please contact Support to obtain an alternate
    build of this module.
    When I searched My Oracle Support for suggestions about how to solve my problem, I found a thread which basically says that the Oracle ECM support team could give me a copy of IdcApache2Auth.so compiled with LFS.
    What do you suggest?
    Should I ask the ECM support team for help? (If yes, please tell me how to do it.)
    Or should I update the Apache web server to version 2.2 and use IdcApache22Auth.so, which is compiled with LFS?
    Thanks in advance, I hope you can help me.

    Hi,
    The easiest approach would be to use Apache 2.2 and the corresponding IdcApache22Auth.so file.
    Thanks
    Srinath

  • Will PB 2.0 support large M4A files

    One hugely irritating thing I noticed when I got my PB was that 80% of my iTunes files couldn't be copied onto the device because the large M4A file format (Apple Lossless Encoding) isn't supported. Has this been fixed, or should I say enhanced, in 2.0? (My music library comes from my CDs, not the iTunes store, and since I play it back on a pretty decent system, I don't want the compressed files, which are missing a lot of the music.)
    sr

    On your PlayBook,
    what happens when you take the file explorer (or the free app named "Air Browser") and click on the file? Does it open in the video player? Does the video play, or do you get an error message (if so, what is the exact error message)?
    as a reminder, please check the following article from the public knowledge about the supported codecs:
    KB26518 Media types that are supported on the BlackBerry PlayBook
    The search box at the top-right of this page is your true friend, and so is the public Knowledge Base.

  • Large File Support

    Hi all,
    I'm new to the forums but not new to the products. Recently I've been working on a project that requires very large XML files, over 200,000 lines of code. I have noticed that when I work on these large files the software is really buggy. First, I can't use the scroll wheel on my mouse; it just doesn't work. Second, half the time when I open a file it will truncate the code after a few hundred thousand lines. I don't think it should do this; Visual Studio sure doesn't. I would rather work with Dreamweaver, but because of these problems with large files I am forced to use Visual Studio. So here's the question: is there something I should do to make Dreamweaver work with large files, or is this something that Adobe needs to fix?
    Thanks for the replies.

    man mount_ufs
    -o largefiles | nolargefiles

  • OSB - Iterating over large XML files with content streaming

    Hi @ll
    I have to iterate over all items in large XML files and insert them into an Oracle database.
    The file is about 200 MB and contains around 500,000 items, and I am using OSB 10gR3.
    The XML structure is something like this:
    <allItems>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    <item>.....</item>
    </allItems>
    Actually I thought about using a proxy service with content streaming enabled and a "for each" action for iterating
    over all items. But for this the whole XML structure has to be materialized into a variable, otherwise it is not possible!
    More about streaming large files can be found here:
    [http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#large_messages]
    It says there: "When you enable streaming for large message processing, you cannot use the ... for each ...".
    And for accessing single items you should use an assign action with an XPath like "$body/allItems/item[1]";
    this works fine, and the whole XML stream does not have to be materialized.
    So my idea was to use the "for each" action and process all items sequentially with an XPath like:
    $body/allItems/item[$counter]
    But the "for each" action just allows iterating over a sequence of xml items by defining an selection xpath
    and the variable that contains all items. I would like to have a "repeat until" construct that iterates as long
    $body/allItems/item[$counter] returns not null. Or can I use the "for each" action differently?
    Does the OSB provides any other iterating mechanism? I know there is this spli-join construct that supports
    different looping techniques, but as far I know it does not support content streaming, is this correct?
    Did I miss somehting?
    Thanks a lot for helping!
    Cheers
    Dani

    Hi Dani,
    Yes, in my opinion this would be the best approach. You can use content streaming to pass this large XML to an EJB, and once it is passed successfully the EJB should operate on it. If you want any result back (for further routing), you can get it back from the EJB.
    An EJB gives you the power of Java to process this file, and from a Java perspective 150 MB is not very large data. Ensure that you are using buffering. Check out this link for an explanation of Java IO streams and, in particular, buffered streams -
    http://java.sun.com/developer/technicalArticles/Streams/ProgIOStreams/
    Try dom4j with the XPP (XML Pull Parser) parser in case you have a parsing requirement. We have worked with a 1.2 GB file using this technique.
    Regards,
    Anuj
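    To make the pull-parsing suggestion concrete outside OSB, here is a minimal Java sketch that streams over the <item> elements shown above without materializing the whole document. It uses the standard StAX API rather than dom4j/XPP, and the file name and the database call are assumptions.

    // Stream over <item> elements one at a time; memory use stays flat regardless of file size.
    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class ItemStreamer {
        public static void main(String[] args) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            try (FileInputStream in = new FileInputStream("allItems.xml")) {
                XMLStreamReader reader = factory.createXMLStreamReader(in);
                StringBuilder text = new StringBuilder();
                long count = 0;
                while (reader.hasNext()) {
                    int event = reader.next();
                    if (event == XMLStreamConstants.START_ELEMENT && "item".equals(reader.getLocalName())) {
                        text.setLength(0);                       // start collecting a new item
                    } else if (event == XMLStreamConstants.CHARACTERS) {
                        text.append(reader.getText());
                    } else if (event == XMLStreamConstants.END_ELEMENT && "item".equals(reader.getLocalName())) {
                        count++;
                        insertIntoDatabase(text.toString());     // hypothetical per-item insert
                    }
                }
                System.out.println("processed " + count + " items");
            }
        }

        private static void insertIntoDatabase(String itemContent) {
            // placeholder: in a real implementation, batch inserts with JDBC addBatch()/executeBatch()
        }
    }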

  • I want to save large pdf files from my computer to my iPad so I can view them in Adobe Reader.

    I want to save large pdf files from my computer to my iPad so I can view them in Adobe Reader. I do not want to view them in iBooks or Quickoffice. Since I cannot email these large files, how can I get them on my iPad? Thanks for your help.

    I think that Adobe reader supports file sharing.
    Connect the iPad to your computer and launch iTunes. Select the iPad on the left side under "devices". Click on the Apps Tab in the window on the right. Scroll down to the File Sharing Apps list under the main apps list. Click on Adobe Reader. Drag the documents into the file sharing window on the right or select them using the "add" button.
    Launch the Adobe reader app and the files should be in there.

  • Have a very large text file, and need to read lines in the middle.

    I have very large txt files (around several hundred megabytes), and I want to be able to skip ahead and read specific lines. More specifically, say the file looks like:
    scan 1
    scan 2
    scan 3
    ...
    scan 100,000
    I want to be able to move the file reader immediately to scan 50,000, rather than having to read through scans 1-49,999.
    Thanks for any help.

    If the lines are all different lengths (as in your example) then there is nothing you can do except to read and ignore the lines you want to skip over.
    If you are going to be doing this repeatedly, you should consider reformatting those text files into something that supports random access.
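    To make the random-access suggestion concrete, one pass over the file can record the byte offset at which each line starts; after that, any scan can be reached with a single seek instead of re-reading everything before it. A rough Java sketch follows, with the file name assumed; for files of several hundred megabytes you would count line breaks with a buffered stream instead of RandomAccessFile.readLine(), and you could persist the offsets to a small index file for reuse.

    // Build a line-offset index once, then seek straight to any line.
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.ArrayList;
    import java.util.List;

    public class LineIndex {
        public static void main(String[] args) throws IOException {
            try (RandomAccessFile raf = new RandomAccessFile("scans.txt", "r")) {
                // Pass 1: offsets.get(i) is the byte position where line i starts.
                List<Long> offsets = new ArrayList<>();
                offsets.add(0L);
                while (raf.readLine() != null) {
                    offsets.add(raf.getFilePointer());
                }

                // Jump straight to "scan 50,000" (0-based line 49,999) without reading earlier lines.
                int target = 49_999;
                if (target < offsets.size() - 1) {
                    raf.seek(offsets.get(target));
                    System.out.println(raf.readLine());
                }
            }
        }
    }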

  • Large XML File in JMS Message

    Hi Experts,
    I have to send large XML files (20+ MB) to the message listener for processing. Would you please comment on whether this would be a good approach, or suggest some tested and authentic solution for this purpose? A prompt response will be highly appreciated.
    Regards
    Shahzad mahmood

    You can use the following algo to send large messages
    - use BytesMessage to send the large message
    - break the message into chunks (chunk size can be configurable)
    - in the header of the first message add a property to notify that this is the first message
    - in the header add a property for the size of each message
    - in the header of the last message add a property to notify that its the last message
    on the receiver side
    - see the header property of the message
    - if its first message, open a stream (may be file stream)
    - keep writing messages to this stream till you receive the last message (identified by the header property)
    - close the stream after processing the last message.
    BTW some JMS products like FioranoMQ (www.fiorano.com) provide built-in support for large messages.
    You might want to look into such products.
    Thanks
    Bhuvan
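    A minimal Java sketch of the sender side of the chunking approach described above; the property names, chunk size, and connection setup are assumptions, error handling is omitted, and the receiver simply mirrors this by writing each chunk to a stream until it sees the last-chunk property.

    // Split a large XML file into BytesMessage chunks with marker header properties.
    import java.io.FileInputStream;
    import java.io.IOException;
    import javax.jms.BytesMessage;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class LargeXmlSender {
        private static final int CHUNK_SIZE = 512 * 1024; // 512 KB per message (configurable)

        public static void sendFile(Session session, MessageProducer producer, String path)
                throws JMSException, IOException {
            byte[] buffer = new byte[CHUNK_SIZE];
            try (FileInputStream in = new FileInputStream(path)) {
                long totalSize = in.getChannel().size();
                long sent = 0;
                boolean first = true;
                int read;
                while ((read = in.read(buffer)) > 0) {
                    sent += read;
                    BytesMessage msg = session.createBytesMessage();
                    msg.writeBytes(buffer, 0, read);
                    msg.setBooleanProperty("FIRST_CHUNK", first);             // first-message marker
                    msg.setLongProperty("TOTAL_SIZE", totalSize);             // overall payload size
                    msg.setIntProperty("CHUNK_SIZE", read);                   // size of this chunk
                    msg.setBooleanProperty("LAST_CHUNK", sent == totalSize);  // last-message marker
                    producer.send(msg);
                    first = false;
                }
            }
        }
    }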

  • Music lost replaced with large 'other' file

    Hi.
    All my music disappeared from my 3GS, which now has a large amount of 'other' files - I assume these are the corrupt music files.
    I'm having to re-sync my music, but how do I get rid of the large 'other' files? Do I need to restore from a backup?
    Ta

    longestdrive wrote:
    how do I get rid of the large 'other' files? Do I need to restore from a backup?
    That will do it. This article gives all the details and shows you the screens you will see during the restore: http://support.apple.com/kb/HT1414

  • HELP!! Very Large Spooling / File Size after Data Merge

    My question is: if the image is the same and only the text is different, why not use the same image over and over again?
    Here is what happens...
    Using CS3 and XP (P4 2.4 GHz, 1 GB RAM, 256 MB video card), I have taken a postcard PDF (the back side), placed it in a document, and then drawn a text box. Then I select a data source and put the fields I wish to print (name, address, zip, etc.) in the text box.
    Now, under the Create Merged Document menu I select Multiple Records and then use the Multiple Records Layout tab to adjust the placement of this postcard on the page. I use the preview multiple records option to lay out 4 postcards on my page. Then I merge the document (it has 426 records).
    Now that my merged document is created, with four postcards per page and the mailing data on each card, I go to print. When I print the file it spools up huge! The PDF I originally placed in the document is 2.48 MB, but when it spools I can only print 25 pages at a time, and that still takes FOREVER. So again my question is: if the image is the same and only the text is different, why not use the same image over and over again?
    How can I prevent the gigantic spooling? I have tried putting the PDF on the master page and then using the document page to create the merged document, and still the same result. I have also tried creating a merged document with just the addresses and then adding the PDF on the master page afterward, but again, a huge file size while spooling. Am I missing something? Any help is appreciated :)

    The size of the EMF spool file may become very large when you print a document that contains lots of raster data
    Article ID : 919543
    Last Review : June 7, 2006
    Revision : 2.0
    SYMPTOMS
    When you print a document that contains lots of raster data, the size of the Enhanced Metafile (EMF) spool file may become very large. Files such as Adobe .pdf files or Microsoft Word .doc documents may contain lots of raster data. Adobe .pdf files and Word .doc documents that contain gradients are even more likely to contain lots of raster data.
    CAUSE
    This problem occurs because the Graphics Device Interface (GDI) does not compress raster data when it processes and generates EMF spool files.
    This problem is very prominent with printers that support higher resolutions. The size of the raster data increases by four times if the dots-per-inch (dpi) in the file increases by two times. For example, a .pdf file of 1 megabyte (MB) may generate an EMF spool file of 500 MB. Therefore, you may notice that the printing process decreases in performance.
    RESOLUTION
    To resolve this problem, bypass EMF spooling. To do this, follow these steps:
    1. Open the properties dialog box for the printer.
    2. Click the Advanced tab.
    3. Click the Print directly to the printer option.
    Note: This will disable all print processor-based features, such as the following: N-up, Watermark, Booklet printing, Driver collation, Scale-to-fit.
    STATUS
    Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section.
    MORE INFORMATION
    Steps to reproduce the problem
    1. Open the properties dialog box for any inbox printer.
    2. Click the Advanced tab.
    3. Make sure that the Print directly to the printer option is not selected.
    4. Click to select the Keep printed documents check box.
    5. Print an Adobe .pdf document that contains many groups of raster data.
    6. Check the size of the EMF spool file.

  • Opening large photo files produces black screen

    When opening a small (less than a meg) photo, it works fine, and I'm able to move to the next file. If I open a large file (8-10 MB) it sometimes works fine; however, if I use the arrow to move to the next file in the folder I get a black screen. Also, if I have a folder of many large photo files I can see all the thumbnail photos at a small size, but if I try to increase the view size to large, some will be black; however, if I click on the file many times it opens fine, sometimes not. It was working fine and just recently started to act up. Using Adobe Photoshop I can open each file, so I know they are there and are not corrupt files.

    Hello @JohnBruin,
    Welcome to the HP Forums, I hope you enjoy your experience! To help you get the most out of the HP Forums, I would like to direct your attention to the HP Forums guide "First Time Here? Learn How to Post and More."
    I have read your post on how opening large photo files produces a black screen on your desktop computer. I would be happy to assist you in this matter!
    If you boot your system into Safe Mode, are you able to open large photographs? Is this a recent or a recurring issue?
    In the meantime, I recommend following the steps in this document on Computer Locks Up or Freezes (Windows 8). This should help prevent your system from defaulting to a black screen. 
    I would also encourage you to post the product number for your computer. Below is an HP Support document that will demonstrate how to find your computer's product number.
    How Do I Find My Model Number or Product Number?
    Please re-post with the results of your troubleshooting, as well as the requested information above. I look forward to your reply!
    Regards
    MechPilot
    I work on behalf of HP
