Convert Large Binary File to Large ASCII File

Hello, I need some suggestions on how to convert a large binary file (> 200 MB) to an ASCII file. I have a program that streams data to disk in binary format, and now I would like to either add to my application or create a new app if necessary. Here is what I want to do:
Open the binary file
Read a portion of the file into an array
Convert the array to ASCII
Save the array to a file
Go back and read more binary data
Convert the array to ASCII
Append the array to the ASCII file
Keep converting until the end of the binary file.
I should say that the binary data is 32-bit and I do need to parse the data; bits 0-11, bits 12-23, and bits 28-31, but I can figure that out later. The problem I see is that the file will be very large, perhaps even greater than 1 GB, and I don't have a clue how to read a portion of the file, come back and read another portion, and then stop at the end of the file. I hope to save the data in a spreadsheet. If anyone has experience with a similar situation, I'd appreciate any input or example code.
Thanks,
joe
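The bit-field split described above (bits 0-11, 12-23, and 28-31 of each 32-bit word) is ordinary mask-and-shift arithmetic. The original question is about LabVIEW, where "Split Number" or masking with Logical Shift does the same job, but as a language-neutral sketch, here it is in Java; the field names are made up for illustration:

```java
public class BitFields {
    // Extract the three fields described in the question from one 32-bit word.
    // Field boundaries (0-11, 12-23, 28-31) come from the post; bits 24-27
    // are assumed unused.
    public static int low12(int word)  { return word & 0xFFF; }          // bits 0-11
    public static int mid12(int word)  { return (word >>> 12) & 0xFFF; } // bits 12-23
    public static int high4(int word)  { return (word >>> 28) & 0xF; }   // bits 28-31
}
```

The unsigned shift (`>>>`) matters for the top field, since a 32-bit word with bit 31 set is a negative `int` in Java.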

sle,
In the future, please create a new thread for unrelated questions.  To answer your question, you can use "Split Number" from the Data Manipulation palette.
Message Edited by jasonhill on 03-14-2006 03:46 PM
Attachments:
split number.PNG ‏2 KB

Similar Messages

  • Convert Binary File to ASCII File

    Hello, I'm having a little trouble writing a LabVIEW program to read a binary file and save it to an ASCII file. The binary file may be large, so I've tried to read the file in chunks. I would appreciate any help you could offer. I've attached my .vi below.
    thanks,
    joe
    Attachments:
    BinToASCI.vi ‏45 KB

    I don't really understand what you mean by "the data from the binary file are really huge numbers and don't match what is expected". Are you specifying the correct datatype to the Read function?
    As far as your VI is concerned: your initial statements indicated that you wanted to read this in chunks since you're dealing with very large files. However, while your VI reads in one number at a time, it writes out the whole thing in one big chunk. This is going to be a huge memory draw. You need to write the data to the ASCII file as you read it, not after you've read it all. Given this, it makes no sense to read only one number at a time as you're doing. You're going to be performing a disk I/O operation for each number! This is going to take forever, not to mention take a nice bite out of the life expectancy of your hard drive. I don't say this often, but you need to restructure your VI to something like this:
    1. Provide dialog to ask for the file to read.
    2. Provide dialog to ask for the file to write to.
    3. Write out your header to the output file.
    4. Start a loop where the number of iterations is based on the number of times you have to read the file using a max chunk size. This is based on the file size and the size of the chunk that you're comfortable in reading. The chunk size is dependent on your machine - i.e., speed and amount of RAM.
    5. Read that many numbers and write out the converted numbers.
    6. Repeat until done.
    7. Close both files.
    See attached file as a starting point. The error handling is not that great, so keep that in mind.
    Message Edited by smercurio_fc on 03-17-2006 05:20 PM
    Attachments:
    Read Chunks.vi ‏68 KB
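    Steps 1 through 7 above can also be sketched textually. This is a minimal Java rendition of the same loop, not the attached VI; the chunk size, one-value-per-line formatting, and little-endian byte order are all assumptions you would adjust to your data:

    ```java
    import java.io.*;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.channels.FileChannel;

    public class BinToAscii {
        static final int CHUNK_WORDS = 65536; // tune to your machine's speed and RAM

        // Stream a file of 32-bit words to a text file one chunk at a time,
        // mirroring steps 3-7 above: write the header, then read a chunk,
        // convert it, append it, and repeat until done.
        public static void convert(File in, File out) throws IOException {
            try (FileChannel src = new FileInputStream(in).getChannel();
                 PrintWriter dst = new PrintWriter(new BufferedWriter(new FileWriter(out)))) {
                dst.println("value"); // step 3: header
                ByteBuffer buf = ByteBuffer.allocate(CHUNK_WORDS * 4)
                                           .order(ByteOrder.LITTLE_ENDIAN); // assumed byte order
                while (src.read(buf) > 0) {        // steps 4-5: read a chunk...
                    buf.flip();
                    while (buf.remaining() >= 4) {
                        dst.println(buf.getInt()); // ...convert and append each number
                    }
                    buf.compact();                 // keep any partial trailing word
                }
            }                                      // step 7: try-with-resources closes both files
        }
    }
    ```

    Because the converted text is written as each chunk is read, memory use stays bounded by the chunk size no matter how large the input file is.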

  • Somewhat large binary file has trouble being opened

    Hi,
    I'm a new LabVIEW user (LabVIEW 8.0), and I'm trying to convert a
    binary file into a wave file and do a lot of other things with that
    binary file (like analyzing frequency via a spectrum graph). My VI
    works fine for files under 150MB, but once the file size is larger than
    that, the computer slows way way down and I cannot collect the entire
    data set. I already have 1GB of RAM (512x2) and I don't want to pay
    more money if the issue can be fixed within LabVIEW.
    Is there a way to split up the binary file into little pieces when
    being read, or is there another way to go about doing this? I'm using
    the File Open dialog to read the file.
    If anyone could provide a step-by-step solution to this, I'd be very
    grateful. Thanks everyone.
    Jennings

    Thanks for the reply,
    That would kind of work, but ultimately, I need a graph that
    contains the entire file. In other words, I need to convert the binary
    file to a wave file that I can view as a graph. If I only open one
    half, then I would only see the graph from that first half, which is
    not useful to the project. I need to view the wave file in its
    entirety as a waveform graph, so is there any way to add the data from
    the second half of the file to the first half?
    I guess it comes down to the memory buffer size, right? LabVIEW tries
    to load the entire file into RAM, but I need it to load a portion
    into RAM, save that portion to the hard drive, load a second
    portion into RAM, append that portion to the first portion on the
    hard drive, and so forth.
    Is there any function in LabVIEW that can do this? Or can I only
    manipulate data in RAM, and not on the hard drive?
    Thanks for any and all help!
    Jennings
    JLS wrote:
    > Hello,
    >  
    > From your previous post, it sounds like you're able to open and read from the file - can you try reading half (the get file size function will be helpful) and immediately writing that half to another file?  If this works, you can try the same thing with the second half using the set file position function.  You could repeat this process until you had files which were of manageable size for your application/processing needs.
    >  
    > I hope this helps!
    >  
    > Best Regards,
    >  
    > JLS
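    JLS's split-in-pieces idea (get the file size, copy a range to a new file, advance the file position, repeat) looks like this as a rough Java sketch; the piece-size parameter and the `.partN` naming are invented for illustration:

    ```java
    import java.io.*;
    import java.nio.channels.FileChannel;

    public class FileSplitter {
        // Split src into pieces of at most maxBytes each: read the file size,
        // then copy successive byte ranges to numbered part files.
        public static int split(File src, long maxBytes) throws IOException {
            int pieces = 0;
            try (FileChannel in = new FileInputStream(src).getChannel()) {
                long size = in.size();                        // "get file size"
                for (long pos = 0; pos < size; pos += maxBytes, pieces++) {
                    File part = new File(src.getPath() + ".part" + pieces);
                    try (FileChannel out = new FileOutputStream(part).getChannel()) {
                        long want = Math.min(maxBytes, size - pos);
                        long done = 0;
                        while (done < want) {                 // transferTo may copy less than asked
                            done += in.transferTo(pos + done, want - done, out);
                        }
                    }
                }
            }
            return pieces;
        }
    }
    ```

    Each piece can then be processed or graphed separately, keeping every read within a manageable buffer size.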

  • In camera raw how large should I open my raw file before converting it to a TIF file?

    In Camera Raw, how large should I open my raw file before converting it to a TIF file: 2736 x 3648 (10.0 MP), 3072 x 4096 (12.7 MP), or 3840 x 5120 (19.7 MP)? I want a sharp TIF file. I'm shooting with an Olympus E-510, a 10.0 MP camera.
    thanks - Ken
    [email protected]

    rasworth wrote:
    There is no advantage to saving or opening other than at the native resolution.
    Actually, not entirely true. In the case of Fuji DSLRs you would do better to double the resolution in Camera Raw, as that matches the interpolation that the Fuji software does in their higher quoted "effective resolutions" (it ain't real resolution, mind you, but it can benefit certain image types).
    If you know for an absolute fact you need more resolution than the native file has, you really might want to test upsampling in Camera Raw, as it has a newly tuned upsample (introduced in ACR 5.2) that is an adaptive Bicubic algorithm, either the normal Bicubic or Bicubic Smoother depending on the size, and that's something even Photoshop can't do.
    But in general, unless you know for a fact you need the image bigger (or have a Fuji camera), processing at the file's native size is the most efficient.

  • Problems converting PDF to MS Word document. I successfully converted 4 files and now subsequent files generate a "conversion failure" error when attempting to convert the file. I have a large manuscript and I separated each chapter to assist with the conversion

    Problems converting PDF to MS Word document. I successfully converted 4 files and now subsequent files generate a "conversion failure" error when attempting to convert the file. I have a large manuscript and I separated each chapter to assist with the conversion; like I said, first 4 parts no problem, then conversion failure. I attempted to convert the entire document and got the same result. I specifically purchased the export-to-Word feature. Please assist. I initially had to export the WordPerfect document to PDF and am attempting to go from PDF to MS Word.

    Hi sdr2014,
    I'm sorry to hear your conversion process has stalled. It sounds as though the problem isn't specific to one file, as you've been unable to convert anything since the first four chapters converted successfully.
    So, let's try this:
    If you're converting via the ExportPDF website, please log out, clear the browser cache, and then log back in. If you're using Reader, please choose Help > Check for Updates to make sure that you have the most current version installed.
    Please let us know how it goes.
    Best,
    Sara

  • Read large binary file

    How would you read a large binary file into memory, please? I had a thought about creating a byte array on the fly; however, you cannot create a byte array with a long, so what happens when you reach the maximum value an int can store?

    a) You can map the file, instead of reading it physically.
    b) Let's suppose that you are running the Sun JVM in Windows 2000/XP/2003 and you have 4 GB of RAM in your machine (memory is so cheap nowadays...)
    - Windows cannot use the full 4 GB (it reserves 2 GB for itself, and if you buy a special server version, it reserves only 1 GB for itself.)
    - Java can't use more than 1.6 GB in Windows due to some arcane reasons. (Someone posted the Bug Database explanation in this forum.)
    So you have an upper limit of 1.6 GB.
    1.6 GB is smaller than the maximum value of an int, so you could have an array that big (using the class java.nio.ByteBuffer and the like). Try and see.
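    Option (a), mapping the file, might look like the sketch below. The OS pages mapped data in on demand, so the heap never has to hold the whole file; note, though, that a single `FileChannel.map` call is itself limited to Integer.MAX_VALUE bytes, so a larger file needs several mapped windows. A minimal sketch that maps just a one-byte window:

    ```java
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedRead {
        // Map a small window of a (possibly huge) file instead of reading
        // the whole file into a byte array.
        public static byte byteAt(String path, long offset) throws Exception {
            try (RandomAccessFile f = new RandomAccessFile(path, "r");
                 FileChannel ch = f.getChannel()) {
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, offset, 1);
                return map.get(0);
            }
        }
    }
    ```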

  • DNG-converted 10D files too large, not recognizable in Aperture

    [I've posted this message in the Adobe DNG forums after doing some searching around for an answer. I thought some others here might be doing the same thing and could comment]
    While importing some of my older Canon 10D-shot images into Aperture, I noticed something curious about the DNG-versions of the files. They're much larger than I would expect and Apple's Core Image processor doesn't appear to be able to read them. For example, on one file the original CRW file is 5.3MB. The DNG conversion without the embedded original is 17.4MB. This is consistent across all my 10D converted files.
    Apple's Preview app, as well as anything else based on the Core Image processing code, can't read the DNG, but it can read the original CRW. I know that Apple has botched parts of the DNG specification, but since the converted DNG is twice the size I would expect it to be, this seems like it might be a problem with the DNG converter itself. Anyone else seeing this with rev 3.2 of the converter?
    BTW, the files open up in ACR just fine.
    G5 1.8G SP, 1.5M RAM   Mac OS X (10.4.3)  

    Sorry for the noise. It turns out that my prefs had gotten munged overnight when the DNG converter crashed and had returned to one of my test configurations where I was converting the RAW files to linear. Aperture, as is well understood, doesn't handle the de-mosaiced format well at all.
    This was user error.

  • When I shoot in "raw" mode my files are in the 12 to 15 MB range.  If I convert the file to "Tiff" the file converts to about 35MB.  When I convert to a "tiff", does this mean I can make larger prints?

    When I shoot in "raw" mode my files are in the 12 to 15 MB range.  If I convert the file to "Tiff" the file converts to about 35MB.  When I convert to a "tiff", does this mean I can make larger prints?

    When I shoot in "raw" mode my files are in the 12 to 15 MB range.  If I convert the file to "Tiff" the file converts to about 35MB.  When I convert to a "tiff", does this mean I can make larger prints?
    No, that only means that your raw format has better compression. Converting to TIFF will not add any resolution. It is not the file size that counts for better prints, but the pixel dimensions: the width and height of the photo in pixels.

  • Any info on CRC, checksum, or other file integrity VIs for large binary files?

    Working on sending rather large binary files (U16 stream to file) via the internet. I would like to check file integrity via a CRC or comparable checksum. I would appreciate any comments/suggestions.

    Hi Brian,
    You said;
    "Would appreciate any comments/suggestions".
    You did not mention what transport mechanism you plan on using.
    As I understand it, ALL of the standard mechanisms use a CRC of some form to ensure the validity of a packet BEFORE it is ever passed up the OSI 7-layer model.
    TCP/IP-based protocols will see to it that all of the segments of a transfer are completed and in order.
    UDP, on the other hand, is a broadcast-type protocol and does not ensure any packets are received.
    So,
    At the very worst you should be able to handle your "sanity checks" by simply using a sequence value that is included in your outgoing message. The receiver should just have to check that the current sequence value is equal to the previous + 1.
    I co-developed an app that utilized this technique to transfer status messages from an RT platform to a Windows machine. The status messages in this app were considered FYI, so the sequence counter served as a way of determining if anything was missed.
    I am interested in others' thoughts on this subject.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
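    If an end-to-end check is still wanted on top of TCP's per-packet checks, the Java standard library's CRC32 class can checksum a file of any size in a streaming fashion, so sender and receiver only need to compare one value. A small sketch (the buffer size is arbitrary):

    ```java
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.zip.CRC32;

    public class FileCrc {
        // Stream the file through java.util.zip.CRC32 so even multi-GB
        // files need only a small fixed buffer.
        public static long crc(String path) throws IOException {
            CRC32 crc = new CRC32();
            byte[] buf = new byte[1 << 16];
            try (InputStream in = Files.newInputStream(Paths.get(path))) {
                int n;
                while ((n = in.read(buf)) > 0) {
                    crc.update(buf, 0, n);
                }
            }
            return crc.getValue();
        }
    }
    ```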

  • Large binary file reading

    Hi,
    I'm currently using a java.io.BufferedInputStream to read a large binary file.
    I recently discovered there is a chunk of data that shows up near the end of each file. (These files are binary and are XX to XXX MB in size.)
    Loading it all to memory first would kill my performance so I'd like to be able to come up with an alternate method.
    Does Java or the class above offer a way I can
    1) get the length of the file
    2) seek to a point say 2000 bytes from the end so I can start reading the binary data?
    Ideally I'd like to do a backwards read as that would be quickest. Is there a way to change the operation so that a 'read' would be reading backwards (from end to beginning)?
    For me speed is the #1 thing i have to worry about. So to be able to seek forward several hundred thousand bytes at a time would help tremendously.

    How does the 'skip' method work? Probably by using OS-specific calls to seek to a point in the file.
    Maybe I could 'skip' length - 20k from the start or something like that.
    Yep.
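    Skipping on a stream works, but java.io.RandomAccessFile answers both questions directly: `length()` gives the file size and `seek()` jumps to an absolute offset, so reading the last 2000 bytes needs no forward scan at all. A rough sketch:

    ```java
    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class TailRead {
        // Jump straight to `fromEnd` bytes before EOF instead of reading or
        // skipping forward from the start of the file.
        public static byte[] tail(String path, int fromEnd) throws IOException {
            try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
                long start = Math.max(0, f.length() - fromEnd); // (1) file length
                f.seek(start);                                  // (2) seek near the end
                byte[] out = new byte[(int) (f.length() - start)];
                f.readFully(out);
                return out;
            }
        }
    }
    ```

    A true byte-by-byte backwards read isn't offered by the stream classes, but seeking and reading a small block from the end, as above, gives the same effect.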

  • Reading large binary files into an array for parsing

    I have a large binary log file, consisting of binary data separated by header flags scattered nonuniformly throughout the data. The file size is about 50 MB. When I read the file into an array, I get the LabVIEW memory-full error. The design is to read the file in and then parse it for the flags to determine where to separate the data blocks in the byte stream.
    There are a few examples that I have read on this site, but none seem to give a straight answer for such a simple matter. Does anyone have an example of how I should approach this?

    I agree with Gerd.  If you are working with binaries, why not use U8 instead of doubles.
    If the file is indeed 50MB, then the array should be expecting 52428800 elements, not 50000000. So if you read the file in a loop and populate an element at a time, you could run out of memory fast, because each element insertion above 50000000 may require an additional memory allocation (potentially on every iteration). This is just speculation, since I don't see the portion of your code that populates the array.
    Question:  Why do you need an array?  What do you do with the data after you read it?  I agree with Altenbach, 50MB is not that big, so working with a file of such a size should not be a problem.

  • Disk Utility: Creating a new blank image receiving "file too large" error.

    Hello All!
    I'm trying to create a 10GB non-encrypted, non-compressed RW blank image via the disk utility. The DU runs for a few minutes then barfs out "file too large" error. I have over 30GB free on my HDD. I tried with a smaller size of 6GB to no avail. Also tried unsuccessfully to create from a file (about 4 GB). My ultimate goal is to create a case-insensitive image to run an extremely important program needed for high priority work productivity (i.e. WoW). Thanks in advance for any advice! You will be my new best friend if you help me resolve this. =D
    Hollie
    "There are only 10 types of people in this world: Those who understand binary, and those who don't."

    Hi Hollie, and welcome to the forums!
    Have you created images before successfully?
    Is this to/on your boot drive, or an external drive?
    Have you done any Disk/OS maintenance lately?
    We might see if there are some big temp files left or such...
    How much free space is on the HD, where has all the space gone?
    OmniDiskSweeper is now free, and likely the best/easiest...
    http://www.omnigroup.com/applications/omnidisksweeper/
    WhatSize...
    http://www.macupdate.com/info.php/id/13006/
    Disk Inventory X...
    http://www.derlien.com/
    GrandPerspective...
    http://grandperspectiv.sourceforge.net/

  • I've just changed from a PC to a Mac and have a large number of downloaded WMA music files (not iTunes purchases) which I just can't get into my iTunes library. File converters don't seem to work either. Any ideas?

    I've just changed from a PC to a Mac and have a large number of downloaded WMA music files which I can't get into iTunes. When the library was in Windows, iTunes would convert the files automatically, but this doesn't happen now. I've downloaded a couple of file converters, but these don't seem to work either. Any ideas?

    iTunes for Windows (but not iTunes for Mac) can import and convert unprotected WMA tracks. If the tracks are protected by DRM (Digital Rights Management) then it will not accept them.
    One option would be to install iTunes on your PC, do the conversion, and then transfer the converted tracks from iTunes on your PC to iTunes on your Mac.

  • How can I convert binary file content to an XML message

    Dear friends,
    I poll a binary file from an FTP server, but the payload only includes the binary content; there is no XML structure in the payload. I would like to convert the binary content to an element node within an XML structure. How can I do that? Via content conversion?
    Thanks and regards,
    Bean

    Read the binary file stream using standard Java I/O functions and convert the stream to Base64 format. Then map this content to one of the fields in the target XML structure.
    You need a Java mapping for this.
    what is your target system?
    Thanks,
    Gujjeti.
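    The read-and-encode step described above can be sketched as below. This uses java.util.Base64 (Java 8+); in the actual PI/XI scenario the same logic would live inside the Java mapping, and reading the whole file at once assumes it fits in memory:

    ```java
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class ToBase64 {
        // Read the binary file and encode it as Base64 text, which can then
        // be placed in an element of the target XML structure.
        public static String encode(String path) throws Exception {
            byte[] raw = Files.readAllBytes(Paths.get(path));
            return Base64.getEncoder().encodeToString(raw);
        }
    }
    ```

    Base64 matters here because raw binary bytes are not generally legal XML character data, while the Base64 alphabet is.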
    Hi Gujjeti,
    Thanks a lot for your kind help, my target system is R/3.
    Can I achieve that with a UDF or a simple way?
    Regards,
    Bean

  • NT 4.0, LabVIEW 6, Error 4 (END OF FILE) when trying to seek to byte offset 4096 (from start of file) when the file is larger than 2 Gig

    If I try to seek (or read) with the position mode wired and set to START, I get error 4 (END OF FILE) if the file is larger than 2 GB. I'm only trying to move the file pointer 4096 bytes, not trying to seek or read more than 2 GB, but I get the error if the file is over 2 GB.
    Instead, I have to do reads, with the position mode unwired, until I get to the place in the file that I want to be.
    Is this expected behavior?

    Hello,
    LabVIEW File I/O functions use an I32 value to store the size of a file. This means that we are limited to ~2GB file sizes when using the File I/O functions in LabVIEW. This explains why you are seeing odd behavior when trying to read to the end of the file, since this is causing the byte count to exceed ~2GB.
    I hope this explanation sheds some light on the situation for you. Hopefully the workaround (repeated reads) is not too much of an inconvenience.
    Good luck with your application, and have a pleasant day.
    Sincerely,
    Darren Nattinger
    Applications Engineer
    National Instruments
    Darren Nattinger, CLA
    LabVIEW Artisan and Nugget Penman
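    The ~2GB ceiling follows directly from storing a byte count in a signed 32-bit integer: the largest representable value is 2^31 - 1 = 2,147,483,647, and anything past that wraps negative, which file functions then treat as an error. A one-line Java illustration of the same wrap:

    ```java
    public class I32Limit {
        // Truncating a 64-bit byte offset to a signed 32-bit value wraps
        // past 2^31 - 1, which is why positions beyond ~2GB misbehave in
        // I32-based file APIs.
        public static int asI32(long byteOffset) {
            return (int) byteOffset; // keeps only the low 32 bits, sign included
        }
    }
    ```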
