Somewhat large binary file has trouble being opened

Hi,
I'm a new LabVIEW user (LabVIEW 8.0), and I'm trying to convert a
binary file into a wave file and do a number of other things with that
binary file (such as analyzing its frequency content on a spectrum
graph). My VI works fine for files under 150 MB, but once the file is
larger than that, the computer slows way down and I cannot collect the
entire data set. I already have 1 GB of RAM (512 MB x 2) and I don't
want to spend more money if the issue can be fixed within LabVIEW.
Is there a way to split the binary file into smaller pieces as it is
being read, or is there another way to go about this? I'm using the
File Open dialog to read the file.
If anyone could provide a step-by-step solution to this, I'd be very
grateful. Thanks everyone.
Jennings

Thanks for the reply,
That would sort of work, but ultimately I need a graph that
contains the entire file. In other words, I need to convert the binary
file into a wave file that I can view as a graph. If I only open one
half, I would only see the graph of that first half, which is not
useful for the project. I need to view the wave file in its entirety
as a waveform graph, so is there any way to append the data from the
second half of the file to the first half?
I guess it comes down to the memory buffer size, right? LabVIEW tries
to load the entire file into RAM, but I need it to load a portion into
RAM, save that portion to the hard drive, load a second portion into
RAM, append that portion to the first portion on the hard drive, and
so forth.
Is there any function in LabVIEW that can do this? Or can I only
manipulate data in RAM, and not on the hard drive?
Thanks for any and all help!
Jennings
JLS wrote:
> Hello,
>  
> From your previous post, it sounds like you're able to open and read from the file. Can you try reading half (the Get File Size function will be helpful) and immediately writing that half to another file? If this works, you can try the same thing with the second half using the Set File Position function. You could repeat this process until you have files of a manageable size for your application/processing needs.
>  
> I hope this helps!
>  
> Best Regards,
>  
> JLS
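For readers following along outside LabVIEW, JLS's split-and-rewrite idea is just a chunked copy: never hold more than one buffer of the file in memory at a time. A minimal sketch in Java (the class name, the two-way split, and the 64 KB buffer are illustrative assumptions, not anything from the thread):

```java
import java.io.*;

// Split a large file into two halves by streaming fixed-size chunks,
// so only one 64 KB buffer ever sits in memory at once.
public class ChunkedSplit {
    static void splitInHalf(File src, File firstHalf, File secondHalf) throws IOException {
        long half = src.length() / 2;
        try (InputStream in = new BufferedInputStream(new FileInputStream(src))) {
            copyBytes(in, firstHalf, half);
            copyBytes(in, secondHalf, src.length() - half);
        }
    }

    // Copy exactly `count` bytes from the stream into `dest`, chunk by chunk.
    static void copyBytes(InputStream in, File dest, long count) throws IOException {
        byte[] buf = new byte[64 * 1024];
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream(dest))) {
            long remaining = count;
            while (remaining > 0) {
                int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                if (n < 0) break;       // unexpected end of file
                out.write(buf, 0, n);
                remaining -= n;
            }
        }
    }
}
```

The same pattern generalizes to any number of pieces; in LabVIEW terms, the equivalent is wiring a byte count into the file read so each loop iteration only pulls one chunk.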

Similar Messages

  • Convert Large Binary File to Large ASCII File

    Hello, I need some suggestions on how to convert a large binary file (> 200 MB) to an ASCII file. I have a program that streams data to disk in binary format, and now I would like to either add to my application or create a new app if necessary. Here is what I want to do:
    Open the binary file
    Read a portion of the file into an array
    Convert the array to ASCII
    Save the array to a file
    Go back and read more binary data
    Convert the array to ASCII
    Append the array to the ASCII file
    Keep converting until the end of the binary file.
    I should say that the binary data is 32 bits, and I do need to parse the data (bits 0-11, bits 12-23, and bits 31-28), but I can figure that out later. The problem I see is that the file will be very large, perhaps even greater than 1 GB, and I don't have a clue how to read a portion of the file, come back and read another portion, and then stop at the end of the file. I hope to save the data in a spreadsheet. If anyone has experience with a similar situation, I'd appreciate any input or example code.
    Thanks,
    joe

    sle,
    In the future, please create a new thread for unrelated questions.  To answer your question, you can use "Split Number" from the Data Manipulation palette.
    Message Edited by jasonhill on 03-14-2006 03:46 PM
    Attachments:
    split number.PNG ‏2 KB
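The eight steps joe lists map directly onto a read-parse-append loop. A hedged sketch in Java (joe's thread is about LabVIEW, so this only illustrates the control flow; the big-endian word order, the tab-separated output layout, and the exact bit positions are assumptions):

```java
import java.io.*;

// Read a binary file of 32-bit words chunk by chunk, split each word
// into three bit fields, and append the decoded values to a text file.
public class BinToAscii {
    static int fieldA(int word) { return word & 0xFFF; }          // bits 0-11
    static int fieldB(int word) { return (word >>> 12) & 0xFFF; } // bits 12-23
    static int fieldC(int word) { return (word >>> 28) & 0xF; }   // bits 28-31

    static void convert(File binary, File ascii) throws IOException {
        try (DataInputStream in = new DataInputStream(
                 new BufferedInputStream(new FileInputStream(binary)));
             PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(ascii)))) {
            while (in.available() >= 4) {   // stop at end of file
                int word = in.readInt();    // one 32-bit sample, big-endian
                out.println(fieldA(word) + "\t" + fieldB(word) + "\t" + fieldC(word));
            }
        }
    }
}
```

Because the buffered streams only hold a small window of the file, this handles a 1 GB input in constant memory; the tab-separated output imports straight into a spreadsheet.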

  • Read large binary file

    How would you read a large binary file into memory, please? I had a thought about creating a byte array on the fly; however, you cannot create a byte array with a long, so what happens when you reach the maximum value an int can store?

    a) You can map the file instead of reading it physically.
    b) Let's suppose you are running the Sun JVM on Windows 2000/XP/2003 and you have 4 GB of RAM in your machine (memory is so cheap nowadays...)
    - Windows cannot use the full 4 GB (it reserves 2 GB for itself, and if you buy a special server version, it reserves only 1 GB for itself).
    - Java can't use more than about 1.6 GB on Windows due to some arcane reasons. (Someone posted the Bug Database explanation in this forum.)
    So you have an upper limit of about 1.6 GB.
    1.6 GB is smaller than the maximum value of an int, so you could have an array that big (using the class java.nio.ByteBuffer and the like). Try and see.
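Option (a), mapping the file, might look like this in Java. The helper name `byteAt` and the single whole-file mapping are illustrative assumptions; a file larger than Integer.MAX_VALUE bytes would need one mapping per region, since a single MappedByteBuffer is limited to int-sized positions:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Map a file into memory with java.nio instead of reading it into a
// byte[]. The OS pages data in on demand, so the mapping itself costs
// almost no Java heap, and any offset can be read without a full read.
public class MappedRead {
    static byte byteAt(String path, long position) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            return map.get((int) position);   // random access, no copy of the file
        }
    }
}
```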

  • Any info on CRC, checksum, or other file integrity VIs for large binary files?

    Working on sending rather large binary files (U16 stream to file) via the internet. Would like to check file integrity via CRC or a comparable checksum. Would appreciate any comments/suggestions.

    Hi Brian,
    You said:
    "Would appreciate any comments/suggestions".
    You did not mention what transport mechanism you plan on using.
    As I understand it, ALL of the standard mechanisms use a CRC of some form to ensure the validity of the packet BEFORE it is ever passed up the OSI 7-layer model.
    TCP/IP-based protocols will see to it that all of the segments of a transfer are completed and in order.
    UDP, on the other hand, is a broadcast-type protocol and does not ensure any packets are received.
    So, at the very worst you should be able to handle your "sanity checks" by simply using a sequence value that is included in your outgoing message. The receiver should just have to check whether the current sequence value is equal to the previous one plus 1.
    I co-developed an app that utilized this technique to transfer status messages from an RT platform to a Windows machine. The status messages in this app were considered FYI, so the sequence counter served as a way of determining if anything was missed.
    I am interested in others' thoughts on this subject.
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction
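If the files travel over TCP as Ben describes, the transport already CRC-checks each packet, but an end-to-end whole-file checksum still catches truncation and disk errors. A minimal sketch using the JDK's stock CRC32 class (the 64 KB buffer size is an arbitrary choice): compute the value before sending, recompute on the receiving side, and compare.

```java
import java.io.*;
import java.util.zip.CRC32;

// Compute a CRC32 over a file of any size by feeding it to the
// checksum in fixed-size chunks, never loading the whole file.
public class FileCrc {
    static long crc32(File f) throws IOException {
        CRC32 crc = new CRC32();
        byte[] buf = new byte[64 * 1024];
        try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
            int n;
            while ((n = in.read(buf)) > 0) {
                crc.update(buf, 0, n);   // checksum accumulates chunk by chunk
            }
        }
        return crc.getValue();
    }
}
```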

  • How to fix: "Not Authorized: Sorry, this file has been previously opened in another machine"?

    I recently downloaded an online coursepack for grad school, and it is a password protected document that opens in Adobe Reader version 11 or higher. I updated my Adobe Reader to the proper version, typed in the appropriate password, unchecked "Enable global object security policy" in Edit ->Preferences->JavaScript, and was able to access the file initially. Then a week later I must have updated my Adobe Reader and I could no longer access the file. The error kept popping up, "Not Authorized: Sorry, this file has been previously opened in another machine."
    I never opened the file on any other machine, and even when I tried re-downloading the coursepack the error did not go away.
    How can I access my coursepack file? I have finals coming up and I need access to my coursepack!

    EJP wrote: 'Confirm that the file has been downloaded successfully from the servlet'. Well, actually it should have been: how can I let the servlet know that a client has downloaded the content in the servlet's output stream successfully? Anyway, I corrected the title to be more concise. Any idea how this can be done?
    Edited by: user12239927 on Nov 23, 2010 1:06 AM

  • Large binary file reading

    Hi,
    I'm currently using a java.io.BufferedInputStream to read a large binary file.
    I recently discovered there is a chunk of data that shows up near the end of each file (these files are binary and are XX to XXX MB in size).
    Loading it all into memory first would kill my performance, so I'd like to come up with an alternate method.
    Does Java or the class above offer a way I can
    1) get the length of the file
    2) seek to a point, say 2000 bytes from the end, so I can start reading the binary data?
    Ideally I'd like to do a backwards read, as that would be quickest. Is there a way to change the operation so that a 'read' reads backwards (from end to beginning)?
    For me, speed is the #1 thing I have to worry about, so being able to seek forward several hundred thousand bytes at a time would help tremendously.

    How does the 'skip' method work? Probably by using OS-specific calls to seek to a point in the file. Maybe I could 'skip' length - 20k from the start or something like that.
    Yep.
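Rather than `skip` on a stream, `java.io.RandomAccessFile` answers both of the original questions directly: it exposes the file length and an absolute `seek`. A small sketch (the helper name `lastBytes` is illustrative):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Read the last n bytes of a file without touching the rest of it:
// (1) ask for the total length, (2) seek near the end, then read.
public class TailRead {
    static byte[] lastBytes(String path, int n) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long len = raf.length();           // (1) total file size
            long start = Math.max(0, len - n);
            raf.seek(start);                   // (2) jump to an absolute offset
            byte[] out = new byte[(int) (len - start)];
            raf.readFully(out);
            return out;
        }
    }
}
```

A true backwards read is not something the API offers, but seeking to `len - 2000` and reading forward from there is effectively as fast, since only that tail region is ever fetched.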

  • Large txt files, any way to open them?

    Hi, I wonder if anyone has released an update for the iPod (30 GB photo) that allows the iPod to open TXT files larger than 7 KB.
    / Jonis

    http://www.versiontracker.com/dyn/moreinfo/vt3/16309

  • Large pdf files can't be opened in adobe reader on wp8

    I am using Adobe Reader on WP8, but it can't open a PDF file if the file is large (I am sure a PDF larger than 80 MB can't be opened); it says "the path is invalid". It works well with small PDF files.
    Any reply will be appreciated~

    WP8 = Windows Phone? If so, you'll want to go to the Digital Editions forum.

  • How to check whether the Application Server file has already been opened?

    Hi Experts,
    I have a query related to an Application Server file. I am using a multithreading concept to process data and write it into a single file.
    For example, I have 4 work processes. Each work process will process data, and whenever it has a record available it will access the file and write it directly.
    The problem is that the statements written in each work process are the same, and I want to check whether the file has already been opened or not.
    Thanks in advance!!!
    Thanks,
    Babu Kilari

    It depends on the structure, and on whether the data needs to be sorted in some way in the final file.
    In any case, I don't think there will be a significant performance difference between using OPEN DATASET again and getting funny with Unix commands.
    If you don't need to sort the final file, you can use strings to read, concatenate, and write the data even without line-based DO...ENDDO loops; this works pretty fast.
    I hope we are not talking about gigabytes of data.
    Thomas

  • UTL_FILE write_error when writing large binary files to unix os

    I am trying to write large files to a folder on Unix from a table containing a BLOB column. The procedure below is called by another procedure I have written to do this. It works fine in a Windows environment with files up to 360 MB. When I run this exact same procedure on Unix I get an initialization error. When I change the 'WB' in the fopen call to 'W' it works, and I can store all the files I want up to 130 MB in size. The next larger file I have is 240 MB, and it fails after writing the first 1 KB, raising the utl_file.write_error exception. If someone can help me diagnose the problem, I would really appreciate it; I have been trying everything I can think of to get this to work.
    Specifics: the Windows version is 10gR2; on Unix we are running 9iR2 on Sun Solaris 9.
    PROCEDURE writebin(pi_file_name IN VARCHAR2, pi_file_url IN VARCHAR2, pi_file_data IN BLOB)
    IS
      v_file_ref      utl_file.file_type;
      v_lob_size      NUMBER;
      v_raw_max_size  CONSTANT NUMBER := 32767;
      v_buffer        RAW(32767);
      v_buffer_offset NUMBER := 1;  -- position in stream
      v_buffer_length NUMBER;
    BEGIN
      -- 'WB' used in Windows environment, 'W' used on Unix
      v_lob_size := dbms_lob.getlength(pi_file_data);
      v_file_ref := utl_file.fopen(pi_file_url, pi_file_name, 'WB', v_raw_max_size);
      v_buffer_length := v_raw_max_size;
      WHILE v_buffer_offset < v_lob_size
      LOOP
        IF v_buffer_offset + v_raw_max_size > v_lob_size THEN
          v_buffer_length := v_lob_size - v_buffer_offset;
        END IF;
        dbms_lob.read(pi_file_data, v_buffer_length, v_buffer_offset, v_buffer);
        utl_file.put_raw(v_file_ref, v_buffer, TRUE);
        v_buffer_offset := v_buffer_offset + v_buffer_length;
      END LOOP;
      utl_file.fclose(v_file_ref);
    END writebin;
    Message was edited by:
    user599879

    Check whether this sample code helps (note: the loop condition below compares against the LOB length; comparing against the line size would mean the loop never runs) -
    CREATE OR REPLACE PROCEDURE prc_unload_blob_to_file IS
      vlocation      VARCHAR2(16) := 'LOB_OUTPUT';
      vopen_mode     VARCHAR2(16) := 'w';
      bimax_linesize NUMBER := 32767;
      v_my_vr        RAW(32767);
      v_start_pos    NUMBER := 1;
      v_output       utl_file.file_type;
    BEGIN
      FOR cur_lob IN (SELECT vmime_type,
                             blob_resim,
                             vresim,
                             dbms_lob.getlength(blob_resim) len
                        FROM tcihaz_resim a
                       WHERE rownum < 3 -- for test purposes
                       ORDER BY a.nresim_id) LOOP
        v_output := utl_file.fopen(vlocation, cur_lob.vresim, vopen_mode, bimax_linesize);
        dbms_output.put_line('Column length: ' || to_char(cur_lob.len) || ' for file: ' || cur_lob.vresim);
        v_start_pos := 1;
        IF cur_lob.len < bimax_linesize THEN
          dbms_lob.read(cur_lob.blob_resim, cur_lob.len, v_start_pos, v_my_vr);
          utl_file.put_raw(v_output, v_my_vr, autoflush => TRUE);
          dbms_output.put_line('Finished Reading and Flushing ' || to_char(cur_lob.len) || ' Bytes for file: ' || cur_lob.vresim);
        ELSE
          dbms_lob.read(cur_lob.blob_resim, bimax_linesize, v_start_pos, v_my_vr);
          utl_file.put_raw(v_output, v_my_vr, autoflush => TRUE);
          dbms_output.put_line('Finished Reading and Flushing ' || to_char(cur_lob.len) || ' Bytes for file: ' || cur_lob.vresim);
        END IF;
        v_start_pos := v_start_pos + bimax_linesize;
        WHILE (v_start_pos < cur_lob.len) LOOP
          -- loop till the entire LOB is fetched
          dbms_lob.read(cur_lob.blob_resim, bimax_linesize, v_start_pos, v_my_vr);
          utl_file.put_raw(v_output, v_my_vr, autoflush => TRUE);
          dbms_output.put_line('Finished Reading and Flushing ' || to_char(bimax_linesize + v_start_pos - 1) || ' Bytes for file: ' || cur_lob.vresim);
          v_start_pos := v_start_pos + bimax_linesize;
        END LOOP;
        utl_file.fclose(v_output);
        dbms_output.put_line('Finished successfully and file closed');
      END LOOP;
    END prc_unload_blob_to_file;
    set serveroutput on
    set timing on
    create or replace directory LOB_OUTPUT as '/export/home/oracle/tutema/';
    GRANT ALL ON DIRECTORY LOB_OUTPUT TO PUBLIC;
    exec prc_unload_blob_to_file;
    Column length: 3330 for file: no_image_found.gif
    Finished Reading and Flushing 3330 Bytes for file: no_image_found.gif
    Finished successfully and file closed
    Column length: 10223 for file: OT311.gif
    Finished Reading and Flushing 10223 Bytes for file: OT311.gif
    Finished successfully and file closed
    PL/SQL procedure successfully completed
    With 9iR2, PL/SQL can write binary files using the UTL_FILE put_raw function; prior to Oracle 9iR2 you would need to create an external procedure with Java, C, VB, or some other 3GL language.
    Some references -
    http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:6379798216275
    Oracle® Database PL/SQL Packages and Types Reference 10g Release 2 (10.2)
    UTL_FILE - http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14258/u_file.htm#sthref14095
    http://psoug.org/reference/dbms_lob.html
    Metalink Note:70110.1, Subject: WRITING BLOB/CLOB/BFILE CONTENTS TO A FILE USING EXTERNAL PROCEDURES

  • Reading large binary files into an array for parsing

    I have a large binary log file consisting of binary data separated by header flags scattered non-uniformly throughout the data. The file size is about 50 MB. When I read the file into an array, I get the LabVIEW memory-full error. The design is to read the file in and then parse it for the flags, to determine where to separate the data blocks in the byte stream.
    There are a few examples that I have read on this site, but none seem to give a straight answer for such a simple matter. Does anyone have an example of how I should approach this?

    I agree with Gerd. If you are working with binaries, why not use U8 instead of doubles?
    If the file is indeed 50 MB, then the array should be expecting 52,428,800 elements, not 50,000,000. So if you read the file in a loop and populate an element at a time, you could run out of memory fast, because any element insertion beyond the 50,000,000th may require an additional memory allocation of the full array size (potentially on each iteration). This is just speculation, since I don't see the portion of your code that populates the array.
    Question: why do you need an array? What do you do with the data after you read it? I agree with Altenbach: 50 MB is not that big, so working with a file of that size should not be a problem.
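The chunked alternative implied above can be illustrated outside LabVIEW as well. A hedged Java sketch that scans for a single-byte header flag without ever holding the whole 50 MB log in memory (the flag value 0xAA and the 1 MB chunk size are assumptions for illustration; a multi-byte flag would additionally need carry-over handling at chunk boundaries):

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Scan a large binary log for header-flag bytes chunk by chunk,
// recording the absolute offset of each flag so the data blocks
// between them can be processed later, one block at a time.
public class FlagScan {
    static List<Long> flagOffsets(File f, byte flag) throws IOException {
        List<Long> offsets = new ArrayList<>();
        byte[] buf = new byte[1024 * 1024];
        long base = 0;   // absolute offset of the start of the current chunk
        try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
            int n;
            while ((n = in.read(buf)) > 0) {
                for (int i = 0; i < n; i++) {
                    if (buf[i] == flag) offsets.add(base + i);  // block boundary
                }
                base += n;
            }
        }
        return offsets;
    }
}
```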

  • Large Binary file to array

    Hi,
    I'm relatively new to LabVIEW and am having trouble with large arrays. I'm using Win32 I/O functions to read an 8 MB file into LabVIEW, and this produces a large U8 array. I want to produce a binary array containing the second bit of each byte in this array. For example:
    File: 00000000 00000010 00000000 00000010 00000010
    Binary array: 01011
    At the moment I'm using a For Loop, and clearly it's very inefficient. Is there any way to do the above in a relatively short period of time?
    I used a subVI containing a For Loop and enabled reentrant execution, but this isn't very fast either.
    Any help would be greatly appreciated,
    John
    Attachments:
    image_bin.png ‏9 KB

    Hi John,
    You can use Boolean operators on integers; there is no need to convert to a Boolean array. To extract a bit, just AND with a constant that has the bit in question set. Also, you don't need the loop.
    See the attached example.
    The quotient & remainder is one way to scale the resulting array to 1; without it you'd get an array of 0s and 2s.
    Hope this helps,
    Daniel
    EDIT: Sorry for repeating 10degree, I was distracted while writing the message
    Message Edited by dan_u on 07-07-2009 03:52 PM
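Daniel's mask-and-scale trick translates to one line per byte in a textual language. A sketch in Java, purely for illustration (in LabVIEW the AND and the scaling apply to the whole array at once, with no explicit loop):

```java
// Extract the second bit (bit 1) of each byte: AND with 0x02 to mask
// the bit, then shift right once so the 0-or-2 result becomes 0-or-1
// (the equivalent of the quotient-and-remainder scaling).
public class SecondBit {
    static int[] secondBits(byte[] data) {
        int[] bits = new int[data.length];
        for (int i = 0; i < data.length; i++) {
            bits[i] = (data[i] & 0x02) >> 1;   // mask bit 1, scale 2 -> 1
        }
        return bits;
    }
}
```

On the poster's example bytes 00000000 00000010 00000000 00000010 00000010, this yields 0 1 0 1 1, matching the "01011" binary array he describes.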

  • .php files are not being opened within Mozilla FF

    Well, I'm learning PHP, and although a file whose code is shown at www.w3schools.com displays within Adobe Dreamweaver, it does not seem possible to open the same file in the browsers installed on my PC, including Mozilla Firefox. It seems this problem was noted by other people in the past; however, I came across no solution.
    '''index.php'''
    ''<html>
    <body>
    <?php
    echo addcslashes("failure","|");
    ?>
    </body>
    </html>''
    PHP 5 is installed on my Win XP 5 laptop. Any ideas please?

    Sorry, I think the problem is simply that Adobe Dreamweaver is set as the default .php file viewer.

  • Large Photoshop file has caused system slow down.

    I have a Dual 2.0 GHz G5 which has worked very well since I got it last spring. Just recently, however, I've had some slowdowns. There are a number of things that I've noticed, such as a more frequent spinning beachball and font redraw issues.
    These issues started after I worked on a massive Photoshop file. I was doing work on a trade show booth which eventually resulted in a 10 GB Photoshop file! I realize I probably should have taken steps to reduce the size of the file, but that's neither here nor there. Despite the size of the file, I was still quite impressed with the performance of my machine. That said, I've found the general performance of the machine since working on this file to be diminished.
    I've done all of the typical maintenance items, such as repairing the disk and permissions via Disk Utility, clearing caches, and running maintenance scripts using Cache Out X, MacJanitor, and Tiger Cache Cleaner, but still the performance of my machine has taken a hit.
    It's only since this happened that I've started launching Activity Monitor to try to determine whether there is something obvious going on that is hogging resources. This has raised the question of what is "normal" for some common tasks. Presently kernel_task is using under 5% of my CPU with 50 threads running, which seems like a reasonable number. But it is also using almost 110 MB of RAM and 1.35 GB of virtual memory. Both of those numbers seem quite high to me, but I admit that I never looked at them before, so I have no gauge as to whether they can be considered normal.
    By all measures the machine still performs pretty well, but as I use this machine every day I know that it's not the same as it was before working on that large file. Is there something else related to Photoshop, temporary files, application caches, or anything like that, that I should take a look at?
    Any input would be appreciated.
    Thanks!
    Dual 2.0Ghz G5   Mac OS X (10.4.4)  

    Hi Troy,
    First I would try using a separate HD for the scratch disk. As PS tries to read and write, it has to compete with the OS as it hits its swap files. If Version Cue is on, turn it off. Get rid of any virus-checking apps. Check your RAM. If you haven't allowed Tiger to run its cron scripts on a regular basis, and only do periodic maintenance, I would take the time to do an Archive and Install, or even wipe the HD, zero it, and install Tiger again. This, I realize, is a drastic measure: time-consuming, and a pain. But if your whole system is slowing down? Or, if you get a new drive, set it up to be your primary drive first and test your system on it before moving any of your PS files over from the old HD. Use the old HD as the scratch disk.
    Have you included the use of other utilities to check your HD? Drive Genius, DiskWarrior, TechTool Pro. No Norton utilities allowed. Hope this helps, as there are a lot of places to look for the problem.
