Logical block (LBA) information of a file

*How can we find the Logical Block Address (LBA) of a file's blocks? We need to find out which blocks are used to store a file on disk.*

/*
 * log2phys.c - this Mac OS X program attempts to provide a physical disk
 * map for the specified file(s).
 *
 * This is an EXPERIMENT! I think the code does what it says,
 * but I have not verified the results.
 *
 * Bob Harris 14-Apr-2010 o initial coding
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

#define STRIDE (4096)           /* ASSUMES the allocation unit is 4K */

void log2phys(char *file);

int
main(int ac, char **av)
{
    int n;

    if (ac < 2) {
        fprintf(stderr, "Usage: log2phys file [file ...]\n");
        exit(EXIT_FAILURE);
    }
    for (n = 1; n < ac; n++) {
        log2phys(av[n]);
    }
    exit(EXIT_SUCCESS);
}

void
log2phys(char *file)
{
    int fd;
    int sts;
    off_t offset;
    off_t previous;
    off_t length;
    struct log2phys phys;

    printf("%s\n", file);               /* display the file name */

    fd = open(file, O_RDONLY);          /* open the file */
    if (fd < 0) {
        fprintf(stderr, "open(%s,O_RDONLY)", file);
        perror(" ");
        exit(EXIT_FAILURE);
    }

    /*
     * Seek through the file 4K at a time, and obtain the disk offset
     */
    sts = 0;
    previous = (off_t)-1;
    length = 0;
    for (offset = 0; sts >= 0; offset += STRIDE) {
        /* position to the next offset in the file */
        if (lseek(fd, offset, SEEK_SET) < 0) {
            fprintf(stderr, "lseek(%d, %lld, SEEK_SET)", fd,
                    (long long)offset);
            perror(" ");
            exit(EXIT_FAILURE);
        }
        /* fetch the current physical location for this file offset */
        sts = fcntl(fd, F_LOG2PHYS, &phys);
        if (sts < 0 && errno == ERANGE) {
            /* we have gone past the end of the file */
            break;
        } else if (sts < 0) {
            fprintf(stderr, "fcntl(%d, F_LOG2PHYS, &phys)", fd);
            perror(" ");
            exit(EXIT_FAILURE);
        }
        /*
         * Figure out if this is a non-contiguous allocation unit
         */
        if (previous + length != phys.l2p_devoffset) {
            if (length != 0) {
                /*
                 * We have accumulated some length from the previous run of
                 * allocation units, so display the length of the previous
                 * run before the new starting offset.
                 */
                printf(" length= %11lld\n", (long long)length);
                length = 0;
            }
            /* display the offset of this new run of allocation units */
            printf(" file_offset= %10lld volume_offset= %17lld",
                   (long long)offset, (long long)phys.l2p_devoffset);
            /* save the new previous starting physical offset */
            previous = phys.l2p_devoffset;
        }
        /* count this allocation unit as part of the length */
        length += STRIDE;
    }

    /* print the final length */
    if (length) {
        printf(" length= %11lld\n", (long long)length);
    }
    close(fd);
}

Similar Messages

  • ORA-27046: file size is not a multiple of logical block size

    Hi All,
    Getting the below error while creating the control file after a database restore. Permissions and ownership of the CONTROL.SQL file are 777 and ora<sid>:dba.
    ERROR -->
    SQL> !pwd
    /oracle/SID/sapreorg
    SQL> @CONTROL.SQL
    ORACLE instance started.
    Total System Global Area 3539992576 bytes
    Fixed Size                  2088096 bytes
    Variable Size            1778385760 bytes
    Database Buffers         1744830464 bytes
    Redo Buffers               14688256 bytes
    CREATE CONTROLFILE SET DATABASE "SID" RESETLOGS  ARCHIVELOG
    ERROR at line 1:
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    '/oracle/SID/sapdata5/p11_19/p11.data19.dbf'
    ORA-27046: file size is not a multiple of logical block size
    Additional information: 1
    Additional information: 1895833576
    Additional information: 8192
    Checked in target system init<SID>.ora and found the parameter db_block_size is 8192. Also checked in source system init<SID>.ora and found the parameter db_block_size is also 8192.
    /oracle/SID/102_64/dbs$ grep -i block initSID.ora
    Kindly look into the issue.
    Regards,
    Soumya

    Please check the following things:
    1. SPfile corruption:
    Start the DB in nomount using the pfile (i.e. init<sid>.ora), create the spfile from the pfile, then restart the instance in nomount state.
    Then create the control file from the script.
    2. Check the ulimit of the target server; the filesize parameter for ulimit should be unlimited.
    3. Has the db_block_size parameter been changed in the init file by any chance?
    Regards
    Kausik

  • DB Cloning.file size is not a multiple of logical block size

    Dear All,
    I am trying to create a database on Windows XP from the database files of a database running on Linux.
    When I try to create the control file, I get the following errors.
    ORA-01503: CREATE CONTROLFILE failed
    ORA-01565: error in identifying file
    'D:\oracle\orcl\oradata\orcl\system01.dbf'
    ORA-27046: file size is not a multiple of logical block size
    OSD-04012: file size mismatch (OS 367009792)
    Please tell me the workarounds.
    Thanks
    Sathis.

    Hi ,
    I created the database service with oradim. Now I am trying to create the control file after editing the control file script with the locations of the Windows datafiles (copied from Linux).
    Thanks,
    Sathis.

  • Buffer I/O error on device hda1, logical block 81934

    I am getting the following error message when I session into the CUE of the UC520. All advice is appreciated. I read the similar post; however, I am not given any prompts to make a selection.
    Buffer I/O error on device hda1, logical block 81934
    Processing manifests . Error processing file exceptions.IOError [Errno 5] Input/output error
    . . . . . . Error processing file zlib.error Error -3 while decompressing: invalid distances set
    . . . . . . . . Error processing file zlib.error Error -3 while decompressing: invalid distances set
    . . complete
    ==> Management interface is eth0
    ==> Management interface is eth0
    malloc: builtins/evalfile.c:138: assertion botched
    free: start and end chunk sizes differ
    Stopping myself.../etc/rc.d/rc.aesop: line 478:  1514 Aborted                 /bin/runrecovery.sh
    Serial Number:
    INIT: Entering runlevel: 2
    ********** rc.post_install ****************
    INIT: Switching to runlevel: 4
    INIT: Sending processes the TERM signal
    STARTED: cli_server.sh
    STARTED: ntp_startup.sh
    STARTED: LDAP_startup.sh
    STARTED: SQL_startup.sh
    STARTED: dwnldr_startup.sh
    STARTED: HTTP_startup.sh
    STARTED: probe
    STARTED: superthread_startup.sh
    STARTED: ${ROOT}/usr/bin/products/herbie/herbie_startup.sh
    STARTED: /usr/wfavvid/run-wfengine.sh
    STARTED: /usr/bin/launch_ums.sh
    Waiting 5 ...Buffer I/O error on device hda1, logical block 70429
    Waiting 6 ...hda: no DRQ after issuing MULTWRITE
    hda: drive not ready for command
    Buffer I/O error on device hda1, logical block 2926
    Buffer I/O error on device hda1, logical block 2927
    Buffer I/O error on device hda1, logical block 2928
    Buffer I/O error on device hda1, logical block 2929
    Buffer I/O error on device hda1, logical block 2930
    Buffer I/O error on device hda1, logical block 2931
    Buffer I/O error on device hda1, logical block 2932
    Buffer I/O error on device hda1, logical block 2933
    Buffer I/O error on device hda1, logical block 2934
    REISERFS: abort (device hda1): Journal write error in flush_commit_list
    REISERFS: Aborting journal for filesystem on hda1
    Jun 17 16:36:11 localhost kernel: REISERFS: abort (device hda1): Journal write error in flush_commit_list
    Jun 17 16:36:11 localhost kernel: REISERFS: Aborting journal for filesystem on hda1
    Waiting 8 ...MONITOR EXITING...
    SAVE TRACE BUFFER
    Jun 17 16:36:13 localhost err_handler:   CRASH appsServices startup startup.sh System has crashed. The trace buffer information is stored in the file "atrace_save.log". You can upload the file using "copy log" command
    /bin/startup.sh: line 262: /usr/bin/atr_buf_save: Input/output error
    Waiting 9 ...Buffer I/O error on device hda1, logical block 172794
    INIT: Sending processes the TERM signal
    INIT: cannot execute "/etc/rc.d/rc.reboot"
    INIT: no more processes left in this runlevel

    The flash card for the CUE might be corrupt. Try reinstalling CUE and restoring from backup to see if that fixes it. If it doesn't, try a different flash card.
    Cole

  • OSD-04001: invalid logical block size (OS 2800189884)

    My Windows 2003 crashed which was running Oracle XE.
    I installed Oracle XE on Windows XP on another machine.
    I copied my D:\oracle\XE10g\oradata folder from the Win2003 machine to the same location on the WinXP machine.
    When I start the database in WinXP using SQLPLUS i get the following message
    SQL> startup
    ORACLE instance started.
    Total System Global Area 146800640 bytes
    Fixed Size 1286220 bytes
    Variable Size 62918580 bytes
    Database Buffers 79691776 bytes
    Redo Buffers 2904064 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    In my D:\oracle\XE10g\app\oracle\admin\XE\bdump\alert_xe log I found the following errors:
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 4 shared server(s) ...
    Oracle Data Guard is not available in this edition of Oracle.
    Wed Apr 25 18:38:36 2007
    ALTER DATABASE MOUNT
    Wed Apr 25 18:38:36 2007
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Wed Apr 25 18:38:36 2007
    ORA-205 signalled during: ALTER DATABASE MOUNT...
    ORA-00202: control file: 'D:\ORACLE\XE10G\ORADATA\XE\CONTROL.DBF'
    ORA-27047: unable to read the header block of file
    OSD-04001: invalid logical block size (OS 2800189884)
    Please help.
    Regards,
    Zulqarnain

    Hi Zulqarnain,
    Error OSD-04001 is a Windows-specific Oracle message. It means that the logical block size is not a multiple of 512 bytes, or that it is too large.
    So what can you do? You should try changing the value of DB_BLOCK_SIZE in the initialization parameter file.
    Regards

  • Buffer I/O error on device sdd1, logical block, HDD failure? [SOLVED]

    Hello!
    I'm a bit puzzled here, to be honest. Granted, I'm not using Linux as much as I used to (not since Windows 7). I have Arch Linux running on my HTPC. I never had any issues this severe before, unless I upgraded and forgot to read the news section. I booted the HTPC today to be greeted by "Buffer I/O error on device sdd1, logical block" with a massive wall of text, and a few seconds later, "welcome to emergency mode."
    *This is NOT the hdd the Linux kernel resides on. What logical purpose would it serve for the kernel/userspace to abort everything just because fsck fails or something? If this were indeed my Linux partition, I would fully understand.
    Anyway, I used Parted Magic and ran fsck and SMART. Sure enough, fsck warned me about a bad/missing superblock. I restored the superblock using e2fsck. I had over 10,000 "incorrect size" chunks. I ran 2-3 SMART tests after that. fsck says okay; SMART gives a 100% status report with no errors.
    Oh yeah, I have turned off fsck completely in my fstab; I'm thinking about at least turning it on for my bigger hdds.
    Questions:
    *Is SMART reliable? If it says it's alright, does that mean I'm safe? Would physically broken sectors turn up in SMART?
    *I know SMART warns the user in Windows 7 if hdd failure is imminent. Is this possible within Linux as well? Since I'm NOT using a GUI, is it possible to send this through a terminal/email?
    *Sometimes the HTPC has been forcefully shut down (power breakage); could this be one of the causes of the I/O error?
    As always, thank you for your support.
    Last edited by greenfish (2013-10-23 13:23:21)

    graysky wrote:Any reallocated sectors in smartmontools?  If you run 'e2fsck -fv /dev/sdd1' does it complete wo/ errors?  Probably best to repeat for all linux partitions on that disk.
    Sorry for the late reply, guys. I've been busy with my other hdd, which decided to screw with me. e2fsck first complained about bad sectors and wrong sizes. Now it says all clean. I've decided to remove this HDD from the server and mark it "damaged".
    Thank you again for your help
    alphaniner wrote:
    greenfish wrote:*Is SMART reliable? If it says it's alright, does that mean I'm safe? Would physically broken sectors turn up in SMART?
    *I know SMART warns the user in Windows 7 if hdd failure is imminent. Is this possible within Linux as well? Since I'm NOT using a GUI, is it possible to send this through a terminal/email?
    1) Don't trust the 'SMART overall-health self-assessment test result', run the diagnostics (short, long, conveyance, offline). The short and conveyance tests are quick so start with them. If they both pass run the long test. The offline test is supposed to update SMART attributes, but it generally takes longer than the long test, so save it for last if at all. Usually when I see bad drives the short or long tests pick them up.
    2) Look into smartd.service.
    greenfish wrote:What logical purpose would it serve for the kernel/userspace to abort everything just because fsck fails or something?
    Systemd craps itself if an fs configured to mount during boot can't be mounted, even if the fs isn't necessary for the system to boot. Not sure how it handles fsck failures. This 'feature' can be disabled by putting nofail in the fstab options. I add it to every non-essential automounting fs.
    Thank you for the useful information. I will save this post for future references.
    Will definitely look into smartd.service, especially since I have so much data running 24/7.
    Will also update my fstab with "nofail" as you suggested.
    Thank You!

  • The IO operation at logical block address # for Disk # was retried

    Hello everyone,
    A warning appears in the system log:
    ===
    Log Name:      System
    Source:        disk
    Date:          2/20/2013 1:00:28 PM
    Event ID:      153
    Task Category: None
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      STRANGE.aqa.com.ru
    Description:
    The IO operation at logical block address af7ff for Disk 7 was retried.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="disk" />
        <EventID Qualifiers="32772">153</EventID>
        <Level>3</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2013-02-20T09:00:28.199176700Z" />
        <EventRecordID>12669</EventRecordID>
        <Channel>System</Channel>
        <Computer>STRANGE.aqa.com.ru</Computer>
        <Security />
      </System>
      <EventData>
        <Data>\Device\Harddisk7\DR142</Data>
        <Data>af7ff</Data>
        <Data>7</Data>
        <Binary>0F01040003002C00000000009900048000000000000000000000000000000000000000000000000000020828</Binary>
      </EventData>
    </Event>
    ===
    This warning occurred in several seconds after the Windows Server Backup start. Our backup job finishes successfully. That server is in provisioning without a heavy workload, and we have not experienced any problem yet. But we do not want to face any problems
    due to this error in the production environment.
    All disks of the server are managed by the LSI MegaRAID controller, which doesn’t report any errors in the disk system.
    It is Windows Server 2012 with the latest updates.

    Wow, I have been having the exact same problems with Server 2012 WSB. I thought I had it resolved, but it started acting up again. I tried 3 different external hard drives thinking they might be the problem. The RAID array also seems fine; it is not giving me any errors, no amber lights.
    If I run a backup of the system state + Hyper-V, it fails 9/10 of the time on the host component. I have posted everywhere and cannot find anything. These are the events during any backup I run.
    Source: Disk
    Event ID: 153
    The IO operation at logical block address 10a58027 for Disk 5 was retried.
    Source: VOLSNAP
    Event ID: 25
    The shadow copies of volume C: were deleted because the shadow copy storage could not grow in time.  Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.
    Source: Filter Manager
    Event ID: 3
    Filter Manager failed to attach to volume '\Device\HarddiskVolume109'.  This volume will be unavailable for filtering until a reboot.  The final status was 0xC03A001C.
    Source:  VOLSNAP
    Event ID: 27
    The shadow copies of volume \\?\Volume{a21d0bb7-7147-11e2-93ed-842b2b0982fe} were aborted during detection because a critical control file could not be opened.
    Source:  VHDMP
    Event ID: 129
    Reset to device, \Device\RaidPort4, was issued.
    Source:  VOLSNAP
    Event ID: 25
    The shadow copies of volume G: were deleted because the shadow copy storage could not grow in time.  Consider reducing the IO load on the system or choose a shadow copy storage volume that is not being shadow copied.
    Windows backup gives me various errors for what did not backup.  Mainly this one:
    Error in backup of C:\ during enumerate: Error [0x80070003] The system cannot find the path specified.
    Application backup
    Writer Id: {66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}
       Component: Host Component
       Caption     : Host Component
       Logical Path: 
       Error           : 8078010D
       Error Message   : Enumeration of the files failed.
       Detailed Error  : 80070003
       Detailed Error Message : (null)
    Not just the host component, sometimes the entire C: ...
    So no one has any recommendations on fixing this?
    Is anyone running Dell AppAssure? I have two servers backing up to this server with Dell AppAssure. Then I am using WSB to back up this machine's OS and 1 Windows 7 VM.

  • How to change the replication group information after db files are created

    Since group information is persisted in the database, I am wondering if there is a way to update the information.
    We want to implement some kind of Berkeley DB master relay mechanism for our two data centers, which have a slow link in between. Basically, have one master populate a database file, then launch a node in each data center as a master to replay that file to the other nodes of its own group. It will be much more efficient this way, since we don't have to copy the data multiple times over the slow link.
    We periodically (once a day) update the Berkeley DB content from a customer's feed on a backend node and upload (rsync) the Berkeley DB file to the two data centers. We would like to have a master node in each data center read the pre-populated data file and replicate the changes to the (read-only) web nodes while they are still running. I simulated this locally, and if I trick the nodeName and nodeHostPort settings it should work (basically, faking the replication nodes on the backend node using a tampered hosts file so they get registered). However, it is not very convenient and is definitely a dangerous hack on the production servers.
    If there were a way, after creation, to update the group information (for example, change all the node information) without corrupting the log file/replication stream, it would be much easier for us. Basically, we would like to have the node/group information and the data file de-coupled.
    Any ideas how to do that, or is there a better way to design such a replay of data using Berkeley DB?
    Thanks in advance!

    2. You mentioned not to clean up the log files. Is there a point where I can safely call cleanup on the environment while BDB is still online? I can imagine we will run out of space very soon if we don't clean up.
    The approach outlined above (steps 1 to 5) will ensure that no log files are deleted on A while you are updating B and C. The use of DbBackup ensures this. For more information on how this works, see the DbBackup javadoc.
    Whether this causes you to run out of disk space on A is something you'll have to evaluate for yourself. It depends on the write rate on A and how long it takes to do the copy to B and C. If this is a problem, you could make a quick local copy of the environment on A, and then transfer that copy to B/C. But you must prohibit log file deletion during the copy, using DbBackup, or the copy will be invalid.
    You should perform explicit JE log cleaning (including a checkpoint) before doing the copy to B/C. This will reduce the number of files that are copied to B/C, and will reduce the likelihood that you'll fill the disk on A. See the javadoc for Environment.cleanLog for details on how to do an explicit log cleaning.
    In your earlier post, it sounded like the updates to A were in batch mode -- done all at once at a specific time of day. If so, you can do the copy to B/C after the update to A. In that case, I don't understand why you are afraid of filling the disk on A, since updates would not be occurring during the copy to B/C.
    --mark

  • Unable to load new information from configuration file /var/ldap/ldap_clien

    Hi all,
    When I run the command "ldapclient init", I got the error message:
    # ldapclient init -a proxyDN=cn=proxyagent,ou=profile,dc=example,dc=ca -a domainName=example.ca -a profileName=UserProfile -a proxyPassword=pwd 10.1.10.50
    Unable to load new information from configuration file '/var/ldap/ldap_client_file' ('Unable to open filename '/var/ldap/ldap_client_file' for reading (errno=2).').
    Any idea?
    Thanks a lot for your help!

    Does the profile UserProfile exist on your LDAP server?
    Do the logs on your LDAP server show access problems?
    Try using -v to get more verbose output

  • How can I see the information about a file used in a sequence?

    How can I see the information about a file used in a sequence?

    You can use pretty much any two or multiple button mouse on a Mac, right out of the box.
    I use this one: http://www.apple.com/mightymouse/

  • I can no longer edit information for streaming files in Get Info.

    I can no longer edit information for streaming files in Get Info.

    Similar problem here. My iCal refuses to edit or delete events. Viewing is possible, though sometimes the whole screen turns grey. Adding new events from Mail is still possible. The task pane completely disappeared. My local Apple technical centre messed about with Disk Utility for a bit and then told me to reinstall Leopard. I could of course do that, but it seems to me that reinstalling Leopard just to fix iCal events is a bit invasive.
    I also tried removing everything, installing a new copy of iCal from the Leopard CD, and software updates, all to no avail.
    At the moment I'm open to all suggestions that do not include a complete leopard reinstall.

  • Can I recover my data information from excel file?

    For some reason I lost (cleared all) my data. Is there any option to import the data from an Excel file into the view-responses sheet? Or how can I recover my data?
    Antonio

    If you had saved your Excel file, then you may revert the Excel file to the last saved version. Follow below steps to do this:
    On the File tab, click Open.
    Double-click the name of the file that you have open in Excel.
    Click Yes to reopen the Excel file.
    If you had not saved Excel file, then follow below steps to recover your file.
    Click the File tab.
    Click Recent.
    Click Recover Unsaved Workbooks.

  • Information Broadcast Email File - Where is the file in Unix directory?

    Hi,
    I am using Information Broadcasting to email the BI report.   We use Unix as the application server.
    In transaction SOSV, I can get detailed information on the file (attachment) and the send status.  There is a requirement to encrypt the file in Unix server.   Does anyone know where the file resides in Unix when the report is generated from Information Broadcasting?   What's the Unix directory?
    Any information would be appreciated.
    Thank you.
    Rebecca

    I added the path of the servlet.jar to the CLASSPATH. Now my modified
    file content is as follows:
    WINDOWS 2000 Environmental variable changes:(CLASSPATH)
    C:\Program Files\PhotoDeluxe BE 1.1\AdobeConnectables;%JAVA_HOME%\bin;%JAVA_HOME%\jre\bin;C:\jakarta-tomcat-4.1.12\common\lib\servlet.jar
    AUTOEXEC.BAT:
    set path=%path%;c:\onnet32
    SET PATH=C:\jdk1.3.1\bin;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;
    SET JAVA_HOME=C:\jdk1.3.1
    SET CATALINA_HOME=C:\jakarta-tomcat-4.1.12
    SET CLASSPATH=.;%JAVA_HOME%\bin;%JAVA_HOME%\jre\bin;C:\jakarta-tomcat-4.1.12\common\lib\servlet.jar
    With this I am executing the command: javac test.java. However, it is still giving a lot of error messages.
    All the messages are 'cannot resolve symbol' errors associated with the servlet classes (Servlet, ServletRequest, ServletException etc.). The very first error message states that 'package javax.servlet does not exist'.
    I would appreciate any help. Thanks in advance.

  • How to download the blocked ALV output to PDF file.

    How to download the blocked ALV output to a PDF file.
    I am able to download the blocked ALV output in PDF format,
    but each block in the ALV is displayed on a different page of the PDF.
    In my report I have 4 blocks on 1 page; I am able to see the output in PDF, but on different pages.
    How to avoid the page breaks in the PDF?
    Thanks,
    Ravi Yasoda.

    hi,
    I believe that you have 4 containers on the screen, each with an individual ALV display. In this case, there is no way to get a combined PDF output, to my knowledge.
    However, you can use a Smartform/SAPscript as the output, which would allow you to display the ALV in blocks and also print it as one.
    Regards,
    Nirmal

  • Including scaling information in binary file, which can be plotted in graph

    I need to plot a graph from a bin file. I am able to plot the graph using the Read Binary File VI, but the scaling is nowhere close to what it needs to be.
    It is in auto-scaling mode; if I set the values manually by giving maximum and minimum values, the graph disappears.
    I believe it has something to do with a missing header, not included in the binary file as of now.
    Can anyone help? I need to know how to add a header to the binary file so that it includes the information about the scales or units of measurement of the data which has to be plotted.

    Hi Shrikant,
    you include "header information" the same way as you include "data" in your binary file: You simply write that information to the file...
    When trying to plot "the file" you read both parts (header and data), one after the other!
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome
