Efficient way to write to file

Hi!
I'm trying to write a large number of lines to a file.
The writes happen inside a while loop.
I'm using a

BufferedWriter out = new BufferedWriter(
                        new OutputStreamWriter(
                        new FileOutputStream(
                        new File("file.txt"))));

But it seems that it uses a lot of memory and it does not write to the file until it gets out of the while statement... Any suggestions?
Thank you

Well, let me give you a sample of my code.
I'm connecting to a database and getting some values:

rs1 = statement1.executeQuery(query);
while (rs1.next()) {
    // doing something
    String a = ...
    String b = ...
    String c = ...
    String temp = a.concat(" :: ").concat(b).concat(" :: ").concat(c);
    out.write(temp);
    out.newLine();
}
out.close();

Do you think a flush could work, or should I do something else?
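A periodic flush would keep the writer's buffer from holding everything until close. A minimal sketch of that idea, assuming out is the BufferedWriter built above and rs1 the JDBC ResultSet (the column indexes are placeholders):

int rows = 0;
while (rs1.next()) {
    String a = rs1.getString(1); // placeholder column indexes
    String b = rs1.getString(2);
    String c = rs1.getString(3);
    out.write(a + " :: " + b + " :: " + c);
    out.newLine();
    if (++rows % 1000 == 0) {
        out.flush(); // push buffered data to the file periodically
    }
}
out.flush();
out.close();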

Similar Messages

  • Most efficient way to consume log files

    Hello everyone,
    I've been absent from the forums for awhile but I'm back at it now... 
    I have a question about the most efficient way to consume log files. I read in PowerShell in Action by Bruce Payette that using a switch statement with a regex works pretty well; that being said, I haven't tried it yet. Select-String is working pretty well for me, but I have about 10 different entry types that I need to search logs for every 5 minutes, and I'm scanning about 15 GB of logs at every interval. Anyway, if anyone has information about how to do something like that as quickly as possible, I'd appreciate it.
    1.  piping log files that meet my criteria to select-string
       - This seems to work well but I don't like searching the same files over and over again
    2. running logs through get-content and then building a filter statement
      - This is ok but it seems to use up a fair bit of memory
    3. Some other approach that I haven't thought of yet.
    Anyway, I know this is a relatively nebulous question, sorry about that. I'm hoping that someone on here knows a really good way to find strings in log files quickly.
    Hope that helps! Jason

    You can sometimes squeeze out more speed at the expense of memory usage, but filters are pretty fast. I don't see a benefit to holding the whole file in memory, in this case.
    As I mentioned earlier, though, C# code will usually blow PowerShell away in terms of execution time.  Here's a rewrite of what I just did (just for the INI Section pattern, to keep the post size down):
    $string = @'
    #Comment Line
    [Ini-Style Section Line]
    Key = Value Line
    192.168.0.1 localhost
    Some line that doesn't match anything.
    '@

    Set-Content -Path .\test.txt -Value $string

    Add-Type -TypeDefinition @'
    using System;
    using System.Text.RegularExpressions;
    using System.Collections;
    using System.IO;

    public interface ILineParser
    {
        object ParseLine(string line);
    }

    public class IniSection
    {
        public string Section;
    }

    public class IniSectionParser : ILineParser
    {
        public object ParseLine(string line)
        {
            object o = null;
            Match match = Regex.Match(line, @"^\s*\[([^\]]+)\]\s*$");
            if (match.Success)
            {
                o = new IniSection() { Section = match.Groups[1].Value };
            }
            return o;
        }
    }

    public class LogParser
    {
        public static IEnumerable ParseFile(string fileName, ILineParser[] lineParsers)
        {
            using (StreamReader sr = File.OpenText(fileName))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    foreach (ILineParser parser in lineParsers)
                    {
                        object result = parser.ParseLine(line);
                        if (result != null)
                        {
                            yield return result;
                        }
                    }
                }
            }
        }
    }
    '@

    $parsers = @(
        New-Object IniSectionParser
    )
    $results = [LogParser]::ParseFile("$pwd\test.txt", $parsers)
    $results
    Instead of defining separate classes for each type of line and output object, you could probably do something more generic with delegates (similar to how I used ScriptBlock.Invoke() in the PowerShell example), but it might sacrifice some speed to do so.

  • Most efficient way to load XML file data into tables

    I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way will be better:
    1) Use SQL Loader to actually load directly into the 7-8 tables directly by modifying the control card.
    Is this really possible and feasible? I am not even sure about it
    2) Load data as XML Type in a table and register it. Then extract from there to load into various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

    Yes, it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for, and so it is not recommended. You also don't need to register a schema just to load/store/parse XML in the DB.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options, reading XML in from disk, and parsing XML. They should also give you options to consider. Without knowing more about your requirements for the effort, it is difficult to give specific advice.
    Maybe your 7-8 tables don't exist yet, so using Object Relational Storage for the XML would be the best solution, as you can query/update the tables that Oracle creates based on the schema associated with the XML. Maybe an External Table definition works better for reading the XML into the system because this process will happen just once. Maybe using WebDAV makes more sense for loading the XML to be parsed (I don't have much experience with this, I just know it is possible from what I've read on the forums).
    Also, your version makes a difference, as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    A great example, see the answers by mdrake in {thread:id=1096784}

  • What's the most efficient way to serve a file from a servlet?

    I have a servlet that does various different things depending on the needs. Sometimes it dynamically generates content, and sometimes all it does is send a file out, with no alteration.
    What is the most efficient way to just send a file?
    One option:

    OutputStream os = response.getOutputStream();
    InputStream is = new FileInputStream(...);
    (send all the bytes from is to os, the regular way using a buffer)

    Another option is to say:

    RequestDispatcher rd = request.getRequestDispatcher(fileName);
    rd.forward(request, response);

    Any other options? What's the preferred way of doing this?
    I know the rule of "don't optimize too early", but this is a situation where we need to get the maximum number of files served with the hardware we have, and it's going to be a lot of static files, so efficiency is important.
    Thanks

    OK, that's what I thought. It would be nice if there were a "response.sendStream(InputStream input)" method in the ServletResponse class. Even nicer would be a sendFile or sendChannel or something. This is probably a common usage, and it's a place where the container has many opportunities for optimization. For example, it could call the operating system's sendfile kernel call so the entire transfer would be done directly from the disk controller to the Ethernet card (on systems that support that).
    For now I'll just do my own buffered copy.
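    For reference, a minimal sketch of such a buffered copy (assuming file is the static file to serve and the response headers are set elsewhere; the 8 KB buffer size is arbitrary):

    OutputStream os = response.getOutputStream();
    InputStream is = new FileInputStream(file);
    try {
        byte[] buf = new byte[8192]; // arbitrary buffer size
        int n;
        while ((n = is.read(buf)) != -1) {
            os.write(buf, 0, n);
        }
    } finally {
        is.close(); // the container takes care of the response stream
    }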

  • Most efficient way to write 4 bytes at the start of any file.

    Quick question: I want to write 4 bytes at the start of a file without overwriting the current bytes in the file, i.e. push the existing bytes 4 positions along... Is my only option writing the 4 bytes into a new file and then writing the rest of the file after them? RandomAccessFile is so close, but it overwrites :(.
    Thanks Mel

    I revised the code to use a max of 8 MB buffers for both the NIO and stdio copies...
    Looks like NIO is a pretty clear winner... but your mileage may vary, lots... you'd need to test this hundreds of times, and normalize, to get any "real" metrics... and I for one couldn't be bothered... it's one of those things that's "fast enough"... 7 seconds to copy a 250 MB file to/from the same physical disk is pretty-effin-awesome really, isn't it? ... looks like Vista must be one of those O/S's (mentioned in the API doco) which can channel from a-to-b without going through the VM.
    ... and BTW, it took the program which produced this file 11,416 millis to write it (from an int-array (i.e. all in memory)).
    revised code
    package forums;

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.channels.FileChannel;

    class NioBenchmark1
    {
      private static final double NANOS = Math.pow(10,9);
      private static final int BUFF_SIZE = 8 * 1024 * 1024; // 8 MB

      interface Copier {
        public void copy(File source, File dest) throws IOException;
      }

      static class NioCopier implements Copier {
        public void copy(File source, File dest) throws IOException {
          FileChannel in = null;
          FileChannel out = null;
          try {
            in = (new FileInputStream(source)).getChannel();
            out = (new FileOutputStream(dest)).getChannel();
            final int buff_size = Math.min((int)source.length(), BUFF_SIZE);
            long n = -1;
            int pos = 0;
            // keep transferring while each transfer fills a whole buffer
            while ( (n = in.transferTo(pos, buff_size, out)) == buff_size ) {
              pos += n;
            }
          } finally {
            if (in != null) in.close();
            if (out != null) out.close();
          }
        }
      }

      static class NioCopier2 implements Copier {
        public void copy(File source, File dest) throws IOException {
          if ( !dest.exists() ) {
            dest.createNewFile();
          }
          FileChannel in = null;
          FileChannel out = null;
          try {
            in = new FileInputStream(source).getChannel();
            out = new FileOutputStream(dest).getChannel();
            final int buff_size = Math.min((int)in.size(), BUFF_SIZE);
            long n = -1;
            int pos = 0;
            while ( (n = out.transferFrom(in, pos, buff_size)) == buff_size ) {
              pos += n;
            }
          } finally {
            if (in != null) in.close();
            if (out != null) out.close();
          }
        }
      }

      static class IoCopier implements Copier {
        private byte[] buffer = new byte[BUFF_SIZE];
        public void copy(File source, File dest) throws IOException {
          InputStream in = null;
          FileOutputStream out = null;
          try {
            in = new FileInputStream(source);
            out = new FileOutputStream(dest);
            int count = -1;
            while ( (count = in.read(buffer)) != -1 ) {
              out.write(buffer, 0, count);
            }
          } finally {
            if (in != null) in.close();
            if (out != null) out.close();
          }
        }
      }

      public static void main(String[] arg) {
        final String filename = "SieveOfEratosthenesTest.txt";
        //final String filename = "PrimeTester_SieveOfPrometheuzz.txt";
        final File src = new File(filename);
        System.out.println("copying " + filename + " " + src.length() + " bytes");
        final File dest = new File(filename + ".bak");
        try {
          time(new IoCopier(), src, dest);
          time(new NioCopier(), src, dest);
          time(new NioCopier2(), src, dest);
        } catch (Exception e) {
          e.printStackTrace();
        }
      }

      private static void time(Copier copier, File src, File dest) throws IOException {
        System.gc();
        try { Thread.sleep(1); } catch (InterruptedException e) {}
        dest.delete();
        long start = System.nanoTime();
        copier.copy(src, dest);
        long stop = System.nanoTime();
        System.out.println(copier.getClass().getName() + " took " + ((stop-start)/NANOS) + " seconds");
      }
    }
    output
    C:\Java\home\src\forums>"C:\Program Files\Java\jdk1.6.0_12\bin\java.exe" -Xms512m -Xmx1536m -enableassertions -cp C:\Java\home\classes forums.NioBenchmark1
    copying SieveOfEratosthenesTest.txt 259678795 bytes
    forums.NioBenchmark1$IoCopier took 14.333866455 seconds
    forums.NioBenchmark1$NioCopier took 7.712665715 seconds
    forums.NioBenchmark1$NioCopier2 took 6.206867074 seconds
    Press any key to continue . . .

    Having said that... NIO has lost a fair bit of its charm... testing transferTo's return value and maintaining your own position in the file is "cumbersome" (IMHO)... I'm not even certain that mine is completely correct (?n+=pos or n+=pos+1?).... hmmm..
    Cheers. Keiths.

  • Need an efficient way to write history record

    I need to keep the old images of the records in table A after any changes made by the user. So I created a history table B which is exactly the same as A but has two more columns to store SYSDATE & USER.
    Currently, my program uses a cursor to loop through the records in A in order to insert every record into B with SYSDATE & USER and then delete the record in A.
    Is there a better method to deal with this?

    Hi,
    You can write an UPDATE trigger on A to write the record to B.

  • Efficient way of searching multiple xml files for multiple entries

    As I'm quite new to using XML in Java, I can't figure out how to solve my problem.
    I've got about 20 XML files, each about 500-1000 kB. Each file contains about 500 questions, each with a unique ID.
    A user has to be able to enter any number of comma-separated IDs, and the program needs to show the user those questions.
    With a SQL server this would be easy, but in this situation I can't use one. As this has to be a small program, I can't add a 10 MB jar file either, nor can I ask the users to install an additional program.
    Creating a brute-force search would be easy, but searching 20 MB of XML files multiple times will be slow even on a modern PC.
    So my question is: what would be the most efficient way of searching these files?
    Hope that someone will be kind enough to respond :)
    Rick

    I'd still go with a database. There are databases that are significantly more light-weight than MS SQL Server.
    More concretely there are databases that run completely in memory. HSQLDB is one, Java DB (formerly Derby) is another one.
    I'd parse the XML files once, add them to the database and query from there later on.
    If even that is too complicated for you, then you could simply parse the XML files once and put the questions into a HashMap with the ID as the key, as in the sketch below.
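    A minimal sketch of that parse-once-and-index approach, assuming hypothetical <question id="..."> elements (adjust the element and attribute names to the real schema):

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    public class QuestionIndex {
        // Parse every file once up front; each lookup afterwards is a map access.
        public static Map<String, String> index(File[] xmlFiles) throws Exception {
            Map<String, String> byId = new HashMap<String, String>();
            DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            for (File f : xmlFiles) {
                Document doc = db.parse(f);
                NodeList questions = doc.getElementsByTagName("question"); // hypothetical element name
                for (int i = 0; i < questions.getLength(); i++) {
                    Element q = (Element) questions.item(i);
                    byId.put(q.getAttribute("id"), q.getTextContent());
                }
            }
            return byId;
        }
    }

    Twenty files at ~1 MB each should parse in a few seconds and fit comfortably in memory, so only the startup pass touches the disk.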

  • Cloud Newbie: What is the best way to sync content files between locations?

    Not new to Dreamweaver, but I am new to the Creative Cloud. I typically carry an external drive with my web files and update the site from my home or work computer using those files. Now that I'm using Dreamweaver CC, I'm trying to find the most efficient way to sync content files between computers. I see that "Sync Settings" allows for syncing of preferences - but not content. Anyone have a recommendation? Or do I simply make my changes at work, then go home and download the pages I changed to update my home files?
    Thanks,
    George

    The Cloud isn't going to sync your site files between 2 computers. That's not what it's for.
    You might want to explore file check-in/out feature in DW. 
    http://help.adobe.com/en_US/dreamweaver/cs/using/WSc78c5058ca073340dcda9110b1f693f21-7ebfa .html
    This lets you check files in/out from your remote server. That way, you know you're working on the latest copy. Check-in/out is typically used in collaborative environments where 2 or more people work on the site, but it might serve your needs as well.
    Nancy O.

  • How to write a file into UNIX!

    Hi Group!
    I am trying to write an error file to a UNIX directory using 'OPEN DATASET ERRFILE FOR OUTPUT IN TEXT MODE ENCODING DEFAULT'. Unfortunately I am getting a dump, as it returns sy-subrc 8 and dumps with CX_SY_FILE_OPEN_MODE.
    The path looks like /sap/DE1/batch/data/SCM_YM/partfiles/HGMEH011.20071001150344
    Is there any special command or way to write the file to UNIX?
    Quick response would be of great help.
    Suresh

    The detailed error is
    The file '&FILENAME&' was not opened, or was opened in the wrong mode.
    Can you please try to create a new file that does not exist? Then we can check whether you have authorization or not.
    OPEN DATASET on a file that is already open - in the same internal mode - triggers this exception.
    That's why you should just create a file with some rough name (maybe your name) initially and then check.
    Bye, Sasi

  • How to write excel file (.xlsx) using file adapter without using java code

    Hi All,
    In SOA Suite 11g, is there any way to write data to an Excel (.xlsx) file using the file adapter and not using Java code? Thanks in advance.

    Hi Siva,
    I don't think there is any way to write a .xls/.xlsx file directly. You'll have to use some Java API (Apache POI etc.) to create one. However, you can write a .csv file that can easily be converted into .xls at the target end. In MS Excel, a .csv opens as an Excel file if the delimiter is a comma *,*.
    Regards,
    Neeraj Sehgal

  • More cost efficient way??

    Any suggestions as to a more efficient way to write the following statement?
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    ((PR.PERSONTYPEID IN (26706,26707,26708,26709,26710) AND
    (NM.STRIKEVEHICLEID<1 OR
    NM.STRIKEVEHICLEID IS NULL)) OR
    (PR.PERSONTYPEID IN (26704,26705,26711) AND
    NM.STRIKEVEHICLEID>0))
    ORDER BY 1,2,3,4,5,6;

    I would try this
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    PR.PERSONTYPEID BETWEEN 26706 AND 26710 AND
    NVL(NM.STRIKEVEHICLEID,0)<1
    UNION ALL
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    PR.PERSONTYPEID BETWEEN 26704 AND 26705 AND
    NM.STRIKEVEHICLEID>0
    UNION ALL
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    PR.PERSONTYPEID = 26711 AND
    NM.STRIKEVEHICLEID>0
    ORDER BY 1,2,3,4,5,6;

  • MDX - More efficient way?

    Hi
    I am still learning MDX and have written this code. It needs to recalculate all employees in a cost center (COSTCENTER is a property of the EMPLOYEE dimension) when one of the assumptions (e.g. P00205) changes. These assumptions are planned at cost center level against the employee DUMMY. Is there a more efficient way to write this code, as there are lots of accounts that need to be posted to:
    *SELECT (%EMPLOYEE%, ID, EMPLOYEE, [COSTCENTER] = %COSTCENTER_SET%)
    //Workmens Comp
    *XDIM_MEMBERSET P_ACCT = "IKR0000642000"
    *FOR %EMP% = %EMPLOYEE%
             [EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00205],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
    *NEXT
    *COMMIT
    //Fringe Benefits Employer
    *XDIM_MEMBERSET P_ACCT = "IKR0000628100"
    *FOR %EMP% = %EMPLOYEE%
             [EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00210],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
    *NEXT
    *COMMIT
    //Fringe Benefits Other
    *XDIM_MEMBERSET P_ACCT = "IKR0000626100"
    *FOR %EMP% = %EMPLOYEE%
             [EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00209],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
    *NEXT
    *COMMIT

    Maybe the following?
    *SELECT (%EMPLOYEE%, ID, EMPLOYEE, [COSTCENTER]  = %COSTCENTER_SET%)
    *XDIM_MEMBERSET EMPLOYEE = %EMPLOYEE%
    *XDIM_MEMBERSET P_ACCT = IKR0000642000,IKR0000628100,IKR0000626100
    //Workmens Comp
    [P_ACCT].[#IKR0000642000] = ( [P_ACCT].[P00205],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
    //Fringe Benefits Employer
    [P_ACCT].[#IKR0000628100] = ( [P_ACCT].[P00210],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
    //Fringe Benefits Other
    [P_ACCT].[#IKR0000626100] = ( [P_ACCT].[P00209],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
    *COMMIT
    You should probably also restrict explicitly on all other dimensions in your application so that none are accidentally left open that don't need to be.
    Ethan

  • What is the efficient way of insert some bytes into a file?

    Hello, everyone:
    If I want to insert some bytes into a file (for example, insert the bytes before all the original content of the file, or append the bytes to a file), and the size of the original file is very big, what is the efficient way? Where can I get some sample code?
    regards,
    George

    Thanks, DrClap.
    I have tried your method and you are correct. I have written a simple program which inserts "Hello World " at the start of a file ("c:\\temp\\input.txt"), and I have verified that it works. Please take a look to see whether it is correct and whether there is a more efficient way.
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class TestDriver {
         public static void main(String[] args) {
              byte[] back_buffer = new byte[1024];
              byte[] write_buffer = new byte[1024];
              System.arraycopy("Hello World ".getBytes(), 0, write_buffer, 0, "Hello World ".getBytes().length);
              int write_buffer_length = "Hello World ".getBytes().length;
              int count = 0;
              FileInputStream fis = null;
              FileOutputStream fos = null;
              try {
                   fis = new FileInputStream(new File("c:\\temp\\input.txt"));
                   fos = new FileOutputStream(new File("c:\\temp\\output.txt"));
                   // rolling buffer: write the previous block, then stash the block just read
                   while ((count = fis.read(back_buffer)) >= 0) {
                        fos.write(write_buffer, 0, write_buffer_length);
                        System.arraycopy(back_buffer, 0, write_buffer, 0, count);
                        write_buffer_length = count;
                   }
                   // write the last block
                   fos.write(write_buffer, 0, write_buffer_length);
                   fis.close();
                   fos.close();
                   // copy content back into original file
                   fis = new FileInputStream(new File("c:\\temp\\output.txt"));
                   fos = new FileOutputStream(new File("c:\\temp\\input.txt"));
                   while ((count = fis.read(back_buffer)) >= 0) {
                        fos.write(back_buffer, 0, count);
                   }
                   fis.close();
                   fos.close();
                   // remove temporary file
                   File f = new File("c:\\temp\\output.txt");
                   f.delete();
              } catch (FileNotFoundException e) {
                   e.printStackTrace();
                   try { if (fis != null) fis.close(); } catch (IOException e1) { e1.printStackTrace(); }
                   try { if (fos != null) fos.close(); } catch (IOException e2) { e2.printStackTrace(); }
              } catch (IOException e) {
                   e.printStackTrace();
                   try { if (fis != null) fis.close(); } catch (IOException e1) { e1.printStackTrace(); }
                   try { if (fos != null) fos.close(); } catch (IOException e2) { e2.printStackTrace(); }
              }
         }
    }

    regards,
    George
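    One way to tighten this up (a sketch using the same paths as above, not a drop-in): write the prefix and then the original content to a temp file in a single pass, then swap the files by renaming, which avoids the whole copy-back loop:

    File src = new File("c:\\temp\\input.txt");
    File tmp = new File("c:\\temp\\input.txt.tmp");
    FileInputStream in = new FileInputStream(src);
    FileOutputStream out = new FileOutputStream(tmp);
    out.write("Hello World ".getBytes()); // the prefix goes in first
    byte[] buf = new byte[64 * 1024];     // larger buffer, fewer reads
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
    }
    in.close();
    out.close();
    src.delete();      // on Windows, renameTo fails if the target exists
    tmp.renameTo(src); // swap in place instead of a second copy pass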

  • What is the best way to write 10 channels of data each sampled at 4kHz to file?

    Hi everyone,
    I have developed a VI with about 8 AI channels and 2 AO channels... The VI uses a number of parallel while loops to acquire, process, and display continuous data. All data are read at 400 points per loop iteration and synchronously sampled at 4 kHz.
    My question is: which is the best way of writing the data to file? The "Write Measurement To File.vi" or the low-level "open/create file" and "close file" functions? From my understanding there are limitations with both approaches, which I have outlined below.
    The "Write Measurement To File.vi" is simple to use and closes the file after each iteration, so if the program crashes not all data would necessarily be lost; however, the fact that it closes and opens the file on each iteration consumes the processor and takes time... This may cause lags or data loss, which I absolutely do not want.
    The low-level "open/create file" and "close file" functions involve a bit more coding, but do not require the file to be closed/opened on each iteration, so processor consumption is reduced and the associated lag due to continuous open/close operations will not occur. However, if the program crashes while data is being acquired, ALL data in the buffer yet to be written will be lost... This is risky to me...
    Does anyone have any comments or suggestions about which way I should go?... At the end of the day, I want to be able to start/stop the write-to-file process within a running while loop... To do this, can the open/create file and close file functions even be used (as they will need to be inside a while loop)?
    I think I am OK with the coding... I just need some help clarifying which direction I should go and the pros and cons of each.
    Regards,
    Jack
    Attachments:
    TMS [PXI] FINAL DONE.vi ‏338 KB

    One thing you have not mentioned is how you are consuming the data after you save it.  Your solution should be compatible with whatever software you are using at both ends.
    Your data rate (40 kS/s) is relatively slow. You can achieve it using just about any format, from ASCII to raw binary and TDMS, provided you keep your file open and close operations out of the write loop. I would recommend a producer/consumer architecture to decouple the data collection from the data writing. This may not be necessary at the low rates you are using, but it is good practice and would enable you to scale to hardware-limited speeds.
    TDMS was designed for logging and is a safe format (<fullDisclosure> I am a National Instruments employee </fullDisclosure> ).  If you are worried about power failures, you should flush it after every write operation, since TDMS can buffer data and write it in larger chunks to give better performance and smaller file sizes.  This will make it slower, but should not be an issue at your write speeds.  Make sure you read up on the use of TDMS and how and when it buffers data so you can make sure your implementation does what you would like it to do.
    If you have further questions, let us know.
    This account is no longer active. Contact ShadesOfGray for current posts and information.

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

    You should have no trouble doing this in SSIS. Simply add a data flow with connection managers for an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the xls and a destination for the csv, and set the csv destination's "delay validation" property to true. Use an expression to define the name of the new .csv file.
    In the flat file connection manager, set the column delimiter to the pipe character.
