[solved] split or manipulate file based on matching entries in column

I have a file in the following format:
500 +18588581234 +13035551212 0001 0001-0001
400 +18588581235 +13035551212 0001 0001-0001
100 +18588581236 +13035551212 0001 0001-0001
300 +16196191234 +14045551212 0001 0001-0001
200 +16196191235 +14045551212 0001 0001-0001
I'd like to end up with the following output, where the values in the first column are summed whenever the 3rd column matches (I don't really care about the 2nd column).
In other words, 1000 = 500 + 400 + 100 because the third column is +13035551212 on all three of those lines:
1000 +13035551212 0001 0001-0001
500 +14045551212 0001 0001-0001
I've been playing around with splitting the file based on the 3rd column and then walking through it line by line, but I thought there must be a graceful one-liner to do this.
I have any Arch tools (repo or AUR) available to me and don't really mind the method as long as it's sane.
The file is always sorted reverse-numerically on the first column and then numerically on the third, but I can re-sort it however is needed to get this working if required.
Last edited by oliver (2014-05-16 13:04:56)

Thank you both Trilby and Saint0fCloud... I found a similar example online using awk and modified it a little:
awk '{array[$3]+=$1} END { for (f in array) {print array[f], f, $4, $5}}' $DATAFILE
It's getting closer but not quite there (and if I modify my test data it's clearer why)
datafile
500 +18588581234 +13035551212 0003 0003-0003
400 +18588581235 +13035551212 0003 0003-0003
100 +18588581236 +13035551212 0003 0003-0003
300 +16196191234 +14045551212 0004 0004-0004
200 +16196191235 +14045551212 0004 0004-0004
$ ./testfile
500 +14045551212 0004 0004-0004
1000 +13035551212 0004 0004-0004
If I can't get it going in awk, I'll switch over to the zsh method
edit
Not sure this is the most graceful, but it seems to work:
awk '{array[$3," " $4," " $5]+=$1} END { for (f in array) {print array[f], f}}' $DATAFILE
1000 +13035551212 0003 0003-0003
500 +14045551212 0004 0004-0004
If anyone sees anything horribly wrong with the logic, feel free to point it out.
Last edited by oliver (2014-05-15 17:10:45)
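A note on that last one-liner: when awk sees commas inside an array subscript it joins the parts with SUBSEP (a control character, "\034" by default), so the printed keys carry invisible separator characters alongside the spaces that were concatenated in. Building the subscript as a single string with explicit spaces instead of commas avoids that. And since any sane tool is on the table, here is a rough equivalent as a Python sketch; the "datafile" name is just a placeholder for the real path:

#!/usr/bin/env python3
# Sketch only: sum column 1 for every group of lines whose columns 3-5 match.
# Assumes the whitespace-separated layout shown above and a file named "datafile".
totals = {}                                   # dicts keep insertion order in Python 3.7+
with open("datafile") as fh:
    for line in fh:
        fields = line.split()
        if len(fields) < 5:
            continue                          # skip blank or short lines
        key = " ".join(fields[2:5])           # e.g. "+13035551212 0003 0003-0003"
        totals[key] = totals.get(key, 0) + int(fields[0])

for key, total in totals.items():
    print(total, key)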

Similar Messages

  • Split the log file based on size

    Hi All,
    I have created a log file in an Oracle directory by using UTL_FILE. Now I need to start a new file once the old file reaches a certain size. How do I create a new file based on the size? Please provide your suggestions.
    Thanks & Regards
    Sami

    declare
      filehandle   UTL_FILE.FILE_TYPE;
      filehandle2  UTL_FILE.FILE_TYPE;
      v_line       VARCHAR2(32767);
      v_cnt        NUMBER := 0;
    begin
        filehandle := UTL_FILE.FOPEN('DIR_TEMP','oops1.txt','r');
        LOOP
         BEGIN
           UTL_FILE.GET_LINE(filehandle, v_line);   -- raises NO_DATA_FOUND at end of file
         EXCEPTION
           WHEN NO_DATA_FOUND THEN EXIT;
         END;
         v_cnt := v_cnt + 1;
         if v_cnt = 1000 then
             -- 1000 lines read: create the new file and write the overflow there
            filehandle2 := UTL_FILE.FOPEN('DIR_TEMP','oops2.txt','w');
         end if;
         if v_cnt >= 1000 then
            UTL_FILE.PUT_LINE(filehandle2, v_line);
         end if;
        END LOOP;
        IF UTL_FILE.IS_OPEN(filehandle2) THEN
           UTL_FILE.FCLOSE(filehandle2);
        END IF;
        UTL_FILE.FCLOSE(filehandle);
    END;
    Edited by: oracle on Apr 5, 2012 3:47 AM
    Edited by: oracle on Apr 5, 2012 3:53 AM

  • How to split data into tables based on the entries in a column?

    My problem is very similar to this thread: how to link data from one numbers sheet to another sheet, however I could not get it to work the way described there. I have one big table for entering data (the first one). I would like to have a few other tables populated automatically based on the entries in one of the columns of the first table. In my example below I put everything in one sheet for clarity. The selection into the other tables is to be done on the column "fruit" in the first table (the second table is "oranges", then "apples" and then "pears" -- had to cut the width of my screenshot due to the limitations of Apple's forums).
    Here is what it would look like; I just cannot figure out how to make it happen automatically.
    I also tried importing a similar Excel example into Numbers, but the import did not work correctly.
    Any help will be appreciated.
    LD

    Larry,
    Here's an approach that I've used...
    In the Purchases table, Aux column, the expression is:
    =COUNTIF($A$1:A2, A) & "-"&A
    Fill Down
    This expression builds a string that identifies the item and the occurrence of that item.
    The Date column of the Summary tables, cell A3, contains:
    =IF(ROW()-3<COUNTIF(Purchases :: $A,$A$1), LOOKUP(ROW()-2&"-"&$A$1, Purchases :: $F, Purchases :: B), "")
    Fill Across, then fill down.
    Regards,
    Jerry

  • SSIS Script Component Conditional Split to Flat File Destination

    I have a flat file which needs to be split into multiple flat files based on the value in a RecordType column.
    For example, if (RecordType == 20), then direct all rows to a new text file.
    I have around 15 different record types. I have managed to write some C# code for the Conditional Split, but I am still trying to figure out the next step: saving these rows to a text file.
    I would be grateful if someone could point me in the right direction.
    Many Thanks
    #region Namespaces
    using System;
    using System.Data;
    using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
    using Microsoft.SqlServer.Dts.Runtime.Wrapper;
    using System.IO;
    #endregion
    [Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
    public class ScriptMain : UserComponent
    {
        string copiedAddressFile;
        private StreamWriter textWriter;
        private string columnDelimiter = ",";
        private string filepath = @"C:\DestFiles";
        private string[] columns;

        public override void PreExecute()
        {
            // cache the input column names so they can be used when writing rows out
            IDTSInput100 input = ComponentMetaData.InputCollection[0];
            columns = new string[input.InputColumnCollection.Count];
            for (int i = 0; i < input.InputColumnCollection.Count; i++)
            {
                columns[i] = input.InputColumnCollection[i].Name;
            }
        }

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // send each row to the script component output that matches its record type
            if (Row.intRecordType == 20)
            {
                Row.DirectRowToRecordType20();
            }
            else if (Row.intRecordType == 10)
            {
                Row.DirectRowToRecordType10();
            }
        }
    }

    see similar example
    http://www.sqlis.com/sqlis/post/Using-the-Script-Component-as-a-Conditional-Split.aspx
    Please Mark This As Answer if it solved your issue
    Please Mark This As Helpful if it helps to solve your issue
    Visakh

  • How to split a single file and then route it to different systems

    Hi All,
    I have a requirement where my incoming file (a single file) needs to be split into 3 files based on a field "company code" and then routed to different systems.
    my incoming file is like this .....
    Header,name,.....,.....,........,.....,.....
    Detail,.....,.....,.....,.....,......,companycode(100),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(101),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(100),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(102),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(101),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(102),....,.....,......
    EOF.
    I need to split this file, and my output files should look like this
    For 1st system
    Header,name,.....,.....,........,.....,.....
    Detail,.....,.....,.....,.....,......,companycode(100),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(100),....,.....,......
    EOF.
    For 2nd system
    Header,name,.....,.....,........,.....,.....
    Detail,.....,.....,.....,.....,......,companycode(101),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(101),....,.....,......
    EOF.
    For 3rd system
    Header,name,.....,.....,........,.....,.....
    Detail,.....,.....,.....,.....,......,companycode(102),....,.....,......
    Detail,.....,.....,.....,.....,......,companycode(102),....,.....,......
    EOF.
    and then send them to the three systems based on company code.
    Can anyone tell me how this can be achieved?
    Thanks,
    Hemanth.

    Hi Nallam,
    I tried the same thing, but since the input file contains different company codes, it is not splitting the file into three parts and sending only the relevant data to each system; instead the whole file is going to all the systems.
    I understand that the file has to be split in the mapping, and we are able to do that with a mapping, but the problem is in routing: in receiver determination we make use of the source structure for the condition. Can you please help me with this?
    Thanks,
    Hemanth.
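    Setting the XI routing question aside, the split step itself is easy to prototype outside the integration engine. Below is a rough Python sketch (not SAP mapping code) that splits a Header/Detail/EOF flat file into one file per company code, repeating the header and EOF marker in each output; the input file name and the position of the company-code field are assumptions made for illustration only:

    #!/usr/bin/env python3
    # Illustrative sketch only: split a Header/Detail/EOF flat file into one
    # output file per company code, with the header and EOF marker repeated.
    outputs = {}
    with open("incoming.csv") as src:
        lines = src.read().splitlines()

    header, eof_marker = lines[0], lines[-1]      # first line = header, last = "EOF."
    for line in lines[1:-1]:
        code = line.split(",")[6]                 # assume company code is the 7th field
        if code not in outputs:
            outputs[code] = open(f"out_{code}.csv", "w")
            outputs[code].write(header + "\n")
        outputs[code].write(line + "\n")

    for handle in outputs.values():
        handle.write(eof_marker + "\n")
        handle.close()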

  • Splitting large message (60MB) based on payload data

    Hi,
    I have a file (flat) to file (XML) scenario. The source flat file can be large (up to 60 MB), so I have to split it into small files, and I applied "Recordsets per Message" at the FCC level to do that. But the client's requirement is to split the large document based on payload data (the DeliveryDate). That means I cannot split the message based on the number of rows in the flat file; instead I have to split it by DeliveryDate, so that after splitting, each small file contains data for exactly one date (say, one file with data for 15 Nov and another file with data for 16 Nov, and so on).
    Please suggest a solution to split the large file (60 MB) based on payload data (DeliveryDate).
    Br,
    Madan Agrawal

    Hi Madan,
    in this case, split the message into smaller messages of around 2 MB each;
    XI doesn't handle 60 MB files, so you have to split the flat file based on some condition.
    I had the same requirement: a flat file with huge data, and I split that data using a Java mapping;
    it then processed fine for me.
    I think you can do the same,
    but in my case I divided the message based on a sequence number (a unique number) to differentiate the data.
    If there is any sequence number, split the message on that.
    Regards,
    Raj
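    Leaving the XI sizing constraints to one side, the grouping-by-date logic itself is simple to express. Here is a rough Python sketch (not a PI mapping) that streams the file once and keeps one output handle per DeliveryDate; the delimiter and the column position are assumptions:

    #!/usr/bin/env python3
    # Sketch only: append each record to a file named after its DeliveryDate,
    # so every output file holds exactly one date.
    handles = {}
    with open("deliveries.dat") as src:
        for line in src:
            if not line.strip():
                continue
            delivery_date = line.split(";")[3]    # assumed position of DeliveryDate
            if delivery_date not in handles:
                handles[delivery_date] = open(f"deliveries_{delivery_date}.dat", "w")
            handles[delivery_date].write(line)

    for handle in handles.values():
        handle.close()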

  • Splitting a file based on the payload field - multimapping

    HI Everyone,
    I have a requirement to split a file based on a field.
    e.g. when I have a file such as:
    row1  David    US
    row2  Cindra   US
    row3   Peeru   CA
    row4   Jay       CA
    Then, I have to split the file into two files, one file with the US rows and  another file with the CA rows.
    There can be many countries in the input file, so the number of target files that need to be generated is not fixed.
    I have gone through the links below:
    /people/jin.shin/blog/2006/02/07/multi-mapping-without-bpm--yes-it146s-possible (in the blog the two receivers are known in advance, but in my case I don't know that)
    https://bond.newellco.com/irj/scn/,DanaInfo=www.sdn.sap.com,SSL+thread?messageID=6449801#6449801
    (Everyone there is providing Java mapping as a solution...)
    Is Java mapping the only option to solve the problem I have?
    Thx
    PEERU IN

    Hi Peeru,
    I don't have access to any FTP or file server on XI to check the file adapter, as I am travelling right now (I am 100% sure that the file adapter splitting will work, as I have done this for one of my requirements).
    Coming to your requirement:
    I wrote a Java map which reads the XML file and creates multiple recordsets based on the number of different countries in the file, and then prints the final structure in a multi-mapping layout (please see the attached input file Country.xml and the output file the Java map generated, Final.XML). If we do this, I think the file adapter will create different files based on the number of recordsets we have; as you can see, the output file is generated with 3 recordset (<Country>) tags, and each country tag has its own records. Now, if I give the filename via variable substitution pointing to region, I think I should get 3 files with the names
    us.xml
    us01.xml
    us02.xml
    I think by using the multi mapping I can generate 3 files, and by using variable substitution I can give them 3 different names from the payload of each file and also add a timestamp to each file name. Again, I don't have access to the file adapter in the project I am working on, so I couldn't verify this on the XI server, but if you are still looking for a solution, let me know and I will give you the map details.
    Input File: Country.xml
    <?xml version="1.0"?>
    <check>
         <order>
              <name>Nisar1</name>
              <region>US</region>
         </order>
         <order>
              <name>Nisar2</name>
              <region>US</region>
         </order>
         <order>
              <name>Nisar3</name>
              <region>US</region>
         </order>
         <order>
              <name>Nisar4</name>
              <region>US01</region>
         </order>
         <order>
              <name>Nisar5</name>
              <region>US01</region>
         </order>
         <order>
              <name>Nisar6</name>
              <region>US01</region>
         </order>
         <order>
              <name>Nisar7</name>
              <region>US</region>
         </order>
           <order>
              <name>Nisar8</name>
              <region>US</region>
         </order>
           <order>
              <name>Nisar8</name>
              <region>US02</region>
         </order>
    </check>
    Output File : Final.XML
    <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
         <ns0:Message1>
              <country>
                   <Order>
                        <name>Nisar1</name>
                        <region>US</region>
                   </Order>
                   <Order>
                        <name>Nisar2</name>
                        <region>US</region>
                   </Order>
                   <Order>
                        <name>Nisar3</name>
                        <region>US</region>
                   </Order>
                   <Order>
                        <name>Nisar7</name>
                        <region>US</region>
                   </Order>
                   <Order>
                        <name>Nisar8</name>
                        <region>US</region>
                   </Order>
              </country>
              <country>
                   <Order>
                        <name>Nisar4</name>
                        <region>US01</region>
                   </Order>
                   <Order>
                        <name>Nisar5</name>
                        <region>US01</region>
                   </Order>
                   <Order>
                        <name>Nisar6</name>
                        <region>US01</region>
                   </Order>
              </country>
              <country>
                   <Order>
                        <name>Nisar8</name>
                        <region>US02</region>
                   </Order>
              </country>
         </ns0:Message1>
    </ns0:Messages>
    regards
    Nisar Khan

  • Split records into two files based on lookup table

    Hi,
    I'm new to ODI and want to know how I could split records into two files based on a value in one of the columns of the table.
    Example:
    Table:
    my columns are
    account name country
    100 USA
    200 USA
    300 UK
    200 AUS
    So for these 4 records, I maintain a list of countries in a lookup file and split the records into 2 different files based on the values in that file...
    Say I have the records AUS and UK in my lookup file...
    So my ODI routine should send all records whose country is in the lookup file to file1 and the rest to file2.
    So from above records
    File1:
    300 UK
    200 AUS
    File2:
    100 USA
    200 USA
    Can you help me how to achieve this?
    Thanks,
    Sam

    1. Where and how do I create the filter to restrict countries? In the source or the target? Should I include some kind of filter operator in the interface?
    You need to have the Filter on the source side so that records are filtered before being captured in the file. To create a Filter: in the source datastore, click and drag the column outside the datastore and you will see a cone-shaped icon; you can then click it and type the Filter condition.
    Please look into this link for ODI Documentation -http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html
    Also look into this Getting started guide - http://download.oracle.com/docs/cd/E15985_01/doc.10136/getstart/GSETL.pdf . You can find information as how to create Filter in this guide.
    2. If I have multiple countries (USA, CANADA, UK) that should go to one file and the rest to another file, can I use some kind of lookup file instead of modifying the filter inside the interface? Can I just update entries in the file?
    there are two ways of handling your situation.
    Solution 1.
    1. Create Variable Country_Variable
    2. Create a Filter in the Source datastore in the First Interface ( SOURCE.COLUMN = #Country_Variable)
    3. Create a new Package Country File Unload
    4. Call the Variable in Country_Variable in Set Mode and provide the Country (USA )
    5. Next call the First Interface
    6. Next call the Second Interface where the Filter condition will be ( SOURCE.COLUMN ! = #Country_Variable )
    7. Now run the package .
    Solution 2.
    If you need a solution where the filter value comes from a file:
    1. Use this Method (http://odiexperts.com/how-to-refresh-odi-variables-from-file-%E2%80%93-part-1-%E2%80%93-just-one-value ) to call the File where you wish to create store the country name into the variable Country_Variable
    2. Pretty much the same Create a Filter in the Source datastore in the First Interface ( SOURCE.COLUMN = #Country_Variable)
    3.Create a new Package Country File Unload
    4.Next call the Second Interface where the Filter condition will be ( SOURCE.COLUMN ! = #Country_Variable )
    5. Now run the package .
    This way, you can control the split through the file.
    Please try it and let us know if you need any other help.
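    As a plain-code illustration of the logic (this is not ODI itself, just the lookup-driven split expressed in a short Python sketch; file names and the two-column layout are assumptions):

    #!/usr/bin/env python3
    # Rows whose country appears in the lookup file go to file1.txt,
    # everything else to file2.txt.
    with open("lookup.txt") as fh:
        countries = {line.strip() for line in fh if line.strip()}   # e.g. {"AUS", "UK"}

    with open("accounts.txt") as src, \
         open("file1.txt", "w") as matched, \
         open("file2.txt", "w") as others:
        for line in src:
            if not line.strip():
                continue
            country = line.split()[-1]            # country is the last column
            (matched if country in countries else others).write(line)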

  • Split text file in multiple files based on a string

    Hey all,
    I want to split a text file into multiple files. I already found some examples where there is a split based on a number of files.
    http://forum.java.sun.com/thread.jspa?forumID=256&threadID=260930
    But I want to make a split based on a string (word) that I find in the file.
    Anyone that can help me ?
    Regards,
    Atmoz

    This is my testing code like it is now. Maybe there is a bug in there which causes a memory leak or so.
    import java.io.*;

    public class test {
        public static void main(String args[]) {
            File sSourceDir = new File("D:\\Test\\");
            File sDestinationDir = new File("D:\\Test\\");
            // Filter is the poster's own FilenameFilter implementation (not shown here)
            File[] files = sSourceDir.listFiles(new Filter());
            for (int i = 0; i < files.length; i++) {
                File file = files[i];
                if (file.isFile()) {
                    System.out.println("Splitting file: " + files[i]);
                    splitFile(file, sDestinationDir);
                } else {
                    System.out.println("Not a file: " + files[i]);
                }
            }
        }

        public static File splitFile(File fSourceFile, File sDestinationDir) {
            int counter = 1;
            File fDestinationFile = new File(sDestinationDir, "NEW_" + counter + "_" + fSourceFile.getName());
            fDestinationFile.delete();
            String sLineOfData = null;
            boolean firstfile = true;
            try {
                BufferedReader DataFileReader = new BufferedReader(new FileReader(fSourceFile));
                PrintWriter outputStream = new PrintWriter(new FileWriter(fDestinationFile));
                while ((sLineOfData = DataFileReader.readLine()) != null) {
                    System.out.println(sLineOfData);
                    if (sLineOfData.indexOf("UNA:+") != -1) {
                        if (!firstfile) {
                            // split marker seen again: close the current part and start the next one
                            counter++;
                            fDestinationFile = new File(sDestinationDir, "NEW_" + counter + "_" + fSourceFile.getName());
                            outputStream.close();
                            outputStream = new PrintWriter(new FileWriter(fDestinationFile));
                            outputStream.println(sLineOfData);
                        } else {
                            firstfile = false;
                            outputStream.println(sLineOfData);
                        }
                    } else {
                        outputStream.println(sLineOfData);
                    }
                }
                outputStream.close();
                DataFileReader.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            return fSourceFile;
        }
    }
    And this is an example of a file:
    PS: I shortened each of the long lines (they were originally around 4000 characters)
    UNA:+,? '
    UNB+UNOC:3d+5499757493404:14+3014331700208:14+050114:1200+ACC302++STS.GZ++1++1'
    UNH+I15185477+UTILTS:D:03B:UN:E5BE03'BGM+E32::260+I15185477+9+NA'
    DTM+137:200501141151:203'DTM+735:?+0100:406'MKS+23'NAD+MR+3014331700208::9'
    UNA:+,? '
    UNB+UNOC:3+549975f7493404:14+3014331700208:14+050114:1200+ACC302++STS.GZ++1++1'
    UNH+I15185477+UTILTS:D:03B:UN:E5BE03'BGM+E32::260+I15185477+9+NA'
    DTM+137:200501141151:203'DTM+735:?+0100:406'MKS+23'NAD+MR+3014331700208::9'
    DTM+137:200501141151:203'DTM+735:?+0100:406'MKS+23'NAD+MR+3014331700208::8'
    UNA:+,? '
    UNB+UNOC:3g+5499757g493404:14+3014331700208:14+050114:1200+ACC302++STS.GZ++1++1'
    UNH+I15185477+UTILTS:D:03B:UN:E5BE03'BGM+E32::260+I15185477+9+NA'
    DTM+137:200501141151:203'DTM+735:?+0100:406'MKS+23'NAD+MR+3014331700208::9'
    Message was edited by:
    Atmozzz

  • Powershell to rename file based on output

    Good Afternoon
    I was wondering if someone could assist with the below
    1. We currently have a system in place whereby a document is scanned onto the system as an image and saved in a central location folder.
    2. I then run tesseract within PowerShell on this location folder; tesseract is a program that extracts text from the image, converts it to a txt file, and saves it in the same location with the same name.
    3. What I would like to do is for PowerShell to then search the extracted txt file for a particular regular expression (which I have) and then rename the original file from point 1 to the matched output.
    Is the above possible at all?
    This is what I have so far; it looks in my OCR folder, runs tesseract, and the output txt files end up in the same folder:
    $TargetFolder1 = "c:\ocr\test\file.txt"
    $regex = '[0-9]{5,6}[\.][0-9]{0,1}'
    $result = "c:\OCR\Match\imtest.txt"
    Select-String -Path $TargetFolder1 -List -Pattern $regex | % { $_.Matches } | % { $_.Value } > $result
    cd "C:\Program Files (x86)\Tesseract-OCR"
    Apologies if it seems confusing
    Barrie

    Hi
    Many thanks for the info so far, much appreciated
    I have read around on the Microsoft PowerShell learning site but am becoming slightly confused by some of the piping.
    I previously mentioned that the tesseract program converts the OCR-scanned tif file into a text file. Is it possible instead to have this output go into a variable of some sort, have my regular expression search that variable, and then rename the tif file based on the result of the regular expression?
    Previously I have run tesseract on a tif file via PowerShell, which in turn produces a text file of the document in the same location.
    I think it would be best if I ran tesseract on the tif file, then had PowerShell export the results somewhere so the regular-expression query can be run, and the final result renames the original tif file with the output of the regular expression.
    Hope this makes sense and many thanks for help so far
    Regards
    Barrie
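    If it helps to see the whole flow in one place, here is a rough sketch of the run-OCR / search / rename loop written in Python, just to lay out the steps (the same structure maps onto PowerShell). The folder path is an assumption, and tesseract is invoked with an output base name so that it writes the .txt file itself:

    #!/usr/bin/env python3
    # Sketch of the flow described above: OCR each .tif with tesseract, pull the
    # first regex match out of the resulting text, and rename the .tif to it.
    import re
    import subprocess
    from pathlib import Path

    pattern = re.compile(r"[0-9]{5,6}\.[0-9]?")          # same idea as the regex above
    for tif in Path(r"C:\ocr\test").glob("*.tif"):
        out_base = tif.with_suffix("")                   # tesseract appends .txt itself
        subprocess.run(["tesseract", str(tif), str(out_base)], check=True)
        text = out_base.with_suffix(".txt").read_text(errors="ignore")
        match = pattern.search(text)
        if match:
            tif.rename(tif.with_name(match.group(0) + ".tif"))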

  • Help renaming pdf files based on internal content

    I work for a company that has thousands of E-tickets coming in daily, weekly, monthly, etc..
    These tickets come in with names like bafhakfbaifh.pdf, and we have to manually rename them or print them all out, sort through them, and put them in order.
    What I would like to do is:
    1. Split any pdf's that have more than one page or "ticket" in my case. I know how to do this with automator easily, but I'd love to keep it all in one program.
    2. Search the file for Event Name (i.e. Madonna)
    3. Search the file for Date of event (August 12, 2012)
    4. Search the file for Section, Row, Seat (124 3 12)
    5. Rename the file based on content found (Madonna August 12 2012 124 3 12.pdf)
    6. Move from original download folder to organized folders based on artist/team.
    7. Automatically print in alphabetical or some sort of designated order.
    So far, I found a PC program called A-PDF Rename, but it is not automated enough to be practical. Hazel is awesome at OCRing the pdf and moving it from folder to folder, but does not do enough.
    Any help is much appreciated.
    Thank you.

    You're looking for a PDF Parser or PDF miner tool (PDFminer) as a starting framework, and you'll almost certainly be writing custom code around that as parsing a text file that's effectively free-form and originating from multiple different sources almost always (always?) involves writing customized processing code and an on-going series of tweaks as the suppliers of the PDF change their ticket formats.  (Even apparently-simple details such as the time and date formats, for instance, can vary by geography and language and by supplier, and can derail common processing.)
    In some cases that I can envision, it'd be entirely possible that the data you're after is actually located in an embedded image and not in text that can be parsed.
    The best approach is to get folks to send you JSON or XML or some other format intended for interchange, and avoid the whole mess that is parsing or mining a printer-oriented format.
    The other obvious option is to use something like Amazon's Mechanical Turk or some other explicitly outsourced help.  Depending on how often the formats change and how many of these PDF files you're dealing with and how varied the formats are, sometimes throwing staff at the problem can be the most cost-effective approach.
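    As a starting point for the parser route mentioned above, the skeleton tends to look something like the Python sketch below (pdfminer.six is assumed for the text extraction; the regexes are placeholders you would tune per supplier, and none of this handles tickets where the data is an embedded image rather than text):

    #!/usr/bin/env python3
    # Skeleton only: extract text from each PDF, pull out date/seat with
    # per-supplier regexes (placeholders here), and rename the file accordingly.
    import re
    from pathlib import Path
    from pdfminer.high_level import extract_text

    date_re = re.compile(r"(January|February|March|April|May|June|July|August|"
                         r"September|October|November|December) \d{1,2},? \d{4}")
    seat_re = re.compile(r"Section\s+(\d+)\s+Row\s+(\d+)\s+Seat\s+(\d+)", re.I)

    for pdf in Path("~/Downloads/tickets").expanduser().glob("*.pdf"):
        text = extract_text(str(pdf))
        date = date_re.search(text)
        seat = seat_re.search(text)
        if date and seat:
            new_name = f"{date.group(0)} {' '.join(seat.groups())}.pdf".replace(",", "")
            pdf.rename(pdf.with_name(new_name))      # event-name extraction left out here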

  • Why does a mail-merge in MS Publisher split into multiple files when printed as a pdf file?

    Why does a mail-merge in MS Publisher split into multiple files when printed as a pdf file?

    'Cuz that's what mail merges do.  They create multiple documents based on the parent doc and all the names/addresses in your data tables.
    Perhaps if you describe exactly what you're trying to do & why you need PDF, we can point you to relevant tutorials.
    Nancy O.

  • How can I automatically rename files based on an excel doc?

    I am a scientist and receive data files from some of my experiments with arbitrary, computer-generated names. For example, if I perform an experiment with four samples, which I name Sample1, Sample2, Sample3, and Sample4, I get back 4 data files named J30935D05.ab1, J30935E05.ab1, J30935F05.ab1 and J30935G05.ab1, along with an Excel doc that lists my names for the samples (Sample1, Sample2, etc.) in one column and the computer-generated names of their corresponding files (J30935D05.ab1, J30935E05.ab1, etc.) next to them in another. Therefore, I must open the Excel file and look up which file corresponds to which sample before I can begin processing my data. Usually these experiments involve a large enough number of samples (70-100 or so) that looking everything up in the Excel doc gets very tedious and is quite time-consuming.
    Is there any way to create an Automator workflow, AppleScript, or some other solution to rename these files based on the Excel doc? To clarify, I would like an Automator workflow that would take a folder of arbitrarily named files, look up the names I have for the samples in the Excel doc, and rename the files accordingly. In the example, my folder containing files J30935D05.ab1, J30935E05.ab1, J30935F05.ab1 and J30935G05.ab1 would be turned into a folder containing the files renamed as Sample1.ab1, Sample2.ab1, Sample3.ab1, and Sample4.ab1.
    This example is a bit simplified, however, and a simple trick of just systematically renaming files within the folder would not work -- both the original and the new file names must be looked up in the Excel doc, as these change dramatically from experiment to experiment. I would also need to keep the file extension on each file after it is renamed. Any help would be greatly appreciated!!

    Try the freeware utility Renamer4Mac (VersionTracker or MacUpdate).
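    If Renamer4Mac doesn't pan out, the lookup-and-rename itself is only a few lines of scripting. Here is a rough Python sketch that assumes the Excel sheet has been exported to a two-column CSV (sample name, generated file name); the folder and file names are placeholders:

    #!/usr/bin/env python3
    # Sketch: rename sequencer output files according to a two-column mapping
    # exported from the Excel doc as mapping.csv; the original extension is kept.
    import csv
    from pathlib import Path

    folder = Path("~/Desktop/run1").expanduser()
    with open(folder / "mapping.csv", newline="") as fh:
        for row in csv.reader(fh):
            sample, generated = row[0].strip(), row[1].strip()
            src = folder / generated                 # e.g. J30935D05.ab1
            if src.exists():
                src.rename(folder / (sample + src.suffix))   # -> Sample1.ab1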

  • Saved file doesn't match original text

    Using LV 2010. I have an application where I'm encrypting a file and saving it to the hard drive as a text file. The problem I'm having is that the string of characters that I'm writing to the file does not match the string of characters that are read from the file. The files match for the first 10 characters; then, in the original string, there is a \r character that is dropped from the file. The rest of the file doesn't match too well after that. Also, the sizes of the strings differ by a few thousand characters. Is there some setting in the file read or write that could be causing this? Thanks.
    Solved!
    Go to Solution.

    Here's a quick example (LabVIEW 2010). Result is always true in my limited testing.
    Here's the string IO version:
    (It is a bit more complicated when using the binary file IO:
    (1) wire false to prepend array or string size when writing.
    (2) wire -1 to count when reading.
    This code is not attached, just a picture)
    LabVIEW Champion. Do more with less code and in less time.
    Attachments:
    ValidateFileIO.png (5 KB)
    ValidateFileIO.vi (7 KB)
    ValidateFileIOBinary.png (5 KB)

  • Splitting an input file using Transformation agent

    Hi, I am trying to use the transformation agent (Adobe Output Central Pro v5.5) to take an input file and split it into many output files based on a text string in the file.
    However, instead of splitting the file, each output file contains all the input records up to the next file boundary. What I need is just the records between the file boundaries.
    Any ideas what I have got wrong, or could this be a feature of the Trans Agent?
    my TDF file is;
    O " N 1500 N N N N O
    F "^$PAGE 1" 1 8 -5
    E data1 * "" 1 0 * 1 60 0 0 ""
    #startscript Head
    ^job invoice_arch.pdf
    #endscript
    #startscript *
    #for data1
    @data1
    #endfor
    #endscript
    thanks in advance
    Stephen

    Hi there,
    If you are still interested in splitting a data file, please drop me an email and I will forward a document that I wrote to achieve this.
    Andrew Purdy
    Enterprise Solutions
    [email protected]
