Reading a comma delimited data stream

I need a clean, preferably simple way to parse a comma delimited string that enters a local variable once a minute. The variable buffer needs to be cleared before the new string comes in. If you can help and have more questions about this, I can send a detailed explanation. Thanks in advance.

I placed my code inside a loop and have attached it. The data in the array is replaced every time the function is called. You can see where I also added a step to reset the string. I am not clear on what you mean by "clear what is not overwritten". Do you mean that the string length changes and you want to display only that amount of data? This example should do that; I added a random string length to show that behavior.
I feel like I am missing something about what you want. Do you have an example that shows what you are doing, where you can point to what you want to be different?
Attachments:
comma_delimetd.vi 37 KB
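(For anyone reading without LabVIEW: the parse-and-reset cycle described above looks roughly like the following Java sketch. The incoming string and field layout are made up for illustration.)

import java.util.Arrays;

public class CommaStreamExample
{
    public static void main(String[] args)
    {
        // Pretend this string arrives once a minute from the data source
        String incoming = "12.5,13.1,9.8,22.0";

        StringBuilder buffer = new StringBuilder();
        for (int minute = 0; minute < 3; minute++)
        {
            buffer.setLength(0);        // clear the buffer before the new string comes in
            buffer.append(incoming);    // receive the new string
            String[] fields = buffer.toString().split(",");
            System.out.println(Arrays.toString(fields));
        }
    }
}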

Similar Messages

  • Problems Reading SSL server socket data stream using readByte()

    Hi, I'm trying to read an SSL server socket stream using readByte(). I need to use readByte() because my program acts as an LDAP proxy (it receives LDAP messages from an LDAP client, then passes them on to an actual LDAP server). It works fine with normal LDAP data streams, but once an SSL data stream is introduced, readByte() just hangs! Here is my code...
    help!!! anyone?... anyone?
    1. The SSL socket is first read into "InputStream input":
    public void run()
    {
        Authorization auth = new Authorization();
        try
        {
            InputStream input = client.getInputStream();
            while (true)
            {
                StandLdapCommand command;
                try
                {
                    command = new StandLdapCommand(input);
                    Authorization t = command.get_auth();
                    if (t != null)
                        auth = t;
                }
                catch (SocketException e)
                {
                    // If socket error, drop the connection
                    Message.Info("Client connection closed: " + e);
                    close(e);
                    break;
                }
                catch (EOFException e)
                {
                    // If end of stream, drop the connection
                    Message.Info("Client connection closed: " + e);
                    close(e);
                    break;
                }
                catch (Exception e)
                {
                    // Way too many of these to trace them!
                    Message.Error("Command not processed due to exception");
                    close(e);
                    break;
                    // continue;
                }
                processor.processBefore(auth, command);
                try
                {
                    Thread.sleep(40); // yield to other threads
                }
                catch (InterruptedException ie) {}
            }
        }
        catch (Exception e)
        {
            close(e);
        }
    }
    2. The data is then sent to an intermediate function, from this statement in the function above: command = new StandLdapCommand(input);
    public StandLdapCommand(InputStream in) throws IOException
    {
        message = LDAPMessage.receive(in);
        analyze();
    }
    Then finally, the read function, where it hangs at "int tag = (int)din.readByte();":
    public static LDAPMessage receive(InputStream is) throws IOException
    {
        /*  LDAP Message Format =
         *      1.  LBER_SEQUENCE                  --  1 byte
         *      2.  Length                         --  variable length = 3 + 4 + 5 ...
         *      3.  ID                             --  variable length
         *      4.  LDAP_REQ_msg                   --  1 byte
         *      5.  Message specific structure     --  variable length
         */
        DataInputStream din = new DataInputStream(is);
        int tag = (int)din.readByte();      // sequence tag
        ...

    I suspect you are actually getting an exception, not tracing the cause properly, then doing a sleep and getting another exception. Never catch an exception without tracing what it actually is somewhere.
    Also, I don't know what the sleep is supposed to be for. You will block in readByte() until something comes in, and that should be enough yielding for anybody. The sleep is literally just a waste of time.
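    (For illustration, a minimal sketch of the kind of tracing meant here; the read loop is hypothetical, the point is that every catch records what was actually thrown.)

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    public class TracingExample
    {
        // A hypothetical read loop: every catch traces the actual exception
        static void readLoop(InputStream in)
        {
            DataInputStream din = new DataInputStream(in);
            try
            {
                while (true)
                {
                    byte b = din.readByte(); // throws EOFException at end of stream
                    // process b here ...
                }
            }
            catch (EOFException e)
            {
                System.err.println("End of stream: " + e); // trace it, don't just drop it
            }
            catch (IOException e)
            {
                e.printStackTrace(); // now you can see what actually went wrong
            }
        }
    }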

  • Convert xls to comma delimited

    hi all...
    I have an xls file which needs to be converted to a comma delimited .dat file.
    Can somebody help me with this?
    Thanks,
    I need it done programmatically please!

    Check the program below:
    Upload the xls file and you will get a .txt file (comma delimited).
    Input (XLS file):
    aaa,1245,2344,233,qwww
    233,2222,qwww,www,www
    REPORT ZFII_MISSING_FILE_UPLOAD no standard page heading.
    data : begin of i_text occurs 0,
           text(1024) type c,
           end of i_text.
    * Internal table for file data
    data : begin of i_data occurs 0,
           field1(10) type c,
           field2(10) type c,
           field3(10) type c,
           field4(10) type c,
           field5(10) type c,
           end of i_data.
    data : begin of i_download occurs 0,
           text(1024) type c,
           end of i_download.
    data : v_lines type sy-index.
    data : g_repid like sy-repid.
    data v_file type string.
    data: itab like alsmex_tabline occurs 0 with header line.
    data : g_line like sy-index,
           g_line1 like sy-index,
           $v_start_col         type i value '1',
           $v_start_row         type i value '1',
           $v_end_col           type i value '256',
           $v_end_row           type i value '65536',
           gd_currentrow type i.
    selection-screen : begin of block blk with frame title text.
    parameters : p_file like rlgrap-filename obligatory.
    selection-screen : end of block blk.
    initialization.
      g_repid = sy-repid.
    at selection-screen on value-request for p_file.
      CALL FUNCTION 'F4_FILENAME'
           EXPORTING
                PROGRAM_NAME = g_repid
           IMPORTING
                FILE_NAME    = p_file.
    start-of-selection.
    * Upload the data into an internal table
      perform upload_data.
    * Download the data into a comma delimited file
      perform download_data.
    *&      Form  upload_data
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM upload_data.
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
           EXPORTING
                FILENAME                = p_file
                I_BEGIN_COL             = $v_start_col
                I_BEGIN_ROW             = $v_start_row
                I_END_COL               = $v_end_col
                I_END_ROW               = $v_end_row
           TABLES
                INTERN                  = itab
           EXCEPTIONS
                INCONSISTENT_PARAMETERS = 1
                UPLOAD_OLE              = 2
                OTHERS                  = 3.
      IF SY-SUBRC <> 0.
        write:/10 'File '.
      ENDIF.
      if sy-subrc eq 0.
        read table itab index 1.
        gd_currentrow = itab-row.
        loop at itab.
          if itab-row ne gd_currentrow.
            append i_data.
            clear i_data.
            gd_currentrow = itab-row.
          endif.
          case itab-col.
        when '0001'.
    * first field
          i_data-field1 = itab-value.
        when '0002'.
    * second field
          i_data-field2 = itab-value.
        when '0003'.
    * third field
          i_data-field3 = itab-value.
        when '0004'.
    * fourth field
          i_data-field4 = itab-value.
        when '0005'.
    * fifth field
          i_data-field5 = itab-value.
          endcase.
        endloop.
      endif.
      append i_data.
    ENDFORM.                    " upload_data
    *&      Form  download_data
    *       text
    *  -->  p1        text
    *  <--  p2        text
    FORM download_data.
      loop at i_data.
        concatenate i_data-field1 ',' i_data-field2 ',' i_data-field3 ','
                    i_data-field4 ',' i_data-field5 into i_download-text.
        append i_download.
        clear : i_download,
                i_data.
      endloop.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
    *     BIN_FILESIZE                  =
          FILENAME                      =
            'C:\Documents and Settings\smaramreddy\Desktop\fff.txt'
          FILETYPE                      = 'ASC'
    *     APPEND                        = ' '
    *     WRITE_FIELD_SEPARATOR         = ' '
    *     HEADER                        = '00'
    *     TRUNC_TRAILING_BLANKS         = ' '
    *     WRITE_LF                      = 'X'
    *     COL_SELECT                    = ' '
    *     COL_SELECT_MASK               = ' '
    *     DAT_MODE                      = ' '
    *   IMPORTING
    *     FILELENGTH                    =
        TABLES
          DATA_TAB                      = i_download
        EXCEPTIONS
          FILE_WRITE_ERROR              = 1
          NO_BATCH                      = 2
          GUI_REFUSE_FILETRANSFER       = 3
          INVALID_TYPE                  = 4
          NO_AUTHORITY                  = 5
          UNKNOWN_ERROR                 = 6
          HEADER_NOT_ALLOWED            = 7
          SEPARATOR_NOT_ALLOWED         = 8
          FILESIZE_NOT_ALLOWED          = 9
          HEADER_TOO_LONG               = 10
          DP_ERROR_CREATE               = 11
          DP_ERROR_SEND                 = 12
          DP_ERROR_WRITE                = 13
          UNKNOWN_DP_ERROR              = 14
          ACCESS_DENIED                 = 15
          DP_OUT_OF_MEMORY              = 16
          DISK_FULL                     = 17
          DP_TIMEOUT                    = 18
          FILE_NOT_FOUND                = 19
          DATAPROVIDER_EXCEPTION        = 20
          CONTROL_FLUSH_ERROR           = 21
          OTHERS                        = 22.
      IF SY-SUBRC <> 0.
        MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
                WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
    ENDFORM.                    " download_data
    Thanks
    Seshu

  • How do you separate UDP data streams

    Hi. I'm currently using LabVIEW 2013 to read in a UDP data stream (packet) that contains various parameters. I am successfully receiving the UDP packet using the UDP Open and UDP Read VIs. I run the data out of UDP Read into the "Unflatten From String" function, and for the type I have created a cluster of 32-bit floating point (single precision in LabVIEW) numeric constants, since that is the type of data I am receiving. To separate the different parameters coming in, I have created a number of numeric constant blocks equal to the number of parameters. Is there an easier way to separate the data from the UDP data out, so that I don't have to create so many numeric constants to represent the number of data items I have coming in? Sometimes we will have way more than what is listed below.
    Any help will be appreciated. Thank you in advance.
    Attachments:
    Example layout (Not actual).vi 11 KB

    If all the data is the same type, then you can wire an array as the datatype instead of a cluster. Set the "data includes array or string size" input to False (the default is True). The output will be an array, and you can index out the values you want.
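    (Outside LabVIEW, the same idea of treating the raw packet as a flat array of 32-bit floats rather than a fixed cluster looks roughly like this Java sketch; big-endian byte order is assumed here, which is what LabVIEW uses by default when flattening.)

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class UnflattenExample
    {
        // Interpret a raw UDP payload as an array of 32-bit floats
        static float[] unflatten(byte[] packet)
        {
            ByteBuffer buf = ByteBuffer.wrap(packet).order(ByteOrder.BIG_ENDIAN);
            float[] values = new float[packet.length / 4];
            for (int i = 0; i < values.length; i++)
            {
                values[i] = buf.getFloat(); // reads 4 bytes per parameter
            }
            return values; // index out whichever parameters you need
        }
    }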

  • Reading comma delimited files

    Hi,
    What are the FMs that I can use to read comma delimited files? Also, will it be able to run in the background?
    Thanks,
    RT

    Hi Rob,
    As far as I know, we cannot upload data from the presentation server in the background. For that, the file needs to be placed on the application server and read with the OPEN DATASET command.
    Below is just an example to help you get a feel for it.
    eg:
    types: begin of t_data,
            vbeln like vbak-vbeln,
            posnr like vbap-posnr,
            matnr like vbap-matnr,
            menge like vbap-menge,
          end of t_data.
    data: it_data type standard table of t_data.
    data: wa_data type t_data.
    data: l_content(100) type c.
      open dataset p_file for input in text mode.
      if sy-subrc ne 0.
    *** error reading file.
      else.
        do.
          read dataset p_file into l_content.
          if sy-subrc ne 0.
             close dataset p_file.
             exit.
          else.
             split l_content at ',' into wa_data-vbeln
               wa_data-posnr wa_data-matnr wa_data-menge.
             append wa_data to it_data.        
             endif.
        enddo.
      endif.
    Hope this helps you.
    Kind Regards
    Eswar

  • How to read and parse a comma delimited file? Help

    Hi, does anyone know how to read and parse a comma delimited file?
    Should I use StreamTokenizer, StringTokenizer, or the oro RegEx packages?
    What is the best?
    I have a file that has several lines of data that is double-quoted and comma delimited like:
    "asdfadsf", "asdfasdfasfd", "asdfasdfasdf", "asdfasdf"
    "asdfadsf", "asdfasdfasfd", "asdfasdfasdf", "asdfasdf"
    Any help would be greatly appreciated.
    thanks,
    Spack

    import java.util.*;
    import java.io.*;

    public class ResourcePortalParser
    {
        public ResourcePortalParser()
        {
        }

        public Vector tokenize() throws IOException
        {
            File reportFile = new File("C:\\Together5.5\\myprojects\\untitled2\\accessFile.txt");
            Vector tokenVector = new Vector();
            StreamTokenizer tokenized = new StreamTokenizer(new FileReader(reportFile));
            tokenized.eolIsSignificant(true);
            while (tokenized.nextToken() != StreamTokenizer.TT_EOF)
            {
                switch (tokenized.ttype)
                {
                    case StreamTokenizer.TT_WORD:
                    case '"': // double-quoted fields come back with ttype == '"', not TT_WORD
                        System.out.println("Adding token - " + tokenized.sval);
                        tokenVector.addElement(tokenized.sval);
                        break;
                }
            }
            return tokenVector;
        }
    }
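    If you'd rather not use StreamTokenizer, a plain split also handles simple quoted lines like the ones shown. A minimal sketch (the regex splits on commas that sit outside double quotes, assuming the quotes are always balanced):

    import java.util.Arrays;

    public class CsvSplitExample
    {
        public static void main(String[] args)
        {
            String line = "\"asdfadsf\", \"asdfasdfasfd\", \"asdfasdfasdf\", \"asdfasdf\"";
            // Split on commas followed by an even number of remaining quotes,
            // i.e. commas that are not inside a quoted field
            String[] fields = line.split(",(?=(?:[^\"]*\"[^\"]*\")*[^\"]*$)");
            for (int i = 0; i < fields.length; i++)
            {
                // Trim whitespace and strip the surrounding quotes
                fields[i] = fields[i].trim().replaceAll("^\"|\"$", "");
            }
            System.out.println(Arrays.toString(fields));
        }
    }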

  • What does the Merge Data Stream processor do when there are multiple input streams to it from the same reader?

    Hi,
    I have a process with a reader of master data that outputs 5 records that feeds simultaneously into 3 different lookup and return processors.
    Each lookup and return processor brings some data back from a detail table. There can be multiple details so I follow each lookup processor with a split records from array processor. Hence I end up with 3 'streams' of data. Stream 1 has 8 records, stream 2 has 5 records and stream 3 has 6 records.
    I join all these streams to a Merge Data Streams processor.
    I end up with 9 records, so although the help for the Merge Data Streams processor says 'Merge Data Streams does not perform any transformation, matching, or merging of records', there is clearly some merging going on.
    What is the behaviour of the merge data streams processor in this scenario?
    I have added attributes and flags into each of the streams. How many records should I see and what values should the added attributes/flags have (some records show attributes/flags from all 3 streams whereas others show just those attributes/flags from one stream).
    I have developed a test case simply to understand what the processor is doing but it's not obvious and furthermore it's probably unwise to develop EDQ processes where the processor behaviour is not documented and guaranteed to remain consistent. What I am trying to achieve is to bring all of a person's (the master data) various details (assignments, employers, etc.) together so we can check the data (some rules require data from multiple details).
    Thanks, Nik

    Cheers Mike - and for the explanation of the terms.
    I think I understand now how it's supposed to work.
    What I'm finding, however, is that when I set a flag to Y at the beginning of a path (one that includes a lookup and return and then a split records from array processor), that flag shows no value (i.e. an empty value) in SOME of the records shown in the subsequent MDS processor (it's fine in the very last split processor before we get to the MDS, but then again there are fewer records in that split processor than in the MDS).
    In my case there are obviously more records in the MDS processor than there were in the original reader (because the lookup and returns are configured to have unlimited maximum matches). As mentioned, the different paths return different numbers of records before being combined in the MDS. Say a reader has 5 records and path 1 returns 8 records in total, including a path-specific flag (flag1, set to Y), but path 2 (which again adds its own path-specific flag, flag2, set to Y) returns just 5 records (since nothing was added from the lookups): are you saying that flag2 would show as 'Y' for all 8 records shown in the MDS?
    Hopefully you would be able to see what I mean if you try to create a process like the one I've described (or I can upload a package).
    Re. the purpose of the separate paths approach it is simply to allow the visualisation ('showing the working' as Neil puts it) of the different checks being carried out by the process.
    This is considered one of the benefits of the tool over writing SQL queries (with outer joins, query criteria, etc.).
    Also, as mentioned I was following an example that Neil put together for us to ensure that we are doing things in a 'proper' and supported way.
    If we put all the lookups, etc. for all the checks into one datastream then it no longer becomes so understandable and the value of joining processors in a process over simply writing SQL becomes questionable; arguably the EDQ process in fact becomes less easy to understand than simply writing SQL.
    Also, to go down this route I will need to revise the (what was previously substantially working until I revised it) processes that I have already developed.
    Thanks, Nik

  • Create a continuous data stream from C++, and read it in LabView

    Hello all.
    I'm working on a project which involves connecting to a motion tracker and reading position and orientation data from it in real time. The code to get the data is in C++, so I decided that the best way to do this would be to create a C++ DLL file which contains all the necessary functions to connect to the device and read the data from it, and use the Call Library Function node to feed this data into LabVIEW.
    I'm having trouble, though, since ideally I would like a continuous stream of data from the C++ code into LabVIEW, and I'm not sure how to achieve this. Putting the Call Library Function node in a while loop seems like an obvious solution, but if I do it this way I would have to reconnect to the device every time I get the data, which is quite a bit too slow.
    So my question is: if I created a C++ function which created a data stream, could I read this into LabVIEW without having to continually call a function? I'd prefer to only have to call a function once, and then read the data stream until a stop command is given.
    I'm using LabVIEW 2010, version 10.0.
    Apologies if the question is poorly phrased; many thanks for your help.
    Dave
    Solved!
    Go to Solution.

    dr8086 wrote:
    This method sounds like an excellent suggestion, but I do have a few questions where I don't think I've understood fully.
    From what I understand, the basic premise is to use one Call Library Function node to access a DLL which creates an instance of the device object and passes a pointer to it into LabVIEW. Then a separate Call Library Function node would pass this pointer to another DLL function which could access the device object, update it and read the data. This part could be in a while loop and carry on reading the data until a stop command is given.
    That's it. I'm including some skeleton code as an example. I'm also including the code because I don't know how much experience you have with multithreading, so I'm showing how you'd have to use critical sections to guard the interactions between threads so that they don't lead to issues.
    // exported functions to access the device
    // (assumes #include <windows.h> and <cstdint>)
    extern "C" __declspec(dllexport) int __stdcall init(uintptr_t *ptrOut)
    {
        *ptrOut = (uintptr_t)new CDevice();
        return 0;
    }

    extern "C" __declspec(dllexport) int __stdcall get_data(uintptr_t ptr, double vals[], int size)
    {
        return ((CDevice*)ptr)->get_data(vals, size);
    }

    extern "C" __declspec(dllexport) int __stdcall close(uintptr_t ptr, double last_vals[], int size)
    {
        int r = ((CDevice*)ptr)->close();
        ((CDevice*)ptr)->get_data(last_vals, size);
        delete (CDevice*)ptr;
        return r;
    }

    // h file
    // Represents a device
    class CDevice
    {
    public:
        virtual ~CDevice();
        int init();
        int get_data(double vals[], int size);
        int close();
        // only called by the new thread
        int ThreadProc();
    private:
        CRITICAL_SECTION rBufferSafe; // Needed for thread safety
        vhtTrackerEmulator *tracker;
        HANDLE hThread;
        double buffer[500];
        int buffer_used;
        bool done; // this HAS to be protected by the critical section since 2 threads access it. Use a get/set method with critical sections inside
    };

    // cpp file
    DWORD WINAPI DeviceProc(LPVOID lpParam)
    {
        ((CDevice*)lpParam)->ThreadProc(); // Call the function to do the work
        return 0;
    }

    CDevice::~CDevice()
    {
        DeleteCriticalSection(&rBufferSafe);
    }

    int CDevice::init()
    {
        tracker = new vhtTrackerEmulator();
        InitializeCriticalSection(&rBufferSafe);
        buffer_used = 0;
        done = false;
        hThread = CreateThread(NULL, 0, DeviceProc, this, 0, NULL); // this thread will now be saving data to an internal buffer
        return 0;
    }

    int CDevice::get_data(double vals[], int size)
    {
        int len;
        EnterCriticalSection(&rBufferSafe);
        if (vals)
        {
            // copy out as much as fits (sizes are in doubles, memcpy wants bytes)
            len = min(size, buffer_used);
            memcpy(vals, buffer, len * sizeof(double));
            buffer_used = 0; // Whatever wasn't read is erased
        }
        else // just return the buffer size
        {
            len = buffer_used;
        }
        LeaveCriticalSection(&rBufferSafe);
        return len;
    }

    int CDevice::close()
    {
        done = true;
        WaitForSingleObject(hThread, INFINITE); // handle timeouts etc.
        delete tracker;
        tracker = NULL;
        return 0;
    }

    int CDevice::ThreadProc()
    {
        while (!done)
        {
            tracker->update();
            EnterCriticalSection(&rBufferSafe);
            if (buffer_used < 500)
                buffer[buffer_used++] = tracker->getRawData(0);
            LeaveCriticalSection(&rBufferSafe);
            Sleep(100);
        }
        return 0;
    }
    dr8086 wrote:
    My main concern is that the object may go out of memory or be deallocated, since it wouldn't be held in any namespace or anything.
    Since you create the object with new, the object won't expire until either the DLL is unloaded or the process (LabVIEW) closes. So the object will stay valid between DLL calls, provided LabVIEW didn't unload the DLL (which it does if the VIs are closed). When that happens, I'm not exactly sure what happens to live objects (i.e. if you forgot to call close); I imagine the system reclaims the memory, but the device might still be open.
    What I do to make sure that everything gets closed when the DLL unloads before I could call close and delete the object: every time I create a new object in the DLL I add it to a list, and when the DLL unloads, if the object is still on the list, I delete it.
    dr8086 wrote:
    I also have a more general programming question about the purpose of the buffer. Would the buffer basically be a big table of position values, which are stored until they can be read into the rest of the VI? 
    Yes, see the example code.
    However, depending on the frequency with which you need to collect data from the device, you might not need this buffer at all. I.e. if you collect a sample about every 100 ms, then you could remove all the threading and buffer related functions and instead read the data in the read function itself, like this:
    double CDevice::get_data()
    {
        tracker->update();
        return tracker->getRawData(0);
    }
    You'd only need a buffer and a separate thread if you collect data at a high frequency and cannot afford to lose any data.
    Matt

  • I have Acrobat Reader; can I import text delimited data into a PDF form so that it can auto-fill the forms that were created? If not, what about FDF and XML data?

    I have Acrobat Reader. Can I import text delimited data into a PDF form so that it can auto-fill the forms that were created? If not, what about FDF and XML data?

    Yes, you can do all of that via Tools - Forms - More Form Options - Import Data, if you have Acrobat.
    If you only have the free Reader then you can still do it, but it requires a script.

  • Passing data to different internal tables with different columns from a comma delimited file

    Hi,
    I have a program wherein we upload a comma delimited file and, based on the region (we have a drop down in the selection screen to pick the region), the data from the file is passed to an internal table. For region A we have 10 columns and for region B we have 9 columns.
    There is a split statement (split at comma) used to break the data into different columns.
    I need to add hard error messages if the no. of columns in the uploaded file is incorrect. For example, if the uploaded file is of type region A, then the uploaded file should split into 10 columns. If the file contains fewer or more columns, then an error message should be added. Similar is the case with region B.
    I do not want to remove the existing split statement (existing code). Is there a way I can pass the data into the internal table accurately? I have gone through some posts wherein they have made use of the method cl_alv_table_create=>create_dynamic_table by passing the field catalog, but I cannot use this as I have two different internal tables to be populated based on the region. Appreciate help on this.
    Thanks,
    Pavan

    Hi Abhishek,
    I have no issues with the rows. I have a file with a format like a1,b1,c1,d1,e1; the file should be uploaded and split at the commas. So far it's fine. After this, if the file is related to region A, say Asia, then it should have 5 fields (as an example). So all 5 values a1,b1..e1 will be passed to the 5 fields of itab1.
    I also have region B (say Europe) whose file will have only 4 fields. So the file is of the form a2,b2,c2,d2. Again, the data is split at the commas and passed to itab2.
    If someone loads a file related to Asia and the file has only 4 fields, then the data would be incorrect. Similar is the case when someone tries to load a Europe file with 5 fields of data. To avoid this, I want to validate the uploaded data. For this, I want to count the no. of fields (separated by commas). If the no. of fields is 5 then the file is related to Asia, or if the no. of fields is 4 then it is a Europe file.
    Well, the no. of commas is nothing but the no. of fields - 1. If the file is of the form a1,b1..e1 then I can say: if the no. of commas = 4 then it is an Asia file. But I am not sure how to write the code for this. Please advise (see the sketch below).
    Thanks,
    Pavan
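    (One way to express the check described above is to count the fields before splitting into the internal table; in ABAP the count can come from something like FIND ALL OCCURRENCES OF ',' IN the line with MATCH COUNT. Below is a minimal Java sketch of the same idea, with made-up field counts.)

    public class FieldCountCheck
    {
        // True when the line splits into exactly the expected number of fields
        static boolean hasFieldCount(String line, int expected)
        {
            // limit -1 keeps trailing empty fields in the count
            return line.split(",", -1).length == expected;
        }

        public static void main(String[] args)
        {
            String asiaLine = "a1,b1,c1,d1,e1";  // Asia files carry 5 fields (example)
            String europeLine = "a2,b2,c2,d2";   // Europe files carry 4 fields (example)
            System.out.println(hasFieldCount(asiaLine, 5));   // true
            System.out.println(hasFieldCount(europeLine, 5)); // false -> raise the hard error
        }
    }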

  • SQL to match a single value in a field with comma-delimited text

    I have a column that can contain none, one or many recordIDs
    (from another table) stored as comma-delimited text strings (i.e.,
    a list). I need to retrieve all records that match a single value
    within that list.
    For example, if I want to match all values that equal
    recordID 3, and I use ... WHERE MyColumn IN ('3') ... , I will get
    all records that have EXACTLY 3 as the value of MyColumn, but not
    any MyColumn records whose values include 3, if they are instances
    such as "3,17" or the like.
    Also using the LIKE operator -- as WHERE MyColumn LIKE '%3%'
    -- will get me unwanted records with values such as 35 or 13 ...
    Can I use some sort of intervening ColdFusion list processing
    to output only the desired records?

    Normalize your database so that your data becomes
    accessible.

  • SSIS importing comma delimited with double quote text qualifier - Works in VS - SQL Job ignores text qualifier and fails (truncation)

    I've created an SSIS package to import a comma delimited file (csv) with double quotes for a text qualifier ("). Some of the fields contain the delimiter inside the qualified text. An example row is:
    15,"Doe, John",IS2,Alabama
    In SSIS I've specified the text qualifier as ". The sample output in the connection manager looks great. The package runs perfectly from VS and when manually executed on the SSIS server itself. The problem comes when I schedule the package to run via SQL job. At this point the package ignores the text qualifier, and in doing so pushes half of a field into the next available column. But instead of having too many columns, it concatenates the last 2 columns, ignoring the delimiter. For example (pipes are fields):
    15|"Doe| John"|IS2,Alabama
    So the failure happens when the last half of a field is too large to fit into the next available field. In the case above, _John" is 6 characters, where the IS2 field is char(3). This causes a truncation failure, which is the error I receive from the job history.
    To further test this I created a failure flow in my data flow to capture the records failing to be pulled from the source csv. Out of ~5000 records, ~1200 are failing, and every one of the failures has a comma delimiter inside the quoted text with a 'split' length greater than the next ordinal field. The ones without the comma were inserted as normal, and records where the split fit into the next ordinal column were also imported, with the last field being concatenated w/delimiter. Example records when selected from table:
    25|"Allan Percone Trucking"|GI6|California --Imported as intended
    36|"Renolds| J."|UI6,Colorado --Field position offset by 1 to right - Last field overloaded
    To further ensure this is the problem, I changed the csv file and flat file connection manager to pipe delimited, and sure enough every record makes it in perfectly from the SQL job execution.
    I've tried comma delimited files on the following setups. Each setup failed.
    1. SSIS Server 2008 R2 RTM / DB Server 2008 RTM / DB Compat 80
    2. SSIS Server 2008 R2 RTM / DB Server 2008 R2 RTM / DB Compat 80
    3. SSIS Server 2008 R2 RTM / DB Server 2008 RTM / DB Compat 100
    4. SSIS Server 2008 R2 RTM / DB Server 2008 R2 RTM / DB Compat 100
    Since a lot of our data comes in via comma delimited flat files, I really need a fix for this. If not, I'll have to rebuild all my files when I import them to use pipe delimiters instead of commas. I'd like to avoid the extra processing overhead if possible.
    Also, is this a known bug? If so can someone point me to the posting of said bug?
    Edit: I can't ask the vendor to change the format to pipe delimited because it comes from a federal government site.

    Just wanted to share my experience of this for anyone else since I wasted a morning on it today.
    I encountered the same problem where I could run the package fine on my machine, but when I deployed to a server and ran the package via dtexec, the " delimiter was being replaced with _x0022_ and the columns were all slurped up together, overflowing columns/truncating data, etc.
    Since I didn't want to manually hack the DTS XML and can't install updates, the solution I used was to set an expression on the text delimiter field of the flat file connection with the value "\"" (a double quote). That way, even if the one stored in the XML gets bodged somewhere along the way, it is overridden with the correct value at run time. The package works fine everywhere now.

  • Comma delimited file in application server.

    How do I extract a comma delimited file from an internal table to the SAP application server?

    Hi Arun,
    Can you be a bit clearer? Are you uploading the data to the application server or downloading it?
    1) Comma separated info from itab to file server.
    loop at itab.
    concatenate itab-field1
                itab-field2
                 itab-field3
    into itab_new-data separated by ','.
    append itab_new.
    clear itab_new.
    endloop. 
    open dataset dsn for output in text mode.
    if sy-subrc = 0.
    loop at itab_new.
    transfer itab_new to dsn.
    endloop.
    endif.
    close dataset.
    2) Comma separated info from file to itab.
    open dataset dsn for input in text mode.
    if sy-subrc = 0.
    do.
    read dataset dsn into itab-data.
    if sy-subrc <> 0.
    exit.
    else.
    append itab.
    clear itab.
    endif.
    enddo.
    endif.
    close dataset.
    loop at itab.
    split itab-data at ',' into  itab_new-field1
                itab_new-field2
                 itab_new-field3.
    append itab_new.
    clear itab_new.
    endloop. 
    REgards,
    Ravi

  • Wrapping A SqlBulkCopy Class To Export A Proper Comma Delimited Table

    Hello!
    If anyone can point me in the right direction it would be greatly appreciated!
    In my search, it seems SQL Server cannot export tables to a proper comma delimited format. I have not found an out of the box solution for SQL Server to properly escape commas and double quotes within fields on a record.
    Which brings me here. I'm searching for a way to leverage the SqlBulkCopy class (or an alternative) to quickly export a record's values in an array or list so I can feed them into my custom DelimitedStringBuilder class.
    So far it seems the out of the box SqlBulkCopy class only supports import into other tables and not a text file.
    One thought (I have no idea how I would implement it) would be to attach a DataReader class to the SqlBulkCopy class and then pass the values from the DataReader to my DelimitedStringBuilder class.
    Has anyone been able to write something like this? Or is there a simpler solution?

    Have you tried just using SQL and ExecuteNonQuery? Below is an example:
    Dim TextConnection As New System.Data.OleDb.OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;" & _
    "Data Source=" & "C:\Documents and Settings\...\My Documents\My Database\Text" & ";" & _
    "Extended Properties=""Text;HDR=NO;""")
    TextConnection.Open()
    'New table
    Dim TextCommand As New System.Data.OleDb.OleDbCommand("SELECT * INTO [Orders#csv] FROM [Orders] IN '' [ODBC;Driver={SQL Server Native Client 11.0};Server=.\SQLExpress;Database=Northwind;Trusted_Connection=yes];", TextConnection)
    TextCommand.ExecuteNonQuery()
    TextConnection.Close()
    Paul ~~~~ Microsoft MVP (Visual Basic)

  • Why does my Firefox v6.02 browser always open a *.csv (comma delimited) file in another tab rather than downloading the file? I do not have a right click option to save this file.

    My Firefox v6.02 browser always opens *.csv (comma delimited) files in another tab rather than downloading them, and I do not have a right click option to save the file. My OS is Windows XP SP2.

    Hi mha007,
    Go to this site: http://www.nseindia.com/content/indices/ind_histvalues.htm
    Select any index type, and from the dates select just one day or any number of days. Click the "Get Details" button. This will open tabular data on the loaded page (with a CSV file link at the bottom).
    When I click this link, it opens the CSV file in a new tab (in the Firefox browser) and does not open the download window (unlike other file formats such as *.zip, *.rar or *.xls).
    Right-clicking the link does not give a "save as" or "save file" option.
    Firefox is my preferred and default browser.
    ========
    Surprisingly, when I do the same thing in the IE6 browser, the data-table page shows 2 options below the table: (1) download file, (2) open file.
