Remove commas between double quotes in a comma separated CSV

Hi All,
I have text like this.
,335,9,,"XYZ, V ABC",838300,"Cityname, CA 95610",2167390,N,N,,,
I want to replace the commas between the quotes with semicolons. Please help me
Thanks in advance
Uma

try this (StringUtils here is Apache Commons Lang; single quotes stand in for the double quotes in your file)
String str1 = "335,9,,'XYZ, V ABC',838300,'Cityname, CA 95610',2167390";
String newString = str1;
String[] quoted = StringUtils.substringsBetween(str1, "'", "'");
for (String s1 : quoted) {
    // to turn the commas into semicolons instead of dropping them, replace "" with ";"
    newString = StringUtils.replace(newString, s1, StringUtils.replace(s1, ",", ""));
}
System.out.println(newString);
output : 335,9,,'XYZ V ABC',838300,'Cityname CA 95610',2167390
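If you would rather keep the double quotes from the original line and just turn the embedded commas into semicolons, a minimal sketch using java.util.regex follows; the input is the sample line from the question and the class name is only for illustration.
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class QuoteCommaToSemicolon {
    public static void main(String[] args) {
        String line = ",335,9,,\"XYZ, V ABC\",838300,\"Cityname, CA 95610\",2167390,N,N,,,";
        // find every "..." section and replace the commas inside it with semicolons
        Matcher m = Pattern.compile("\"[^\"]*\"").matcher(line);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(out, Matcher.quoteReplacement(m.group().replace(",", ";")));
        }
        m.appendTail(out);
        System.out.println(out);
        // prints: ,335,9,,"XYZ; V ABC",838300,"Cityname; CA 95610",2167390,N,N,,,
    }
}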

Similar Messages

  • Regular Expression: Extract Subgroup Content between Double Quotes

    I am using java Pattern and Matcher to match the content between the double quotes. But the result is not what I expected.
    Here are some of my code:
    String s="\"C:\abc\"'; //notice the inside double quotes are escaped double quotes.
    Pattern p= Pattern.compile("\"(.+)\"");
    Matcher m=p.matcher(s);
    if(m.matches()){....}
    Result====:
    m.groupCount()=1,
    m.group(0)="C:\abc"
    But what I want is C:\abc instead of "C:\abc". I am also confused about the group number. If group(0) matches the whole string implicitly, group(1) should represent the subgroup (.+) in the above example, right? Thanks.

    Hi,
    You probably have another error in your code.
    Execute this.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    class Test {
        public static void main(String[] args) {
            String s = "\"C:\\abc\"";
            Pattern p = Pattern.compile("\"(.+)\"");
            Matcher m = p.matcher(s);
            if (m.matches()) {
                System.out.println(m.groupCount());
                System.out.println(m.group(0));
                System.out.println(m.group(1));
            }
        }
    }
    It prints
    1
    "C:\abc"
    C:\abc
    /Kaj

  • FTP "PWD" and unescabe double quotes

    HI
    I am using LV8.2.1 and the internet toolkit.
    There is an FTP function called "PWD" (print working directory) which I think has a bug.
    It doesn't put out the working directory.
    It takes the reply text (e.g. Current working directory is "fs") and uses the subVI "Unescape Double Quotes.vi" to remove the quotes, but that subVI needs a string where the whole text is surrounded with double quotes.
    The FTP "PWD" function sends a string like this (e.g. Current working directory is "fs") to the subVI, and that results in an empty string out.
    Take a look at the attached pictures.
    regards Bjarne
    Attachments:
    FTP [PWD]_BD.png ‏5 KB
    Unescape Double Quotes_BD.png ‏11 KB

    Hi Bjarne,
    what FTP server are you using? It seems your FTP server doesn't follow the protocol. The response to a "PWD" should be like this:
    257 "/path/on/server" [is current directory.]
    It should start with the 257 code, followed by the path surrounded by double quotes.
    If the FTP server follows the protocol the PWD VI works just fine.
    As a workaround, you can use the "reply string" output of the PWD VI and use a match pattern to find the portion of the string between double quotes.
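    In LabVIEW that would be the Match Pattern (or Match Regular Expression) VI; purely to illustrate the pattern itself, here is a minimal sketch in Java, assuming a reply text like the one above.
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;
    public class PwdReply {
        public static void main(String[] args) {
            String reply = "Current working directory is \"fs\"";  // assumed example reply
            Matcher m = Pattern.compile("\"([^\"]*)\"").matcher(reply);
            if (m.find()) {
                System.out.println(m.group(1));  // prints: fs
            }
        }
    }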
    Daniel
    Message Edited by dan_u on 03-11-2009 02:34 PM

  • SSIS importing comma delimited with double quote text qualifier - Works in VS - SQL Job ignores text qualifier and fails (truncation)

    I've created an SSIS package to import a comma delimited file (csv) with double quotes for a text qualifier ("). Some of the fields contain the delimiter inside the qualified text. An example row is:
    15,"Doe, John",IS2,Alabama
    In SSIS I've specified the text qualifier as ". The sample output in the connection manager looks great. The package runs perfectly from VS and when manually executed on the SSIS server itself. The problem comes when I schedule the package to run via SQL
    job. At this point the package ignores the text qualifier, and in doing so pushes half of a field into the next available column. But instead of having too many columns, it concatenates the last 2 columns ignoring the delimiter. For example (pipes are fields):
    15|"Doe| John"|IS2,Alabama
    So the failure happens when the last half of a field is too large to fit into the next available field. In the case above _John" is 6 characters where the IS2 field is char(3). This would cause a truncation failure, which is the error I receive from the
    job history.
    To further test this I created a failure flow in my data flow to capture the records failing to be pulled from the source csv. Out of ~5000 records, ~1200 are failing, and every one of the failures has a comma delimiter inside the quoted text with a 'split' length greater than the next ordinal field. The ones without the comma were inserted as normal, and records where the split fit into the next ordinal column were also imported, with the last field being concatenated w/delimiter. Example records when selected from the table:
    25|"Allan Percone Trucking"|GI6|California --Imported as intended
    36|"Renolds| J."|UI6,Colorado --Field position offset by 1 to right - Last field overloaded
    To further ensure this is the problem, I changed the csv file and flat file connection manager to pipe delimited, and sure enough every record makes it in perfectly from the SQL job execution.
    I've tried comma delimited on the following setups. Each setup failed.
    SSIS Server 2008 R2 RTM, DB Server 2008 RTM, DB Compat 80
    SSIS Server 2008 R2 RTM, DB Server 2008 R2 RTM, DB Compat 80
    SSIS Server 2008 R2 RTM, DB Server 2008 RTM, DB Compat 100
    SSIS Server 2008 R2 RTM, DB Server 2008 R2 RTM, DB Compat 100
    Since a lot of our data comes in via comma delimited flat files, I really need a fix for this. If not I'll have to rebuild all my files when I import them to use pipe delimiters instead of commas. I'd like to avoid the extra processing overhead if possible.
    Also, is this a known bug? If so can someone point me to the posting of said bug?
    Edit: I can't ask the vendor to change the format to pipe delimited because it comes from a federal government site.

    Just wanted to share my experience of this for anyone else since I wasted a morning on it today.
    I encountered the same problem where I could run the package fine on my machine, but when I deployed to a server and ran the package via dtexec, the " delimiter was being replaced with _x0022_ and columns were all slurped up together, overflowing columns/truncating data etc.
    Since I didn't want to manually hack the DTS XML and can't install updates, the solution I used was to set an expression on the textdelimiter field of the flat file connection with the value "\"" (a double quote). That way, even if the one stored in XML
    gets bodged somewhere along the way, it is overridden with the correct value at run time. The package works fine everywhere now.

  • Why does text field in InfoPath 2010 show user name with two commas between first name and last?

    Here's the problem. I have a text field called Manager in an InfoPath 2010 form that is getting populated by a drop-down field called Business Unit.
    The Business Unit drop-down field is pulling information from column A in a custom list via a managed data connection. The custom list has two columns: Title & Manager. The Manager column in the custom list is a 'Person or Group' type column. The data connection pulls the Title, Manager (and ID) data.
    There is a rule on the Business Unit drop-down field to change the value of the Manager text field with the Manager data on the custom list. The rule pulls the Manager information and filters the value to match the Business Unit on the data connection with the Business Unit drop-down field value (Main).
    The Business Unit drop-down field works great and pulls the value from the custom list, and the rule populates the Manager text field. The problem is that the Manager text field shows the name as: [smith,, john]. Notice the two commas between the last and first name. There should only be one comma!
    Anyone have an idea why the text field is appearing with two commas?
    Arnel

    Hi all,
    I have a workaround for this. I have an InfoPath 2010 form pulling data from a SP2010 list. The user chooses a System (Business Unit) from the dropdown list and that choice auto populates the associated user (Manager) for that system. I had to
    use concatenation, substring before and substring after to display the correct data.
    concat(substring-before(DisplayName, ",, "), ", ", substring-after(DisplayName, ",, "))
    DisplayName is the original data for the field. I am able to cut & paste this field into the formula, so I added a few spaces to separate it from the actual formula for future use.
    Select Insert Function and select concat.
    Select the first "double-click to insert field" link, then select Insert Function, choose the Text category and pick substring-before.
    Select the third "double-click to insert field" link, then select Insert Function, choose the Text category and pick substring-after.
    Copy and paste your field name (in my case DisplayName) into the "double-click to insert field" parts of the substring-before and substring-after links.
    Delete the middle "double-click to insert field" link.
    Add quotes, spaces and commas so that they match the following format:
    concat(substring-before(DisplayName, ",, "), ", ", substring-after(DisplayName, ",, "))
    Translation:
    concat(substring-before(Doe,, John, ",, "), ", ", substring-after(Doe,, John, ",, "))
    Bring together all of the text before ",," (i.e. Doe) with ", " (comma, space) and all text after ",," (i.e. John). It should return Doe, John.
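    The same repair, sketched in Java purely to show what the formula computes (the variable name and value are made up for illustration):
    public class ManagerNameFix {
        public static void main(String[] args) {
            String displayName = "smith,, john";           // value as it arrives from the list
            int pos = displayName.indexOf(",, ");
            String fixed = displayName.substring(0, pos)   // substring-before
                         + ", "
                         + displayName.substring(pos + 3); // substring-after
            System.out.println(fixed);                     // prints: smith, john
        }
    }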
    I hope this makes sense. I have these instructions with screenshots if you need them, contact me.

  • To remove Double quotes while uploading a file

    Hi All,
    I have a requirement where I have to remove double quotes after uploading a file.
    I am uploading the file using GUI_UPLOAD.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                = v_file
        filetype                = 'ASC'
    *   note: has_field_separator is a flag ('X' = fields are tab-separated),
    *   not the separator character itself
        has_field_separator     = '"'
        header_length           = 0
        read_by_line            = 'X'
        dat_mode                = ' '
    * IMPORTING
    *   filelength              =
    *   header                  =
      TABLES
        data_tab                = it_tab
      EXCEPTIONS
        file_open_error         = 1
        file_read_error         = 2
        no_batch                = 3
        gui_refuse_filetransfer = 4
        invalid_type            = 5
        no_authority            = 6
        unknown_error           = 7
        bad_data_format         = 8
        header_not_allowed      = 9
        separator_not_allowed   = 10
        header_too_long         = 11
        unknown_dp_error        = 12
        access_denied           = 13
        dp_out_of_memory        = 14
        disk_full               = 15
        dp_timeout              = 16
        OTHERS                  = 17.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    LOOP AT it_tab INTO wa_tab.
      TRANSLATE wa_tab-line USING '" '.
      SHIFT wa_tab-line LEFT DELETING LEADING space.
      MODIFY it_tab FROM wa_tab.
      CLEAR wa_tab.
    ENDLOOP.
    The above logic replaces '"' with a blank space. I don't want the blank space that is generated. The double quote should be removed and the next character should shift left into its place.
    Note: I am on 4.6C and it doesn't have REPLACE ALL OCCURRENCES OF.
    How can I achieve this?
    Thanks in advance,
    Best regards,
    Prashant

    Hi Prashant,
    There is one simple method for it. When you find " in your table, shift the data before it into another table (table2). Continue searching, and when you find the next ", again shift the data to table2.
    Hope this helps.
    Edit: If you don't want to use another table, you can use the function TRIM. To know more about it see:
    http://help.sap.com/saphelp_erp2005/helpdata/en/a8/2afd3f2d14e869e10000000a155106/content.htm
    Message was edited by: Shounak  M
    Regards ,
    Shounak M.

  • What's the difference between quote ' and double quote "?

    What's the difference between a quote ' and a double quote "?
    When do we use each one?
    An example of each case?

    'c' is a char, i.e. a primitive type representing a single character.
    "c" is an instance of the String class of length 1.

  • Integration between Siebel and AIA in Comm PIP

    I need your help in understanding “how interfacing is happening between Siebel and AIA in Comm Pack” for an outbound flow from Siebel. I will take an example of Order Synchronization flow between Siebel and BRM through AIA.
    Step 1: When the order is confirmed / approved in Siebel, an event is captured and Sales Order ABM XML message is generated and put it into a JMS queue.
    I have a few doubts about Step 1 where I need your help:
    - What is the type of JMS Queue ? Is it OJMS (JMS Interface to Oracle Database Streams Advanced Queueing (AQ)) or OracleAS JMS (native Java implementation that provides file-based persistence).
    - How the data is put into the JMS Queue? Is it through the “ProcessSalesOrderSiebelJMSConsumer” BPEL Service or through some other mechanism.
    - If it is through “ProcessSalesOrderSiebelJMSConsumer” BPEL Service, does the Siebel call “ProcessSalesOrderSiebelJMSConsumer” as a Web Service or is it through JCA Resource Adapter.
    - Other thing is if it is through WebService Call (Invoking “ProcessSalesOrderSiebelJMSConsumer” BPEL Service), then what if the SOA Server is down during the call ? This will mean that we have lost the data as we haven’t put in the queue yet. And I don’t think Order will be created / confirmed / approved in Siebel again. Also Siebel can directly invoke the “ProcessSalesOrderSiebelReqABCSImpl” BPEL Service directly rather the invoking JMS Producer if data has to be passed through webservice call.
    Thanks.

    - What is the type of JMS Queue? Is it OJMS (JMS interface to Oracle Database Streams Advanced Queueing (AQ)) or OracleAS JMS (native Java implementation that provides file-based persistence)?
    Siebel will call a web service of AIA. AIA will store this in its JMS queue.
    - How is the data put into the JMS Queue? Is it through the “ProcessSalesOrderSiebelJMSConsumer” BPEL service or through some other mechanism?
    AIA will handle this; out-of-the-box functionality.
    - If it is through the “ProcessSalesOrderSiebelJMSConsumer” BPEL service, does Siebel call “ProcessSalesOrderSiebelJMSConsumer” as a web service or is it through the JCA resource adapter?
    It will call the producer web service.
    - If it is through a web service call (invoking the “ProcessSalesOrderSiebelJMSConsumer” BPEL service), what if the SOA server is down during the call?
    Siebel gets an error back; server not reachable.
    Regards,
    Marc
    http://orasoa.blogspot.com

  • How come there isn't a comma between the city and state when I go to make an envelope with an address book contact?

    How come there isn't a comma between the city and state when I go to make an envelope with an address book contact?

    On an envelope, there's not supposed to be one. The US Postal Service prefers no punctuation, which can interfere with machine sorters. USPS Address Format

  • Removing Double Quotes while uploading CSV file in APEX

    Whenever I upload a CSV file using the data load option in APEX, there are values which get uploaded with double quotes.
    e.g. a value like Oracle Polska Sp. z o.o gets uploaded as "Oracle Polska Sp. z o.o".
    How can I overcome this problem? I am using APEX 4.2.

    Did you specify the "Optionally Enclosed By" option with the double quote sign?

  • Can't import csv fields starting with double quotes but lack ending ones

    Hi all,
    When I'm trying to use an external table to import a csv file, specified as using a comma as the delimiter, optionally enclosed by double quotes, some records are rejected because a field in the record has starting double quotes but no ending ones.
    Assume the customer really wants these starting double quotes; how do I change my external table specification so that these starting double quotes are treated as part of the field data and can be successfully inserted into the db?
    Many thanks.

    I have no access to Oracle during weekends, so nothing can be tested. So here it goes:
    Suggestion: DELIMITED BY '","' and of course omit ENCLOSED BY '"'.
    You will have to update each of the rows just loaded, setting the first_field to substr(first_field,2) and the last_field to substr(last_field,-2).
    If not all the fields are enclosed in double quotes (true for strings and false for numbers and dates), the situation is somewhat more complicated (the syntax diagrams allow two delimiters only); in that case you can specify DELIMITED BY ',' and update all varchar2 fields in each of the rows just loaded, setting the varchar2_field to substr(varchar2_field,2,length(varchar2_field) - 2).
    Regards
    Etbin
    After posting I noticed it's difficult to distinguish between single and double quotes:
    the first DELIMITED BY should read {single quote}{double quote}{comma}{double quote}{single quote}
    the ENCLOSED BY should read {single quote}{double quote}{single quote}
    Message was edited by: Etbin
    user596003

  • Read CSV file each field is enveloped within double quotes

    Hi All,
    I have a txt file on the apps server which is separated by commas, but each value is in double quotes.
    Reason: some text fields might have a comma as part of the value, not as a separator.
    Example:
    "00601100001","","","","","BIJOUTERIE ,SARAH III","","1500"
    How will I read this file?
    thanks in advance.
    SG

    Hi,
    try this code... the example file contains 3 fields, separated by commas...
    * report name
    REPORT zupload_csv_file.
    * data declarations
    TYPES: BEGIN OF ttab,
             rec(1000) TYPE c,
           END OF ttab.
    TYPES: BEGIN OF tdat,
             fld1(10) TYPE c,
             fld2(10) TYPE c,
             fld3(10) TYPE c,
           END OF tdat.
    DATA: itab TYPE TABLE OF ttab WITH HEADER LINE.
    DATA: idat TYPE TABLE OF tdat WITH HEADER LINE.
    DATA: file_str TYPE string.
    * selection screen design
    PARAMETERS: p_file TYPE localfile.
    * at selection screen for field
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
        EXPORTING
          static    = 'X'
        CHANGING
          file_name = p_file.
    * start of selection
    START-OF-SELECTION.
      file_str = p_file.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename                = file_str
        TABLES
          data_tab                = itab
        EXCEPTIONS
          file_open_error         = 1
          file_read_error         = 2
          no_batch                = 3
          gui_refuse_filetransfer = 4
          invalid_type            = 5
          no_authority            = 6
          unknown_error           = 7
          bad_data_format         = 8
          header_not_allowed      = 9
          separator_not_allowed   = 10
          header_too_long         = 11
          unknown_dp_error        = 12
          access_denied           = 13
          dp_out_of_memory        = 14
          disk_full               = 15
          dp_timeout              = 16
          OTHERS                  = 17.
    * process and display output
    * note: a plain SPLIT AT ',' also splits inside quoted values such as
    * "BIJOUTERIE ,SARAH III"; for a file like your example, split at '","'
    * instead and strip the remaining outer quotes from the first and last fields.
      LOOP AT itab.
        CLEAR idat.
        SPLIT itab-rec AT ',' INTO idat-fld1
                                   idat-fld2
                                   idat-fld3.
        APPEND idat.
      ENDLOOP.
      LOOP AT idat.
        WRITE:/ idat-fld1, idat-fld2, idat-fld3.
      ENDLOOP.
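    For comparison, a minimal sketch of the same parsing in Java, assuming every field is enclosed in double quotes exactly as in the example line:
    public class QuotedCsvLine {
        public static void main(String[] args) {
            String rec = "\"00601100001\",\"\",\"\",\"\",\"\",\"BIJOUTERIE ,SARAH III\",\"\",\"1500\"";
            // strip the outer quotes, then split on the "," that separates the quoted fields
            String inner = rec.substring(1, rec.length() - 1);
            String[] fields = inner.split("\",\"", -1);
            for (String f : fields) {
                System.out.println("[" + f + "]");
            }
        }
    }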
    With Rgds,
    S.bharani

  • Double quotes issue with GUI_UPLOAD

    Hello Gurus,
    I'm uploading a tab delimited file from my PC using GUI_UPLOAD. I have a description field (Char 40) in my input file which contains characters like double quotes, commas or single quotes. Whenever there is a comma or a double quote in the description field in my input file, the field value is uploaded into the program with double quotes added in front of the value and at the end of the value.
    Say I'm uploading steel 1", then it is uploaded into the program as "steel 1"". Any ideas why this is and how to fix it?
    JJ

    Appreciate your answer, Anand. I want to know why I get the value with double quotes in front of it and at the end of it. I know that I can get rid of the extra double quotes with offsetting and a REPLACE statement... that's not what I was expecting. I want to know why the extra double quotes are coming into the picture.
    FYI, I'm on ECC 6.0 and the snippet of my code is:
      DATA: lv_file TYPE string.
      lv_file = p_file.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename            = lv_file
          filetype            = 'ASC'
          has_field_separator = 'X'
        TABLES
          data_tab            = itab_file.

  • Double quotes missing in CSV file but exist in text file from AL11

    Hi, I am sending a file to AL11 with one of the fields having double quotes, like "field value". When I download it as a text file I see the quotes, but not when I download it as a CSV file from AL11. Any SAP notes for this? If I add multiple quotes (' """" ') to keep them in the CSV, I see even more quotes in the text file, which is not accepted. Please reply if anyone has worked on this before. Thanks, Kamesh

    The CSV file also has the quotes (check in Notepad), but when the file is opened in MS Excel, Excel ignores the double quotes and treats the comma inside the quotes as part of the field, not as a field separator.

  • Converting data to char in double quotes.

    I need to create a comma separated txt data load file. Each column has to be within double quotes.
    "Material","Plant","SOrg" .....
    Currently I am doing something like this:
    loop at the table and, for each field, concatenate " field value ".
    But I find this solution cumbersome, as I have to change the data type of each field to character. Is there a better way to accomplish this?
    Thank you.

    You can create a simple method, subroutine or macro for this purpose.
    If you have a lot of fields, you can even use RTTS to get the field names and use field-symbols instead of writing all the fields.
    TYPES:
      BEGIN OF lty_data,
        fld1 TYPE char10,
        fld2 TYPE dmbtr,
      END   OF lty_data.
    DATA: li_data TYPE STANDARD TABLE OF lty_data.
    DATA: lwa_data LIKE LINE OF li_data.
    DATA: lv_char TYPE char255.
    DEFINE _con.
      lv_char = &1.
      condense lv_char.
      concatenate &2 '"' lv_char '"' into &2 .
    END-OF-DEFINITION.
    lwa_data-fld1 = 'c1'.
    lwa_data-fld2 = '12345.23'.
    APPEND lwa_data TO li_data.
    lwa_data-fld1 = 'c2'.
    lwa_data-fld2 = '72345.23'.
    APPEND lwa_data TO li_data.
    DATA: lv_output TYPE string.
    LOOP AT li_data INTO lwa_data.
      _con lwa_data-fld1 lv_output.
      CONCATENATE lv_output ',' INTO lv_output.
      _con lwa_data-fld2 lv_output.
      WRITE: / lv_output.
      CLEAR: lv_output.
    ENDLOOP.
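    Outside ABAP, the same idea (wrap every column value in double quotes and join with commas) looks like this in Java; the values are made-up examples for the "Material","Plant","SOrg" layout mentioned above:
    import java.util.ArrayList;
    import java.util.List;
    public class QuotedCsvBuilder {
        public static void main(String[] args) {
            String[] row = { "MAT-100", "1000", "US01" };   // hypothetical Material, Plant, SOrg values
            List<String> quoted = new ArrayList<>();
            for (String value : row) {
                quoted.add("\"" + value + "\"");
            }
            System.out.println(String.join(",", quoted));
            // prints: "MAT-100","1000","US01"
        }
    }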
    Regards,
    Naimesh Patel
