CSV File Handling Issue

Hi All,
We have an IDoc-to-File (CSV) scenario. The target field values contain the comma (,) character, and the field separator is also a comma (,). Fields containing commas disturb the file generation sequence.
E.g.,
Header
field 1, field 2, field 3, field 4
field 1 = test
field 2 = sample
field 3 = firstname,lastname
field 4 = address
Output CSV:
field 1, field 2, field 3, field 4
test,sample,firstname,lastname,address
Field 3's value has been split in two. How can we handle this case? Kindly help.
Best Regards,
Suresh S

Hi,
Including double quotes at the mapping level, together with the following FCC parameters, helped resolve the issue.
However, we then need to strip the double quotes from the field again before posting it to the end application, which can be handled through FTP module-level configuration.
Does anyone know of a standard adapter module that handles this requirement?
Best Regards,
Suresh S
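For reference, the fix above is RFC 4180-style quoting: wrap any field that contains the separator (or a quote or line break) in double quotes, and double any embedded quotes. A minimal sketch of that rule (hypothetical helper class, not an SAP API):

```java
// RFC 4180-style field quoting: quote a field when it contains the separator,
// a double quote, or a line break, and double any embedded quotes.
public class CsvQuote {
    static String quote(String field, char sep) {
        boolean needsQuoting = field.indexOf(sep) >= 0
                || field.indexOf('"') >= 0
                || field.indexOf('\n') >= 0
                || field.indexOf('\r') >= 0;
        if (!needsQuoting) {
            return field;
        }
        // Double embedded quotes, then wrap the whole field in quotes
        return '"' + field.replace("\"", "\"\"") + '"';
    }
}
```

With this rule, `firstname,lastname` becomes `"firstname,lastname"` while `address` passes through unchanged, so the consumer no longer splits field 3.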

Similar Messages

  • Excel 2007 csv file formatting issue

    Our users create .csv files for upload to SAP. Their habit is to include a number of blank lines in Excel to make the sheet more readable.
    In Excel 2003, blank lines were handled as, literally, blank lines, and opening the file in a text editor shows exactly that: a blank line (with a CR-LF character pair terminating the row).
    In Excel 2007, however, a blank line consists of a number of commas equal to the number of columns, followed by the CR-LF termination. Hope that makes sense.
    While the 2003-generated .CSVs are fine, the 2007 versions cause SAP to throw an exception ("Session never created from RFBIBL00") and the upload fails. The question therefore is: has anyone ever come across anything similar, or is anyone aware of any remediation that might be possible? I haven't been able to find any documentation on this Excel 2003-to-2007 change, so I am not able to address the issue through Excel configuration.
    Thanks!
    Duncan

    Hello
    Please refer to consulting note 76016, which provides information on the performance of the standard program RFBIBL00.
    Regards.

  • Ssrs 2008 export to csv file display issue

    In a new SSRS 2008 report, I would like to know if there is a way to automatically expand the width of some of the columns when the data is exported to a CSV file, so that the data is displayed correctly. Here are examples of what I am referring to:
    1. In one column where there is supposed to be a date that looks like 12/11/2014, the value ########## is displayed. The value 12/11/2014 is what is set up in the SSRS formatting options.
    2. In a number field that is supposed to look like 6039267049, the displayed value is 6E+09.
    Basically, if I manually expand the width of the columns referred to above, the data is displayed correctly. Can you tell me what I can do so that the data is displayed correctly in the CSV file and the user does not need to manually expand the columns to see the data?

    Hi wendy,
    After testing the issue in my local environment, I can reproduce it when using Excel to open the CSV file. As I understand it, no width information is stored when we export a report to CSV; Excel simply uses its default cell sizes when opening the file. So when
    a date value is wider than the default cell size, it is displayed as ##########, and when a number value is wider than the default cell size, Excel uses scientific format.
    For the date value, we can use the expression =cstr(Fields!Date.Value) to replace the former =Fields!Date.Value. This way, the value is rendered as text narrower than the default cell size, so the date is displayed correctly. The
    number value is already narrowed by the scientific format; alternatively, we can select all the cells in the CSV file and click Format | AutoFit Column Width to fit every cell's width to its value at the Excel level.
    Besides that, we can try exporting to Excel instead of CSV. The Excel format inherits the column widths from the report, so we can directly set the widths to fit the values at the Reporting Services level.
    Hope this helps.
    Thanks,
    Katherine Xiong
    Katherine Xiong
    TechNet Community Support

  • SSIS CSV FILE READING ISSUE

    Hi, can someone reply to the post below?
    I am using a Flat File connection manager to read the CSV file, with the row delimiter set to {CR}{LF}.
    Suddenly, while looping through the files, the package failed because it could not read the CSV file.
    When I changed the row delimiter to {LF}, it worked for the file that had failed with the {CR}{LF} delimiter.
    Now I want to know why the package fails over the row delimiter.
    Can anyone help me with this?
    Please explain what the difference is between those delimiters.

    > Please share me what actually the difference between those
    CR = Carriage Return = CHAR(13) in SQL.
    This character is used in classic Mac OS as the new line.
    When this character is used, the cursor returns to the first position of the line.
    LF = Line Feed = CHAR(10) in SQL.
    This character is used in Unix as the new line.
    When this character is used, the cursor moves down to the next line (in the old typewriter days, the paper moved up).
    CR LF
    The new line in Windows systems: a combination of CR and LF.
    The best thing is to open the test flat file in Notepad++ and enable Show Symbols > Show All Characters to see exactly what you have as the row delimiter.
    Cheers,
    Vaibhav Chaudhari
    [MCTS],
    [MCP]
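    The Notepad++ check above can also be done programmatically; a small sketch that sniffs which row delimiter a file's content actually uses (hypothetical helper, not an SSIS API):

```java
// Detect the row delimiter from the first line break in the content:
// CR = '\r' (13), LF = '\n' (10); Windows files use the pair CR LF.
public class RowDelimiter {
    static String detect(String content) {
        int cr = content.indexOf('\r');
        int lf = content.indexOf('\n');
        if (cr >= 0 && lf == cr + 1) return "CRLF";   // Windows
        if (lf >= 0 && (cr < 0 || lf < cr)) return "LF"; // Unix
        if (cr >= 0) return "CR";                     // classic Mac OS
        return "NONE";                                // single-line content
    }
}
```

    A file reported as LF here would explain why a connection manager configured for {CR}{LF} fails on it.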

  • CSV file generation issue

    Hello All,
    We are facing the below issue during CSV file generation:
    The generated file shows the field value as 8.73E+11 in the output, and when we click inside this column, the result shown is an approximation of the correct value, such as 873684000000. We wish to view the correct value, 872684000013.
    The values passed from the report program during file generation are correct.
    Please advise how to resolve this issue.
    Thanks in Advance.

    There is nothing wrong with your program; it is a property of Excel that if the value in a cell is larger than the
    default cell size, it shows the output in that format.
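    If the consumer is always Excel, one common workaround (an Excel-side trick, not an SAP feature) is to emit long numbers as a text-cell formula so Excel does not reformat them:

```java
// Emit a value using the ="..." formula trick so Excel keeps it as literal
// text instead of converting it to scientific notation.
public class ExcelTextCell {
    static String asTextCell(String value) {
        return "=\"" + value + "\"";
    }
}
```

    Writing `872684000013` this way makes the cell display the full number, at the cost of the CSV no longer being plain data for non-Excel consumers.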

  • Spool output to .csv file - having issues with data display

    Hi,
    Need to deliver the output of a select query, which has around 80,000 records, to a .csv file. A procedure is written for the select query, and the procedure is called in the spool script. But a few of the columns have a comma (,) in their values. For example, there is a personal_name column in the select query with a name such as "James, Ed". The output is then displayed in different columns, and the data is shifted to the right for the remaining columns.
    Could someone help fix this issue? I mainly used a procedure because the select query is about three pages long, and I want the script to look clear.
    Script is,
    set AUTOPRINT ON ;
    set heading ON;
    set TRIMSPOOL ON ;
    set colsep ',' ;
    set linesize 1000 ;
    set PAGESIZE 80000 ;
    variable main_cursor refcursor;
    set escape /
    spool C:\documents\querys\personal_info.csv
    EXEC proc_personal_info(:main_cursor);
    spool off;

    Hi,
    set PAGESIZE 80000; is not valid, and by default the header will be printed every 14 rows.
    You can avoid printing the header this way:
    set AUTOPRINT ON ;
    set heading ON;
    set TRIMSPOOL ON ;
    set colsep ',' ;
    set linesize 1000 ;
    set PAGESIZE 0 ;
    set escape /
    set feedback off
    spool c:\temp\empspool.csv
      SELECT '"'||ename||'"', '"'||job||'"'
      FROM emp;
    spool off
    The output will look like this in this case:
    "SMITH"     ,"CLERK"
    "ALLEN"     ,"SALESMAN"
    "WARD"      ,"SALESMAN"
    "JONES"     ,"MANAGER"
    "MARTIN"    ,"SALESMAN"
    "BLAKE"     ,"MANAGER"
    "CLARK"     ,"MANAGER"
    "SCOTT"     ,"ANALYST"
    "KING"      ,"PRESIDENT"
    "TURNER"    ,"SALESMAN"
    "ADAMS"     ,"CLERK"
    "JAMES"     ,"CLERK"
    "FORD"      ,"ANALYST"
    "MILLER"    ,"CLERK"You can also consider creating a unique column by concatenating the columns in this way:
    spool c:\temp\empspool.csv
      SELECT '"'||ename||'","'||job||'"'
      FROM emp;
    spool off
    In this case the output will have no spaces between the columns:
    "SMITH","CLERK"
    "ALLEN","SALESMAN"
    "WARD","SALESMAN"
    "JONES","MANAGER"
    "MARTIN","SALESMAN"
    "BLAKE","MANAGER"
    "CLARK","MANAGER"
    "SCOTT","ANALYST"
    "KING","PRESIDENT"
    "TURNER","SALESMAN"
    "ADAMS","CLERK"
    "JAMES","CLERK"
    "FORD","ANALYST"
    "MILLER","CLERK"Regards.
    Al
    Edited by: Alberto Faenza on May 2, 2013 5:48 PM

  • Duplicate File Handling Issues - Sender File Adapter - SAP PO 7.31 - Single Stack

    Hi All,
    We have a requirement to avoid processing duplicate files. Our system is PI 7.31 Enh. Pack 1 SP 23. I tried using the 'Duplicate File Handling' feature in the sender file adapter, but things are not working as expected. I processed the same file again and again, and PO creates successful messages every time rather than generating alerts/warnings or deactivating the channel.
    I went through the link Michal's PI tips: Duplicate handling in file adapter - 7.31. I have maintained similar settings but am unable to achieve the functionality. Is there anything I am missing, or is any setting required apart from the Duplicate File Handling checkbox and a threshold count?
    Any help will be highly appreciated.
    Thanks,
    Abhishek

    Hello Sarvjeet,
    I had to write a UDF in message mapping to identify duplicate files and throw an exception. In my case, I had to compare the file load directory (source directory) with the archive directory to identify whether the new file is a duplicate or not. I'm not sure if this is the same case for you, but see if the below helps. (I used parameterized mapping to input the file locations in the Integration Directory rather than hard-coding them in the mapping.)
    AbstractTrace trace = container.getTrace();
    double archiveFileSize = 0;
    double newFileSizeDouble = Double.parseDouble(newFileSize);
    String archiveFile = "";
    String archiveFileTrimmed = "";
    int var2 = 0;
    File directory = new File(directoryName);
    File[] fList = directory.listFiles();
    Arrays.sort(fList, Collections.reverseOrder());
    // Traverse all the files in the archive directory
    for (File file : fList) {
        // Only compare against regular files
        if (file.isFile()) {
            trace.addInfo("Filename: " + file.getName() + " :: Archive File Time: " + Long.toString(file.lastModified()));
            archiveFile = file.getName();
            archiveFileTrimmed = archiveFile.substring(20); // strip the 20-character timestamp prefix
            archiveFileSize = file.length();
            if (archiveFileTrimmed.equals(newFile) && archiveFileSize == newFileSizeDouble) {
                var2 = var2 + 1;
                trace.addInfo("Duplicate File Found. " + newFile);
                if (var2 == 2) {
                    break;
                }
            }
        }
    }
    if (var2 == 2) {
        var2 = 0;
        throw new StreamTransformationException("Duplicate File Found. Processing for the current file is stopped. File: " + newFile + ", File Size: " + newFileSize);
    }
    return Integer.toString(var2);
    Regards,
    Abhishek

  • BO 4.0 save as csv file format issue

    Hi All,
    We are using BO 4.0 WebI for reporting on an SAP BW 7.3 system. Some of our reports have to be scheduled to produce their output in CSV file format. When I schedule a report in CSV format, the final output has data in two sets. The first set has the list of columns which I selected in my report. Starting on the next row, there is a second set of data with all the objects selected in the query panel, including the detail objects.
    We only need the data for the columns selected in the report, but it is bringing a dump of all the objects in the data provider.
    Can anyone tell me how to get rid of the second set of data in the same CSV file?
    Thanks,
    Prasad

    Hi,
    The CSV format is reserved for 'data only' data provider dumps; it exports the entire WebI microcube (the query results).
    You don't get that option when using 'Save As', which preserves the report formatting; in that case you should consider .xls instead.
    regards,
    H

  • Csv file loading issue

    I am trying to load data from csv file into oracle table.
    The interface executes successfully, but the problem is this:
    there are 501 rows in the original CSV file,
    yet when I load it in the file model it shows only 260 rows.
    What is the problem? Why are all the rows not loaded?

    Just forget about the interface.
    I am creating a new datastore of file type.
    In the resource name, I am giving my file's path.
    When I reverse-engineer it and check the data, it shows only 260 rows,
    but there are 501 records in my CSV file.

  • File handling issues

    Hi All,
    I have the following doubts about processing the file.
    Ex
    00BSD003062                                                  
    01UN100010000000010306190005400000151778          
    02UN100010200001193633970001          A19000003285N
    02UN100010200002183557200001          A19000002236N
    1) In the second record, I want to consider only 5 characters for one field and need not consider the rest. Is it possible in XI to ignore unwanted characters, and how?
    2) Considering 1000102 as the account number, I need to reconcile it: the amounts in the two rows above (3 and 4) should be added, and only the result should be posted. Is this possible? Please guide me on how to do this.
    3) Is audit logging possible, i.e. counting how many records were posted successfully at the end of file processing?
    4) Is sorting possible in XI, and how?
    Thanks in advance.
    Regards,
    venu

    > 1) In the second record, I want to consider only 5 characters for one field and need not consider the rest. Is it possible in XI to ignore unwanted characters, and how?
    Explore the fieldFixedLengths option in your content conversion configuration. This link has details on it: http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
    > 2) Considering 1000102 as the account number, I need to reconcile it: the amounts in the two rows above (3 and 4) should be added, and only the result should be posted. Is this possible? Please guide me on how to do this.
    Before posting to your target application, you can always manipulate your payload using mapping programs.
    If you want to do it while reading the file itself, you may have to write a module processor.
    > 3) Is audit logging possible, i.e. counting how many records were posted successfully at the end of file processing?
    One option is to write a module processor, deploy it as an add-on to your file adapter,
    parse the payload, and get the record count.
    > 4) Is sorting possible in XI, and how?
    I doubt this feature is available; the file adapter parses the file from line 1 to EOF.
    Explore sort options using standard functions/UDFs in your message mapping.
    Regards
    Saravana
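    For questions 1 and 2, the mapping-level logic can be sketched like this; the helper names and the account/amount pairs are hypothetical illustrations, so substitute the real offsets from your interface specification:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch for questions 1 and 2 above (hypothetical helpers, not an XI API).
public class FileHandling {
    // Question 1: keep only the first five characters of a field;
    // in FCC the same effect comes from fieldFixedLengths.
    static String firstFive(String field) {
        return field.length() <= 5 ? field : field.substring(0, 5);
    }

    // Question 2: sum the amounts per account key before posting.
    // Each row is {accountKey, amount}, extracted upstream from the records.
    static Map<String, Long> sumByAccount(String[][] rows) {
        Map<String, Long> totals = new LinkedHashMap<>();
        for (String[] row : rows) {
            totals.merge(row[0], Long.parseLong(row[1]), Long::sum);
        }
        return totals;
    }
}
```

    Summing the two detail rows for account 1000102 this way yields a single total to post instead of two separate amounts.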

  • File handling issue

    Dear All,
    I have a test program, say ZTEST.
    The program sends data to the application server. The file path is '/usr/sap/tmp/ZFILE.txt'.
    I have 2 files,
    first file contains data like
    1
    2
    3
    Second file contains data like
    A
    B
    C
    If I execute the program ZTEST for the first file, the first file's data 1 2 3 is written, and the file '/usr/sap/tmp/ZFILE.txt' contains
    1
    2
    3
    If I execute the program ZTEST a second time, for the second file, the second file's data A B C is written, and the file '/usr/sap/tmp/ZFILE.txt' contains
    A
    B
    C
    If I execute the program ZTEST for the first file and the second file simultaneously, either the first file's data 1 2 3 or the second file's data A B C should be written, but mismatched data is sent to '/usr/sap/tmp/ZFILE.txt'. The file now contains mixed-up data:
    1
    B
    3
    or
    A
    2
    C
    But I want the data to be written in the format below:
    1
    2
    3
    or
    A
    B
    C
    Please let me know if you have any suggestions.
    Thanks in advance
    Kind regards,
    M.Rayapureddy

    Try changing the properties of the dataset at run time using SET DATASET:
    TYPE-POOLS:
      dset.
    DATA:
      dsn  TYPE STRING,
      fld  TYPE STRING,
      attr TYPE dset_attributes.
    OPEN DATASET dsn IN LEGACY TEXT MODE FOR INPUT.
    attr-changeable-indicator-code_page = 'X'.
    attr-changeable-code_page           = '0100'.
    attr-changeable-indicator-repl_char = 'X'.
    attr-changeable-repl_char           = '*'.
    SET DATASET dsn ATTRIBUTES attr-changeable.
    READ DATASET dsn INTO fld.
    WRITE / fld.
    CLOSE DATASET dsn.

  • How to handle a comma in a field in CSV file during FCC ?

    Hi,
    I have a requirement where we have to convert a CSV file into XML using File Content Conversion. The issue is that one of the fields in the file contains a comma, so the XML parser takes it as a field separator and throws an error.
    The contents of the file are as follows:
    "02975859","New Key","9","Failed, rejected by RTI server"
    How do we handle the comma inside the field "Failed, rejected by RTI server"?
    Any help would be appreciated.
    Regards
    Pravesh

    Hi,
    You have to write a Java mapping program to perform this task; in the standard way I think it is not possible, because the file adapter has just one option for the delimiter character.
    Here's some code that could help you.
    Suppose a file like this:
    1,rahul,siemens,mumbai
    2,consultant,12032005
    1,viswanath,sisl,hyderabad
    2,systemeng,23052005
    package TXTMapping;

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.util.Map;
    import com.sap.aii.mapping.api.StreamTransformation;

    public class TMapping implements StreamTransformation {

        private Map map;

        public void setParameter(Map param) {
            map = param;
        }

        public void execute(InputStream in, OutputStream out) {
            try {
                out.write("<?xml version='1.0' encoding='UTF-8'?>".getBytes());
                out.write("<ns0:Output_Data xmlns:ns0=\"urn:javamapping_test\">".getBytes());
                String line = null;
                BufferedReader bin = new BufferedReader(new InputStreamReader(in));
                while ((line = bin.readLine()) != null) {
                    String Company = null;
                    String Name = null;
                    String Place = null;
                    String Desgn = null;
                    String Since = null;
                    char[] str = line.toCharArray();
                    String[] Data = new String[10];
                    int S1 = 0;
                    int s2 = 2; // skip the record-type character and its comma
                    for (int i = 2; i < line.length(); i++) {
                        // Record type '1': header line (name, company, place)
                        if (str[i] == ',' && str[0] == '1') {
                            Data[S1] = line.substring(s2, i);
                            S1 = S1 + 1;
                            s2 = i + 1;
                        }
                        if (i == line.length() - 1 && str[0] == '1') {
                            Data[S1] = line.substring(s2, i + 1);
                            Name = Data[0];
                            Company = Data[1];
                            Place = Data[2];
                            out.write("<Data>".getBytes());
                            out.write("<Header>".getBytes());
                            out.write(("<Name>" + Name + "</Name>").getBytes());
                            out.write(("<Company>" + Company + "</Company>").getBytes());
                            out.write(("<Place>" + Place + "</Place>").getBytes());
                            out.write("</Header>".getBytes());
                        }
                        // Record type '2': item line (designation, since)
                        if (str[i] == ',' && str[0] == '2') {
                            Data[S1] = line.substring(s2, i);
                            S1 = S1 + 1;
                            s2 = i + 1;
                        }
                        if (i == line.length() - 1 && str[0] == '2') {
                            Data[S1] = line.substring(s2, i + 1);
                            Desgn = Data[0];
                            Since = Data[1];
                            out.write("<Item>".getBytes());
                            out.write(("<Designation>" + Desgn + "</Designation>").getBytes());
                            out.write(("<Since>" + Since + "</Since>").getBytes());
                            out.write("</Item>".getBytes());
                            out.write("</Data>".getBytes());
                        }
                    }
                }
                out.write("</ns0:Output_Data>".getBytes());
            } catch (Throwable t) {
                t.printStackTrace();
            }
        }
    }

  • OBIEE 11g - issue when export to a csv file

    Hello,
    I have an issue when I try to export a report with a numeric field to a CSV file: when I export and open the file in Notepad, the trailing zeros to the right of the decimal point do not show up. When exporting to other formats (such as Excel or PDF), the trailing zeros appear correctly; e.g., 100.00 appears as 100 and 100.50 appears as 100.5.
    Do you know what can be done so that the trailing zeros to the right of the decimal point appear in the CSV?
    Thanks

    This is a bug; see below.
    Bug 13083479 : EXPORT DATA TO CSV FORMAT DOES NOT DOWNLOAD EXPANDED HIERARCHICAL COLUMNS
    Proposed workaround: OBIEE 11.1.1.5: Pivot Table Report Export To Csv Does Not Include Hierarchical Data [ID 1382184.1]
    Mark as answered if this helps!
    Thanks,
    SVS
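    If post-processing the export is an option, trailing zeros can also be preserved by writing the value as pre-formatted text rather than a raw numeric. A minimal sketch (hypothetical helper, not an OBIEE API):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Render a numeric value with a fixed number of decimal places so that
// trailing zeros survive in a text file such as a CSV.
public class FixedDecimals {
    static String format(BigDecimal value, int scale) {
        return value.setScale(scale, RoundingMode.HALF_UP).toPlainString();
    }
}
```

    With scale 2, 100 renders as 100.00 and 100.5 as 100.50, matching what the PDF and Excel exports show.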

  • Issue when creating a CSV file

    A stupid issue, but I've tried to fix it all day and it's not working...
    I need to create a CSV file made up of 3 columns: 2 number columns and 1 char column. The char column is a company name that has spaces in it. When I run the query in SQL Developer, the char column looks fine. When I run it in BI Publisher as a CSV, I get double quotes around the char column. It looks fine if I view it as DATA or any other format.
    I can't use the REPLACE function because there's nothing wrong with the data.
    We're using BI Publisher v10.1.3.4.
    Any thoughts?
    Thanks!!
    Kris

    First I tried using a basic RTF file and just selected CSV as the type. Then I tried using the e-text template and selecting CSV, with the same result. Now I'm trying to use an .xsl file instead of an .rtf to see if that works.
    Kris

  • File upload - issue with European .csv file format

    All,
    When uploading the .csv file for "Due List for Planned Receipts" in the File Transfer Upload Center, I receive an error. It appears to be due to the European .csv file format, which is delimited by semicolons rather than commas. The only way I could solve this issue was to change the Regional and Language options to English. However, I don't think this is a great solution, as I can't ask all our suppliers to change their settings. Has anyone come across this issue and found another way of solving it?
    Thank you!
    Have a good day,
    Johanna

    Igor, thank you for your suggestion.
    I found this SAP note:
    If you download a file and the formatting of the CSV file is faulty, it is possible that your column separator does not match the standard settings of the SAP system. In the standard SAP system, the separator is ','.
    To ensure that the formatting is correct, set your global default for column separation in your system so that it matches that of the SAP system you are using.
    To do that, Microsoft suggests changing the "List separator" in the Regional and Language Options Customize view. But as you suggest, that does not seem to do the trick. Am I missing something?
    However, if I change the whole setting from, say, German to English (UK), the .csv files are comma-delimited and can be easily uploaded. I was hoping there would be another way of solving this without the need for custom development.
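    Absent a standard option, one custom-development sketch is to sniff the separator from the header line before parsing, so both comma- and semicolon-delimited uploads work (hypothetical helper; assumes the header itself contains no quoted separators):

```java
// Pick the more frequent of ',' and ';' in the header line as the separator,
// defaulting to ',' on a tie.
public class SeparatorSniffer {
    static char sniff(String headerLine) {
        long commas = headerLine.chars().filter(c -> c == ',').count();
        long semis = headerLine.chars().filter(c -> c == ';').count();
        return semis > commas ? ';' : ',';
    }
}
```

    An upload handler could call this on the first line and pass the result to its CSV parser, so suppliers never need to change their regional settings.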
