TSV or CSV files with fraction values

Hello
In the Pages forum, a user asked about the behavior of fraction values.
As every user knows, the input parser tries to interpret such entries as dates and, when it can't, it tries to reduce the fractions.
When we enter values by hand, the workaround is to set the cells' format to Text before entering the values.
The problem remains when the values are imported from a TSV or a CSV file.
In that case, we can't define the cell formats before the import.
This is why I wrote two scripts, one for TSV and one for CSV.
They read the contents of such a file and, when a value contains a slash, they add a single quote in front of it.
So, during the import process, Numbers will treat these values as text and will not try to convert them.
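For readers who prefer to run the fix outside of AppleScript, the same idea can be sketched in a few lines of Python; the file names and the delimiter below are illustrative assumptions, and the AppleScript versions that follow are the scripts described above.

# Minimal sketch: prefix every field containing a slash with a single quote
# so that Numbers imports it as text instead of a date or a reduced fraction.
import csv

def quote_slash_fields(src_path, dst_path, delimiter=","):
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.reader(src, delimiter=delimiter)
        writer = csv.writer(dst, delimiter=delimiter)
        for row in reader:
            writer.writerow(["'" + field if "/" in field else field for field in row])

# Assumed file names; pass delimiter="\t" for a TSV file.
quote_slash_fields("fractions.csv", "fractions_quoted.csv")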
--[SCRIPT insertsingle_quote_in_front_of_fractions_inCSV]
Save the script as a Script: insertsingle_quote_in_front_of_fractions_inCSV.scpt
Move the newly created file into the folder:
<startup Volume>:Users:<yourAccount>:Library:Scripts:
Go to the Scripts Menu, choose "insertsingle_quote_in_front_of_fractions_inCSV"
Browse to the CSV file to process.
The script will put a single quote in front of every value containing a slash.
So Numbers will no longer convert these values into dates.
We may also save the script as an application.
Dragging and dropping a CSV file onto the application's icon will run the script.
--=====
The Finder's Help explains:
To make the Script menu appear:
Open the AppleScript utility located in Applications/AppleScript.
Select the "Show Script Menu in menu bar" checkbox.
Under 10.6.x,
go to the General panel of AppleScript Editor’s Preferences dialog box
and check the “Show Script menu in menu bar” option.
--=====
Yvan KOENIG (VALLAURIS, France)
2010/11/08
--=====
on run
my commun(choose file of type {"public.csv", "public.comma-separated-values-text"} without invisibles)
end run
--=====
on open (sel)
local csvFile, uti
set csvFile to first item of sel
tell application "System Events"
set uti to type identifier of disk item ("" & csvFile)
end tell
if uti is not in {"public.csv", "public.comma-separated-values-text"} then
if my parle_Anglais() then
error "The file “" & csvFile & "” isn’t a CSV file !"
else
error "Le fichier « " & csvFile & " » n’est pas un fichier CSV !"
end if
end if
my commun(csvFile)
end open
--=====
on commun(le_fichier)
local delim, deci, des_paragraphes, un_paragraphe, une_liste, i
set {delim, deci} to my get2LocalizedDelimiters()
set des_paragraphes to paragraphs of (read le_fichier)
repeat with un_paragraphe in des_paragraphes
set une_liste to my decoupe(contents of un_paragraphe, delim)
repeat with i from 1 to count of une_liste
if item i of une_liste contains "/" then set item i of une_liste to "'" & item i of une_liste
end repeat -- i
set contents of un_paragraphe to my recolle(une_liste, delim)
end repeat -- with un_paragraphe
set eof of le_fichier to 0
write my recolle(des_paragraphes, return) to le_fichier
end commun
--=====
on decoupe(t, d)
local oTIDs, l
set oTIDs to AppleScript's text item delimiters
set AppleScript's text item delimiters to d
set l to text items of t
set AppleScript's text item delimiters to oTIDs
return l
end decoupe
--=====
on recolle(l, d)
local oTIDs, t
set oTIDs to AppleScript's text item delimiters
set AppleScript's text item delimiters to d
set t to l as text
set AppleScript's text item delimiters to oTIDs
return t
end recolle
--=====
-- Set the parameter delimiters which must be used in Numbers formulas:
-- set {delim, deci} to my get2LocalizedDelimiters()
on get2LocalizedDelimiters()
if character 2 of (0.5 as text) is "." then
return {",", "."}
else
return {";", ","}
end if
end get2LocalizedDelimiters
--=====
on parle_Anglais()
return (do shell script "defaults read 'Apple Global Domain' AppleLocale") does not start with "fr_"
end parle_Anglais
--=====
--[/SCRIPT]
--[SCRIPT insertsingle_quote_in_front_of_fractions_inTSV]
Save the script as a Script: insertsingle_quote_in_front_of_fractions_inTSV.scpt
Move the newly created file into the folder:
<startup Volume>:Users:<yourAccount>:Library:Scripts:
Go to the Scripts Menu, choose "insertsingle_quote_in_front_of_fractions_inTSV"
Browse to the TSV file to process.
The script will put a single quote in front of every value containing a slash.
So Numbers will no longer convert these values into dates.
We may also save the script as an application.
Dragging and dropping a TSV file onto the application's icon will run the script.
--=====
The Finder's Help explains:
To make the Script menu appear:
Open the AppleScript utility located in Applications/AppleScript.
Select the "Show Script Menu in menu bar" checkbox.
Under 10.6.x,
go to the General panel of AppleScript Editor’s Preferences dialog box
and check the “Show Script menu in menu bar” option.
--=====
Yvan KOENIG (VALLAURIS, France)
2010/11/08
--=====
on run
my commun(choose file of type {"public.plain-text"} without invisibles)
end run
--=====
on open (sel)
local tsvFile, uti
set tsvFile to first item of sel
tell application "System Events"
set uti to type identifier of disk item ("" & tsvFile)
end tell
if uti is not in {"public.plain-text"} then my nota_tsvFile(tsvFile)
my commun(tsvFile)
end open
--=====
on commun(le_fichier)
local le_texte, des_paragraphes, un_paragraphe, une_liste, i
set le_texte to (read le_fichier)
if le_texte contains tab then
set des_paragraphes to paragraphs of le_texte
repeat with un_paragraphe in des_paragraphes
set une_liste to my decoupe(contents of un_paragraphe, tab)
repeat with i from 1 to count of une_liste
if item i of une_liste contains "/" then set item i of une_liste to "'" & item i of une_liste
end repeat -- i
set contents of un_paragraphe to my recolle(une_liste, tab)
end repeat -- with un_paragraphe
set eof of le_fichier to 0
write my recolle(des_paragraphes, return) to le_fichier
else
my nota_tsvFile(le_fichier)
end if
end commun
--=====
on nota_tsvFile(un_fichier)
if my parle_Anglais() then
error "The file “" & un_fichier & "” isn’t a TSV file !"
else
error "Le fichier « " & un_fichier & " » n’est pas un fichier TSV !"
end if
end nota_tsvFile
--=====
on decoupe(t, d)
local oTIDs, l
set oTIDs to AppleScript's text item delimiters
set AppleScript's text item delimiters to d
set l to text items of t
set AppleScript's text item delimiters to oTIDs
return l
end decoupe
--=====
on recolle(l, d)
local oTIDs, t
set oTIDs to AppleScript's text item delimiters
set AppleScript's text item delimiters to d
set t to l as text
set AppleScript's text item delimiters to oTIDs
return t
end recolle
--=====
-- Set the parameter delimiters which must be used in Numbers formulas:
-- set {delim, deci} to my get2LocalizedDelimiters()
on get2LocalizedDelimiters()
if character 2 of (0.5 as text) is "." then
return {",", "."}
else
return {";", ","}
end if
end get2LocalizedDelimiters
--=====
on parle_Anglais()
return (do shell script "defaults read 'Apple Global Domain' AppleLocale") does not start with "fr_"
end parle_Anglais
--=====
--[/SCRIPT]
Yvan KOENIG (VALLAURIS, France) Monday, November 8, 2010 11:30:20

Hello Zahid,
It is currently possible to create a relational multi-source universe with one data provider connecting to an Excel document.
For this to work with Web Intelligence, the 64-bit drivers for Excel must be installed on the Webi server.
Here is a link to the kbase:
https://service.sap.com/sap/support/notes/1828466
Also here is a link to the Information Design Tool Guide for how to create a multi-source universe:
http://service.sap.com/sap/support/notes/1898185
One of the new features in BI 4.0 is creating a list of values on a Table or custom SQL.
Also, here is the link to the available tutorials:
Official Product Tutorials – SAP BusinessObjects Business Intelligence Platform 4.x
Scroll down to the information design tool.
Hope this helps!
Jacqueline

Similar Messages

  • Is there a way to open CSV files with more than 255 columns?

    I have a CSV file with more than 255 columns of data.  It's a fairly standard export of social media data that shows volume of posts by day for the past year, from which I can analyze the data and publish customized charts. Very easy in Excel but I'm hitting the Numbers limit of 255 columns per table. Is there a way to work around the limitation? Perhaps splitting the CSV in two? The data shows up in the CSV file when I open via TextEdit, so it's there. Just can't access it in Numbers. And it's not very usable/useful for me in TextEdit.
    Regards,
    Tim

    You might be better off with Excel. Even if you could find a way to easily split the CSV file into two tables, it would be two tables when you want only one.  You said you want to make charts from this data.  While a series on a chart can be constructed from data in two different tables, to do so takes a few extra steps for each series on the chart.
    For a test to see if you want to proceed, make two small tables with data spanning the tables and make a chart from that data.  Make the chart the normal way using the data in the first table, then repeat the following steps for each series:
    Select the series in the chart
    Go to Format sidebar
    Click in the "Value" box
    Add a comma, then select the data for this series from the second table
    Press Return
    If there is an easier way to do this, maybe someone else will chime in with that info.
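    If you do decide to try the split-the-file route discussed above, a small script can divide the columns before importing into Numbers. Here is a minimal Python sketch; the 255-column split point and the file names are assumptions, and the first column (for example the date) is repeated in both halves so the two tables stay correlated.
    # Minimal sketch: split a wide CSV into two narrower files so that each
    # stays under Numbers' 255-column table limit. File names are assumptions.
    import csv

    SPLIT_AT = 255  # columns 1..255 go to the first file, the rest to the second

    with open("wide.csv", newline="") as src, \
         open("wide_part1.csv", "w", newline="") as out1, \
         open("wide_part2.csv", "w", newline="") as out2:
        writer1 = csv.writer(out1)
        writer2 = csv.writer(out2)
        for row in csv.reader(src):
            writer1.writerow(row[:SPLIT_AT])
            writer2.writerow(row[:1] + row[SPLIT_AT:])  # repeat the first column in the second file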

  • Unable to load CSV file with comma inside the column (SQL Server 2008)

    Hi,
    I am not able to load a CSV file with a comma inside a column.
    Here is sample File:(First row contain the column names)
    _id,qp,c
    "1","[ ""0"", ""0"", ""0"" ]","helloworld"
    "1","[ ""0"", ""0"", ""0"" ]","helloworld"
    When I specify the text qualifier as " (double quote), it works in SQL Server 2012, whereas it fails in SQL Server 2008, complaining with the error:
    TITLE: Microsoft Visual Studio
    The preview sample contains embedded text qualifiers ("). The flat file parser does not support embedding text qualifiers in data. Parsing columns that contain data with text qualifiers will fail at run time.
    BUTTONS:
    OK
    I need to do this in sql server 2008 R2 with Service pack 2, my build version is 10.50.1600.1.
    Can someone let me know if it is possible to handle this the same way it is handled in SQL Server 2012?
    Or
    Was it resolved in any cumulative update after 10.50.1600.1?
    Regards Harsh

    Hello,
    If you have a CSV with double quotes inside double quotes and you are on SSIS 2008, I suggest:
    In your data flow, use a Script Component configured as a source, then define the output columns, e.g. Output 0 output columns:
    Id - four-byte signed integer
    gp - Unicode string
    cc - Unicode string
    Do not use a flat file connection; instead use a user variable in which you store the name of the flat file (this could be set inside a Foreach Loop over the files).
    The user variable is supplied as a read-only variable argument to the script component in your data flow.
    In the script component's CreateNewOutputRows method, use a System.IO.StreamReader with the user variable to read the file, and use the output buffer's AddRow method to add each line to the output of the script component.
    Between the ReadLine call and the AddRow call you add your code to extract the 3 column values.
    public override void CreateNewOutputRows()
    {
        // Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
        // For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
        string line;
        System.IO.StreamReader file = new System.IO.StreamReader(Variables.CsvFilename);
        while ((line = file.ReadLine()) != null)
        {
            // System.Windows.Forms.MessageBox.Show(line);   // optional: inspect each line while debugging
            if (line.StartsWith("_id,qp,c") != true)         // skip the header line
            {
                Output0Buffer.AddRow();
                string[] mydata = Yourlineconversionhere(line);   // your own routine that extracts the 3 column values
                Output0Buffer.Id = Convert.ToInt32(mydata[0]);
                Output0Buffer.gp = mydata[1];
                Output0Buffer.cc = mydata[2];
            }
        }
        file.Close();
    }
    Jan D'Hondt - SQL server BI development

  • I also have a .csv file with the name of a jpeg in one column and a text description of each jpeg in a second column. Is there a way to automatically insert one jpeg (photo) and its corresponding text, each pair on one page, into an InDesign document?


    I would also recommend writing the description into the metadata. This lets you place a text frame above the image, and the meta information and file name can be added automatically together with the image when you place it, or even in a prepared template.
    Metadata can be edited easily in Bridge in the Metadata workspace.

  • How to import data from CSV file with columns separated by semicolon?

    I am migrating a database from MS SQL 2008 to Oracle 11g.
    I export data to CSV files from MS SQL,
    then I try to import them into Oracle.
    Several tables go fine using the Import Data option in SQL Developer.
    Standard CSV files with data separated by commas were imported.
    Char, date (with format string), and integer data are imported via the import wizard without problems.
    The problems start when I try to import a table with non-integer numbers whose decimal part is separated by a comma, not by a dot.
    Comma is the standard column separator in a CSV file,
    so I must change the standard separator to a semicolon.
    Then the import wizard has trouble recognizing the column data correctly, because it uses only the standard CSV comma separator :-/
    In SQL Developer 1.5.3 Tools -> Preferences -> Migration -> Data Move Options
    I changed "End of Column Delimiter" to ";" but it doesn't work.
    Is it possible to change the standard column separator for the import data wizard in SQL Developer 1.5.3?
    Or maybe someone knows how to import data in SQL Developer 1.5.3 from a CSV when the column separator is set to a semicolon?

    A new preference has been added to customize the import delimiter in the main code line. This should be available as part of a future release.

  • Reading a csv file with a large number of columns

    Hello
    I have been attempting to read data from large csv files with 38 columns by reading a line using readline and scanning the linebuffer using scan.
    The file size can be up to 100 MB.
    Scan does not seem to support that large a number of fields.
    Any suggestions on reading the 38 comma-separated fields? There is one header line in the file.
    Thanks
    Solved!
    Go to Solution.

    See if strtok() is useful: http://www.elook.org/programming/c/strtok.html
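    If the file can be pre-processed or the layout prototyped outside the acquisition code, the same split is easy to verify with Python's csv module. This is only an alternative sketch to the strtok() suggestion above; the file name and the assumption that all 38 fields are numeric are illustrative.
    # Alternative sketch (not strtok): check the 38-field layout with Python's
    # csv module before wiring up the parser. The file name is an assumption.
    import csv

    with open("data.csv", newline="") as f:
        reader = csv.reader(f)
        header = next(reader)                  # one header line in the file
        for row in reader:
            assert len(row) == 38, f"unexpected field count: {len(row)}"
            values = [float(x) for x in row]   # assumes all 38 fields are numeric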

  • How to generate a second csv file with different report columns selected?

    Hi everybody,
    How to generate a second csv file with different report columns selected?
    The first csv file is easy (report attributes -> report export -> enable CSV output Yes). However, our users demand 2 csv files with different report columns selected to meet their different needs.
    (The users don't want to have one csv file with all report columns included. They just want to get whatever they need directly, no extra columns)
    Thank you for any help!
    MZ

    Hello,
    I do this regularly. A typical example would be: in the report only the columns "FIRST_NAME" and "LAST_NAME" are displayed, whereas
    in the CSV exported with UTL_FILE the complete address (street, house number, additions, zip, town, state ...) is written; these things are needed e.g. for form letters.
    You do not need another page, just an additional button named e.g. "export_to_csv" on your report page.
    The CSV export itself is handled by a PL/SQL stored procedure (I like to have business logic outside of APEX) which is invoked by pressing the "export_to_csv" button. Of course the stored procedure can also take parameters.
    Example code would be something like:
    PROCEDURE srn_brief_mitglieder (
         p_start_mg_nr IN NUMBER,
         p_ende_mg_nr IN NUMBER
    )
    AS
    export_file          UTL_FILE.FILE_TYPE;
    l_line               VARCHAR2(20000);
    l_lfd               NUMBER;
    l_dateiname          VARCHAR2(100);
    l_datum               VARCHAR2(20);
    l_hilfe               VARCHAR2(20);
    CURSOR c1 IS
    SELECT
    MG_NR
    ,TO_CHAR(MG_BEITRITT,'dd.mm.yyyy') AS MG_BEITRITT ,TO_CHAR(MG_AUFNAHME,'dd.mm.yyyy') AS MG_AUFNAHME
    ,MG_ANREDE ,MG_TITEL ,MG_NACHNAME ,MG_VORNAME
    ,MG_STRASSE ,MG_HNR ,MG_ZUSATZ ,MG_PLZ ,MG_ORT
    FROM MITGLIEDER
    WHERE MG_NR >= p_start_mg_nr
    AND MG_NR <= p_ende_mg_nr
    --WHERE ROWNUM < 10
    ORDER BY MG_NR;
    BEGIN
    SELECT TO_CHAR(SYSDATE, 'yyyy_mm_dd' ) INTO l_datum FROM DUAL;
    SELECT TO_CHAR(SYSDATE, 'hh24miss' ) INTO l_hilfe FROM DUAL;
    l_datum := l_datum||'_'||l_hilfe;
    --DBMS_OUTPUT.PUT_LINE ( l_datum);
    l_dateiname := 'SRNBRIEF_MITGLIEDER_'||l_datum||'.CSV';
    --DBMS_OUTPUT.PUT_LINE ( l_dateiname);
    export_file := UTL_FILE.FOPEN('EXPORTDIR', l_dateiname, 'W');
    l_line := '';
    --HEADER
    l_line := '"NR"|"BEITRITT"|"AUFNAHME"|"ANREDE"|"TITEL"|"NACHNAME"|"VORNAME"';
    l_line := l_line||'|"STRASSE"|"HNR"|"ZUSATZ"|"PLZ"|"ORT"';
         UTL_FILE.PUT_LINE(export_file, l_line);
    FOR rec IN c1
    LOOP
         l_line :=  '"'||rec.MG_NR||'"';     
         l_line := l_line||'|"'||rec.MG_BEITRITT||'"|"' ||rec.MG_AUFNAHME||'"';
         l_line := l_line||'|"'||rec.MG_ANREDE||'"|"'||rec.MG_TITEL||'"|"'||rec.MG_NACHNAME||'"|"'||rec.MG_VORNAME||'"';     
         l_line := l_line||'|"'||rec.MG_STRASSE||'"|"'||rec.MG_HNR||'"|"'||rec.MG_ZUSATZ||'"|"'||rec.MG_PLZ||'"|"'||rec.MG_ORT||'"';          
    --     DBMS_OUTPUT.PUT_LINE (l_line);
    -- in datei schreiben
         UTL_FILE.PUT_LINE(export_file, l_line);
    END LOOP;
    UTL_FILE.FCLOSE(export_file);
    END srn_brief_mitglieder;
    Edited by: wucis on Nov 6, 2011 9:09 AM

  • How to create a csv file with NCS attributes?

    Hi
    i installed Cisco Prime NCS and trying to perform bulk update of device credentials with csv file.
    How to create a csv file with all required attributes?
    This is part of NCS online help talking about this topic:
    Bulk Update Devices—To update the device credentials in a bulk, select Bulk Update Devices from the Select a command drop-down list. The Bulk Update Devices page appears.You can choose a CSV file.
    Note        The CSV file contains a list of devices to be updated, one device per line. Each line is a comma separated list of device attributes. The first line describes the attributes included. The IP address attribute is mandatory.
    Below is a test CSV file I created, but it does not work:
    10.64.160.31,v2c,2,10,snmpcomm,ssh,zeus,password,password,enablepwd,enablepwd,60
    10.64.160.31,v2c,2,10,snmpcomm,ssh,zeus,password,password,enablepwd,enablepwd,60
    The error i am getting while importing this file:
    Missing mandatory field [ip_address] on header line:10.64.160.31,v2c,2,10,snmpcomm,ssh,zeus,password,password,enablepwd,enablepwd,60
    Assistance appreciated.

    It looks like the IP address field is incorrectly set.
    It should be as follows
    {Device IP},{Device Subnet Mask}, etc etc
    So a practical example of the above could be (I don't know if this is completely correct after the IP address / subnet mask):
    10.64.160.31,255.255.255.0,v2c,2,10,snmpcomm,ssh,zeus,password,password,enablepwd,enablepwd,60
    below is a link to the documentation
    http://www.cisco.com/en/US/docs/wireless/ncs/1.0/configuration/guide/ctrlcfg.html#wp1840245
    HTH
    Darren

  • How to upload .csv file with tab as delimiter.

    Hi,
    I want to upload a .csv file with tab as the delimiter to a Unix path in the background.
    I know there is function module 'GUI_UPLOAD', but in my case data is available in an internal table .
    Upload the *.CSV:
    OPEN DATASET lv_filename
         FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc = 0.
    * Write the file records into the file on the application server
      LOOP AT gt_datatab INTO gs_datatab.
        TRANSFER gs_datatab TO lv_filename.
      ENDLOOP.
    ENDIF.
    CLOSE DATASET lv_filename.

    Bhanu,
    Define local variables for the tab character (taken from CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB) and the output line:
    DATA: LV_TAB    TYPE C LENGTH 1,
          LV_STRING TYPE STRING.
    LV_TAB = CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB.
    LOOP AT GT_DATATAB INTO GS_DATATAB.
      CONCATENATE GS_DATATAB-FIELD1
                  GS_DATATAB-FIELD2
                  GS_DATATAB-FIELDN
                  INTO LV_STRING SEPARATED BY LV_TAB.
      TRANSFER LV_STRING TO lv_filename.
    ENDLOOP.
    Thanks,
    Vikram.M

  • Write the results script of results log pane to XLS or CSV file with VBA.

    Hi,
    How can I write the results from the results log pane to an XLS or CSV file with VBA code or something? I tried so hard but I can't.
    Thanks

    MoGas,
    This is actually not a trivial process. You need to use the results object and code it to write to your file (it is described in the help files).
    e-Tester automatically saves the results log as a text file so you may just want to stick with that for simplicity.

  • Download int table into csv file with each column in separate column in csv

    Hi All,
    I want to download the data in an internal table to a CSV file, but each column of the table should come as a separate column in the CSV format.
      CALL FUNCTION 'GUI_DOWNLOAD'
        EXPORTING
          FILENAME                = GD_FILE
          FILETYPE                = 'ASC'
          WRITE_FIELD_SEPARATOR   = 'X'
        tables
          DATA_TAB                = I_LINES_NEW
        EXCEPTIONS
          FILE_OPEN_ERROR         = 1
          FILE_READ_ERROR         = 2
          NO_BATCH                = 3
          GUI_REFUSE_FILETRANSFER = 4
          INVALID_TYPE            = 5
          NO_AUTHORITY            = 6
          UNKNOWN_ERROR           = 7
          BAD_DATA_FORMAT         = 8
          HEADER_NOT_ALLOWED      = 9
          SEPARATOR_NOT_ALLOWED   = 10
          HEADER_TOO_LONG         = 11
          UNKNOWN_DP_ERROR        = 12
          ACCESS_DENIED           = 13
          DP_OUT_OF_MEMORY        = 14
          DISK_FULL               = 15
          DP_TIMEOUT              = 16
          OTHERS                  = 17.
      IF SY-SUBRC NE 0.
        WRITE: 'Error ', SY-SUBRC, 'returned from GUI_DOWNLOAD SAP OUTBOUND'.
        SKIP.
      ENDIF.
    With the above values passed, I am getting a CSV file but all the columns end up in one column, separated by some square symbol.
    How to separate them into different columns.
    Thanks in advance
    rgds,
    Madhuri

    The example below might help you understand how to download a CSV file:
    TYPE-POOLS: truxs.
    DATA: i_t001 TYPE STANDARD TABLE OF t001,
          i_data TYPE truxs_t_text_data.
    SELECT * FROM t001 INTO TABLE i_t001 UP TO 20 ROWS.
    CALL FUNCTION 'SAP_CONVERT_TO_TEX_FORMAT'
      EXPORTING
        i_field_seperator          = ','
    *   I_LINE_HEADER              =
    *   I_FILENAME                 =
    *   I_APPL_KEEP                = ' '
      TABLES
        i_tab_sap_data             = i_t001
    CHANGING
       i_tab_converted_data       = i_data
    EXCEPTIONS
       conversion_failed          = 1
       OTHERS                     = 2.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
              WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    DATA: file TYPE string VALUE 'C:\testing.csv'.
    CALL METHOD cl_gui_frontend_services=>gui_download
      EXPORTING
        filename                = file
      CHANGING
        data_tab                = i_data[]
      EXCEPTIONS
        file_write_error        = 1
        no_batch                = 2
        gui_refuse_filetransfer = 3
        invalid_type            = 4
        no_authority            = 5
        unknown_error           = 6
        header_not_allowed      = 7
        separator_not_allowed   = 8
        filesize_not_allowed    = 9
        header_too_long         = 10
        dp_error_create         = 11
        dp_error_send           = 12
        dp_error_write          = 13
        unknown_dp_error        = 14
        access_denied           = 15
        dp_out_of_memory        = 16
        disk_full               = 17
        dp_timeout              = 18
        file_not_found          = 19
        dataprovider_exception  = 20
        control_flush_error     = 21
        not_supported_by_gui    = 22
        error_no_gui            = 23
        OTHERS                  = 24.
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                 WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    Regards
    Eswar

  • IDoc to csv file with required fields

    All,
    I have a source IDoc going to XI.  For example it contains source fields SF1 (required), SF2 (required), SF3(optional) and SF4(optional).
    I want to produce a target file using file content conversion.  I want to produce target fields TF1(required), TF2(required), TF3(optional), and TF4(optional).  In my mapping SF1 maps to TF1, SF2 maps to TF2, etc...
    I want to produce a comma-separated file. I am using file content conversion and I am using NameA.fieldSeparator (set to a comma) in my file content conversion parameters.  I have no problem when all 4 source fields are populated.  In this case, if the values of my source fields are ABC, 123, XYZ and 789, then I get a flat file with the result:
    ABC,123,XYZ,789
    The problem is when my optional fields are blank I currently get the following in my csv file:
    ABC,123
    When instead I want:
    ABC,123,,
    I know I've seen threads relating to this issue but I haven't had any success locating them.  Any insight is appreciated.

    Hi Shaun,
    Source : SF1 (required), SF2 (required), SF3(optional) and SF4(optional).
    Target: TF1(required), TF2(required), TF3(optional), and TF4(optional)
    For the optional fields in the mapping: if the source optional field is not there, then assign a blank constant to the target. You will get the output as below, because on the target side the element will be there with a blank value and FCC will process it.
    ABC,123,,
    Otherwise, try NameA.fieldFixedLengths in FCC.
    Regards,
    Prasanna

  • GUI_UPLOAD for csv file with numeric and date fields

    Hi,
    I searched the net on how to use GUI_UPLOAD for CSV files using the SPLIT function, but SPLIT requires that the data type of all fields of the internal table be character, whereas in my case there are numeric and date fields too. So kindly help me with how to do it.
    One way people might suggest is to declare all the fields as char in the internal table and, after receiving the data, convert the required fields back to date and numeric and then finally insert into the database. But if I have 80 fields you really don't want to do that, as it requires precision to convert each field to the same type and length as in the original database.

    Yeah, I guess that would be a bit longer if I am having a larger number of fields (80), compared to Oracle where this can be done in a shorter way.
    So I shall use 2 internal tables here, right? I mean the first one (with all fields as char) for storing the SPLIT values and the other one (with the actual data types). Can you please tell me how to convert or cast from one data type to another while moving values from the first internal table to the other?
    Thanks for your help. Can you also suggest whether there is any other method for uploading CSV besides this SPLIT method, and whether this method is better and faster than the others?
    Thanks

  • Problem importing a CSV file with forward slashes in a column

    I have an Excel CSV file of a product database (containing about 6500 products) that contains product codes with forward slashes such as 499/1, 499/3, 499/5.
    These are different sizes of a product and as such have different prices etc.
    When I import the file into Numbers these numbers appear as 499, 166 1/3, and 99 4/5 respectively.
    What seems to be happening is that Numbers is interpreting the forward slash (/) as a divide by command and subsequently performing a calculation on that number on import and hence totally changing the value of the cell so that it is impossible to look up the price related to a product as the product code no longer exists.
    Excel can import these files with no problems; why can't Numbers treat each cell as text and leave it alone on import?
    Is there any way round this, or do I have to revert to using Excel for the import of CSV files?
    Thanks
    Steve

    I know I'm a bit(!) late (a year) coming to this party, but there is a simple solution that worked for me: enclose the field in double quotes and add a single quote before the number:
    Instead of
    499/1,"Super Widget 3",12.34
    do
    "'499/1","Super Widget 3",12.34
    In fact the single (unclosed) quote without double-quotes works as well:
    '499/1,"Super Widget 3",12.34
    I've always found it better to enclose strings with double quotes. This works on loading the file into Mac Numbers and should work with Excel too, if that helps. It opens with OpenOffice 4 on the Mac too, if you select "comma" in the "Separated by" checkbox.
    I can't remember where I picked this info up....
    Hope this helps someone, albeit late.
    Andy

  • Loading a CSV file with Umlaut characters (àáä)

    Hi,
    We are uploading a CSV file through a custom JSP page built on the Oracle JTF framework.
    The JSP page is loading the data into the FND_LOBS table using the JTF object oracle.apps.jtf.amv.ServletUploader.
    The CSV file is stored properly in the FND_LOBS table with the umlaut characters.
    Now the JSP page invokes a Java object to read and parse the data. We are selecting the data first into BLOB object and then using the Input Stream Reader to get the data.
    Here is the sample code:
    oraclepreparedstatement = (OraclePreparedStatement)oracleconnection.prepareStatement(" SELECT FILE_DATA FROM FND_LOBS WHERE FILE_ID = :1 ");
    oraclepreparedstatement.defineColumnType(1, 2004);
    oraclepreparedstatement.setLong(1, <file id>);
    oracleresultset = (OracleResultSet)oraclepreparedstatement.executeQuery();
    blob = (BLOB)oracleresultset.getObject(1);
    InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream());
    lineReader = new LineNumberReader(inputstreamreader);
    lCSVLine = lineReader.readLine();
    I tried printing the character set used by the InputStreamReader and it returned as ASCII
    Now I tried setting different character sets to read the umlaut characters (German chars), but nothing has worked.
    InputStreamReader inputstreamreader = new InputStreamReader(blob.getBinaryStream(),"UTF-8");
    Can someone please let me know where and how to set the Character Set to accept the Umlaut characters like àáä?
    Thanks,
    Anji

    Thank you for the quick response.
    Requirement:
    I need to retrieve the BLOB object with umlaut characters from the database, parse the data with a comma delimiter into strings, and store them in the database.
    I am viewing the umlaut data from the database table using the TOAD utility tool.
    I tried the same code example provided above but it is not working as expected. The umlaut characters are translated to 'ýýý'.
    CODE EXAMPLE:
    Input:
    CREATE TABLE test_umlaut (sno NUMBER, col1 VARCHAR2(100), col3 BLOB);
    insert into test_umlaut(sno,col3) values(200, utl_raw.cast_to_raw('äöüÄÖÜ'));
    Note: Verified that the database is showing the umlaut characters on selecting the col3 and storing in a flat file
    --- code
    OraclePreparedStatement oraclepreparedstatement10 = null;
    OracleResultSet rs = null;
    oraclepreparedstatement10 = (OraclePreparedStatement)oracleconnection.prepareStatement(" SELECT col3 FROM test_umlaut WHERE sno = 200 ");
    oraclepreparedstatement10.defineColumnType(1, 2004);
    rs = (OracleResultSet)oraclepreparedstatement10.executeQuery();
    while (rs.next()) {
        BLOB b = (BLOB)rs.getObject(1);
        InputStream is = b.getBinaryStream();
        // Read the BLOB as UTF-8; the encoding must match the one used when the data was written.
        InputStreamReader r = new InputStreamReader(is, "UTF-8");
        BufferedReader br = new BufferedReader(r);
        String line;
        while ((line = br.readLine()) != null) {
            System.out.println(line);
            OraclePreparedStatement oraclepreparedstatement12 = (OraclePreparedStatement)oracleconnection.prepareStatement(" INSERT INTO test_umlaut(sno,col1) VALUES (300,?) ");
            oraclepreparedstatement12.setString(1, line);
            oraclepreparedstatement12.executeUpdate();
        }
        br.close();
        r.close();
        is.close();
    }
    Output: Verified the output from the database table which is inserted in the loop above.
    select col1 from test_umlaut where sno=300
    ýýý
