Detecting line-breaks within a column of an uploaded tab-delimited file.

Suppose you upload a tab-delimited file from your laptop and split each row of the file into some structure that you append to an itab.
Is there a way inside ABAP to detect that a field of the uploaded file has a CR or CRLF in it? And if so, where it is?
Thanks in advance ...

You can use the constants in class CL_ABAP_CHAR_UTILITIES for those characters. Note that CR_LF is two characters long, so the receiving field needs length 2 (a one-character field would keep only the CR):
DATA:   gv_crlf(2)     TYPE c VALUE cl_abap_char_utilities=>cr_lf,
        gv_newline(1)  TYPE c VALUE cl_abap_char_utilities=>newline,
        gv_formfeed(1) TYPE c VALUE cl_abap_char_utilities=>form_feed.
Declare the above variables (or use the class constants directly) and check whether they occur in the uploaded fields. Hope this helps.
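For example, a minimal sketch of how you could check each parsed field after the upload and report the offset of the break (gv_crlf is the variable declared above; gt_upload and its component TEXT are made-up names, so adapt them to your own structure):

DATA: gv_cr(1)   TYPE c,
      gv_offset  TYPE i.

FIELD-SYMBOLS: <ls_row> LIKE LINE OF gt_upload.

gv_cr = gv_crlf(1).   "CR is the first character of CR_LF

LOOP AT gt_upload ASSIGNING <ls_row>.
  "Look for a lone CR; this also catches the CR of a CR+LF pair.
  FIND gv_cr IN <ls_row>-text MATCH OFFSET gv_offset.
  IF sy-subrc = 0.
    WRITE: / 'CR/CRLF found in row', sy-tabix, 'at offset', gv_offset.
  ENDIF.
ENDLOOP.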

Similar Messages

  • Truncated last blank column in fixed width tab delimited file

    Hi All,
               I have to generate a fixed-width tab-delimited file on the application server. As per my requirement, the last column in the file should be blank. When I generate the file, the last column is truncated because it is populated with a blank.
    I need a solution to get the last column with blank data.
    Thanks
    Kiran

    Hi,
    The last column is probably not truncated; because it is a space, you are simply not able to see it.
    Read the file into an internal table and check in the debugger whether it is present or not.
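    If it turns out the blank really is lost when the record is written, one common workaround is to pass an explicit length to TRANSFER so trailing blanks are not cut off in text mode. A rough sketch only (lv_line, wa and p_file are made-up names, and the length must match your record size):

    DATA: lv_line TYPE c LENGTH 200.

    " Build the record so the tab before the blank last column is present.
    CONCATENATE wa-col1 wa-col2 wa-col3 ''
           INTO lv_line
           SEPARATED BY cl_abap_char_utilities=>horizontal_tab.

    " Without LENGTH, trailing blanks of a character field are dropped on TRANSFER;
    " with LENGTH, the full record length is written.
    TRANSFER lv_line TO p_file LENGTH 200.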

  • Large line to internal tables from  tab delimited file

    Dear All
    I am trying to upload a large file of tab-delimited data into an SAP internal table. I am basically stuck with the fact that there are multiple lines and multiple columns in the tab-delimited file. There are around 300 columns, separated by tabs.
    For e.g  (* indicates tab)
    1material*****************1**9888**********5**********34*********3*********346************************-->upto 5000 columns
    1material*****************1**99338************4***********************************6************7************-->upto 5000 columns
    1material*****************1**22888********************5*********7*********************6*****7**************-->upto 5000 columns
    1material*****************1**44844************************5***5*********************************************-->upto 5000 columns
    1material***********34****1**54*******33********33*****33**************************************************-->upto 5000 columns
    1material*****************1**99888*****************************************************************************-->upto 5000 columns
    ...and so on, up to 500 rows or more.
    I want to read this file into a columnar internal table.
    I have tried several ways. The file is on the application server; however, each line breaks after 1024 characters and continues on another line.
    Currently I am not able to load it into a single line of the internal table. The structure of the file is dynamic, not static.
    Amol

    Hi Amolsonaikar,
    you may try like this:
    TYPES:
      begin of line,
        t_field type standard table of string with default key,
      end of line,
      t_line type standard table of line with default key.
    DATA:
      lt_line type t_line,
      ls_line type line,
      lv_line type string.
    open dataset 'XYZ' for input in text mode encoding default.
    do.
      read dataset 'XYZ' into lv_line.
      if sy-subrc <> 0.
        exit.
      endif.
      " split at the tab character (not '|') for a tab-delimited file
      split lv_line at cl_abap_char_utilities=>horizontal_tab
            into table ls_line-t_field.
      append ls_line to lt_line.
    enddo.
    close dataset 'XYZ'.
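    To get at a single cell afterwards you could read the nested table, reusing ls_line and lt_line from the snippet above (the index values are only examples):

    DATA: lv_cell type string.

    read table lt_line into ls_line index 1.          " first file row
    read table ls_line-t_field into lv_cell index 5.  " fifth column of that row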
    Regards,
    Clemens

  • Feature request : import of multi-line notes in CSV or tab-delimited file

    This is a feature request to the AddressBook development team:
    Following a question posted here and asked to support (thanks to Barry and Julia from ADC), it turns out that AddressBook is not ready yet to import multi-line notes from a CSV or tab-delimited file.
    I would like to kindly suggest the AddressBook development team to update the import methods for such flat files to parse the text of what is assigned to the Notes field, and convert documented escape sequences to their pre-determined corresponding special character.
    I would think that the following would be pretty easy to implement, and easy for the targeted audience (enthusiast power users and up) to remember:
    "\n " : mark a new line (same as ^p in Word)
    "\t " : replace by a tab
    I would keep the trailing space to avoid clashes with DOS-type file paths.
    and of course...
    "\\n " and "\\t " for whoever would want to import notes about \n and \t
    For example, in a Notes column, the text:
    "Escape sequence \\n :\n move the text to a new line\n \n while escape sequence \\t :\t move the text to the next horizontal tab"
    would give the following (just consider text between <Note> marks):
    <Note>
    Escape sequence \n:
    move the text to a new line
    while escape sequence \t: move the text to the next horizontal tab
    </Note>
    Thanks for reading...
    Frédéric Denis

    The developers do not necessarily read these USER discussion pages. If you want to make suggestions and know they will be read (which is at least a step towards implementation) use the OS X feedback page.
    AK

  • SAP PI 7.3.1 XML line break within node

    Good Morning All,
    I've an issue with line breaks in XML. Basically, the scenario is IDoc (SAP) to File (XML, 3rd-party system).
    I've mapped a text field from the IDoc to the XML node (<comments>) .  The text data comes across with a delimiter !$! to indicate where a line break should occur within the <comment>  node in the XML
    For example the text line comes across in the IDoc as  line 1!$!This is Line 2!$! this one is line 3!$!  and within the xml file I need to the <comment> node to look like  <comment>line 1
                                       This is line 2
                                       this one is line 3</comment>
    We need this for formatting in the 3rd party system. I've tried various replaces etc. but the new line never seems to get recognized. 
    Any help with this would be very much appreciated.
    Thanks
    G

    Use the OS-specific new line character by using System.getProperty("line.separator").
    Here is the complete code:
    UDF with Context option
    Parameters: (String[] var, ResultList result)
    // "$" is a regex metacharacter, so escape it to split on the literal delimiter "!$!"
    String temp[] = var[0].split("!\\$!");
    for (int i = 0; i < temp.length; i++)
         result.addValue(temp[i] + System.getProperty("line.separator"));
    Check it should work, I have not tested...
    --Divyesh

  • Line Break for the column

    Hi,
    I have a BIP Report.
    The report layout includes a free-text column. When the text is long enough it runs off towards the right side.
    I want to wrap the text or insert a line break in it.
    In the rtf template, I am using the following formula:
    <?html2fo: OUTPUT_NOTES?>
    Any Ideas?

    Dear,
    Add the UDFs to the marketing document on which you want the tax break-up.
    Apply FMS using the queries below and call these UDFs on your PLD.
    FOR BED
    DECLARE @Amount as Numeric(19,2)
    DECLARE @Rate as Numeric(19,0)
    DECLARE @TAmount as Numeric(19,2)
    set @TAmount =$[$38.21.Number]
    SELECT    @Rate=STA1.Rate
    FROM         OSTC INNER JOIN
                          STC1 ON OSTC.Code = STC1.STCCode INNER JOIN
                          STA1 ON STC1.STACode = STA1.StaCode Where   OSTC.Code=$[$38.160.0] And STA1.SttType='9' order by STA1.EfctDate desc
    set @Amount=(@TAmount * @Rate)/100
    Select @Amount
    For Cess
    DECLARE @Amount as Numeric(19,2)
    DECLARE @Rate as Numeric(19,0)
    DECLARE @TAmount as Numeric(19,2)
    set @TAmount = $[$38.U_BED.Number]
    SELECT    @Rate=STA1.Rate
    FROM         OSTC INNER JOIN
                          STC1 ON OSTC.Code = STC1.STCCode INNER JOIN
                          STA1 ON STC1.STACode = STA1.StaCode Where   OSTC.Code=$[$38.160.0] And STA1.SttType='8' order by STA1.EfctDate desc
    set @Amount=(@TAmount * @Rate)/100
    Select @Amount
    For HCess
    DECLARE @Amount as Numeric(19,2)
    DECLARE @Rate as Numeric(19,0)
    DECLARE @TAmount as Numeric(19,2)
    set @TAmount = $[$38.U_BED.Number]
    SELECT    @Rate=STA1.Rate
    FROM         OSTC INNER JOIN
                          STC1 ON OSTC.Code = STC1.STCCode INNER JOIN
                          STA1 ON STC1.STACode = STA1.StaCode Where   OSTC.Code=$[$38.160.0] And STA1.SttType='10' order by STA1.EfctDate desc
    set @Amount=(@TAmount * @Rate)/100
    Select @Amount
    This will help you.
    regards,
    Neetu

  • Inserting line break within label

    Hey guys,
    i've run into a small problem with the coding. right now im creating an image viewer within a horizontal list. at first, i had each object coded inside the horizontal list but then i decided to place everything in an xml file to make changes easier. since i've switched, i have not been able to figure out how to insert a line break for each label. this is what the code looked like when i was naming each object.
    <mx:HorizontalList id="PosterSelect" height="352"
    columnCount="3" columnWidth="200" width="580"
    rollOverColor="#ff3344" themeColor="#DC240B"
    itemClick="itemClicked(event)">
    <mx:dataProvider>
    <mx:Array>
    <mx:Object id="object1" label="March 12, 2008&
    #13;Chicago, IL" data="events"/>
    <mx:Object id="object2" label="March 12, 2008&
    #13;Chicago, IL" data="events"/>
    <mx:Object label="March 12, 2008& #13;Chicago, IL"
    icon="{event3}" data="events"/>
    <mx:Object label="March 12, 2008& #13;Chicago, IL"
    icon="{event4}" data="events"/>
    <mx:Object label="March 12, 2008& #13;Chicago, IL"
    icon="{event5}" data="events"/>
    </mx:Array>
    </mx:dataProvider>
    </mx:HorizontalList>
    within the label property, i used the & #13; character to insert a line break. now im using an item renderer to call the information from my xml file. the problem is... i cannot figure out how to insert a line break like i did previously. from what i understand... the text in the label field of the xml file is already parsed when it comes into flex. so using the & #13; or \n characters will not work. i also tried hitting enter to insert a new line in the xml file but that did not work either. does anyone know how i could work around this?? below is my current code and xml
    <eventinfo>
    <events>
    <label>March 12, 2008#13;Chicago, IL</label>
    <group>group name</group>
    <location>Detroit, MI</location>
    <icon>posters/event1.png</icon>
    <fullsize>posters/event1.png</fullsize>
    </events>
    <events>
    <label>March 12, 2008#13;Chicago, IL</label>
    <group>group name</group>
    <location>Detroit, MI</location>
    <icon>posters/event4.png</icon>
    <fullsize>posters/event1.png</fullsize>
    </events>
    </eventinfo>
    <mx:HorizontalList id="HorizontalCanvas" height="337"
    columnWidth="180" width="672"
    rollOverColor="#ff3344" themeColor="#DC240B"
    itemClick="callJavaScript()" x="10" y="33" borderStyle="solid"
    dataProvider="{eventinfo.events}" borderColor="#000000">
    <mx:itemRenderer>
    <mx:Component>
    <mx:VBox width="100%" height="350"
    horizontalAlign="center">
    <mx:Image source="{data.icon}"/>
    <mx:Label text="{data.label}"/>
    </mx:VBox>
    </mx:Component>
    </mx:itemRenderer>
    </mx:HorizontalList>

    hey atta,
    sry, i was using the &#13; character but for some reason when i posted the character it turned into a space so i added the space inbetween the & and the #13;
    but yea i think it was the height... changed it to 40 and it worked.. i cant believe i didnt notice that lol... oh well... thx for the help!!!

  • How to detect line break while reading input ?

    Hi all,
    I am reading the user input from standard input.
    I want to detect the line break so that I can stop reading input and proceed with processing the string.
    Actually I am getting the SQL query as input and after that I am executing the same by passing it to a function.
    Pl. do reply me.
       Scanner scanner = new Scanner(System.in);             
            while(scanner.hasNext())
                 temp=scanner.next();
                  sql=sql.concat(" "+temp);
                 if(scanner.next=="\n")
                            break;
           System.out.println(sql);
           sqlTool.executeSQL(sql);
    The above is not working properly.

    But if a new line comes, what will be its value? Empty lines are discarded by the scanner.
    Kaj

  • Remove line breaks within a xml element

    Hi,
    I have a xml element that contains a long text string with multiple line breaks. something like this:
    text
    text
    text text
    text
    text text text
    text
    How can I remove all line breaks except one, i.e. I still want a line break after each text line:
    text
    text
    text text
    text
    text text text
    text
    Thanks for you help!
    Magnus

    This is a media object (BLOB) in a JDE report. But we have now modified the report to get the xml file correct from the start instead.
    /Magnus

  • Line breaks within contents of floating fields

    Hello everyone,
    I have the following question regarding the usage of floating fields and line breaks:
    I have a text that incorporates a company's name using a floating field.
    If the name of the company is too long, a line break is forced automatically. In some cases the company's name is something like "Company 123 N.V.". When the line break is forced on this company, it shows "Company 123 N." on the first line and "V." on the second line.
    How can I prevent the line break at "N."? I suppose the break is because of the "." in the text. But I want a line break only if the "." is followed by a space, not if there is more text after it.
    We face the same sort of issue with floating fields containing, for example, telephone numbers (format: "+31 (0) 20 - 123 456 1", line break at the "+", "(" or the spaces) and e-mail addresses ( [email protected], line break at the "@" or the ".").
    Thanks for any reply on this.
    Regards,
    Joris

    I have the same problem. Does anyone have an answer for this one, please?
    Thanks,
    Vanessa

  • How to create a TAB Delimited file with blank columns

    Hi,
    In my requirement I want to generate a tab-delimited file, so I used FCC in the Receiver CC and am able to create a tab-delimited file. But the file content is not in the format I am expecting. The file is created as shown below.
    Eid       First Name      Middle Name         Last name
    7          raj                   reddy                    petter
    8          ram                 johnson
    Here 'johnson' is the last name of 'ram', but it is displayed under the 'Middle Name' column because the employee does not have a middle name.
    But i want file in the format shown below.
    Eid       First Name      Middle Name         Last name
    7          raj                   reddy                    petter
    8          ram                                             johnson
    Please suggest me what i need to do to generate the file in the required format.
    Thanks  in Advance..
    Regards
    Sreeni

    Hi,
    You can handle this in mapping. Just put a condition for the fields that can be missing (middle name, last name), like:
    field --- equalS --- if/then --- Constant (space) --- Target field --- else --- Field --- Target field
    (using a Constant (blank) for the comparison)
    Thanks!

  • Preventing line breaks within field

    Hi,
    Some of my fields contain phrases that should stay together rather than break over a line. For example, one of my fields is a date field (e.g. 27 October 2009) and I'd like the whole date field to stay together on one line. Is there any way to control this?
    Thanks a lot for any help provided!

    Is that within a table cell or free text?
    In free text it should stay together unless the field is right at the end of a line e.g.
    +This piece of text was written on 1st January
    2009 and has broken across a line.+
    Not much you can do for that except ensure that there is plenty of space.
    If it's in a cell/column of a table, you need to give it enough width to handle the string in the Word table. If you want to handle it programmatically you can, but it's going to squash other columns to make enough room. You are better off 'hardcoding' the table column widths to handle the expected data.
    Regards
    tim

  • How to handle line break embeded inside CSV column

    Hi there,
    I am under pressure to make it work. I already put this question on the APEX forum, but on second thought, I think it relates more to PL/SQL than APEX, since APEX 4.1 already has a utility to handle CSV upload.
    If you read it already in APEX forum, please ignore.
    I am sorry for that. Thanks for reading.
    I need to develop an app that allows user to upload CSV file to a interface table.
    The APEX version at my workplace is 4.0.2.
    I used the code from
    http://dbswh.webhop.net/htmldb/f?p=BLOG:READ:0::::ARTICLE:11000346061523
    It all worked well until recently, when I found out that
    if a column in a CSV file contains a line break (or new line), e.g. (the tester copied and pasted this text, which has a line break, into a column in a spreadsheet)
    This is the first sentence.
    This is the second sentence.
    it will break "This is the second sentence." into a new column.
    The contents of the CSV viewed in Notepad look as below
    Assessment Date,Scheduled Date,Assessment Provider,Assessor Name,Court,First Name,Middle Name,Last Name,PRN Person Record Number,NHI Number,Defendant Attended Y/N,Is Dependent Y/N,Notes,Primary Ethnicity,"Ethnicity Other, please specify",Gender,Currently in Treatment Y/N,Substance of Concern 5,Other Substance Specified
    22/09/2012,,Provider Co Name,Warren Edgley,Wellington,,,Salty,2545554,dgsdf,ergerg,,"This is the first sentence.
    This is the second sentence.",Japanese,,Female,b,,
    Here is the code from the CSV_UTIL package. Please help me find out how I can replace the line break with a space so that the uploading process is correct.
      CREATE OR REPLACE PACKAGE BODY "CSV_UTIL"
    AS
         --strip the beginning and the end quotes, then replace double quotation with single
       FUNCTION de_quote (p_str IN VARCHAR2, p_enc_by IN VARCHAR2)
          RETURN VARCHAR2
       IS
       v_str VARCHAR2(32767) := p_str;
       BEGIN
          IF (p_enc_by IS NULL)
          THEN
             RETURN p_str;
          ELSE
            IF SUBSTR(p_str,-1) = p_enc_by THEN
               v_str := SUBSTR(p_str,1,LENGTH(p_str)-1);
            END IF;
            IF SUBSTR(p_str,1,1) = p_enc_by THEN
               v_str := SUBSTR(v_str,2);
            END IF; 
            RETURN REPLACE (v_str,
                             p_enc_by || p_enc_by,
                             p_enc_by);
          END IF;
       END de_quote;
       PROCEDURE parse (p_str IN VARCHAR2, p_enc_by IN VARCHAR2, p_sep IN VARCHAR2)
       IS
          l_n          NUMBER   DEFAULT 1;
          l_in_quote   BOOLEAN  DEFAULT FALSE;
          l_ch         NCHAR (1);
          l_len        NUMBER   DEFAULT NVL (LENGTH (p_str), 0);
       BEGIN
          IF (l_len = 0)
          THEN
             RETURN;
          END IF;
          g_words := g_empty;
          g_words (1) := NULL;
          FOR i IN 1 .. l_len
          LOOP
             l_ch := SUBSTR (p_str, i, 1);
             IF (l_ch = p_enc_by)
             THEN
                l_in_quote := NOT l_in_quote;
             END IF;
             IF (l_ch = p_sep AND NOT l_in_quote)
             THEN
                l_n := l_n + 1;
                g_words (l_n) := NULL;
             ELSE
                g_words (l_n) := g_words (l_n) || l_ch;
             END IF;
          END LOOP;
          g_words (l_n) := de_quote (g_words (l_n), CHR(10));
          g_words (l_n) := de_quote (g_words (l_n), CHR(13));
          FOR i IN 1 .. l_n
          LOOP
             g_words (i) := de_quote (g_words (i), p_enc_by);
          END LOOP;
       END parse;
    -- Author: Oleg Lihvoinen
    -- Company: DbSWH
    -- Changes:
    -- 10.02.2011, There was a miscalculation of the file line last position in case it is the end of file
       PROCEDURE upload (p_file_name VARCHAR2, p_collection_name VARCHAR2, p_enc_by IN VARCHAR2, p_sep_by IN VARCHAR2, p_rows NUMBER)
       IS
          v_blob_data    BLOB;
          v_clob_data    CLOB;
          v_clob_len     NUMBER;
          v_position     NUMBER;
          v_char         NCHAR (1);
          c_chunk_len    NUMBER           := 1;
          v_line         VARCHAR2 (32767) := NULL;
          v_data_array   vcarray;
          v_rows         NUMBER           := 0;
          n_seq          NUMBER           := 1;
          dest_offset    NUMBER           := 1;
          src_offset     NUMBER           := 1;
          amount         INTEGER          := DBMS_LOB.lobmaxsize;
          blob_csid      NUMBER           := DBMS_LOB.default_csid;
          lang_ctx       INTEGER          := DBMS_LOB.default_lang_ctx;
          warning        INTEGER;
          l_sep          VARCHAR2(100)    := CASE WHEN p_sep_by = '\t' THEN chr(9) ELSE p_sep_by END;
       BEGIN
          htmldb_collection.create_or_truncate_collection
                                          (p_collection_name      => p_collection_name);
          -- Read blob from wwv_flow_files
          SELECT blob_content
            INTO v_blob_data
            FROM wwv_flow_files
           WHERE NAME = p_file_name;
          v_position := 1;
          DBMS_LOB.createtemporary (lob_loc      => v_clob_data,
                                    CACHE        => TRUE,
                                    dur          => DBMS_LOB.SESSION);
          DBMS_LOB.converttoclob (v_clob_data,
                                  v_blob_data,
                                  amount,
                                  dest_offset,
                                  src_offset,
                                  blob_csid,
                                  lang_ctx,
                              warning);
          v_clob_len := DBMS_LOB.getlength (v_clob_data);
          IF v_clob_len = 0 THEN
             RETURN;
          END IF;
          WHILE (v_position <= v_clob_len + 1)
          LOOP
             v_char := DBMS_LOB.SUBSTR (v_clob_data, c_chunk_len, v_position);
             v_line := v_line || v_char;
             v_position := v_position + c_chunk_len;
             -- When the whole line is retrieved and not end of file or end of file
             IF v_char = CHR (10) AND v_position < v_clob_len OR v_position = v_clob_len + 1
             THEN
                parse (p_str => v_line, p_enc_by => p_enc_by, p_sep => l_sep);
                v_data_array := g_words;
                FOR i IN 1..g_words.count LOOP
                   IF i <= 50 THEN
                      v_data_array(i) := g_words(i);
                   ELSE
                      exit;
                   END IF;
                END LOOP;
                FOR i IN g_words.count + 1..50 LOOP
                   v_data_array(i) := null;
                END LOOP;           
                v_rows := v_rows + 1;
                -- exit if uploaded specified number of rows
                IF p_rows IS NOT NULL AND v_rows > p_rows THEN
                   EXIT;
                END IF;
                -- Store data to collection
                n_seq :=
                   htmldb_collection.add_member
                                         (p_collection_name      => p_collection_name,
                                          p_c001                 => v_data_array
                                                                               (1),
                                          p_c002                 => v_data_array
                                                                               (2),
                                          p_c003                 => v_data_array
                                                                               (3),
                                          p_c004                 => v_data_array
                                                                               (4),
                                          p_c005                 => v_data_array
                                                                               (5),
                                          p_c006                 => v_data_array
                                                                               (6),
                                          p_c007                 => v_data_array
                                                                               (7),
                                          p_c008                 => v_data_array
                                                                               (8),
                                          p_c009                 => v_data_array
                                                                               (9),
                                          p_c010                 => v_data_array
                                                                               (10),
                                          p_c011                 => v_data_array
                                                                               (11),
                                          p_c012                 => v_data_array
                                                                               (12),
                                          p_c013                 => v_data_array
                                                                               (13),
                                          p_c014                 => v_data_array
                                                                               (14),
                                          p_c015                 => v_data_array
                                                                               (15),
                                          p_c016                 => v_data_array
                                                                               (16),
                                          p_c017                 => v_data_array
                                                                               (17),
                                          p_c018                 => v_data_array
                                                                               (18),
                                          p_c019                 => v_data_array
                                                                               (19),
                                          p_c020                 => v_data_array
                                                                               (20),
                                          p_c021                 => v_data_array
                                                                               (21),
                                          p_c022                 => v_data_array
                                                                               (22),
                                          p_c023                 => v_data_array
                                                                               (23),
                                          p_c024                 => v_data_array
                                                                               (24),
                                          p_c025                 => v_data_array
                                                                               (25),
                                          p_c026                 => v_data_array
                                                                               (26),
                                          p_c027                 => v_data_array
                                                                               (27),
                                          p_c028                 => v_data_array
                                                                               (28),
                                          p_c029                 => v_data_array
                                                                               (29),
                                          p_c030                 => v_data_array
                                                                               (30),
                                          p_c031                 => v_data_array
                                                                               (31),
                                          p_c032                 => v_data_array
                                                                               (32),
                                          p_c033                 => v_data_array
                                                                               (33),
                                          p_c034                 => v_data_array
                                                                               (34),
                                          p_c035                 => v_data_array
                                                                               (35),
                                          p_c036                 => v_data_array
                                                                               (36),
                                          p_c037                 => v_data_array
                                                                               (37),
                                          p_c038                 => v_data_array
                                                                               (38),
                                          p_c039                 => v_data_array
                                                                               (39),
                                          p_c040                 => v_data_array
                                                                               (40),
                                          p_c041                 => v_data_array
                                                                               (41),
                                          p_c042                 => v_data_array
                                                                               (42),
                                          p_c043                 => v_data_array
                                                                               (43),
                                          p_c044                 => v_data_array
                                                                               (44),
                                          p_c045                 => v_data_array
                                                                               (45),
                                          p_c046                 => v_data_array
                                                                               (46),
                                          p_c047                 => v_data_array
                                                                               (47),
                                          p_c048                 => v_data_array
                                                                               (48),
                                          p_c049                 => v_data_array
                                                                               (49),
                                          p_c050                 => v_data_array
                                                                             (50));
                -- Clear the line
                v_line := NULL;
             END IF;
          END LOOP;
       END;
    END;
    In my apps, I save these straight into a table rather than an APEX collection because the number of columns can be longer than 50.
    I want to find out how I can replace these line breaks inside a column with a space.
    If any one has any ideas, please let me know.
    Thanks a lot in advance.
    Ann

    Ann586341 wrote:
    I think the code split the whole thing by this line
    -- When the whole line is retrieved and not end of file or end of file
    IF v_char = CHR (10) AND v_position < v_clob_len OR v_position = v_clob_len + 1
    THEN
    Yes, exactly. That piece of code believes all CHR(10) occurrences are record delimiters.
    It is not smart enough to recognize that a CHR(10) within quotation marks is part of the data.
    Optimally a solution should keep the CHR(10) rather than replacing it with spaces, but that would be a bigger rewrite of the CSV_UTIL code ;-)
    If you are happy with replacing with spaces, a "simple" solution could be something like:
    Declare a boolean variable in upload procedure:
    v_within_text_column   boolean := false;
    And use it like this:
          WHILE (v_position <= v_clob_len + 1)
          LOOP
             v_char := DBMS_LOB.SUBSTR (v_clob_data, c_chunk_len, v_position);
             IF v_char = '"' THEN
               v_within_text_column := NOT v_within_text_column;
             ELSIF v_char = CHR(10) AND v_within_text_column THEN
               v_char := ' ';
             END IF;
             v_line := v_line || v_char;
             v_position := v_position + c_chunk_len;
             -- When the whole line is retrieved and not end of file or end of file
             IF v_char = CHR (10) AND v_position < v_clob_len OR v_position = v_clob_len + 1
             THEN
               v_within_text_column := false; -- To be safe always set this on "true" linebreaks
    +(This is untested code just written here in the text editor.)+
    It should work by toggling a flag whether you are "within" the quotes or not and then replacing CHR(10) with a space if you are within a text column.
    This way we avoid having to go through the clob more than once (it is enough that this code walks the clob one character at a time...)
    It will not handle cases where the clob contains something like:
    abc,123,"This is a text with a quote from a man who said \"To Be,
    or Not To Be\" some hundred years ago",123,xyz
    Escaped quotes would need separate attention ;-)

  • How to get a line break

    Hi All,
    How do I get a line break within a particular field?
    My DB structure is that I have 4 columns address_line_1, address_line_2 and so on. I cannot select them as different fields because all of them can be null, in which case i pick it from internal_address_line column.
    I use:
    SELECT DECODE(address_line_1, NULL, internal_address_line, address_line_1 || ', ' || Address_Line_2) as ADDRESS
    But instead of the comma separating the 2 address lines, I want a line break so that the 2 address lines come on 2 separate lines in the output.
    It doesn't allow me to use chr(10) as a line break and gives an error
    I'm using Reports 2.5
    TIA
    Naveen

    Yes, true.
    How about setting up the sections as:
    Section 1 Introduction ('Section num space introduction' in this example - tab may be better)
    Then generate the Contents.
    Then do a GREP find/change on the document after the contents:
    This will add a forced line break and tab after each section number... You might want to specify a para style in the Find Format box too, so that references to Section xx in body text are not altered.
    If you update Contents after this, you will get the line break and tab in the Contents too.

  • Missing line breaks with Excel

    Hello,
      I am downloading certain data into Excel using BSP. I am changing the response mime type to "application/vnd.ms-excel" and am sending a tab-delimited string as data.
      Some of the columns should contain a line break within the cell (meaning the Alt+Enter in an Excel cell). I tried using character code 10 and also 13 for the break, but it did not work.
      How do I cause a line break in the Excel cell? Any workarounds?
    Thanks.
    Srinivas.

    Hi Srinivas,
    I think you don't have the possibility to add line breaks to a tab-separated file. You would really have to create a true Excel file. Unfortunately, AFAIK this is not possible in ABAP. I've done this in PHP using Spreadsheet_Excel_Writer (http://pear.php.net/package/Spreadsheet_Excel_Writer).
    Regards
    Gregor
