Delimiter woes

In brief, here is my problem. I'm attempting to scan a string for a very large number that will always start after a ";" and end just before a "=" (without the quotes). I thought I might be able to do this with useDelimiter, but it seems I'm on the wrong path here. Could anyone point me toward what I need to grab a string from between two given points?
For reference, the string will look like this:
;################=xxxxxxxxxxxxxxxx?
Where # represents some number
P.S. I cannot use substring because the number in question is of variable length.
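Side note: variable length on its own doesn't rule out substring, as long as the cut points come from indexOf rather than fixed offsets. A minimal sketch, assuming the number always sits between the first ";" and the next "=" (the sample text is invented for illustration):
String text = ";12345678987654321=xxxxxxxxxxxxxxxx?"; // same layout as the example above
int start = text.indexOf(';') + 1;                     // position just after the ';'
int end = text.indexOf('=', start);                    // first '=' after that point
if (start > 0 && end > start) {                        // both delimiters were found
    String number = text.substring(start, end);
    System.out.println(number);                        // 12345678987654321
}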

prometheuzz wrote:
Encephalopathic wrote:
Of course you can just ignore this advice as someone will come up with some elegant Regex solution in a few minutes, you'll see.
Yes, I am predictable!
import java.util.regex.*;
String text = "sometext;12345678987654321=alargenumberineedandalso;123=rathersmall;888ignore";
Matcher m = Pattern.compile("(?<=;)\\d++(?==)").matcher(text);
while(m.find()) System.out.println(m.group());
This works perfectly but I have one question.
Could you explain what the text inside the quotes is doing here:
Pattern.compile("(?<=;)\\d++(?==)")
I understand that it's saying
Start at the ;
grab all the numbers
Stop at the =
but I don't understand how that is derived from "(?<=;)\\d++(?==)"
More specifically, I know what the \\d++ does
I don't understand how (?<=;) says start at ; and (?==) says stop at =
If someone could break that down for me, that would be great.
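For reference, a rough reading of the two constructs: (?<=;) is a zero-width lookbehind, which requires a ";" immediately before the match but does not include it, and (?==) is a zero-width lookahead, which requires an "=" immediately after the match without consuming it. Only the \\d++ in the middle is actually matched and returned by group(). A small sketch (the sample text is invented for illustration) showing the same pattern:
import java.util.regex.*;
String text = "a;111=b;x22=c;333=d"; // digits between ';' and '=' should match; ";x22=" should not
Matcher m = Pattern.compile("(?<=;)\\d++(?==)").matcher(text);
while (m.find()) {
    // group() holds only the digits: the ';' and '=' are asserted, not consumed
    System.out.println(m.group()); // prints 111 then 333
}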

Similar Messages

  • Continual Mac woes (no question, just a rant)

    It's Tuesday, and I am having terrible problems with my Mac. But then, why should Tuesday be different from any other day of the week.
    Here is a typical day for me. The computer appears to be working OK. I need to watch a DVD for my work. I turn on DVD Player, and put one in. The machine can't read the disc. It clicks and whirls, but the icon does not show up on the desktop. Meanwhile, so distressed is the machine that it freaks out. What was until now a fluidly operating machine suddenly reverts back to its old ways (i.e., its ways of two days ago). The hold-ups and spinning pinwheels begin to eat up hours of my work day. (Remember the old days when computers made life easier?) The machine becomes sticky, gummy. Oh, I can move the cursor and it seems to work for a second but then gets stuck in the dock, which explodes in icons and then freezes for five minutes. Yes. Five minutes.
    Would love to use Force Quit, but the cursor is spinning, and nothing is responding. Funny about that old Mac. You can't force quit Force Quit. I guess I need to leave it open all the time.
    Of course, FQ usually works on Safari. I have never just "quit" Safari. It always requires Force Quit, otherwise I can't turn off my computer. It stalls shut down.
    Now I have a DVD trapped in there and can't get it out. [But I just got an answer from another posting.]
    In the old Macs, there used to be a pinhole you could stick a needle into ... I can't find one on my flat-panel iMac.
    I bought my Apple flat-panel iMac in August of 2002. Yes, I know that that is a long time to have a computer, but I am neither rich nor attached to a corporation that can splurge on computers. The first weekend I had the machine, I had three kernel panics.
    Among the other problems I have documented are the following: the dock hiding itself unbidden and other features checking and unchecking themselves (Aug 2002); bus errors connected with OS 9 (Sept); some problems that inspired the tech person (Eric) to talk me through deleting my user i.d., resulting in the loss of two months' worth of e-mail (Thursday, 12 September); Preview problems (September); a bizarre box with an unmovable and undeletable red stop sign in it that no tech person or other Mac user I know had ever heard of (Monday 30 September); printing problems; computer won't shut down, numerous disconnection errors, which turned out to be caused by an OS X update (beginning December, 2002, or later); kernel panics (Feb); computer won't shut down (March); Faxstexx problems, the program won't allow me to set it up, finally just deleted the software (April); keys like "V" freeze and repeat endlessly (May 21); DVD Player freezes (May); Safari and Mail begin quitting unexpectedly (May); cursor begins to blink and fade out, plus odd sounds come out of the speakers, a constant error beeping (Sept 9); DVD Player problems (Oct 4).
    I called AppleCare while I had it about once a week (the total between August 2002 and the time it ran out was about 155 calls). Naturally, some of these calls are motivated by user error. On the other hand, many of the issues I have called about were unprecedented as far as the Tech person was concerned, such as the blinking mouse, the red stop sign, and the DVD Player woes.
    Things improved with Panther, but in Tiger many of the same old issues have returned.
    I have been having so many problems with my Mac that I once wrote a letter to the company asking when do I qualify for a new replacement machine. I never received an answer, but I felt better for about a day. Then I turned on my Mac again.

    The spinning ball of death, as we used to call it, is often caused by a lack of RAM. It is hard to be sure, as I am not working on your machine, but sometimes things can be improved with additional RAM; it can make it seem like a whole new computer.
    A lot of your problems sound like things that can be fixed easily enough, even though frustrating things happen here and there with updates. It sounds like you are in fairly good spirits about it all. I would suggest researching a bit more into the maintenance you can do to help keep the computer running and educating yourself a bit more (it sounds like you have already learned quite a bit along the way), and you will find that a lot of these issues take you only a few seconds to get rid of. I would start by making sure you are repairing permissions regularly and running the most up-to-date software. If a lot of problems persist, try creating a second "test" user to see if the problem is replicated for that user (don't delete your other one, but if you find the problem does not occur for the other user, you might have a corrupt user; however, you don't have to lose all your email, as there are plenty of ways to back it up and import it, or even just bring the entire Mail folder from your Library over to the new user). Another thing you can do if you find a lot of system problems is an Archive and Install of the OS; it takes a bit of time, but doing it overnight shouldn't be an issue, and you won't lose any of your stuff.

  • How to use '|' as the separator in GUI_DOWNLOAD?

    How can I use '|' as the separator in GUI_DOWNLOAD? Please suggest.
    I want the output to be separated by the '|' delimiter when I download the file.

    Hi,
    We pass the separator in the WRITE_FIELD_SEPARATOR parameter:
    CALL FUNCTION 'GUI_DOWNLOAD'
      EXPORTING
        filename              = v_file
        write_field_separator = '|'
      TABLES
        data_tab              = itab[]. "Our internal table filled with data
    Award points if useful
    Thanks,
    Ravee...

  • Comma delimited in Sql query decode function errors out

    Hi All,
    DB: 11.2.0.3.0
    I am using the below query to generate the comma delimited output in a spool file but it errors out with the message below:
    SQL> set lines 100 pages 50
    SQL> col "USER_CONCURRENT_QUEUE_NAME" format a40;
    SQL> set head off
    SQL> spool /home/xyz/cmrequests.csv
    SQL> SELECT
    2 a.USER_CONCURRENT_QUEUE_NAME || ','
    3 || a.MAX_PROCESSES || ','
    4 || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    5 ||sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'I',1,0),0)) Pending_Normal ||','
    6 ||sum(decode(b.PHASE_CODE,'R',decode(b.STATUS_CODE,'R',1,0),0)) Running_Normal
    7 from FND_CONCURRENT_QUEUES_VL a, FND_CONCURRENT_WORKER_REQUESTS b
    where a.concurrent_queue_id = b.concurrent_queue_id AND b.Requested_Start_Date <= SYSDATE
    8 9 GROUP BY a.USER_CONCURRENT_QUEUE_NAME,a.MAX_PROCESSES;
    || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    ERROR at line 4:
    ORA-00923: FROM keyword not found where expected
    SQL> spool off;
    SQL>
    Expected output in the spool /home/xyz/cmrequests.csv
    Standard Manager,10,0,1,0
    Thanks for your time!
    Regards,

    Get to work immediately on marking your previous questions ANSWERED if they have been!
    >
    I am using the below query to generate the comma delimited output in a spool file but it errors out with the message below:
    SQL> set lines 100 pages 50
    SQL> col "USER_CONCURRENT_QUEUE_NAME" format a40;
    SQL> set head off
    SQL> spool /home/xyz/cmrequests.csv
    SQL> SELECT
    2 a.USER_CONCURRENT_QUEUE_NAME || ','
    3 || a.MAX_PROCESSES || ','
    4 || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    5 ||sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'I',1,0),0)) Pending_Normal ||','
    6 ||sum(decode(b.PHASE_CODE,'R',decode(b.STATUS_CODE,'R',1,0),0)) Running_Normal
    7 from FND_CONCURRENT_QUEUES_VL a, FND_CONCURRENT_WORKER_REQUESTS b
    where a.concurrent_queue_id = b.concurrent_queue_id AND b.Requested_Start_Date <= SYSDATE
    8 9 GROUP BY a.USER_CONCURRENT_QUEUE_NAME,a.MAX_PROCESSES;
    || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
    >
    Well if you want to spool query results to a file the first thing you need to do is write a query that actually works.
    Why do you think a query like this is valid?
    SELECT 'this, is, my, giant, string, of, columns, with, commas, in, between, each, word'
    GROUP BY this, is, my, giant, string
    You only have one column in the result set but you are trying to group by several columns and none of them are even in the result set.
    What's up with that?
    You can only group by columns that are actually IN the result set.

  • Find and replace value in Delimited String

    Hi All,
    I have a requirement, where i need to find and replace values in delimited string.
    For example, the string is "GL~1001~157747~FEB-13~CREDIT~A~N~USD~NULL~". The 4th column gives month and year. I need to replace it with previous month name. For example: "GL~1001~157747~JAN-13~CREDIT~A~N~USD~NULL~". I need to do same for last 12 months.
    I thought of first dividing the values and storing them in variables, and then, after replacing the one value with the required value, joining them back.
    I just wanted to know if there is any better way to do it?
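    Outside this thread's SQL context, the divide/replace/join idea described above would look roughly like this in Java (a sketch only; the field position and the MON-YY format are assumptions taken from the example string, and java.time is assumed to be available):
    import java.time.YearMonth;
    import java.time.format.DateTimeFormatter;
    import java.time.format.DateTimeFormatterBuilder;
    import java.util.Locale;
    String row = "GL~1001~157747~FEB-13~CREDIT~A~N~USD~NULL~";
    String[] parts = row.split("~", -1);                  // -1 keeps the trailing empty field
    DateTimeFormatter monYY = new DateTimeFormatterBuilder()
            .parseCaseInsensitive()                       // accept "FEB" as well as "Feb"
            .appendPattern("MMM-uu")                      // abbreviated month, two-digit year
            .toFormatter(Locale.ENGLISH);
    YearMonth previous = YearMonth.parse(parts[3], monYY).minusMonths(1);
    parts[3] = previous.format(monYY).toUpperCase(Locale.ENGLISH);
    System.out.println(String.join("~", parts));          // GL~1001~157747~JAN-13~CREDIT~A~N~USD~NULL~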

    for example (assumption: the abbreviated month is the first occurrence of 3 consecutive alphabetic characters)
    with testdata as (
    select 'GL~1001~157747~FEB-13~CREDIT~A~N~USD~NULL~' str from dual
    )
    select
    str
    ,regexp_substr(str, '[[:alpha:]]{3}') part
    ,to_date('01'||regexp_substr(str, '[[:alpha:]]{3}')||'2013', 'DDMONYYYY') part_date
    ,replace (str
             ,regexp_substr(str, '[[:alpha:]]{3}')
             ,to_char(add_months(to_date('01'||regexp_substr(str, '[[:alpha:]]{3}')||'2013', 'DDMONYYYY'),-1),'MON')
    ) res
    from testdata
    STR
    PART
    PART_DATE
    RES
    GL~1001~157747~FEB-13~CREDIT~A~N~USD~NULL~
    FEB
    02/01/2013
    GL~1001~157747~JAN-13~CREDIT~A~N~USD~NULL~
    with year included
    with testdata as (
    select 'GL~1001~157747~JAN-13~CREDIT~A~N~USD~NULL~' str from dual
    )
    select
    str
    ,regexp_substr(str, '[[:alpha:]]{3}-\d{2}') part
    ,to_date(regexp_substr(str, '[[:alpha:]]{3}-\d{2}'), 'MON-YY') part_date
    ,replace (str
             ,regexp_substr(str, '[[:alpha:]]{3}-\d{2}')
             ,to_char(add_months(to_date(regexp_substr(str, '[[:alpha:]]{3}-\d{2}'), 'MON-YY'),-1),'MON-YY')
    ) res
    from testdata
    STR
    PART
    PART_DATE
    RES
    GL~1001~157747~JAN-13~CREDIT~A~N~USD~NULL~
    JAN-13
    01/01/2013
    GL~1001~157747~DEC-12~CREDIT~A~N~USD~NULL~
    Message was edited by: chris227 year included

  • String to Row: Delimiter as part of the value

    My DB Version - 10.2.0.4.0
    I have a string like this
    with t
    as
    (
    select q'['My column',LPAD(TRIM(my_column),4,'0'),10,10000]' str
      from dual
    )
    select * from t
    I am looking for a SQL solution that will convert this string into rows like this
    'My column'
    LPAD(TRIM(my_column),4,'0')
    10
    10000
    The normal way to convert a delimited string to rows would be like this
    with t
    as
    (
    select q'['My column',LPAD(TRIM(my_column),4,'0'),10,10000]' str
      from dual
    )
    select regexp_substr(str,'[^,]+',1,level) val
      from t
    connect by level <= length(str)-length(replace(str,','))+1
    But this would result in
    'My column'
    LPAD(TRIM(my_column)
    4
    '0')
    10
    10000
    But this is incorrect. So, any idea how to solve it?

    Karthick_Arp wrote:
    I totally understand. But we should also accept the fact that at times we get such crazy stuff to work with.
    Just hoping that someone comes up with a super cool SQL to do this ;)
    Something like this perhaps?
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select q'['My column',LPAD(TRIM(my_column),4,'0'),10,10000]' str from dual)
      2      ,x as (select regexp_substr(str, '[^(]+', 1, rownum) str, rownum as lvl
      3             from t
      4             connect by rownum <= length(regexp_replace(str, '[^(]'))+1
      5            )
      6      ,y as (select x.str as orig_str, lvl
      7                   ,replace(regexp_replace(x.str, '[^\)]+$'),',','~')||regexp_substr(x.str, '[^\)]+$') as str2
      8             from x
      9            )
    10      ,z as (select ltrim(sys_connect_by_path(str2, '('),'(') as str
    11             from y
    12             where connect_by_isleaf = 1
    13             connect by lvl = prior lvl + 1
    14             start with lvl = 1
    15            )
    16  --
    17  select rownum rn, replace(regexp_substr(str, '[^,]+', 1, rownum),'~',',') str
    18  from z
    19* connect by rownum <= length(regexp_replace(str, '[^,]'))+1
    SQL> /
            RN STR
             1 'My column'
             2 LPAD(TRIM(my_column),4,'0')
             3 10
             4 10000
    SQL> ed
    Wrote file afiedt.buf
      1  with t as (select q'['My column',LPAD(TRIM(SUBSTR(my_column,4)),4,'0'),10,SUBSTR('FRED',4),10000]' str from dual)
      2      ,x as (select regexp_substr(str, '[^(]+', 1, rownum) str, rownum as lvl
      3             from t
      4             connect by rownum <= length(regexp_replace(str, '[^(]'))+1
      5            )
      6      ,y as (select x.str as orig_str, lvl
      7                   ,replace(regexp_replace(x.str, '[^\)]+$'),',','~')||regexp_substr(x.str, '[^\)]+$') as str2
      8             from x
      9            )
    10      ,z as (select ltrim(sys_connect_by_path(str2, '('),'(') as str
    11             from y
    12             where connect_by_isleaf = 1
    13             connect by lvl = prior lvl + 1
    14             start with lvl = 1
    15            )
    16  --
    17  select rownum rn, replace(regexp_substr(str, '[^,]+', 1, rownum),'~',',') str
    18  from z
    19* connect by rownum <= length(regexp_replace(str, '[^,]'))+1
    SQL> /
            RN STR
             1 'My column'
             2 LPAD(TRIM(SUBSTR(my_column,4)),4,'0')
             3 10
             4 SUBSTR('FRED',4)
             5 10000
    SQL>
    Seems to work, but don't hold me to that. :D

  • Discoverer Plus - Export to Text Tab delimited is not exporting all the rows

    Hi gurus,
    I am trying to export a large data report, which has 1 million-plus rows, to text tab-delimited. The export takes 9-plus hours, and the data exported is not more than 100,000 rows.
    My questions are:
    1. How can I make Discoverer export to tab-delimited faster?
    2. Where can I change the number of rows to be exported?
    Any help, suggestions is appreciated.
    Thanks,
    SAI

    Hi Rod,
    Yes. The text tab-delimited export is taking a lot of time. The total number of rows for this report is nearly 1 million. If I break down the report with conditions and export it in pieces, I am able to export it (three files exported with 212,000, 103,000 and 687,000 rows respectively).
    But I'm still having problems exporting it in one shot. Is there any way I can resolve this? Please let me know.
    Thanks,
    SAI

  • Urgent!! - Please help - Leaving action delimits IT0017

    Hello Gurus,
    At the moment, when you run the Action to make an employee a leaver in HR, the Action delimits Infotype 0017 Travel Privileges. This then means that if the Travel & Expense Department need to go and view expense claims for that employee, the system does not permit it, as it states there is no valid Infotype 0017.
    I have advised them to remove the delimit function from the Action for Infotype 0017; this will then mean that the Travel & Expense Department can view historical expense claims for all employees.
    Please help: how do I remove the delimit function from the action type?
    Or is there any other way to prevent IT0017 from getting delimited?
    Thanks & Regds,
    N.babu

    Hello Gurus,
    With ref to my above question, my client is having a problem -
    They need to remove the travel privileges screen from this leaver action, so that when an employee leaves they are delimited, but the travel privileges are still active.
    They have had a problem with leavers: once they have been run as a leaver and have been delimited, they get an error when trying to view the history of their claims in PR05.
    After investigation of this it seems that this is caused by the travel privileges being delimited.
    So we need this to be taken out of the leavers action, so that the other items are still delimited but NOT the travel privileges.
    Please suggest....
    Points are assured!!
    thanks & regds,
    Nbabu

  • PA40 - Termination/Leaving Action - problems delimiting IT0014

    Hi all,
    For an upgrade from ECC 4.7 to ECC 6.0 (EhP 4) I am testing some actions.
    I've tested the Hire action and this test was successful. Now I'm testing the Termination/Leave action. When I want to save the Organizational Assignment screen I get a message that the delimiting of IT0014 has failed.
    Is anyone familiar with this problem? If so, could you tell me what I can do to solve this problem?
    Thank you in advance!

    Hi Ashley,
    Please check SAP Note 1016799.
    Regards,
    Dilek

  • Upload tab-delimited file from the application server to an internal table

    Hello SAPients.
    I'm using OPEN DATASET..., READ DATASET..., CLOSE DATASET to upload a file from the application server (SunOS). I'm working with SAP 4.6C. I'm trying to upload a tab-delimited file to an internal table, but when I try to load it the fields are not correctly separated; in fact, they are all misplaced and the table shows '#' where supposedly there was a tab.
    I tried to SPLIT the line using as separator a variable with reference to CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB but for some reason that class doesn't exist in my system.
    Do you know what I'm doing wrong? Or do you know a better method to upload a tab-delimited file into an internal table?
    Thank you in advance for your help.

    Try:
    REPORT ztest MESSAGE-ID 00.
    PARAMETER: p_file LIKE rlgrap-filename   OBLIGATORY.
    DATA: BEGIN OF data_tab OCCURS 0,
          data(4096),
          END   OF data_tab.
    DATA: BEGIN OF vendor_file_x OCCURS 0.
    * LFA1 Data
    DATA: mandt  LIKE bgr00-mandt,
          lifnr  LIKE blf00-lifnr,
          anred  LIKE blfa1-anred,
          bahns  LIKE blfa1-bahns,
          bbbnr  LIKE blfa1-bbbnr,
          bbsnr  LIKE blfa1-bbsnr,
          begru  LIKE blfa1-begru,
          brsch  LIKE blfa1-brsch,
          bubkz  LIKE blfa1-bubkz,
          datlt  LIKE blfa1-datlt,
          dtams  LIKE blfa1-dtams,
          dtaws  LIKE blfa1-dtaws,
          erdat  LIKE  lfa1-erdat,
          ernam  LIKE  lfa1-ernam,
          esrnr  LIKE blfa1-esrnr,
          konzs  LIKE blfa1-konzs,
          ktokk  LIKE  lfa1-ktokk,
          kunnr  LIKE blfa1-kunnr,
          land1  LIKE blfa1-land1,
          lnrza  LIKE blfa1-lnrza,
          loevm  LIKE blfa1-loevm,
          name1  LIKE blfa1-name1,
          name2  LIKE blfa1-name2,
          name3  LIKE blfa1-name3,
          name4  LIKE blfa1-name4,
          ort01  LIKE blfa1-ort01,
          ort02  LIKE blfa1-ort02,
          pfach  LIKE blfa1-pfach,
          pstl2  LIKE blfa1-pstl2,
          pstlz  LIKE blfa1-pstlz,
          regio  LIKE blfa1-regio,
          sortl  LIKE blfa1-sortl,
          sperr  LIKE blfa1-sperr,
          sperm  LIKE blfa1-sperm,
          spras  LIKE blfa1-spras,
          stcd1  LIKE blfa1-stcd1,
          stcd2  LIKE blfa1-stcd2,
          stkza  LIKE blfa1-stkza,
          stkzu  LIKE blfa1-stkzu,
          stras  LIKE blfa1-stras,
          telbx  LIKE blfa1-telbx,
          telf1  LIKE blfa1-telf1,
          telf2  LIKE blfa1-telf2,
          telfx  LIKE blfa1-telfx,
          teltx  LIKE blfa1-teltx,
          telx1  LIKE blfa1-telx1,
          xcpdk  LIKE  lfa1-xcpdk,
          xzemp  LIKE blfa1-xzemp,
          vbund  LIKE blfa1-vbund,
          fiskn  LIKE blfa1-fiskn,
          stceg  LIKE blfa1-stceg,
          stkzn  LIKE blfa1-stkzn,
          sperq  LIKE blfa1-sperq,
          adrnr  LIKE  lfa1-adrnr,
          mcod1  LIKE  lfa1-mcod1,
          mcod2  LIKE  lfa1-mcod2,
          mcod3  LIKE  lfa1-mcod3,
          gbort  LIKE blfa1-gbort,
          gbdat  LIKE blfa1-gbdat,
          sexkz  LIKE blfa1-sexkz,
          kraus  LIKE blfa1-kraus,
          revdb  LIKE blfa1-revdb,
          qssys  LIKE blfa1-qssys,
          ktock  LIKE blfa1-ktock,
          pfort  LIKE blfa1-pfort,
          werks  LIKE blfa1-werks,
          ltsna  LIKE blfa1-ltsna,
          werkr  LIKE blfa1-werkr,
          plkal  LIKE  lfa1-plkal,
          duefl  LIKE  lfa1-duefl,
          txjcd  LIKE blfa1-txjcd,
          sperz  LIKE  lfa1-sperz,
          scacd  LIKE blfa1-scacd,
          sfrgr  LIKE blfa1-sfrgr,
          lzone  LIKE blfa1-lzone,
          xlfza  LIKE  lfa1-xlfza,
          dlgrp  LIKE blfa1-dlgrp,
          fityp  LIKE blfa1-fityp,
          stcdt  LIKE blfa1-stcdt,
          regss  LIKE blfa1-regss,
          actss  LIKE blfa1-actss,
          stcd3  LIKE blfa1-stcd3,
          stcd4  LIKE blfa1-stcd4,
          ipisp  LIKE blfa1-ipisp,
          taxbs  LIKE blfa1-taxbs,
          profs  LIKE blfa1-profs,
          stgdl  LIKE blfa1-stgdl,
          emnfr  LIKE blfa1-emnfr,
          lfurl  LIKE blfa1-lfurl,
          j_1kfrepre  LIKE blfa1-j_1kfrepre,
          j_1kftbus   LIKE blfa1-j_1kftbus,
          j_1kftind   LIKE blfa1-j_1kftind,
          confs  LIKE  lfa1-confs,
          updat  LIKE  lfa1-updat,
          uptim  LIKE  lfa1-uptim,
          nodel  LIKE blfa1-nodel.
    DATA: END   OF vendor_file_x.
    FIELD-SYMBOLS:  <field>,
                    <field_1>.
    DATA: delim          TYPE x        VALUE '09'.
    DATA: fld_chk(4096),
          last_char,
          quote_1     TYPE i,
          quote_2     TYPE i,
          fld_lth     TYPE i,
          columns     TYPE i,
          field_end   TYPE i,
          outp_rec    TYPE i,
          extras(3)   TYPE c        VALUE '.,"',
          mixed_no(14) TYPE c        VALUE '1234567890-.,"'.
    OPEN DATASET p_file FOR INPUT.
    DO.
      READ DATASET p_file INTO data_tab-data.
      IF sy-subrc = 0.
        APPEND data_tab.
      ELSE.
        EXIT.
      ENDIF.
    ENDDO.
    * count columns in output structure
    DO.
      ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
      columns = sy-index.
    ENDDO.
    * Assign elements of input file to internal table
    CLEAR vendor_file_x.
    IF columns > 0.
      LOOP AT data_tab.
        DO columns TIMES.
          ASSIGN space TO <field>.
          ASSIGN space TO <field_1>.
          ASSIGN COMPONENT sy-index OF STRUCTURE vendor_file_x TO <field>.
          SEARCH data_tab-data FOR delim.
          IF sy-fdpos > 0.
            field_end = sy-fdpos + 1.
            ASSIGN data_tab-data(sy-fdpos) TO <field_1>.
    * Check that numeric fields don't contain any embedded " or ,
            IF <field_1> CO mixed_no AND
               <field_1> CA extras.
              TRANSLATE <field_1> USING '" , '.
              CONDENSE <field_1> NO-GAPS.
            ENDIF.
    * If first and last characters are '"', remove both.
            fld_chk = <field_1>.
            IF NOT fld_chk IS INITIAL.
              fld_lth = strlen( fld_chk ) - 1.
              MOVE fld_chk+fld_lth(1) TO last_char.
              IF fld_chk(1) = '"' AND
                 last_char = '"'.
                MOVE space TO fld_chk+fld_lth(1).
                SHIFT fld_chk.
                MOVE fld_chk TO <field_1>.
              ENDIF.       " for if fld_chk(1)=" & last_char="
            ENDIF.         " for if not fld_chk is initial
    * Replace "" with "
            DO.
              IF fld_chk CS '""'.
                quote_1 = sy-fdpos.
                quote_2 = sy-fdpos + 1.
                MOVE fld_chk+quote_2 TO fld_chk+quote_1.
              ELSE.
                MOVE fld_chk TO <field_1>.
                EXIT.
              ENDIF.
            ENDDO.
            <field> = <field_1>.
          ELSE.
            field_end = 1.
          ENDIF.
          SHIFT data_tab-data LEFT BY field_end PLACES.
        ENDDO.
        APPEND vendor_file_x.
        CLEAR vendor_file_x.
      ENDLOOP.
    ENDIF.
    CLEAR   data_tab.
    REFRESH data_tab.
    FREE    data_tab.
    Rob

  • Read Tab delimited File from Application server

    Hi Experts,
    I am facing a problem while reading a file from the application server.
    The file on the application server is stored as follows; it is a tab-delimited file.
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    I have downloaded this file from the application server using transaction CG3Y. The downloaded file is a tab-delimited file and I could not see '#' in it.
    The code is as Below.
    c_split  TYPE abap_char1 VALUE cl_abap_char_utilities=>horizontal_tab.
    Here I am using IGNORING CONVERSION ERRORS in order to avoid a conversion-error short dump.
    OPEN DATASET wa_filename-file FOR INPUT IN TEXT MODE ENCODING DEFAULT IGNORING CONVERSION ERRORS.
          IF sy-subrc = 0.
            WRITE : /,'...Processing file - ', wa_filename-file.   
           DO.
          " Read the contents of the file
              READ DATASET wa_filename-file INTO wa_file-data.
              IF sy-subrc = 0.
                SPLIT wa_file-data AT c_split INTO wa_adrc_2-kunnr
                                                   wa_adrc_2-title
                                                   wa_adrc_2-name1
                                                   wa_adrc_2-name2
                                                   wa_adrc_2-name3
                                                   wa_adrc_2-name4
                                                   wa_adrc_2-name_co
                                                   wa_adrc_2-city1
                                                   wa_adrc_2-city2
                                                   wa_adrc_2-regiogroup
                                                   wa_adrc_2-post_code1
                                                   wa_adrc_2-post_code2
                                                   wa_adrc_2-po_box
                                                   wa_adrc_2-po_box_loc
                                                   wa_adrc_2-transpzone
                                                   wa_adrc_2-street
                                                   wa_adrc_2-house_num1
                                                   wa_adrc_2-house_num2
                                                   wa_adrc_2-str_suppl1
                                                   wa_adrc_2-str_suppl2
                                                   wa_adrc_2-country
                                                   wa_adrc_2-langu
                                                   wa_adrc_2-region
                                                   wa_adrc_2-sort1
                                                   wa_adrc_2-sort2
                                                   wa_adrc_2-deflt_comm
                                                   wa_adrc_2-tel_number
                                                   wa_adrc_2-tel_extens
                                                   wa_adrc_2-fax_number
                                                   wa_adrc_2-fax_extens
                                                   wa_adrc_2-taxjurcode.
    WA_FILE-DATA has the values below:
    ##K#U#N#N#R###T#I#T#L#E###N#A#M#E#1###N#A#M#E#2###N#A#M#E#3###N#A#M#E#4###S#O#R#T#1###S#O#R#T#2###N#A#M#E#_#C#O###S#T#R#_#S#U#P#P#L#1###S#T#R#_#S#U#P#P#L#2###S#T#R#E#E#T###H#O#U#S#E#_#N#U#M#1
    And this is split at the tab delimiter and moved to the other variables as shown above.
    Please guide me on how to read the contents from the file without the '#'.
    I have tried all possible ways and am unable to get a solution.
    Thanks,
    Shrikanth

    Hi,
    In ECC 6, if all the Unicode patches are applied, then UTF-16 will definitely work.
    Moreover, I would suggest you first replace # with some other character such as * or ',' and then check in debugging whether any further # appears.
    If no # appears, then try to split.
    If the # still appears after the replace statement, then try to find out what exactly it is (whether it is a horizontal tab, etc.).
    Then again try to replace it and then split.
    Please follow this process until all the # are replaced.
    This should work for you.
    Let me know if you face any further issues.
    Regards
    Satish Boguda

  • Open/Transfer/close Dataset and delimiting

    Hi people, I have a situation regarding OPEN/TRANSFER/CLOSE DATASET and delimiters.
    I have a submit program that calls any other via variants. It works fine using LIST_TO_MEMORY, converting to ASCII, and OPEN DATASET. I can download the report in Excel format to an application server.
    The problem is that the reports aren't delimited properly in the Excel report. Usually there is a vertical line "|" placed after each column, and it would be easy to manipulate it when viewing the actual file.
    However I want this done automatically. My question is, is there a way to delimit the list before/during using open/transfer/close dataset?
    My solution before was to convert the vertical lines to commas and just save the file as a CSV file. The problem here is that the data could be affected, as commas occur normally in the data, so the Excel report would get messed up.
    Hope I get some help soon. Thanks guys and good day.
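    As an aside, outside ABAP: the usual way to keep separators that occur inside the data from breaking the file is to quote any field that contains the separator and double any embedded quotes (standard CSV-style escaping). A minimal Java sketch of the idea, with invented sample values:
    public class CsvQuoting {
        // Quote a field only when it contains the separator or a quote character,
        // doubling embedded quotes.
        static String csvField(String value) {
            if (value.contains(",") || value.contains("\"")) {
                return "\"" + value.replace("\"", "\"\"") + "\"";
            }
            return value;
        }
        public static void main(String[] args) {
            String[] fields = {"ACME, Inc.", "1001", "10\" pipe"};
            StringBuilder line = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                if (i > 0) line.append(',');
                line.append(csvField(fields[i]));
            }
            System.out.println(line); // "ACME, Inc.",1001,"10"" pipe"
        }
    }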

    Let's say you have fields A, B, C and D to be transferred to the dataset. You can use the logic below.
    concatenate a b c d into l_text separated by '$'.
    transfer l_text to dsn.
    Now, when you open the text file in Excel, it will ask you whether the file is delimited or fixed width; if you select delimited, on the next screen you will be able to select the delimiter.
    Regards
    Anurag

  • Ordering messed up when generating Report as Delimited on Server

    Hi folks,
    Not sure if anyone else has observed or faced this issue but thought I'd give it a try. I have an Oracle Report (version 6i) and the output generated should be a tab-delimited text file so the user can view the output in Excel. The Report output is sorted by Name. When I run the Report locally (client) by doing File -> Generate to File -> Delimited (Delimiter Option: Tab), the Report works fine. The ordering is correct. However, when I run it off the server (using the option DELIMITED for DESFORMAT), the ordering seems messed up. The ordering is maintained for quite a few rows and then a Name appears out of order. Note that this record also appears in the correct place as well as above the place where it's expected to be ... Here is an example:
    Name                     ID     BIRTH DATE
    ==========================================
    ALLAN, BRIAN             1001   1970/12/31
    ALLAN, SUSAN             2001   1968/01/01
    FRANCIS, JOHN            5001   1965/02/01
    BROWN, EDWARD            3001   1978/01/02
    BROWN, JENNY             4001   1976/04/01
    FRANCIS, JOHN            5001   1965/02/01
    GRAHAM, DALE             6001   1980/10/10
    As you can see, FRANCIS, JOHN is appearing both out of order and in order (twice). This is not happening when I run it on the client. Any ideas what may be causing this? I realize I don't have much information out here as I myself am at my wits' end.
    Thanks
    Edited by: Roxyrollers on Jan 4, 2013 2:15 PM

    Hello,
    If you have access to Metalink (http://metalink.oracle.com), you can take a look at the notes:
    Note.278044.1 How to Debug REP-1401 when executing Reports with Barcode java code
    Note.382952.1 RDF to Debug REP-1401 when using Oraclebarcode.jar
    Note 252691.1 REP-1401: 'CF_1Formula ': Fatal PL/SQL error occurred - ORA-06502: PL/SQL: numeric or value error when running the example "Building a Report with a Barcode"
    Note 271807.1 Report With Barcode Bean Is Causing REP-69, REP-57054, REP-1401 and ORA-39565:
    Regards

  • SSRS 2008 R2/SSRS 2012 Tab Delimited setup is not working correctly in Report Designer

    I have changed RSReportDesigner.config to add a tab-delimited option for report export, but it is not working correctly in the VS2008/VS2010 Report Designer. I get a duplicate of CSV (comma delimited) in the export dropdown list.
    I have tested SSRS 2008 R2/SSRS 2012 on Windows 7 64 bit professional.
    It is working correctly in the report manager.
    <Extension Name="TAB" Type="Microsoft.ReportingServices.Rendering.DataRenderer.CsvReport,Microsoft.ReportingServices.DataRendering">
            <OverrideNames>
                <Name Language="en-US">TAB delimited</Name>
            </OverrideNames>
            <Configuration>
                <DeviceInfo>
                    <FieldDelimiter xml:space="preserve">&#9;</FieldDelimiter>
                    <Encoding>ASCII</Encoding>
                    <UseFormattedValues>False</UseFormattedValues>
                    <NoHeader>False</NoHeader>
                    <FileExtension>TXT</FileExtension>
                </DeviceInfo>
            </Configuration>
          </Extension>

    Hi Dave0323,
    According to your description, you want to export your report into a TXT file and change the delimiter to a tab. Right?
    In Reporting Services, when exporting to a CSV file, we can change the default field delimiter to any character that we want, including TAB, by changing the device information settings in the configuration file. For example, to use TAB, update the FieldDelimiter setting to <FieldDelimiter xml:space="preserve">[TAB]</FieldDelimiter>. And we can change the FileExtension to TXT if we want to export it into a TXT file. So in this scenario, we just need to modify the configuration file like below:
    The result looks like below:
    Reference:
    CSV Device Information Settings
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou

  • XML Function pipe delimiter function error

    Hi SQL gurus,
    Thank you for any assistance. I was given the below query to use for taking a column that contains values delimited with "|". Below is a sample of the data in that column, the error, and the function with the query:
    XML parsing: line 1, character 199, illegal name character
    Item - Shim Stock | Type - Sheet | Material - Polyester | Thickness (In.) - 0.0005" | Thickness (mm) - 0.013mm | Thickness Tolerance - +/-0.000025" | Size - 5 x 20" | Width Tolerance - +/-0.0625" | Length Tolerance - +/-0.0625" | Moisture Absorption - 0.08% | Compress To - 1.00% | Color - Silver | Tensile Strength (PSI) - 30,000 psi | Temp. Range (F) - -90 to 300 Degrees | Meets/Exceeds - LP-377 | Package Quantity - 5
    CREATE FUNCTION [dbo].[split](
    @delimited NVARCHAR(MAX),
    @delimiter NCHAR(1)
    ) RETURNS @t TABLE (id INT IDENTITY(1,1), val NVARCHAR(MAX))
    AS
    BEGIN
    DECLARE @xml XML
    SET @xml = N'<t>' + REPLACE(@delimited,@delimiter,'</t><t>') + '</t>'
    INSERT INTO @t(val)
    SELECT r.value('.','varchar(MAX)') as item
    FROM @xml.nodes('/t') as records(r)
    RETURN
    END
    SELECT
    ItemNo, 2ItemNo,NameItem , SV.val, sv.id
    from #TableWithData v
    cross apply dbo.split(v.ColumnwithValues,'|') sv
    Edwin Lopera

    To get you moving, here is another example of a splitter that works with the examples you supplied: http://andrewtuerk.wordpress.com/2012/08/10/cross-apply-yes-you-can-use-a-table-valued-function-without-a-cursor/
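    For what it's worth, the same pipe-splitting idea outside SQL Server mostly comes down to remembering that '|' is a regex metacharacter. A small Java sketch (sample text shortened from the post):
    String row = "Item - Shim Stock | Type - Sheet | Material - Polyester";
    // '|' means alternation in a regex, so it must be escaped when used as a
    // delimiter with String.split; the \\s* parts trim the surrounding spaces
    String[] parts = row.split("\\s*\\|\\s*");
    for (String p : parts) {
        System.out.println(p); // Item - Shim Stock / Type - Sheet / Material - Polyester
    }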
