Split Delimiters into Columns

Hi,
I am using database version 11.2.1 and have a function that returns a delimited value.
For example:
Value returned:
'1012, User1, Raleigh Close'
Is it possible to split this delimited value into 3 columns? The split would be required in the same query that returns the function value, for example:
select return_value(ID), regexp_substr(..........)
from test
where ......
Required Output:
'1012, User1, Raleigh Close', 1012, User1, Raleigh Close
Can anyone suggest the best way?
Thanks

You do not need a regular expression for such a trivial pattern. Regular expressions tend to consume more CPU than the plain SQL functions SUBSTR and INSTR, and the plain functions usually perform better when you deal with a large amount of data. Hence, my choice would be SUBSTR and INSTR for such requirements.
with data as
  (select '1012, User1, Raleigh Close' col from dual)
select col, substr(col, 1, instr(col, ',') - 1) col1,
       substr(col, instr(col, ',') + 1, instr(col, ',', 1, 2) - instr(col, ',') - 1) col2,
       substr(col, instr(col, ',', 1, 2) + 1) col3
  from data;
COL                        COL1                       COL2                       COL3                      
1012, User1, Raleigh Close 1012                        User1                     Raleigh Close
Edited by: Purvesh K on Mar 26, 2013 4:09 PM
Modified INSTR for COL3; Perhaps over-thought on it a bit.
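For reference, since the original question asked about REGEXP_SUBSTR, a regexp version of the same split could look like the sketch below (just a sketch; TRIM handles the space that follows each comma in the sample value):
with data as
  (select '1012, User1, Raleigh Close' col from dual)
select col,
       trim(regexp_substr(col, '[^,]+', 1, 1)) col1,
       trim(regexp_substr(col, '[^,]+', 1, 2)) col2,
       trim(regexp_substr(col, '[^,]+', 1, 3)) col3
  from data;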

Similar Messages

  • How to split strings into columns

    Hi guys,
    I have a select like this:
    select 'testing1'||'.'||'testing2'||'.'||'testing3' from dual
    which gives me an output like this:
    testing1.testing2.testing3
    How can I split this into 3 columns in SQL*Plus?
    Thank you in advance

    user643734 wrote:
    Thank you guys for all your help.
    Hi,
    I tried to resolve your problem using somme functions...
    I know it is a long, long story, but it works.
    Regards,
    Ion
    --create table_pivot
    CREATE TABLE pivot_table(id VARCHAR2(1) primary key, all_concat VARCHAR2(1810));
    --add data
    insert into pivot_table (id, all_concat) values('x','12345.2829303132.234234.234234.234234');
    insert into pivot_table (id, all_concat) values('y','67890.2324252627.234234.234234.234234.332545');
    insert into pivot_table (id, all_concat) values('z','11121314.12345.234234.234234.234234.23432.32453245.345435.345435');
    insert into pivot_table (id, all_concat) values('a','151617.67890.234234.234234');
    insert into pivot_table (id, all_concat) values('b','1819202122.1112131415.234234.234234.234234.345435');
    insert into pivot_table (id, all_concat) values('c','2324252627');
    insert into pivot_table (id, all_concat) values('h','2829303132.234234.234234.234234.23432.32453245.345435.345435.4325435.345');
    insert into pivot_table (id, all_concat) values('r','');
    set termout off
    --procedures and functions to compile
    --1. Main function to format strings
    create or replace function fgetformatstring
    (v_max_rows numeric, v_crt_row varchar2, v_table_source varchar2, v_save_to_table varchar2, v_option numeric)
    return varchar2
    is
    v_crt_char      numeric:=0;
    v_pos_char      numeric:=1;
    v_count      numeric:=0;
    v_string_out varchar2(500);
    v_str_paded     varchar2(10):=',null';
    v_length_pad     numeric;
    v_diff_pad numeric;
    l_crt_row varchar2(1800);
    v_char char:=',';
    v_delimiter char:='.';
    l_save_to_table varchar2(1000);
    l_max_rows     numeric;
    begin
    l_crt_row:=replace(v_crt_row,'.' ,',');
    l_save_to_table:=v_save_to_table;
    l_max_rows:=v_max_rows;
    if v_option=0 then     --get the ',null' to paded in pivot_table     
    loop
              v_pos_char:=v_crt_char;
              v_crt_char:=instr(v_crt_row,v_char, v_crt_char+1);
              v_count:=v_count+1;
    exit when v_crt_char=0 or v_count=1000;
         end loop;
         v_string_out:=' ';
         v_count:=v_count-1;
         v_diff_pad :=l_max_rows-1-v_count;
    if v_diff_pad >0 then                    --nothing to add
         v_length_pad:=length(v_str_paded);-- (+v_char)
         v_length_pad:=v_diff_pad*v_length_pad;
         v_length_pad:=v_length_pad+1;
         v_string_out:=rpad(v_string_out,v_length_pad,v_str_paded);
    end if;
    end if;
    if v_option=1 then     --get definition of v_save_to_table
         v_count:=1;
         v_string_out:=' ';
         loop
    exit when v_count=l_max_rows+1 or v_count>=1000;
    v_length_pad:=length(v_string_out||'col'||to_char(v_count)||',');
              v_string_out:=rpad(v_string_out,v_length_pad,'col'||to_char(v_count)||',');
              v_count:=v_count+1;
         end loop;
    v_string_out:=trim(v_char from v_string_out);
              --v_string_out:='id,'||v_string_out;
              v_string_out:=v_string_out;
    end if;
    if v_option=2 then     --get position of last comma  
    loop
              v_pos_char:=v_crt_char;
              v_crt_char:=instr(v_crt_row,v_char, v_crt_char+1);
              v_count:=v_count+1;
    exit when v_crt_char=0 or v_count=1000;
         end loop;
         v_string_out:=substr(v_crt_row,1,v_pos_char-1);
    end if;
    if v_option=3 then     --get numbers of delimiters(.)
    loop
              v_pos_char:=v_crt_char;
              v_crt_char:=instr(v_crt_row,v_delimiter, v_crt_char+1);
    --dbms_output.put_line( 'Rows: ' ||v_crt_row|| ' iteration: '||v_count);
              v_count:=v_count+1;
    exit when v_crt_char=0 or v_count=1000;
         end loop;
         v_count:=v_count;
         v_string_out:=to_char(v_count);
    end if;
    if v_option=4 then     --get sql command to create v_save_to_table
         v_count:=1;
         v_string_out:=' ';
         loop
    exit when v_count=l_max_rows+1 or v_count>=1000;
    v_length_pad:=length(v_string_out||'col'||to_char(v_count)||' varchar2(40), ');
              v_string_out:=rpad(v_string_out,v_length_pad,'col'||to_char(v_count)||' varchar2(40), ');
              v_count:=v_count+1;
         end loop;
    v_string_out:=trim(' ' from v_string_out);
              v_string_out:=trim(v_char from v_string_out);
              v_string_out:='create table '|| l_save_to_table ||'('||v_string_out||')';
    end if;
    if v_option=5 then     --get numbers of delimiters(,) 
    loop
              v_pos_char:=v_crt_char;
              v_crt_char:=instr(v_crt_row,',', v_crt_char+1);
              v_count:=v_count+1;
    exit when v_crt_char=0 or v_count=1000;
         end loop;
         v_count:=v_count-1;
         v_string_out:=to_char(v_count);
    end if;
    dbms_output.put_line( 'string: ' ||v_string_out|| ' option: '||v_option);
    return v_string_out;
    end;
    --2.get max items
    create or replace function fgetmaxitems
    --get the max number of items founded in the pivot_table.all_concat
    --before change the points with comma
    return numeric
    is
    v_max                numeric:=0;
    v_field               pivot_table.all_concat%type;
    v_crt_max numeric:=0;
    v_crt_row     varchar(1000);
    cursor c1 is
    select all_concat from pivot_table;
    begin
    v_crt_max:=0;
    v_max:=0;
    open c1;
    loop
    fetch c1 into v_field;
    exit when c1%notfound;
    v_crt_row:=v_field;
    v_crt_max:=fgetformatstring(0,v_crt_row,'pivot_table','',3);
    if v_crt_max>v_max then
         v_max:=v_crt_max;
    end if;
    end loop;
    close c1;
    return v_max;
    end;
    --3. insert the rows in table_dest
    create or replace procedure pinsertrow
    (v_max_rows numeric,
    v_all_concat varchar2,
    v_source_table_name varchar2,
    v_save_to_tablename varchar2)
    is
    v_sql_string      varchar2(1000);
    l_all_concat     varchar2(1500);
    begin
    l_all_concat:=replace(v_all_concat,'.' ,',');
    v_sql_string:='('||l_all_concat||')';
    v_sql_string:=' insert into ' ||v_save_to_tablename
    ||' (' ||fgetformatstring(v_max_rows,l_all_concat,v_source_table_name,v_save_to_tablename,1)||')'||
    ' values'||v_sql_string ;
    execute immediate v_sql_string;
    end;
    --4.procedure to create dinamically <table_dest>
    create or replace procedure pcreatesavetable
    (v_table_source varchar2, v_save_to_table varchar2)
    is
    v_sql_string      varchar2(1000);
    already_exists exception;
    pragma exception_init(already_exists,-955);
    v_max numeric;
    begin
    --clear empty values;
    v_sql_string:='delete from ' ||v_table_source|| ' where length(all_concat)=0 or (all_concat) is null';
    execute immediate v_sql_string;
    v_sql_string:='';
    v_max:=fgetmaxitems;
         --create new table
         v_sql_string:=fgetformatstring(v_max,'all_concat', v_table_source,v_save_to_table,4);
    dbms_output.put_line( 'sql create : '|| v_sql_string);
    execute immediate v_sql_string;
    exception
         when already_exists then
    --delete old table
         v_sql_string:='drop table ' ||v_save_to_table;
         execute immediate v_sql_string;
         --create new table
         v_sql_string:=fgetformatstring(v_max,'all_concat', v_table_source,v_save_to_table,4);
    dbms_output.put_line( 'sql recreate final_table : '|| v_sql_string);
    execute immediate v_sql_string;
    end;
    --5.main procedure
    create or replace procedure pmainproc(v_save_to_table varchar2)
    is
    v_key_pivot      pivot_table.id%type;
    v_column_pivot     pivot_table.all_concat%type;
    v_max numeric;
    v_crt_row          varchar2(1600);
    cursor c2 is select y.id , y.all_concat
         from pivot_table y
    ;     --cursor upon pivot_table 
    begin
    v_max:=fgetmaxitems;
    if v_max <> 0 then
    --clear empty values;
    delete from pivot_table where length(all_concat)=0 or (all_concat) is null;
    --change all points with comma
    update
    (select x.id, x.all_concat from pivot_table x, pivot_table y where x.id=y.id) b
    set b.all_concat=replace(b.all_concat,'.', ',');
    -- padd the all_concat with ',null'
    update
    (select x.id, x.all_concat from pivot_table x, pivot_table y where x.id=y.id and
    fgetformatstring(v_max,x.all_concat,'pivot_table',v_save_to_table,5) <> v_max
    ) b
    set b.all_concat=b.all_concat || fgetformatstring(v_max,b.all_concat,'pivot_table',v_save_to_table,0);
    --remove last
    update pivot_table
    set all_concat= fgetformatstring(v_max,all_concat,'pivot_table',v_save_to_table,2)
    where fgetformatstring(v_max,all_concat,'pivot_table',v_save_to_table,5) = v_max;
    open c2;
    loop
    fetch c2 into      v_key_pivot, v_column_pivot;
    exit when c2%notfound;
    v_crt_row:=v_column_pivot;
    -- insert data in to final_table
    pinsertrow(v_max,v_crt_row,'pivot_table',v_save_to_table);
    end loop;
    close c2;
    end if;
    end;
    --create table_dest
    --the table_dest will be created automatically, so you need admin privilege to run that before compile pcreatesavetable :
    --connect / as sysdba  (or: connect system/password)
    --grant create table to hr;
    --conne hr/hr;
    exec pcreatesavetable('pivot_table','table_dest');
    --add data to table_dest
    exec pmainproc('table_dest');
    set termout on
    select * from table_dest;
    Table created.
    1 row created.
    1 row created.
    1 row created.
    1 row created.
    1 row created.
    1 row created.
    1 row created.
    1 row created.
    Function created.
    Function created.
    Procedure created.
    Procedure created.
    Procedure created.
    PL/SQL procedure successfully completed.
    PL/SQL procedure successfully completed.
    COL1 COL2 COL3 COL4 COL5 COL6 COL7 COL8 COL9 COL10
    12345 2829303132 234234 234234 234234
    67890 2324252627 234234 234234 234234 332545
    11121314 12345 234234 234234 234234 23432 32453245 345435 345435
    151617 67890 234234 234234
    1819202122 1112131415 234234 234234 234234 345435
    2324252627
    2829303132 234234 234234 234234 23432 32453245 345435 345435 4325435 345
    7 rows selected.
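    (A much shorter alternative for the original three-part string, along the lines of the SUBSTR/INSTR and REGEXP_SUBSTR approaches shown elsewhere in this thread - just a sketch, with made-up column aliases:)
    select regexp_substr(txt, '[^.]+', 1, 1) part1,
           regexp_substr(txt, '[^.]+', 1, 2) part2,
           regexp_substr(txt, '[^.]+', 1, 3) part3
      from (select 'testing1'||'.'||'testing2'||'.'||'testing3' txt from dual);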

  • Pages (iWork 06) won't split cells into columns past a certain number...

    I'm trying to split a number of cells into columns (only one column of cells selected at a time). It worked fine with 5 out of the 15 columns that I have, but now when I highlight a column's worth of cells and right-click, "Split into Columns" is grayed out. Why?
    I changed the page size and made it huge in case that was limiting the overall table dimensions or anything like that, but it's still not working. Thanks!

    Does the ext directory have php_oci8.dll? In the original steps the PHP dir is renamed. In the given php.ini the extension_dir looks like it has been updated correctly. Since PHP distributes php_oci8.dll by default, I reckon there is a very good chance that the problem was somewhere else. Since this is an old thread, I don't think we'll get much value from speculation.
    -- cj

  • Split flat file column data into multiple columns using ssis

    Hi All, I need some help with SSIS.
    I have a source file with column1, and I want to split the column1 data into
    multiple columns wherever there is a semicolon (';'); there is no specific
    length between the semicolons. Let's say:
    Column1:
    John;Sam;Greg;David
    And at destination we have 4 columns let say D1,D2,D3,D4
    I want to map
    John -> D1
    Sam->D2
    Greg->D3
    David->D4
    Please I need it ASAP
    Thanks in Advance,
    RH

    Imports System
    Imports System.Data
    Imports System.Math
    Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
    Imports Microsoft.SqlServer.Dts.Runtime.Wrapper
    Imports System.IO
    Public Class ScriptMain
        Inherits UserComponent
        Private textReader As StreamReader
        Private exportedAddressFile As String
        Public Overrides Sub AcquireConnections(ByVal Transaction As Object)
            Dim connMgr As IDTSConnectionManager90 = _
                Me.Connections.Connection
            exportedAddressFile = _
                CType(connMgr.AcquireConnection(Nothing), String)
        End Sub
        Public Overrides Sub PreExecute()
            MyBase.PreExecute()
            textReader = New StreamReader(exportedAddressFile)
        End Sub
        Public Overrides Sub CreateNewOutputRows()
            Dim nextLine As String
            Dim columns As String()
            Dim cols As String()
            Dim delimiters As Char()
            delimiters = ",".ToCharArray
            nextLine = textReader.ReadLine
            Do While nextLine IsNot Nothing
                columns = nextLine.Split(delimiters)
                With Output0Buffer
                    cols = columns(1).Split(";".ToCharArray)
                    .AddRow()
                    .ID = Convert.ToInt32(columns(0))
                    If cols.GetUpperBound(0) >= 0 Then
                        .Col1 = cols(0)
                    End If
                    If cols.GetUpperBound(0) >= 1 Then
                        .Col2 = cols(1)
                    End If
                    If cols.GetUpperBound(0) >= 2 Then
                        .Col3 = cols(2)
                    End If
                    If cols.GetUpperBound(0) >= 3 Then
                        .Col4 = cols(3)
                    End If
                End With
                nextLine = textReader.ReadLine
            Loop
        End Sub
        Public Overrides Sub PostExecute()
            MyBase.PostExecute()
            textReader.Close()
        End Sub
    End Class
    Put this code in your Script Component. Before that, add 5 columns to the Script Component output and name them ID, Col1, Col2, ..., Col4; ID is of data type int. Create a flat file connection named Connection and point it at the source flat file.
    I'm not sure what the delimiter is between the two columns in your flat file. I have used a comma; change it accordingly.
    This is the output I get:
    ID  Col1   Col2   Col3   Col4
    1   john   Greg   David  Sam
    2   tom    tony   NULL   NULL
    3   harry  NULL   NULL   NULL

  • How To UPLOAD a DATA (.DAT) fiel from PC to internal table and then split it into the data different columns

    Hi all,
    I am new to ABAP development. I need to upload a .DAT file (the file doesn't have any proper structure; please find the .DAT file in the attachment). After uploading the DATA (.DAT) file I need to split it into different columns. Referring to the attached .DAT file, the fields in brackets like
    [Arbeitstag], [Pecunia], [Mita], [Kunde], [Auftrag] and [Position] are different fields that need to be arranged as columns in an internal table. This .DAT file, which I want to upload and then SPLIT into various fields, will be treated as a MASTER DATA table for further programming. The program that I have written is below. Also please refer to the attached .DAT table.
    Please help if anyone can. I searched a lot in different forums but couldn't find a solution. Also note that the attached file is in text (.txt) format here, but in the real situation the same file is in DATA (.DAT) format.
    *& Report  ZDEMO_ZEITERFASSUNG9
    REPORT  ZDEMO_ZEITERFASSUNG9.
    Types: Begin of ttab,
            Rec(1000) type c,
           End of ttab.
    DATA: itab  type table of ttab.
    DATA: wa_tab type ttab.
    DATA: file_str type string.
    Parameters: p_file type localfile.
    At selection-screen on value-request for p_file.
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
        EXPORTING
    *     PROGRAM_NAME        = SYST-REPID
    *     DYNPRO_NUMBER       = SYST-DYNNR
    *     FIELD_NAME          = ' '
          STATIC              = 'X'
    *     MASK                = ' '
        CHANGING
          file_name           = p_file.
    *   EXCEPTIONS
    *     MASK_TOO_LONG       = 1
    *     OTHERS              = 2
    Start-of-Selection.
      file_str = P_file.
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename                      = '\\10.10.1.92\Volume_1\_projekte\Zeiterfassung-SAP\BUP_ZEIT.DAT'   " This the file  source address
          FILETYPE                      = 'DAT'
          HAS_FIELD_SEPARATOR           = ';'
    *     HEADER_LENGTH                 = 0
    *     READ_BY_LINE                  = 'X'
    *     DAT_MODE                      = ' '
    *     CODEPAGE                      = ' '
    *     IGNORE_CERR                   = ABAP_TRUE
    *     REPLACEMENT                   = '#'
    *     CHECK_BOM                     = ' '
    *     VIRUS_SCAN_PROFILE            =
    *     NO_AUTH_CHECK                 = ' '
    *   IMPORTING
    *     FILELENGTH                    =
    *     HEADER                        =
        tables
          data_tab                      = itab
       EXCEPTIONS
         FILE_OPEN_ERROR               = 1
         FILE_READ_ERROR               = 2
         NO_BATCH                      = 3
         GUI_REFUSE_FILETRANSFER       = 4
         INVALID_TYPE                  = 5
         NO_AUTHORITY                  = 6
         UNKNOWN_ERROR                 = 7
         BAD_DATA_FORMAT               = 8
         HEADER_NOT_ALLOWED            = 9
         SEPARATOR_NOT_ALLOWED         = 10
         HEADER_TOO_LONG               = 11
         UNKNOWN_DP_ERROR              = 12
         ACCESS_DENIED                 = 13
         DP_OUT_OF_MEMORY              = 14
         DISK_FULL                     = 15
         DP_TIMEOUT                    = 16
         OTHERS                        = 17
      IF sy-subrc <> 0.
    * MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    *         WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
      ENDIF.
      LOOP at itab into wa_tab.
            WRITE: / wa_tab.
      ENDLOOP.
    I will be grateful to all you experts for your inputs
    regards
    Chandan Singh

    For every Auftrag, there are multiple Position entries.
    The rest of the blocks don't seem to have any relation.
    So you can check this code to see how internal table lt_str is built: its first 3 fields hold the Auftrag data and the next 3 fields hold the Position data. The structure is flat, assuming that every Position record is related to the preceding Auftrag.
    Try out this snippet.
    DATA lt_data TYPE TABLE OF string.
    DATA lv_data TYPE string.
    CALL METHOD cl_gui_frontend_services=>gui_upload
      EXPORTING
        filename = 'C:\temp\test.txt'
      CHANGING
        data_tab = lt_data
      EXCEPTIONS
        OTHERS   = 19.
    CHECK sy-subrc EQ 0.
    TYPES:
    BEGIN OF ty_str,
      a1 TYPE string,
      a2 TYPE string,
      a3 TYPE string,
      p1 TYPE string,
      p2 TYPE string,
      p3 TYPE string,
    END OF ty_str.
    DATA: lt_str TYPE TABLE OF ty_str,
          ls_str TYPE ty_str,
          lv_block TYPE string,
          lv_flag TYPE boolean.
    LOOP AT lt_data INTO lv_data.
      CASE lv_data.
        WHEN '[Version]' OR '[StdSatz]' OR '[Arbeitstag]' OR '[Pecunia]'
             OR '[Mita]' OR '[Kunde]' OR '[Auftrag]' OR '[Position]'.
          lv_block = lv_data.
          lv_flag = abap_false.
        WHEN OTHERS.
          lv_flag = abap_true.
      ENDCASE.
      CHECK lv_flag EQ abap_true.
      CASE lv_block.
        WHEN '[Auftrag]'.
          SPLIT lv_data AT ';' INTO ls_str-a1 ls_str-a2 ls_str-a3.
        WHEN '[Position]'.
          SPLIT lv_data AT ';' INTO ls_str-p1 ls_str-p2 ls_str-p3.
          APPEND ls_str TO lt_str.
      ENDCASE.
    ENDLOOP.

  • Splitting field into two columns

    HI,
    I'm told it is possible to have a field be split up into multiple columns in the details section of a report rather than having it listed in a single column.  How can this be accomplished?  I've heard it has something to do with subreports.
    For instance, instead of this:
    Bob
    Jane
    Sally
    Barbara
    Jim
    Dave
    It could look like this (columns not lined up in this example, not sure how to do it on this forum):
    Bob          Jane
    Sally          Barbara
    Jim          Dave
    Thanks

    No need to use subreports.
    In the Section Expert, for the Details section, check the box "Format with Multiple Columns".
    A new tab appears - Layout - where you define the width to set up the number of columns required.
    Ian

  • How to split a string into columns

    Hello to all ,
    Having strings like this, where the pipe (|) is the delimiter:
    10:00 | x1 | 2 | RO | P | Con ausilio  | y1
    10:10 | x2 | 1 | RO |  |  | y2
    10:20 |x3 | 3 |  |  |  | y3
    10:30 |x4 | 3 | RO | N | Con aiuto  | y4
    10:40 |x5 | 1 | RO |  |  | y5
    how can I break it up into columns? For example, insert the first part (before the first pipe) into the first variable,
    then, after the first pipe, the second part into another column, and so on:
    col1 := '10:00';
    col2 := 'x1';
    col3 := '2';
    col4:= 'RO';
    col5 := 'P';
    col6 := ' Con ausilio ';
    col7 := 'y1';
    col1 := '10:10';
    col2 := 'x2';
    .. and so on
    thanks in advance

    Hi,
    check this:
    WITH mydata(txt) AS
    (  SELECT '10:00 | x1 | 2 | RO | P | Con ausilio  | y1' FROM DUAL UNION ALL
       SELECT '10:10 | x2 | 1 | RO |  |  | y2'              FROM DUAL UNION ALL
       SELECT '10:20 |x3 | 3 |  |  |  | y3'                 FROM DUAL UNION ALL
       SELECT '10:30 |x4 | 3 | RO | N | Con aiuto  | y4'    FROM DUAL UNION ALL
       SELECT '10:40 |x5 | 1 | RO |  |  | y5'               FROM DUAL
    )
    SELECT TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,1)) col1
         , TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,2)) col2
         , TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,3)) col3
         , TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,4)) col4
         , TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,5)) col5
         , TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,6)) col6
         , TRIM(REGEXP_SUBSTR(txt,'[^|]+',1,7)) col7
      FROM mydata;
    COL1   COL2   COL3   COL4   COL5   COL6            COL7 
    10:00  x1     2      RO     P      Con ausilio     y1   
    10:10  x2     1      RO                            y2   
    10:20  x3     3                                    y3   
    10:30  x4     3      RO     N      Con aiuto       y4   
    10:40  x5     1      RO                            y5
    Regards.
    Al
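    One caveat worth noting (an added sketch, not part of the original reply): the pattern '[^|]+' skips fields that are completely empty, so a row containing '||' with nothing at all between the pipes would shift the later columns to the left. On 11g the sub-expression argument of REGEXP_SUBSTR can keep empty fields in place, for example:
         TRIM(REGEXP_SUBSTR(txt, '([^|]*)(\||$)', 1, 3, NULL, 1)) col3
    In the sample data above it works either way, because the "empty" fields still contain a space.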

  • Split records into two files based on lookup table

    Hi,
    I'm new to ODI and want to know how I could split records into two files based on a value in one of the columns in the table.
    Example:
    Table:
    my columns are
    account name country
    100 USA
    200 USA
    300 UK
    200 AUS
    So from the 4 records, I maintain a list of countries in a lookup file and split the records into 2 different files based on the values in that file...
    Say I have records AUS and UK in my lookup file...
    So my ODI routine should send all records whose country is in the lookup file to file1 and the rest to file2.
    So from above records
    File1:
    300 UK
    200 AUS
    File2:
    100 USA
    200 USA
    Can you help me how to achieve this?
    Thanks,
    Sam

    1. Where and how do I create a filter to restrict countries? In source or target? Should I include some kind of filter operator in the interface?
    You need to have the Filter on the Source side so that the records can be filtered accordingly and then captured in the File. To create a Filter: in the source data store, click and drag the column outside the data store; you will see a cone-shaped icon, and you can then click it and type the Filter.
    Please look into this link for ODI Documentation -http://www.oracle.com/technetwork/middleware/data-integrator/documentation/index.html
    Also look into this Getting started guide - http://download.oracle.com/docs/cd/E15985_01/doc.10136/getstart/GSETL.pdf . You can find information as how to create Filter in this guide.
    2. If I have to include multiple countries like (USA, CANADA, UK) to go to one file and the rest to another file, can I use some kind of lookup file? Instead of modifying the filter inside the interface, can I update entries in the file?
    there are two ways of handling your situation.
    Solution 1.
    1. Create Variable Country_Variable
    2. Create a Filter in the Source datastore in the First Interface ( SOURCE.COLUMN = #Country_Variable)
    3. Create a new Package Country File Unload
    4. Call the Variable in Country_Variable in Set Mode and provide the Country (USA )
    5. Next call the First Interface
    6. Next call the Second Interface where the Filter condition will be ( SOURCE.COLUMN ! = #Country_Variable )
    7. Now run the package .
    Solution 2.
    If you need a solution handled through the file:
    1. Use this method (http://odiexperts.com/how-to-refresh-odi-variables-from-file-%E2%80%93-part-1-%E2%80%93-just-one-value ) to load the country name from the file into the variable Country_Variable
    2. Pretty much the same Create a Filter in the Source datastore in the First Interface ( SOURCE.COLUMN = #Country_Variable)
    3.Create a new Package Country File Unload
    4.Next call the Second Interface where the Filter condition will be ( SOURCE.COLUMN ! = #Country_Variable )
    5. Now run the package .
    This way, using the file, you can control the filtering.
    Please try and let us know if you need any other help.
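    (An added sketch, not from the original reply: if the country list lived in a database table rather than a flat file, the two interface filters could also be written as subqueries. LOOKUP_COUNTRIES and COUNTRY are hypothetical names here.)
    -- filter on the interface that loads File1
    SOURCE.COUNTRY IN (SELECT COUNTRY FROM LOOKUP_COUNTRIES)
    -- filter on the interface that loads File2
    SOURCE.COUNTRY NOT IN (SELECT COUNTRY FROM LOOKUP_COUNTRIES)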

  • Pivoting rows into columns in Oracle 10g

    Hi,
    I want to pivot rows into column in some optimal way.
    I don't want to go with the DECODE option as the number of columns can be more than 200.
    i have also tried the transpose logic which is making the pl/sql block too huge.
    can i directly query the database for the desired output instead of storing the data into some arrays and displaying rows as columns?

    Hi,
    Here's a dynamic way to do this in Oracle 10, using the SQL*Plus @ command to handle the dynamic parts.
    First, let's see how we would do this using a static query:
    WITH     col_cntr    AS
    (
         SELECT     column_name
         FROM     all_tab_columns
         WHERE     owner          = 'FKULASH'
         AND     table_name     = 'TEST_EMP'
         AND     column_name     NOT IN ('EMP_ID', 'TYPE_VAL')
    )
    ,     unpivoted_data     AS
    (
         SELECT     e.type_val
         ,     c.column_name
         ,     CASE c.column_name
                  WHEN  'X_AMT'  THEN  x_amt     -- *****  Dynamic section 1  *****
                  WHEN  'Y_AMT'  THEN  y_amt     -- *****  Dynamic section 1  *****
                  WHEN  'Z_AMT'  THEN  z_amt     -- *****  Dynamic section 1  *****
              END     AS v
         FROM          test_emp  e
         CROSS JOIN     col_cntr  c
    )
    SELECT       column_name     AS type_val
    ,       SUM (CASE WHEN type_val = 'Q1' THEN v ELSE 0 END)     AS q1     -- ***** Dynamic section 2  *****
    ,       SUM (CASE WHEN type_val = 'Q2' THEN v ELSE 0 END)     AS q2     -- ***** Dynamic section 2  *****
    ,       SUM (CASE WHEN type_val = 'Q3' THEN v ELSE 0 END)     AS q3     -- ***** Dynamic section 2  *****
    ,       SUM (CASE WHEN type_val = 'Q4' THEN v ELSE 0 END)     AS q4     -- ***** Dynamic section 2  *****
    FROM       unpivoted_data
    GROUP BY  column_name
    ORDER BY  column_name
    ;
    Column names are hard-coded in two places:
    (1) in the sub-query unpivoted_data, we had to know that there were 3 columns to be unpivoted, and that they were called x_amt, y_amt and z_amt. You want to derive all of that from all_tab_columns.
    (2) in the main query, we had to know that there would be 4 pivoted columns in the result set, and that they would be called q1, q2, q3 and q4. You want to derive all that from the data actually in test_emp.
    Instead of hard-coding those 2 dynamic sections, have Preliminary Queries write them for you, a split second before you run the main query, by running this script:
    --  Before writing sub-scripts, turn off features designed for human readers
    SET     FEEDBACK    OFF
    SET     PAGESIZE    0
    PROMPT *****  Preliminary Query 1  *****
    SPOOL     c:\temp\sub_script_1.sql
    SELECT    '              WHEN  '''
    ||       column_name
    ||       '''  THEN  '
    ||       LOWER (column_name)     AS txt
    FROM       all_tab_columns
    WHERE       owner          = 'FKULASH'
    AND       table_name     = 'TEST_EMP'
    AND       column_name     NOT IN ('EMP_ID', 'TYPE_VAL')
    ORDER BY  column_name;
    SPOOL     OFF
    PROMPT     *****  Preliminary Query 2  *****
    SPOOL     c:\temp\sub_script_2.sql
    SELECT DISTINCT  ',       SUM (CASE WHEN type_val = '''
    ||                type_val
    ||           ''' THEN v ELSE 0 END)     AS '
    ||           LOWER (type_val)          AS txt
    FROM           test_emp
    ORDER BY      txt;
    SPOOL     OFF
    --  After writing sub-scripts, turn on features designed for human readers
    SET     FEEDBACK    5
    SET     PAGESIZE    50
    -- Main Query:
    WITH     col_cntr    AS
    (
         SELECT     column_name
         FROM     all_tab_columns
         WHERE     owner          = 'FKULASH'
         AND     table_name     = 'TEST_EMP'
         AND     column_name     NOT IN ('EMP_ID', 'TYPE_VAL')
    )
    ,     unpivoted_data     AS
    (
         SELECT     e.type_val
         ,     c.column_name
         ,     CASE c.column_name
                  @c:\temp\sub_script_1
              END     AS v
         FROM          test_emp  e
         CROSS JOIN     col_cntr  c
    )
    SELECT       column_name     AS type_val
    @c:\temp\sub_script_2
    FROM       unpivoted_data
    GROUP BY  column_name
    ORDER BY  column_name
    ;
    As you can see, the main query looks exactly like the static query, except that the two dynamic sections have been replaced by sub-scripts. These 2 sub-scripts are written by 2 preliminary queries, right before the main query.
    As others have said, the fact that you're asking this question hints at a poor table design. Perhaps the table should be permanently stored in a form pretty much like unpivoted_data, above. When you need to display it with columns x_amt, y_amt, ..., then pivot it, using GROUP BY type_col. When you need to display it with columns q1, q2, ..., then pivot it using GROUP BY column_name.
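    (An added note, not part of the original reply: on Oracle 11g and later, once the data is stored in the unpivoted form suggested above, the same reshaping can be done with the built-in PIVOT clause. A minimal sketch, assuming a table shaped like unpivoted_data with columns column_name, type_val and v:)
    SELECT    *
    FROM      unpivoted_data
    PIVOT     (SUM(v) FOR type_val IN ('Q1' AS q1, 'Q2' AS q2, 'Q3' AS q3, 'Q4' AS q4))
    ORDER BY  column_name;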

  • SharePoint List Form using InfoPath 2010 "Cannot insert the value NULL into column 'tp_DocId', table 'Content_SP_00003.dbo.AllUserData'; column does not allow nulls"

    I am experiencing issue with my SharePoint site , when I am trying to add new Item in List . Error given below :--> 02/03/2015 08:23:36.13 w3wp.exe (0x2E04) 0x07E8 SharePoint Server Logging Correlation Data 9gc5 Verbose Thread change; resetting trace
    level override to 0; resetting correlation to e2e9cddc-cf35-4bf8-b4f3-021dc91642da c66c2c17-faaf-4ff9-a414-303aa4b4726b e2e9cddc-cf35-4bf8-b4f3-021dc91642da 02/03/2015 08:23:36.13 w3wp.exe (0x2E04) 0x07E8 Document Management Server Document Management 52od
    Medium MetadataNavigationContext Page_InitComplete: No XsltListViewWebPart was found on this page[/sites/00003/Lists/PM%20Project%20Status/NewForm.aspx?RootFolder=&IsDlg=1]. Hiding key filters and downgrading tree functionality to legacy ListViewWebPart(v3)
    level for this list. e2e9cddc-cf35-4bf8-b4f3-021dc91642da 02/03/2015 08:23:36.17 w3wp.exe (0x1B94) 0x1A0C SharePoint Server Logging Correlation Data 77a3 Verbose Starting correlation. b4d14aec-5bd4-4fb1-b1e3-589ba337b111 02/03/2015 08:23:36.17 w3wp.exe (0x1B94)
    0x1A0C SharePoint Server Logging Correlation Data 77a3 Verbose Ending correlation. b4d14aec-5bd4-4fb1-b1e3-589ba337b111 02/03/2015 08:23:36.31 w3wp.exe (0x2E04) 0x07E8 SharePoint Foundation Database 880i High System.Data.SqlClient.SqlException: Cannot insert
    the value NULL into column 'tp_DocId', table 'Content_SP_00003.dbo.AllUserData'; column does not allow nulls. INSERT fails. The statement has been terminated. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at
    System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject
    stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavi... e2e9cddc-cf35-4bf8-b4f3-021dc91642da 02/03/2015
    08:23:36.31* w3wp.exe (0x2E04) 0x07E8 SharePoint Foundation Database 880i High ...or runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream,
    Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior,
    RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior) at Microsoft.SharePoint.Utilities.SqlSession.ExecuteReader(SqlCommand
    command, CommandBehavior behavior,

    Are you trying to set up P2P? Could you explain the process you followed completely? By any chance did you create the backup and then create the publication?
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com

  • How to adjust splitted lines into one line in Text file?

    Hi Guys,
    I have a text file with 3 fields(comma separated): GLCode (Number), Desc1 (Char), Desc2(Char) and need to load it into BW.
    My Text file looks like:
    1011.00,"Mejor PC Infrastructure","This line is ok."
    1012.00,"Telephone Equipment $","This line ends in next line.   
    1)Need to change the equipment immediately.
    2)Take immediate action"
    1013.00,"V1 Computer Server Infrastructure # Equip","For purchases
    of components that make up the company's network, such as servers, hubs, routers etc."
    1014.00,"Flash Drive","Need to provide all IT Developer"
    This is how file looks like. Now I need the followings:
    1. Need to remove the extra space and join the split lines into one. Here,
    line/record 2 is split across 3 lines and needs to be adjusted into 1 line.
    2. In line 5 (record 3) the data is split across 2 lines and needs to be made into 1 line.
    3. Need to remove bad characters.
    Could someone help me please how to proceed ?
    Regards,

    Not quite correct by my testing.  Try:
    $i=0
    Get-Content .\test.txt | ForEach {
        If ($i%2){
            ("$Keep $($_)").Trim()
        }Else{
            $keep=$_
        }
        $i++
    }
    Good catch!
    Boe Prox
    Blog |
    Twitter
    PoshWSUS |
    PoshPAIG | PoshChat |
    PoshEventUI
    PowerShell Deep Dives Book

  • REP-1425 report formula DO_SQL error putting value into column

    Hi all
    I have opened a Reports 2.5 report in Oracle9i Reports Developer and it converted OK. However, when I run the report (paper layout), the message
    rep-1425 report formula DO_SQL error putting value
    into column. Column may not be referenced by parameter
    triggers
    appears. There are several report level formula columns and corresponding placeholder columns that are the cause of this error. The formula has the following :
    SRW.DO_SQL('SELECT RPAD(''DAILY TABLE AUDIT REPORT'',60,''.'')||TO_CHAR(SYSDATE,''DD-MON-YYYY'') INTO :REPORT_TIT FROM DUAL');
    COMMIT;
    RETURN('');
    I can't work out what this error message really means, as the column report_tit is a placeholder column and the formula column is not a parameter trigger! The report_tit placeholder is used as the source for a layout field. I noticed that the layout field is defined as a placeholder column in the converted report, but in the Reports 2.5 version it is defined as a layout field.
    I can do a workaround by replacing the SRW.DO_SQL statement with a normal PL/SQL SELECT statement. However, I wonder if anyone else has had the same problem, and if anyone can help provide an answer as to what this error really means, how I can retain the SRW.DO_SQL statement, and/or an alternative workaround to the one that I have described.
    Thanks.
    Therese Hughes
    Forest Products Commision

    Hi again
    The firewall proved to be the problem after all! The firewall set in Reports config-files is not used for WebServices, it has to be set within the stub:
    Properties prop = System.getProperties();
    prop.put("http.proxyHost","yourProxyServer");
    prop.put("http.proxyPort","youProxyServerPort");
    I inserted this in my java-code and after some problems (see below), restarting Report Builder turned the trick, the report works now.
    Cheers
    Tino
    Here there mail I set up before I found that restarting Report Builder helped:
    Thanks for your answer, putting in the proxy-settings actually helped some - the same error message is
    popping up, but instantly and not after 10 seconds like before:
    My proxy lines look like this, I also tried "http://proxy.ch.oracle.com", but "proxy.ch.oracle.com" proved to
    be the correct syntax:
    public Float getRate(String country1, String country2) throws Exception
    Float returnVal = null;
    Properties prop = System.getProperties();
    prop.put("http.proxyHost","proxy.ch.oracle.com");
    prop.put("http.proxyPort","8080");
    URL endpointURL = new URL(endpoint);
    Call call = new Call();
    I tested the new proxy-entries by disabling the proxy preference in JDeveloper, so I could verify the added
    proxy-lines in the code work - they do. If I change the proxy to some incorrect value like
    "proxyy.ch.oracle.com", they fail.
    After unsuccessfully trying recompile and re-import of Java classes, I rebuild the report from scratch and
    stumbled over the same problem. Now the question is, whether I'm still doing something wrong with the
    proxy or whether there is another problem after passing the firewall....
    -------------------------------------------------------------------------------------------------------------

  • Help  needed  in inserting  into column  of XML type

    I have a requirement to insert data into a column of XML type.
    e.g.
    cust product cost
    1 a 3
    1 b 7
    1 c 5
    Now the required result should be:
    <PROD-LIST>
    <a>3</a>
    <b>7</b>
    <c>5</c>
    </PROD-LIST>
    Please let me know how to achieve this. I was trying to write a function; it was working for one value, but how do I do it if many values exist?

    Take a deep breath, then retype this putting in all the words and punctuation you missed out the first time until it makes sense.
    Your sample data, for example, could be better formatted using the [pre] and [/pre] tags to preserve formatting and put in a table format. Similarly with your output. Why are two of the numbers floating around freely in your xml but the 3rd isn't?
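    (An added sketch, not part of the original reply: one way to build that kind of XML is XMLELEMENT with EVALNAME, aggregated with XMLAGG. The table and column names below - cust_prod, cust, product, cost - are assumptions based on the sample data.)
    SELECT cust,
           XMLELEMENT("PROD-LIST",
             XMLAGG(XMLELEMENT(EVALNAME product, cost) ORDER BY product)
           ) AS prod_list_xml
    FROM   cust_prod
    GROUP  BY cust;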

  • Reading fixed length flatfile & splitting it into header and lineitem data

    Hi Friends,
    I am reading a fixed length flat file from application server into an Internal table.
    But problem is, data in flat file is in the below format with fixed start and end positions.
    1 - 78 -  control header
    1 - 581 - Invoice header data
    1 - 411 - Invoice Line item data
    1 - 45 -   trailer record
    There will be one control header and one trailer record per file and number of invoice headers and its line items can vary.
    There is unique identifiers to identify as below.
    Control header - starts with 'CHR'
    Invoice Header starts with - '000'
    Invoice Line item starts with - '001'
    trailer record - starts with 'TRL'
    So its like
    CHR.......control  data..(79)000.....header data...(660)001....lineitem1...(1481)001...lineitem2....multiples of 411 and 581 and ends with... TRL...trailer record..
    (position)
    I am first reading the data set and storing it in an internal table with a field of 255 characters.
    By looping over the above ITAB I have to split it into header records and line item records.
    Did anyone face this kind of scenario before? If yes, I'd appreciate it if you could throw some ideas on the logic to split this data.
    Any help in splitting up the data is highly appreciated.
    Regards,
    Simha
    ITAB declaration
    DATA: BEGIN OF ITAB OCCURS 0,
                   FIELD(255),
               END OF ITAB,
                lt_header type table of ZTHDR,
                lt_lineitem type table of ZTLINITM.

    Hi,
    I am sending sample code which resembles your requirement.
    data: BEGIN OF it_input OCCURS 0, "used to store all the data in one line
          line type string,
          END OF it_input,
          it_header type TABLE OF string WITH HEADER LINE, "used to store all headers with corresponding items
          it_item   type TABLE OF string WITH HEADER LINE. "used to store all item data
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename                      = 'd:\test.txt'
      FILETYPE                      = 'ASC'
      HAS_FIELD_SEPARATOR           = ' '
      HEADER_LENGTH                 = 0
      READ_BY_LINE                  = 'X'
      DAT_MODE                      = ' '
      CODEPAGE                      = ' '
      IGNORE_CERR                   = ABAP_TRUE
      REPLACEMENT                   = '#'
      CHECK_BOM                     = ' '
    *  VIRUS_SCAN_PROFILE            =
       NO_AUTH_CHECK                 = ' '
    * IMPORTING
    *  FILELENGTH                    =
    *  HEADER                        =
      tables
        data_tab                      = it_input
    EXCEPTIONS
      FILE_OPEN_ERROR               = 1
      FILE_READ_ERROR               = 2
      NO_BATCH                      = 3
      GUI_REFUSE_FILETRANSFER       = 4
      INVALID_TYPE                  = 5
      NO_AUTHORITY                  = 6
      UNKNOWN_ERROR                 = 7
      BAD_DATA_FORMAT               = 8
      HEADER_NOT_ALLOWED            = 9
      SEPARATOR_NOT_ALLOWED         = 10
      HEADER_TOO_LONG               = 11
      UNKNOWN_DP_ERROR              = 12
      ACCESS_DENIED                 = 13
      DP_OUT_OF_MEMORY              = 14
      DISK_FULL                     = 15
      DP_TIMEOUT                    = 16
      OTHERS                        = 17
    IF sy-subrc <> 0.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
    ENDIF.
    LOOP AT  it_input.
       write : it_input-line.
    ENDLOOP.
    Before doing the below steps, just take the control record and trailer record and cut them off from the input table based upon the length.
      split it_input-line AT '000' INTO TABLE it_header IN CHARACTER MODE.
      LOOP AT  it_header.
        split it_header AT '0001' INTO TABLE it_item IN CHARACTER MODE.
        write :/ it_header.
      ENDLOOP.
    After this you need to cut the records into the corresponding fields of the respective tables by putting a loop on the item and header tables as I declared above.
    I think this may solve your problem; it just needs some minor effort from you.
    Regards,
    Sree.

  • Split data into different fields in TR

    I have a flat file with spaces (multiple spaces between different fields) as the delimiter. The problem is, the file is coming from a 3rd party and they don't want to change the separator to a comma or a tab-delimited CSV file. I have to load the data into an ODS (BW 3.x).
    Now I am thinking of loading it line by line and then splitting the data into different objects in the transfer rules.
    The Records looks like:
    *009785499 ssss BC sssss 2988 ssss 244 sss 772 sss  200
    *000000033 ssss AB ssss        0  ssss   0 ssss 0 ssss 0
    *000004533 ssss EE ssss        8  ssss   3 ssss 2 ssss 4
    s = space
    Now I want data to split like:
    Field1 = 009785499
    Field2 = BC
    Field3 = 2988
    Field4 = 244
    Field5 = 772
    Field6 = 200
    After the 1st line is loaded, go to the 2nd line and split the data as above, and so on. Could you help me with the code please?
    Is this a good design for loading the data? Any other ideas?
    I appreciate your help.

    Hi,
    Not sure how efficient this is, but you can try an approach on the lines of this link /people/sap.user72/blog/2006/05/27/long-texts-in-sap-bw-modeling
    Make your transfer structure in this format. Say the length of each line is 200 characters. Make the first field of the structure of length 200. That is, the length of Field1 in the Trans Struc will be 200.
    The second field can be the length of Field2 as you need in your ODS, and similarly for Field3 to Field6. Load it as a CSV file. Since there are no commas, the entire line will enter into the first field of the Trans Structure. This can be broken up into individual fields in the Transfer Rules.
    Now, in your Start Routine of transfer rules, write code like this (similar to the ex in the blog):
    Field-symbols <fs> type transfer-structure.
    Loop at Datapak assigning <fs>.
        split <fs>-Field1 at 'ssss' into <fs>-field1 <fs>-field2 <fs>-field3....
        modify datapak from <fs>.
    endloop.
    Now you can assign Field1 of Trans Struc to Field1 of Comm Struc, Field2 of Trans Struc to Field2 of Comm Struc and so on.
    Hope it helps!
    Edited by: Suhas Karnik on Jun 17, 2008 10:28 PM
