Flat file data, comma separated, using SSIS

Hi,
I have multiple flat files which come with comma-separated columns. See the example below:
Customer Data
CustID,FName,LName,Disease,Email,Phone
12345,Xyz,Smit,Bronchitis, Asthma and fever,[email protected],80000000
12346,Abc,Doe,fever Headache,[email protected],90000000
12347,Klu,joe,Sugar, cough and fever,[email protected],12345678
Please look at IDs 12345 and 12347. The Disease column contains an internal comma followed by a space. How do I remove or handle the commas inside the Disease column, so that the file can be loaded from the flat file into a SQL table using SSIS?
Please help !
Thanks

Here is a full solution based on my post above (first option).
1. Create a temp table (give it a unique name):
create table #T (Txt NVARCHAR(MAX))
GO
2. Insert all the data into the temporary table. Each line in the text file becomes the value of the single column in one row of the table.
-- I will jump ahead and use a simple INSERT to simulate the loaded data.
-- If you have a problem with this step then please inform us (this is basically a simple bulk insert).
insert #T (Txt) values
('1234435,Xyz,Stemit,Brfsdonchitis, Asthma and fever,[email protected],80000000'),
('12346,Agjdfjbc,Doge,fevhhhher Headsxdshhache,[email protected],90000000'),
('123447,Klu,joe,Sugar, cough and fever,[email protected],12345678')
GO
The result should look like this:
Txt
1234435,Xyz,Stemit,Brfsdonchitis, Asthma and fever,[email protected],80000000
12346,Agjdfjbc,Doge,fevhhhher Headsxdshhache,[email protected],90000000
123447,Klu,joe,Sugar, cough and fever,[email protected],12345678
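If you are loading the real file rather than typing the rows, a minimal BULK INSERT sketch could look like this (the path, FIRSTROW and terminators are placeholders, adjust them to your file):
BULK INSERT #T
FROM 'C:\Import\CustomerData.csv'  -- hypothetical path, replace with your file
WITH (
    ROWTERMINATOR   = '\n',  -- one text line per row
    FIELDTERMINATOR = '\t',  -- a character that does not appear in the data, so the whole line lands in Txt
    FIRSTROW        = 2      -- skip the CustID,FName,... header line (adjust if the file also has a title line)
)
GO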
I use a SPLIT function named Split_CLR_Fn. This is a CLR split function that takes as input a <string to split> and a <delimiter string>, and returns a table with 2 columns: ID, SplitData.
For example, if you use: SELECT * FROM Split_CLR_Fn('text1,text2,text3', ',') then you get this result:
ID SplitData
1 text1
2 text2
3 text3
** You can find several good functions on the internet. I HIGHLY RECOMMEND NOT USING T-SQL FUNCTIONS but a CLR FUNCTION. Check this link to understand why:
http://sqlperformance.com/2012/07/t-sql-queries/split-strings
** This is the best function that I know of and the one I use, but I changed the code a bit to return 2 columns and not just SplitData, as in this blog: http://sqlblog.com/blogs/adam_machanic/archive/2009/04/28/sqlclr-string-splitting-part-2-even-faster-even-more-scalable.aspx
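** If you cannot deploy a CLR assembly, here is a simple T-SQL stand-in with the same interface (ID, SplitData), so you can test the queries below without the CLR function. The name Split_Tsql_Fn is just an example; it is much slower than the CLR version, and as a recursive CTE it handles at most about 100 pieces unless the calling query adds OPTION (MAXRECURSION 0):
CREATE FUNCTION dbo.Split_Tsql_Fn (@Input NVARCHAR(MAX), @Delimiter NCHAR(1))
RETURNS TABLE
AS
RETURN
WITH Pieces (ID, StartPos, EndPos) AS (
    SELECT 1, CAST(1 AS BIGINT), CHARINDEX(@Delimiter, @Input)
    UNION ALL
    SELECT ID + 1, EndPos + 1, CHARINDEX(@Delimiter, @Input, EndPos + 1)
    FROM Pieces
    WHERE EndPos > 0
)
SELECT ID,
       SUBSTRING(@Input, StartPos,
                 CASE WHEN EndPos > 0
                      THEN EndPos - StartPos
                      ELSE LEN(@Input) - StartPos + 1 END) AS SplitData  -- note: a trailing delimiter yields one extra empty row
FROM Pieces
GO
-- quick test of the stand-in:
SELECT * FROM dbo.Split_Tsql_Fn(N'text1,text2,text3', N',')
GO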
That's it :-) We are ready for the solution, which is very simple.
Solution 1 (BAD solution but easy to write):
select
(select SplitData from Split_CLR_Fn(Txt,',') where ID = 1) CustID,
(select SplitData from Split_CLR_Fn(Txt,',') where ID = 2) FName,
(select SplitData from Split_CLR_Fn(Txt,',') where ID = 3) LName,
STUFF((select ',' + SplitData from Split_CLR_Fn(Txt,',') where ID > 3 and ID < (select MAX(ID) from Split_CLR_Fn(Txt,',')) - 1 for XML path('')), 1 , 1,'') Disease,
(select SplitData from Split_CLR_Fn(Txt,',') where ID = (select MAX(ID) from Split_CLR_Fn(Txt,',')) - 1) Email,
(select SplitData from Split_CLR_Fn(Txt,',') where ID = (select MAX(ID) from Split_CLR_Fn(Txt,','))) Phone
from #T
GO
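The reason Solution 1 is bad is that it re-splits the same Txt once per output column. Here is a sketch of the same idea with fewer split calls, using CROSS APPLY and conditional aggregation; it assumes SQL Server 2017+ for STRING_AGG and the same Split_CLR_Fn interface as above:
SELECT
    MAX(CASE WHEN s.ID = 1 THEN s.SplitData END)                   AS CustID,
    MAX(CASE WHEN s.ID = 2 THEN s.SplitData END)                   AS FName,
    MAX(CASE WHEN s.ID = 3 THEN s.SplitData END)                   AS LName,
    STRING_AGG(CASE WHEN s.ID > 3 AND s.ID < m.MaxID - 1
                    THEN s.SplitData END, ',')
        WITHIN GROUP (ORDER BY s.ID)                               AS Disease,
    MAX(CASE WHEN s.ID = m.MaxID - 1 THEN s.SplitData END)         AS Email,
    MAX(CASE WHEN s.ID = m.MaxID     THEN s.SplitData END)         AS Phone
FROM (SELECT Txt, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn FROM #T) t
CROSS APPLY Split_CLR_Fn(t.Txt, ',') s
CROSS APPLY (SELECT MAX(ID) AS MaxID FROM Split_CLR_Fn(t.Txt, ',')) m
GROUP BY t.rn, m.MaxID
GO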
Solution 2 (better in this case since the format is constant; this is the solution I wrote about above):
;With MyCTE as (
select
Txt,
SUBSTRING(Txt, 1, CHARINDEX(',', Txt, 1) - 1) as CustID
, SUBSTRING(
Txt
,CHARINDEX(',', Txt, 1) + 1 -- start right after the previous comma
, CHARINDEX(',', Txt, CHARINDEX(',', Txt, 1)+1)- CHARINDEX(',', Txt, 1) - 1
) as FName
, SUBSTRING(
Txt
,CHARINDEX(',', Txt, CHARINDEX(',', Txt, 1)+1)+1 -- start right after the previous comma
, CHARINDEX(',', Txt, CHARINDEX(',', Txt, CHARINDEX(',', Txt, 1)+1)+1) - CHARINDEX(',', Txt, CHARINDEX(',', Txt, 1)+1) - 1
) as LName
, RIGHT(Txt, CHARINDEX(',', REVERSE(Txt), 1) - 1) as Phone
, RIGHT(LEFT(Txt, Len(Txt) - Len(RIGHT(Txt, CHARINDEX(',', REVERSE(Txt), 1) - 1)) - 1), CHARINDEX(',', REVERSE(LEFT(Txt, Len(Txt) - Len(RIGHT(Txt, CHARINDEX(',', REVERSE(Txt), 1) - 1)) - 1)), 1) - 1) as Email
from #T
select CustID,FName,LName, Phone, Email, SUBSTRING(Txt, Len(CustID) + Len(FName) + Len(LName) + 4, Len(Txt) - Len(Email) - LEN(Phone) - Len(CustID) - Len(FName) - Len(LName) - 5) as Disease
from MyCTE
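To finish the load, the parsed rows just need to go into the destination table. Here is a minimal end-to-end sketch, assuming a hypothetical destination table dbo.CustomerData (names and sizes are placeholders). It is the same fixed-format idea as Solution 2, packaged as one INSERT ... SELECT with the comma positions computed once per row via CROSS APPLY:
CREATE TABLE dbo.CustomerData (
    CustID  NVARCHAR(20),
    FName   NVARCHAR(50),
    LName   NVARCHAR(50),
    Disease NVARCHAR(400),
    Email   NVARCHAR(200),
    Phone   NVARCHAR(20)
)
GO
INSERT INTO dbo.CustomerData (CustID, FName, LName, Disease, Email, Phone)
SELECT
    SUBSTRING(Txt, 1, p1 - 1),              -- CustID
    SUBSTRING(Txt, p1 + 1, p2 - p1 - 1),    -- FName
    SUBSTRING(Txt, p2 + 1, p3 - p2 - 1),    -- LName
    SUBSTRING(Txt, p3 + 1, p4 - p3 - 1),    -- Disease (embedded commas kept; wrap in REPLACE(..., ',', ' ') if you want them removed)
    SUBSTRING(Txt, p4 + 1, p5 - p4 - 1),    -- Email
    SUBSTRING(Txt, p5 + 1, LEN(Txt) - p5)   -- Phone
FROM #T
CROSS APPLY (SELECT CHARINDEX(',', Txt)         AS p1) a  -- 1st comma
CROSS APPLY (SELECT CHARINDEX(',', Txt, p1 + 1) AS p2) b  -- 2nd comma
CROSS APPLY (SELECT CHARINDEX(',', Txt, p2 + 1) AS p3) c  -- 3rd comma
CROSS APPLY (SELECT LEN(Txt) - CHARINDEX(',', REVERSE(Txt)) + 1 AS p5,                                  -- last comma
                    LEN(Txt) - CHARINDEX(',', REVERSE(Txt), CHARINDEX(',', REVERSE(Txt)) + 1) + 1 AS p4 -- next-to-last comma
            ) d
GO
If the file lines can carry trailing spaces, apply RTRIM to Txt first, since LEN ignores trailing spaces.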
I hope that this is useful :-)
  Ronen Ariely
 [Personal Site]    [Blog]    [Facebook]

Similar Messages

  • Flat file data loading error using process chain

    Hi SAP Experts,
    I am having a problem loading the flat file data to the cube using a process chain. The issue is that when I run the process chain it fails, giving the message "Date format error, please enter the date in the format  _.yyyy". I am using 0CALMONTH in the DataSource. What is strange is that when I manually execute the InfoPackage, I don't get any errors and am able to load the same file successfully to the DSO and the cube. Is there any special setting for the process chain that I am missing?
    The date format in the flat file is mm/yyyy. I have tried all the options I could, including recreating the DataSource, and so far without any success. Please help me solve this problem as we are in the middle of a testing cycle.
    Thanking you all for your quick response and support all the time.
    Kind Regards,
    Sanjeev

    Hi Sanjeev,
    I believe you are opening the .csv file again after saving it. In this case the initial 0 in a single-digit month (say 02/2010) gets changed to 2/2010 and the resulting file is not readable by the system. Just do not open the file again after you have entered the data in the .csv file and saved and closed it. If required, open the file in Notepad, but not in Excel, in case you want to re-check the data.
    Hope this helps.
    regards,
    biplab

  • Open Hub File ignores comma separator

    Hi friends,
    I've got a problem while extracting a csv file with an open hub service.
    In the system my user (Defaults tab) has the US decimal format, and I can see the values in the InfoCubes correctly.
    But when I extract these values through the open hub service to a csv file, the comma (thousands) separator and decimal point disappear.
    I thought it was a problem with the configuration of my PC, so I modified the regional options, selecting the US format.
    But even after doing this, when I open the csv file with Excel the comma separator and decimal point don't exist.
    I know there is a transaction called something like "RSRVT" or similar that can define the thousands separator, but I don't remember the exact name.
    The field that I'm interested in in the open hub file is defined as NUMC, length 17 with 3 decimals. I've tried defining it with the DEC format or CURR, with a similar result (it failed completely).
    Can anybody help me? This is very urgent.
    Thanks a lot.
    Francis.

    Hi
    check out this link too...
    http://help.sap.com/saphelp_nw2004s/helpdata/en/8e/dbe92341c84242be2c7d3917f1c197/frameset.htm
    Re: Flat file data load
    swetha

  • Need help in loading flat file data, which has \r at the end of a string

    Hi There,
    Need help in loading flat file data, which has \r at the end of a string.
    I have a flat file with three columns. In the data, at the end of the second column there is a \r. Because of this, the control goes to the beginning of the next line, and the rest of the line is loaded into the next line.
    Can someone please help me remove the escape character \r from the data?
    thanks,
    rag

    Have you looked into the sed Linux command? Here are some details:
    When working with txt files, or with the shell in general, it is sometimes necessary to replace certain characters in existing files. In such cases sed can come in handy:
    sed -i 's/foo/bar/g' FILENAME
    The -i option makes sure that the changes are saved back to the file in place – in case you are not sure that sed will work as you expect, use it without the option and provide an output file instead. The s is for search, foo is the pattern you are searching the file for, bar is the replacement string, and the g flag makes sure that all hits on each line are replaced, not just the first one.
    If you have to replace special characters like a dot or a comma, they have to be escaped with a backslash to make clear that you mean the literal character, not some control command:
    sed -i 's/\./,/g' *.txt
    Sed should be available on every standard installation of any distribution. At least on Fedora it is even required by core system parts like udev.
    If this helps, mark as correct or helpful.

  • Converting Flat File data into XML

    Hi Experts,
    Consider the message type of the SENDER system and flat file data
    <dt_sender>
    <root>
    <header1>   0..1
        <f1>
        <f2>
        <f3>
    <header2>   0..1
        <f4>
        <f5>
        <f6>
    <item>        1..unbounded
        <f7>
        <f8>
        <f9>
        <f10>
        <f11>
        <f12>
    </item>
    abc     def     ghi     jkl     mno     pqr
    123     123     123     123     123     123
    456     456      456     456     456     456
    How do I convert the flat file data into the following XML data? Please note that each field value is separated by a TAB delimiter... what parameters should be used?
    <root>
        <Header1>
            <f1>abc</f1>
            <f2>def</f2>
            <f3>ghi</f3>
        </Header1>
        <Header2>
            <f4>jkl</f4>
            <f5>mno</f5>
            <f6>pqr</f6>
        </Header2>
        <item>
            <f7>123</f7>
            <f8>123</f8>
            <f9>123</f9>
            <f10>123</f10>
            <f11>123</f11>
            <f12>123</f12>
            <f7>456</f7>
            <f8>456</f8>
            <f9>456</f9>
            <f10>456</f10>
            <f11>456</f11>
            <f12>456</f12>
        </item>
    points will be given to the correct answers
    Thanks in advance.
    FAisal
    Edited by: Abdul Faisal on Feb 29, 2008 5:53 AM

    Faisal,
    When you read a file with a multiple-recordset structure, each record in the txt file should have a header from which you can identify which segment it should go to, and you identify it by using the keyFieldValue in the file adapter.
    <root>
    <header1> 0..1
    <f1>
    <f2>
    <f3>
    <header2> 0..1
    <f4>
    <f5>
    <f6>
    <item> 1..unbounded
    <f7>
    <f8>
    <f9>
    <f10>
    <f11>
    <f12>
    </item>
    for this input file
    abc def ghi jkl mno pqr
    123 123 123 123 123 123
    456 456 456 456 456 456
    abc def ghi can be read into header1 using the file adapter with the key field value, but using the same file adapter you cannot put jkl mno pqr into header2.
    Instead, you should read the whole row abc def ghi jkl mno pqr into a single field and write a UDF to split the data into header1 and header2.
    Similarly, you have to take care of the item records.
    If your input file is something like this:
    abc def ghi
    jkl mno pqr
    123 123 123 123 123 123
    456 456 456 456 456 456
    abc identifies Header 1,
    JKL identifies Header 2, and so on...
    Read the whole line into a single field and write a UDF to split it into header 1 and header 2, and similarly for item.

  • Load a flat file into BW-BPS using SAP GUI

    Hi,
    We are using BW-BPS version 3.5. I implemented the how-to guide "How to load a flat file into BW-BPS using SAP GUI" successfully without any errors.
    I included three InfoObjects in the text file: cost element, posting period and amount. I included the same three InfoObjects in the file structure in the global data, as specified in the how-to document.
    The flat file format is like this:
    Costelmnt      Postingperiod         Amount   
    XXXXX             #      
    XXXXX             1                          100
    XXXXX             2                           800
    XXXXX             3                           700
    XXXXX             4                           500
    XXXXX             5                           300
    XXXXX             6                           200
    XXXXX             7                           270
    XXXXX             8                           120
    XXXXX             9                           145 
    XXXXX            10                           340
    XXXXX            11                           147
    XXXXX            12                           900 
    I successfully loaded the above flat file into the BPS cube and it is displayed in the layout as well.
    But users are requesting to load the flat file in the format below:
    Costelmnt        Annual(PP=#)   Jan(PP=1)   Feb(PP=2) ........................................Dec(PP=12)  
    XXXXX              Blank                       100           800                                                   900
    Is it possible to load a flat file like this?
    They want to load a single row instead of 13 rows for each cost element.
    How can this be done? Please advise if anybody has come across this requirement.
    In the InfoCube we have only one InfoObject 0FISCPER3 (posting period) and one 0AMOUNT (amount).
    Do we need 13 InfoObjects, one for each posting period and amount?
    Is there any possibility we can implement a user exit like the ones we use in BEx queries?
    Please share your ideas on this.
    Thanks in advance
    Best regards
    SS

    Hi,
    There are 2 ways to do this.
    One is to change the structure of the cube to have 12 key figures for the 12 posting periods.
    Another way is to write an ABAP function module to fetch the values from each record based on the posting period and store them in the cube for the corresponding characteristic. This way, you don't have to change the structure of the cube.
    If this particular cube is not used anywhere else, I would suggest to change the structure itself.
    Hope this helps.

  • How to store the flat file data into custom table?

    Hi,
    I am working on an inbound interface. Can anyone tell me how to store the flat file data in a custom table? What is the procedure?
    Regards,
    Sujan

    Hi,
    You can use the function F4_FILENAME to pick the file from the front end, then use the function WS_UPLOAD to upload it into an internal table:
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      CALL FUNCTION 'F4_FILENAME'   "Function to pick file
        EXPORTING
          field_name = 'p_file'     "file
        IMPORTING
          file_name  = p_file.     "file
      CALL FUNCTION 'WS_UPLOAD'
       EXPORTING
         filename                       = p_file
        TABLES
          data_tab                      = it_line.
    *then loop at it_line splitting it into the fields of your custom table.
    loop at it_line.
              split it_line at ',' into
              itab-name
              itab-surname.
    endloop.
    Then you can insert the values into your table from the itab work area.
    regards
    Isaac Prince

  • Flat file data load - ODS - Look up data in Startroutine

    Hi All,
    There is a requirement where I have two ODS, say ABC and XYZ.
    For both ODS, we load flat file data.
    First we load data to ABC for the current fiscal period.
    During the data load to XYZ, we look up part number data in ABC for the current fiscal period; if data is there, we load the XYZ flat file data for those part numbers.
    My requirement is that I need to fetch part number data from ABC for fiscal period < current fiscal period. Then I need to extract the already loaded data for those part numbers from XYZ and mark the flag field as non-reportable.
    How can we achieve this?
    Please advise.
    Thanks
    Ajay

    Hi,
    Thanks for your reply.
    I have done so. When I add lines of my internal table data to DATA_PACKAGE, it gives the syntax error that both structures are not unique. We have several fields in the XYZ ODS but they are not in the communication structure.
    That's the problem. Also, if I write the code, will it be executed for each data package? I mean, for each data packet processed, my code will fetch the whole data from ABC and then from XYZ. Will there be repetition?
    Is there any other logic I can use?
    Thanks
    Ajay

  • BDC (Flat File Data Validation) - Code

    I am trying to validate flat file data BEFORE performing BDC (call transaction or session).
    Please help me out with the below code for XK02.
    DATA: BEGIN OF itab occurs 0,  "ITAB having flat file data.
          lifnr(16) ,
          bukrs(4),
          ekorg(4),
          END OF itab.
    DATA: BEGIN OF int_final occurs 0,
          lifnr(16) ,
          bukrs(4),
          ekorg(4),
          status(6),
          message(6),
          END OF int_final.
    DATA: int_final TYPE TABLE OF int_final.
    DATA: wa_itab TYPE TABLE OF itab.
    DATA: validate_itab TYPE TABLE OF itab. "VALIDATE_ITAB having master data.
    DATA: wa_validate_itab TYPE TABLE OF itab.
    FORM data_validation .
    SELECT LFB1~LIFNR LFB1~BUKRS LFM1~EKORG INTO TABLE validate_itab
             FROM LFB1 INNER JOIN LFM1 ON LFB1~LIFNR = LFM1~LIFNR.
    IF sy-subrc = 0.
    SORT validate_itab BY lifnr bukrs ekorg.
    ENDIF.
    LOOP AT itab INTO wa_itab.
    READ TABLE validate_itab WITH KEY
    lifnr = itab-lifnr
    bukrs = itab-bukrs
    ekorg = itab-ekorg
    BINARY SEARCH.
    IF sy-subrc NE 0.
    PERFORM f_error_log USING text-005. "Invalid Value Set
    CONTINUE.
    ENDIF.
    ENDLOOP.
    ENDFORM.                    " data_validation
    *&      Form  f_error_log
    FORM f_error_log USING l_message TYPE string.
    CLEAR : fs_final.
    fs_final-lifnr = itab-lifnr.
    fs_final-bukrs = itab-bukrs.
    fs_final-ekorg = itab-ekorg.
    fs_final-status = text-014. "Error
    fs_final-message = l_message.
    APPEND fs_final TO int_final.
    ENDFORM.                    " f_error_log
    Thanks..

    Hi Gaurav,
    I have a small question on the validation.
    If LFM1~LIFNR does not contain any value, how are you comparing both? And one more thing: after getting the data using GUI_UPLOAD you will get the data into validate_tab.
    Loop at Vlidate_tab into wa_itab.
    SELECT LFB1~LIFNR LFB1~BUKRS LFM1~EKORG INTO TABLE validate_itab
    FROM LFB1 INNER JOIN LFM1 ON LFB1~LIFNR = wa_itab-lifnr.
    endloop.
    Thanks,

  • Errors when loading flat file data

    We just tested loading a very simple flat file with only two lines, and the two lines of data in the InfoSource preview are correct. But when we run the InfoPackage to load the data, the InfoPackage monitor shows the following errors (listed below):
    Error getting SID for ODS object ZDM_SUBS
    Activation of data records from ODS object ZDM_SUBS terminated
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Error when assigning SID (details in long text)
    Value 'Bottom' (hex. '0042006F00740074006F006D') of characteristic ZRATEPLN contains invalid characters
    Value 'Dealer' (hex. '004400650061006C00650072') of characteristic ZCHANNEL contains invalid characters
    Value 'Bottom' (hex. '0042006F00740074006F006D') of characteristic ZRATEPLN contains invalid characters
    Value '19884/' of characteristic 0DATE is not a number with 000008 spaces
    Value '/19812' of characteristic 0DATE is not a number with 000008 spaces
    Value '19884/' of characteristic 0DATE is not a number with 000008 spaces
    In the flat file (an Excel sheet saved as a CSV file), each row of data has two fields, start_date and end_date, and the date format is MM/DD/YYYY. In the transfer rule, we convert the date format from MM/DD/YYYY to YYYYMMDD, which is required by the DATS InfoObject type in BW. If you need the Excel sheet of data in order to answer our questions about the above errors, you can give us your e-mail address and we can send you the simple two-row Excel sheet.
    Thanks!

    Hi Kevin,
    1. You can use lowercase letters in the values for your characteristics, provided you have checked the lowercase checkbox on the General tab page of the Create Characteristic screen. But when you do so, no master data tables, text tables, or another level of attributes underneath are allowed.
                            OR
    Use only uppercase letters in your characteristic, leaving the above-mentioned box unchecked.
    2. The date format in the CSV file should be yyyymmdd. It should have 8 characters. I guess there is something strange in your calendar days since I could not find 8 characters irrespective of the order. Do not forget to use zeroes.
    Hope this works.
    Reward if it is helpful.
    Regards,
    Balaji

  • Flat File Data Loads to BI 7.0

    Hi Experts,
    Please let me know the best approach I should follow for the below scenario of flat file data loads.
    I will get data in Excel, with two worksheets, from the user.
    My requirement is to place the file in a central location available to the user and to BW, to update it if any changes are necessary, and to load the data (full) to BW from the file if there are any changes.
    Please let me know how to deal with this scenario of two worksheets in a central location...
    Thanks

    The easiest thing would be to use a DSO with a change log to handle the changes to pass on to any cubes, and load a full every night.
    Then let the change log worry about any changes to the workbook.
    You have to be careful about the DSO keys, though, for this to work properly.
    Now, to automate the loads - just how are you planning to create the InfoPackage, as it will only read a csv and not the binary xls?
    Well, it will read the binary xls if you use DB Connect with a JDBC driver to read the xls (that's next on my list of things to do - but if you are, as your user id suggests, a "BW learner", then that may be a bit complicated).
    The only other thing to do is to write a macro that automatically creates the csv file on the app server when the user quits the xls.
    Or of course you can just dump the csv each night - but then that is a manual task, and in the systems I design I hate manual tasks, as staff go on holiday and people change jobs and it's not really very SOX compliant.

  • Conversion_Exit_Cunit error occurred while loading the flat file data

    Hi
    I am trying to load flat file data into an ODS and I am getting an error like 'Conversion Exit CUNIT error'.
    Also, we are using 0UNIT in the ODS, for which CUNIT is the conversion exit.
    Can you please suggest why I am getting this error?

    Hi Sunil
    Please check whether you are loading the flat file data from the application server or from the client workstation.
    If you are loading from the client workstation, you may face a problem of this type.
    Try to check if there is any change in the format of the file.
    Delete the spaces at the end of the file.

  • Splitting flat file data

    I want to store the flat file data temporarily and also as a backup, but its size is so huge that processing it in one go is a difficult task. Is there any 'Z' program that will split the records and put them in new files? Please give me an idea; your answer will be rewarded with maximum points if it helps me with this issue.

    Hi Manjula,
    Check out the below program; it will help you solve your requirement. It splits the records as per the specified limit and puts them in new files. I think it will help you.
    REPORT zc1_split_file MESSAGE-ID ztestmsg.
    TABLES: mara.
    DATA: BEGIN OF input,
    mandt LIKE mara-mandt,
    matnr LIKE mara-matnr,
    ersda LIKE mara-ersda,
    ernam LIKE mara-ernam,
    matkl LIKE mara-matkl,
    END OF input.
    DATA: i_mara_tab LIKE TABLE OF input WITH HEADER LINE,
    i_mara_temp LIKE TABLE OF input,
    w_mara_tab LIKE LINE OF i_mara_tab,
    v_newfile(120) TYPE c,
    v_no_lines TYPE i,
    v_split(4) TYPE n VALUE 1,
    v_count(4) TYPE n VALUE 1.
    SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
    PARAMETERS: p_file TYPE rlgrap-filename MEMORY ID file,
    p_count(4) TYPE n.
    SELECTION-SCREEN END OF BLOCK b1.
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
    * Displaying the dialog box for opening the file ************
    CALL FUNCTION 'WS_FILENAME_GET'
    EXPORTING
    mask = '.,..'
    IMPORTING
    filename = p_file.
    START-OF-SELECTION.
    IF p_file IS INITIAL. " check to ensure file.
    MESSAGE i001.
    EXIT.
    ENDIF.
    IF p_count IS INITIAL. " check to ensure count.
    MESSAGE i002.
    EXIT.
    ENDIF.
    CALL FUNCTION 'WS_UPLOAD'
    EXPORTING
    filename = p_file
    filetype = 'DAT'
    TABLES
    data_tab = i_mara_tab
    EXCEPTIONS
    conversion_error = 1
    file_open_error = 2
    file_read_error = 3
    invalid_type = 4
    no_batch = 5
    unknown_error = 6
    invalid_table_width = 7
    gui_refuse_filetransfer = 8
    customer_error = 9
    OTHERS = 10.
    IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    LOOP AT i_mara_tab INTO w_mara_tab.
    IF v_count < p_count.
    APPEND w_mara_tab TO i_mara_temp.
    v_count = v_count + 1.
    ELSE.
    APPEND w_mara_tab TO i_mara_temp.
    v_count = v_count + 1.
    PERFORM split_records.
    REFRESH i_mara_temp.
    v_count = 1.
    v_split = v_split + 1.
    ENDIF.
    ENDLOOP.
    IF v_count NE 1.
    PERFORM split_records.
    ENDIF.
    *  -->  p1 text
    *  <--  p2 text
    FORM split_records.
    CONCATENATE p_file v_split INTO v_newfile.
    CALL FUNCTION 'WS_DOWNLOAD'
    EXPORTING
    filename = v_newfile
    filetype = 'DAT'
    mode = 'A'
    TABLES
    data_tab = i_mara_temp
    EXCEPTIONS
    file_open_error = 1
    file_write_error = 2
    invalid_filesize = 3
    invalid_type = 4
    no_batch = 5
    unknown_error = 6
    invalid_table_width = 7
    gui_refuse_filetransfer = 8
    customer_error = 9
    no_authority = 10
    OTHERS = 11.
    IF sy-subrc <> 0.
    MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.
    WRITE:/'Data has been written into:' , v_newfile.
    ENDFORM.
    **Reward points if found useful....
    Regards,
    Manikandan.A

  • How to Merge Flat file data and DSO Data

    I have one DSO A which has more than 20 objects (5 of them are keys), including 0COMP_CODE and 0GL_ACCOUNT. I have a flat file which has the following fields: COMPANY CODE, GL ACCOUNT, SUPERVISOR, EXECUTIVE and ANALYST. My requirement is to get a DSO with consolidated data from DSO A and the flat file data. That means, for example, in my DSO A I have company code US01, so when I merge the data into DSO C it should display the supervisor, executive and analyst data for DSO A as well.
    When I add the 2 DSOs A (ECC) and B (flat file), C is getting the data from B and appending the flat file data at the end... So I want to merge both: DSO A should match up with the DSO B flat file company code and GL account and should display the remaining fields.
    I think it's clear. Can anyone let me know the solution ASAP...
    Note: We are on BI7.0 not 3.5...
    Thanks in advance

    I have the following key fields in DSO A:
    0COMP_CODE, 0GL_ACCOUNT, 0CHRT_ACCT, 0FISCVARNT, 0FISCPER, 0AC_DOC_NUMBER, 0ITEM_NUM.
    Here are all the fields in the flat file:
    COMPANY CODE, GL ACCOUNT, SUPERVISOR, ANALYST, EXECUTIVE.
    Here I am taking 0COMP_CODE and 0GL_ACCOUNT as key fields in DSO B for the flat file...
    For every month there will be 5 to 6000 records in the flat file...
    Thanks

  • Urgent: Transport of Flat File Data Source

    While transporting a flat file DataSource, is it required to transport it first in a separate request, or can everything go in one request?
    Thank you,
    sam

    Hi,
    In this case it can go in one request as no replication is needed. But for other datasources, you need to send the datasources first in a separate request and then replicate the datasource in BW before you send the BW changes.
    Cheers,
    Kedar
