Flat file has week numbers in header columns

Hi Guys,
             We have a flat file in which the first 5 columns are characteristics; columns 6 through 57 carry week numbers, starting from the current week through the same week next year, as column headings.
Below these headings are the quantities, differing for those 5 objects.
To upload this data into BW based on calweek, is there a program to convert these columns into rows?

Hi Ganesh,
                Do you have sample code for this requirement?
Because it is a weekly file, we need to load it into BW.
Every week the 6th column changes to the new current week, and from then on, for up to 52 weeks, it shows the forecast quantity.
Please send me any sample code you have.
My mail id:  [email protected]
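
A minimal sketch of the week-columns-to-rows conversion, assuming a comma-separated file (in BW itself the same logic would typically go into an ABAP transfer or start routine; the file names and delimiter here are assumptions):

# Minimal sketch: unpivot week columns (6..57) into one row per calweek.
# Assumes a comma-separated file whose header row carries the week
# numbers in columns 6 onwards; adjust delimiter and paths as needed.
import csv

with open('forecast_wide.csv', newline='') as src, \
     open('forecast_long.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    header = next(reader)
    weeks = header[5:]                               # columns 6..57 hold the calweeks
    writer.writerow(header[:5] + ['CALWEEK', 'QUANTITY'])
    for row in reader:
        chars, quantities = row[:5], row[5:]
        for week, qty in zip(weeks, quantities):
            writer.writerow(chars + [week, qty])     # one output row per week

The long file can then be loaded the usual way, with the calweek as an ordinary column.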

Similar Messages

  • Import flat file to multiple tables based on identifier column

    Hello,
    I am trying to set up a package that will import one pipe-delimited flat file (a utility bill) into multiple data tables based on the value of the first column.  I have been told it is similar in format to an EDI file, but there are some differences.
    The number of columns is consistent for records with the same first-column value.  Meaning a record that has '00' in the first column will always have 10 columns; a record that has '01' in the first column will always have 9 columns; etc.
    Each value in the first column represents a separate destination data table.  Meaning a record that has '00' in the first column should be output to table '00'; a record that has '01' in the first column should be output to table '01'; etc.  All
    destination tables reside on the same SQL Server.
    Identifier columns can repeat multiple times throughout the flat file.  Meaning a record that starts with '01' may be repeated multiple times in the same file.
    Sample Data:
    00|XXXXXXXX|XXX|XXXXXXXX|XXXXXX|XXXX|X|XXXXXXXXXX|XX|XXXXX
    01|XXXXXXXXXXX|XXX|XXXXXXXX|XXXXX|XXXXXXXXXXXXXXXXXXXX|XXXXXXXXXX|XXXXXXX|XXXXXXXXXXXXXX
    02|XXXXXXXXXXX|XXXXXXXX|XXXXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX
    04|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX
    05|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    07|XXXXXXXXXXXXX|X|XXXXXXXXXXXXXXX|XXX|XXXXXXXX|XXXX|XXXXXXX|XXXXXXXXXXX
    01|XXXXXXXXXXX|XXX|XXXXXXXX|XXXXX|XXXXXXXXXXXXXXXXXXXX|XXXXXXXXXX|XXXXXXX|XXXXXXXXXXXXXX
    02|XXXXXXXXXXX|XXXXXXXX|XXXXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX|XXXXX
    04|XXXXXXXXXXX|XXXXXXXXXXXXX|XXX|XXXXXXXX
    Any help would be appreciated.

    Hi koldar.308,
    If there are few distinct values in the first column, we can use a Flat File Source to connect to that flat file, then use a Conditional Split Transformation to split the rows on the first column, and then load the data into multiple tables with OLE DB Destinations
    based on the outputs of the Conditional Split.
    After testing the issue in my environment, please refer to the following steps to achieve this requirement:
    Drag a Flat File Source and connect it to the flat file with a Flat File Connection Manager.
    Drag a Conditional Split Transformation and connect it to the Flat File Source.
    Double-click the Conditional Split Transformation and add one output per first-column value.
    Drag the same number of OLE DB Destinations as there are Conditional Split outputs, and connect each one to the Conditional Split with one case output.
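    Outside SSIS, the same routing can be prototyped in a few lines; a minimal Python sketch, assuming pipe-delimited input and one staging file per identifier (all names hypothetical):
    # Minimal sketch: route each record to a per-identifier output based
    # on the value of the first column, mirroring the Conditional Split.
    outputs = {}
    with open('utility_bill.txt') as src:
        for line in src:
            record_type = line.split('|', 1)[0]      # '00', '01', ...
            if record_type not in outputs:
                outputs[record_type] = open('table_%s.txt' % record_type, 'w')
            outputs[record_type].write(line)
    for f in outputs.values():
        f.close()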
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • Flat file data upload - happens for only 1 column

    Dear Experts,
    Goal: To upload data from flat file (with 3 columns-char3,language,text) to characteristic (texts).
    Problem: I am loading through a CSV file, but when I click on preview I can see data only in the first column; the other two columns are empty.
    More details:
    I have created this data source in data source tab.
    Adapter - Load text-type file from workstation.
    Data format - Separated with separator (CSV).
    Data Separator     - ,
    Escape Sign     - "
    File name      - C:\Documents and Settings\hans\Desktop\mtype.txt
    Fields
    /BIC/TSMOVTYPE           CHAR     3
    LANGU               CHAR     2
    TXTSH               CHAR     20
    Can anyone give me an idea?
    regards
    BI Learner

    Hi,
    You can try this:
    1. After ESCAPE SIGN there is an option "Number of rows to be left". Put 1 there.
    2. In your flat file, leave the 1st row blank or put anything in the first row for all three columns, then save as a CSV file. Note that the file must have the same field sequence as the data source.
    3. Check your transformations. The data source fields should be mapped correctly to the right side.
    Try the load now. Hope it helps.
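    For example, with one dummy row added to be skipped, the CSV might look like this (values illustrative):
    X,X,X
    101,EN,Goods receipt
    101,DE,Wareneingang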
    Preet

  • Unix Flat File: Remove header and trailer and put in another file.

    Hi,
    I have a source flat file on a Unix box, with a header and a trailer.
    I want to put the header and trailer in one file and the data in another file.
    I tried the following command in Unix and it works, but I am not getting the header and trailer into another file.
    sed '1d;$d' input_source.txt > output_data.txt
    Also, how will I use an OS command for this in ODI?
    Guide me.
    Thanks
    Ashwini

    Hi Ashwini,
    You can run OS commands in a package using an ODI Tool: OdiOSCommand.
    It is also possible to execute OS commands in an ODI procedure using the Operating System or Jython technologies.
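    Since a procedure can use the Jython technology, the split itself could even be done there directly; a minimal sketch (the file names follow the question, everything else is an assumption):
    # Minimal sketch: header and trailer to one file, data to another.
    # Complements the sed command above; runs under plain Python or Jython.
    src = open('input_source.txt')
    lines = src.readlines()
    src.close()
    ht = open('header_trailer.txt', 'w')
    ht.writelines([lines[0], lines[-1]])   # first and last line
    ht.close()
    data = open('output_data.txt', 'w')
    data.writelines(lines[1:-1])           # everything in between
    data.close()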
    There should be some articles about this on metalink (http://metalink.oracle.com).
    Thanks,
    Julien

  • Validate Technical Information from Flat File header record

    Hi Experts,
    I would like to know the best way to validate data from a flat file where the layout has a header record carrying the number of records sent and the total amount spread across the file content.
    Please notice we have a common layout where the first 3 fields are used only by the header record (record type = 00).
    For example:
    RECORD TYPE | NUMBER OF RECORDS | TOTAL AMOUNT | COST CENTER | AMOUNT
         00     |         3         |    250,00    |             |
         01     |                   |              |   1000000   | 100,00
         01     |                   |              |   2000000   | 100,00
         01     |                   |              |   1000000   |  50,00
    So, let's suppose I received the file content above and recorded all 4 records in a first DSO. As the next step I would like to load into a different DSO the records where RECORD TYPE = 01 (business data). That's OK.
    The header record where RECORD TYPE = 00 (technical data) has to be uploaded to a log DSO, validating the total amount (250,00) and the number of records (3) sent in the file. This line will later be exported as validation information.
    Well, any suggestions how to validate the data as described above?
    Any information will be fully appreciated.
    Thanks.
    Fábio
    Edited by: Fabio Chaves on Jan 30, 2010 9:30 PM

    Hi Fabio,
    You seem to want to compare the first record of the file (the header record) with the collated values of the rest of the records in the flat file.
    A simple way would be to do the validation separately and then load the file:
    create a function module to read the file,
    compare the first record with the collated values of the rest of the file,
    and if the file turns out to be okay, the loading is your choice...
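    A rough sketch of that check (the real implementation would be an ABAP function module; this Python version just shows the logic, using the layout from the example above, with illustrative parsing):
    # Minimal sketch: compare the header record's declared count and
    # total against the collated type-01 records.
    count, total = 0, 0.0
    expected_count, expected_total = None, None
    with open('amounts.txt') as f:
        for line in f:
            fields = line.split()
            if fields[0] == '00':                        # header record
                expected_count = int(fields[1])
                expected_total = float(fields[2].replace(',', '.'))
            elif fields[0] == '01':                      # business record
                count += 1
                total += float(fields[-1].replace(',', '.'))
    ok = (count == expected_count and abs(total - expected_total) < 0.005)
    print('file consistent:', ok)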
    rakesh

  • Flat File Split

    Hi,
    I will be loading more than 3 million records from a flat file every week.
    I tuned RSCUSTV6 and the InfoCube dimensions (creating line-item dims), use number-range buffering, have a fixed-length ASCII file, and load the target and PSA in parallel. The load is now taking 2 hours.
    To speed this up I want to split the flat file into 4 pieces and load them in parallel. However, these loads lock each other (SM12, ENQUEUE) when I start them manually.
    What to do?
    Solved the problem by changing a setting in the InfoPackage: Only load Data Target. The load now takes 30 min for 3 million records.
    Message was edited by: F. Padt

    Hi,
    If that's happening, then I would suggest trying different upload methods (put your files on the application server and load them in the background).
    If that doesn't work, I would create 4 different flat-file InfoSources (and DataSources) so that one piece can be loaded from each InfoSource.
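    For producing the pieces themselves, a minimal round-robin split sketch (assuming a line-oriented file; names hypothetical):
    # Minimal sketch: split one large flat file into 4 pieces for
    # parallel loading.
    N = 4
    parts = [open('week_part%d.txt' % i, 'w') for i in range(N)]
    with open('week.txt') as src:
        for i, line in enumerate(src):
            parts[i % N].write(line)
    for p in parts:
        p.close()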
    Hope it helps,
    David.

  • How to upload schedule lines from flat files to SAP

    dear all,
    I want to upload schedule lines from flat files into SAP schedule lines,
    but the flat files have 15 schedule lines and the data is per date.
    So how do I upload that, given that the flat files have more fields than the SAP screen?
    We have more than 6 items
    and 15 schedule lines, which is about 90 data records to upload
    for one customer every 15 days.
    So how do I do this?
    Is there a direct way on the functional side,
    without the help of any ABAP?
    My user will be doing it,
    so he needs a permanent solution.
    with regards
    subrat

    Hi Subrat ,
    You can upload the data (either master or transaction data) with the help of LSMW. All you need to do is go through LSMW and do it; you can use batch-input recording / BAPI / IDoc, whichever suits. Here I am sending the LSMW notes; go through them and do the work.
    Once you create the LSMW project, you can ask the user for the data, or explain the program to the user, who can then run the flat file to upload the data.
    If you require LSMW material, just send me a blank mail. My mail id is [email protected]
    Reward if Helpful.
    Regards,
    Praveen Kumar.D

  • Flat file to target table map in 11G

    Hi,
    I am having many problems with my 10G R1 map that has been migrated to 11G.
    All my 10G map does is map from a flat file to a stage table. It also had one constant operator, and I am getting an error because of this constant that I set up in the map.
    When I validate this constant variable I get the error message:
    API8534:Validation no supported for language SQLLOADER,property Expression for SAASD_MAP
    -- Generator Version : 11.1.0.7.0
    -- Created Date : Mon May 11 21:16:25 CDT 2009
    -- Modified Date : Mon May 11 21:16:25 CDT 2009
    -- Created By : OWB_WUSER
    -- Modified By : OWB_WUSER
    -- Generated Object Type : SQL*Loader Control File
    -- Generated Object Name : "SADAD_MAP"
    -- Copyright © 2000, 2007, Oracle. All rights reserved.
    OPTIONS (BINDSIZE=50000,ERRORS=50,ROWS=200,READSIZE=65536)
    LOAD DATA
    CHARACTERSET WE8MSWIN1252
    INFILE '{{ETL_FILE_LOC.RootPath}}{{}}dgp.dat'
    BADFILE '{{ETL_FILE_LOC.RootPath}}{{}}dgp.bad'
    DISCARDFILE '{{ETL_FILE_LOC.RootPath}}{{}}dgp.dis'
    CONCATENATE 1
    INTO TABLE "STAGING"."DGP_STAGG"
    APPEND
    REENABLE DISABLED_CONSTRAINTS
    WHEN (1:2) = 'EM'
    "PHONE_COUNTRY" CONSTANT ''Asia'',
    "MRI" POSITION(1:2) CHAR(2) ,
    As you can see, 11G somehow does not allow me to have a constant operator in flat-file maps, and the constant for the PHONE_COUNTRY column is enclosed in doubled quotes instead of single quotes [it should be 'Asia' and not ''Asia''].
    Can anyone tell me why this is happening? I also have a problem with the READ BUFFER property, which in 10G used to default to '4' but was ignored when generating the .ctl; in 11G this property generates a READ BUFFERS statement in the .ctl. Any help with this would be great.
    Edited by: user591315 on May 12, 2009 7:39 AM

    > if you note 11G some how does not allow me to have constant operator in flat file maps. and the constant for PHONE_COUNTRY column is enclosed in double quotes insted of single quotes.
    Hi,
    please open the same mapping in 10g R2; it is the same there.
    It uses double quotes around the column name.
    By the way, execute the mapping and then check whether you get the desired result.

  • Urgent: Flat file load issue

    Hi Guru's,
    We are loading into a data target (ODS) via flat file. The problem is that the flat file shows the date field values correctly with 8 characters, but when we preview, it shows 7 characters and the load does not go through.
    Does anyone know where the problem is: why does the preview screen show 7 characters for the date when the flat file has 8?
    Thanks
    MK

    Hi Bhanu,
    How do I check whether a conversion is specified? And another thing: it is not just one field; we have 6 date fields, and all of them show 7 characters in the PSA after loading, whereas the flat file has 8 characters in all 6 date fields.
    In the PSA I checked the error message; it shows 2 error messages:
    First Error message:
    The value '1# ' from field /BIC/ZACTDAYS is not convertible into the DDIC data type QUAN of the InfoObject in data record 7. The field content could not be transferred into the communication structure format.
    System Response
    The data to be loaded has a data error, or field /BIC/ZACTDAYS in the transfer structure is not mapped to a suitable InfoObject.
    The conversion of the transfer structure to the communication structure was terminated. The processing of data records with errors was continued in accordance with the settings in error handling for the InfoPackage (tab page: Update Parameters).
    Check the data conformity of your data source in the source system.
    On the 'Assignment IOBJ - Field' tab page in the transfer rules, check the InfoObject-field assignment for transfer-structure field /BIC/ZACTDAYS.
    If the data was temporarily saved in the PSA, you can also check the received data for consistency in the PSA.
    If the data is available and is correct in the source system and in the PSA, you can activate debugging in the transfer rules by using update simulation on the Detail tab page in the monitor. This enables you to perform an error analysis for the transfer rules. You need experience of ABAP to do this.
    2nd Error Message:
    Diagnosis
    The transfer program attempted to execute an invalid arithmetic operation.
    System Response
    Processing was terminated and indicated as incorrect. You can carry out the update again any time you choose.
    Procedure
    1. Check that the data is accurate.
       - In particular, check that the data types and lengths of the transferred data agree with the definitions of the InfoSource, and that the fixed values and routines defined in the transfer rules are correct.
       - The error may have arisen because you didn't specify the existing headers.
    2. If necessary, correct the data error and start processing again.
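    One quick way to narrow this down is to scan the file itself for malformed values before loading; a minimal sketch (the delimiter and the positions of the 6 date fields are hypothetical):
    # Minimal sketch: flag rows whose date fields are not exactly
    # 8 characters (YYYYMMDD).
    date_cols = [3, 4, 5, 6, 7, 8]        # hypothetical positions of the date fields
    with open('load_file.csv') as f:
        for lineno, line in enumerate(f, 1):
            fields = line.rstrip('\n').split(',')
            for c in date_cols:
                if c < len(fields) and len(fields[c]) != 8:
                    print(lineno, c, repr(fields[c]))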
    Thanks
    MK

  • File Adapter: Flat-file to XML

    New to the Oracle ESB.
    I am using the file adapter to read in a fixed-length flat file and have mapped it to an XSD; now I just want to dump the resulting XML into a directory. I don't see an obvious way to do this without converting it back to a flat file (by pointing the file writer back at a similar flat-file XSD). Am I missing something obvious?
    Thank you!

    The File Adapter's content conversion does not currently support such a nested structure.
    The only format supported is the one shown in this link:
    http://help.sap.com/saphelp_nw04/helpdata/en/2c/181077dd7d6b4ea6a8029b20bf7e55/content.htm
    Either write a module that will do this conversion, or change the data type of the source to the format shown on help.sap.com.
    Regards
    Bhavesh

  • 3 flat files grouped into 1 flat file

    Hi XI Gurus!
    Is there a way to merge the contents of all 3 flat files (source) into 1 flat file without using BPM? And if I do have to use BPM, could you please give me a general idea of how to go about it? After merging, I have to map the fields to an IDoc (target). Thanks so much for this.
    Regards,
    SAPenthusiast

    Hi,
    If you are not looking for anything specific and just want to append the second file to the end of the first and so on, then write a shell script to merge all the files into one, and call that shell script in the file adapter with the option "Run OS Command before processing"; you can then use the output file directly in a File-to-IDoc mapping with FCC.
    Assumption of this approach:
    1. I assume that your flat files have the same structure, so that when you use FCC it will parse all fields.
    If that is not the case and the flat files have different structures, then you have to use BPM to merge them. As suggested above, use the BPM Collect pattern from the SAP Basis component of your ESR, or create something similar in BPM.
    For your scenario using BPM, you have to merge the files first; the output file of that scenario then feeds a second scenario that maps the merged file to the IDoc.
    In the BPM case you will have 2 scenarios: file to file (merge the flat files into one),
    and a 2nd, file to IDoc.
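    For the simple same-structure case, the merge script itself is trivial; sketched here in Python rather than shell (file names hypothetical):
    # Minimal sketch: append the three source files into one merged
    # file, equivalent to a one-line 'cat' in the OS command.
    with open('merged.txt', 'w') as dst:
        for name in ['file1.txt', 'file2.txt', 'file3.txt']:
            with open(name) as src:
                dst.write(src.read())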
    Regards
    Aashish Sinha

  • Regarding ssis - Filter first 5 rows from flat file

    Hi,
    I have a requirement like this: every day we receive 10-15 flat files that we need to load into a SQL table. Nothing is complex here; we need to use a Foreach Loop Container, Data Flow Task, etc. The issue is that the top 5 rows of each flat file contain
    client information (we need to delete the first 5 rows) before inserting into the SQL table. Please suggest how to develop the package.

    You need to use a Script Task to read the files one by one and remove the first 5 rows from each file. The file name/path can be stored in a variable and read from the script code. Insert the Script Task inside the Foreach Loop Container, just before importing
    the file.
    Refer to this link on removing a line in .NET:
    http://stackoverflow.com/questions/7008542/removing-the-first-line-of-a-text-file-in-c-sharp
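    The same idea, sketched here in Python rather than the Script Task's C# (file name hypothetical):
    # Minimal sketch: drop the first 5 client-information rows in place
    # so the remainder can be bulk-loaded.
    with open('daily_feed.txt') as f:
        lines = f.readlines()
    with open('daily_feed.txt', 'w') as f:
        f.writelines(lines[5:])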
    Regards, RSingh

  • Loading several flat files

    I want to load several flat files each week with the same rules file. The files are, for instance, week12.txt, week13.txt, etc. They are all in the same directory and all need to be loaded each time. Is it possible to use a wildcard with ESSCMD or MaxL to create a simple load script? Or does anybody know another way to load several files?

    If you are using a Win OS, you could use a VBS to write a script. Call the VBS in a DOS batch file, then execute the newly created script. Note, in this example I didn't fully build the IMPORT command. Additionally, I would add some form of error handling and logging.
    Const sDataPath = "c:\Files"
    Dim fso, oFolder, oFiles, oFile, oScript
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set oFolder = fso.GetFolder(sDataPath)
    Set oFiles = oFolder.Files
    Set oScript = fso.CreateTextFile("c:\script.scr", 2, False)
    oScript.WriteLine "LOGIN " & CHR(34) & "ASPEN" & CHR(34) & " " & CHR(34) & "ADMIN" & CHR(34) & " " & CHR(34) & "PASSWORD" & CHR(34) & ";"
    oScript.WriteLine "SELECT " & CHR(34) & "SAMPLE" & CHR(34) & " " & CHR(34) & "SAMPLE" & CHR(34) & ";"
    For Each oFile In oFiles
        oScript.WriteLine " IMPORT 3 " & CHR(34) & oFile.Name & CHR(34) & ";"
    Next
    oScript.WriteLine "LOGOUT;"
    oScript.WriteLine "EXIT;"
    oScript.Close
    Set oFolder = Nothing
    Set oFiles = Nothing
    Set oScript = Nothing
    Set fso = Nothing

  • Modifying the number of records to skip after importing the flat file

    I imported a flat file whose first row was the column header. I also created an external table for that flat file. sqlldr is skipping the first record during the load. Is there a way to change this in the flat-file module or the external table?

    If you marked this row as the header in the sample wizard then you will see the following in the External Table:
    ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    CHARACTERSET WE8MSWIN1252
    STRING SIZES ARE IN BYTES
    NOBADFILE
    NODISCARDFILE
    NOLOGFILE
    SKIP 1
    )
    So the external table is skipping this row.
    Now, the issue with changing it is interesting, because you cannot change this after the sampling... I think this is a bug, which I will file.
    Let me know if this answers the question,
    Jean-Pierre

  • Different formats of the flat file for the same target

    In our deployment, we use plugin code to extract the csv files in the required format. The customers are on the same version of the datamart, but on different versions of the source database, from 3.x to 4.5, depending on which version of the application they are using. In 4.0 we introduced a new column, email, in the user table of the source database, and accordingly the plugin adds the field to the csv file. But not all customers will get the upgraded version of the plugin at the same time, so the ETL code needs to decide which data flow to process, depending on the format of the csv file, in order to load data into the same target table. I made the email field in the target table nullable, but it still expects the same csv format, with a delimiter for the null value.
    Need help to achieve this. Can I read the structure of the flat file in DS, or get the count of delimiters, so that I can use a conditional to pick a different data flow based on the format of the flat files?
    Can I make the email column in the flat file optional?
    Thanks much in advance.

    You can add an email column that maps to NULL in a query transform for the source that does not contain this column.
    Or else you can define two different file formats that map to the same file: one with the column and one without.
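    If you do want to branch on the file's shape first, a minimal sketch of the delimiter-count check (done outside DS here, in Python; the expected column count and file name are hypothetical):
    # Minimal sketch: count delimiters in the first row to decide which
    # data flow (with or without the email column) should run.
    with open('users.csv') as f:
        ncols = f.readline().count(',') + 1
    has_email = (ncols == 9)        # hypothetical: upgraded plugin emits 9 columns
    print('email column present:', has_email)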
