Space - Flat file

Hi,
I am doing a flat file load of about 2.5 GB, loading the file into ODS objects using a process chain. A program runs at the beginning of the process chain and splits the file into two other files. We tested this in Dev and it worked fine with a smaller volume of data, but with this big file the program has now been running for 6 hours. I am thinking this may be because of space on the Unix server. Any ideas?
Thanks

Ajay
Please check whether the job is still active. As you rightly said, there may be other jobs running on the same server that slow down performance. If the job is active, let it run.
Thanks
Sat
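
If the split program first reads the whole 2.5 GB file into an internal table before writing the two output files, memory rather than disk space may be the real bottleneck. A minimal sketch of a streaming split on the application server (the file names and the split rule are hypothetical):

DATA: lv_in   TYPE string VALUE '/data/in/bigfile.csv',        " hypothetical input file
      lv_out1 TYPE string VALUE '/data/in/bigfile_part1.csv',  " hypothetical output files
      lv_out2 TYPE string VALUE '/data/in/bigfile_part2.csv',
      lv_line TYPE string.

OPEN DATASET lv_in   FOR INPUT  IN TEXT MODE ENCODING DEFAULT.
OPEN DATASET lv_out1 FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
OPEN DATASET lv_out2 FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

DO.
  READ DATASET lv_in INTO lv_line.
  IF sy-subrc <> 0.
    EXIT.                                 " end of file reached
  ENDIF.
* Hypothetical split rule: send alternate records to the two output files
  IF sy-index MOD 2 = 1.
    TRANSFER lv_line TO lv_out1.
  ELSE.
    TRANSFER lv_line TO lv_out2.
  ENDIF.
ENDDO.

CLOSE DATASET: lv_in, lv_out1, lv_out2.

Processed this way, only one record is held in memory at a time; if the job still runs for hours, ask the Unix admin to check the free space in the target directory before and during the run.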

Similar Messages

  • How to delete extra tab space in the flat file??

    Hi all,
    I have a flat file on the presentation server which has 20,000 records. Each record has 16 fields, and the spacing between the fields varies; what I want is exactly one tab space between each pair of fields.
    Thanks & Regards
    Jerry

    Hi,
    I don't want to join the fields but to separate them with a tab space (see the sketch below).
    Regards
    Jerry
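
    A minimal sketch for normalizing the spacing, assuming each record has been read into a string and the field values themselves contain no embedded blanks (the variable name is hypothetical):

    DATA: lv_line TYPE string.

*   lv_line holds one record whose fields are padded with a varying number of blanks
    CONDENSE lv_line.
    REPLACE ALL OCCURRENCES OF ` ` IN lv_line
            WITH cl_abap_char_utilities=>horizontal_tab.
*   lv_line now contains exactly one tab between consecutive fields

    CONDENSE first collapses every run of blanks into a single blank, so the REPLACE then puts exactly one tab at each field boundary.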

  • How to generate blank spaces at end of the record in a flat file with fixed

    Hi,
    I am generating a flat file with fixed length.
    In my ABAP program, I am able to see the spaces at the end of the records in the debugger, but when I download the file to the application server I am not able to see those spaces.
    How can I generate blank spaces at the end of the record in a flat file?
    Please update
    Thank you

    How are you downloading the file? And how are you looking at the file on the application server?
    Can you provide snippets of your code?
    Cheers
    John
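
    In text mode, TRANSFER cuts off the trailing blanks of a fixed-length character field. A common workaround (a sketch, assuming a 120-character record and a hypothetical target path) is to transfer an explicit length:

    DATA: lv_file     TYPE string VALUE '/tmp/outfile.txt',   " hypothetical target file
          ls_rec(120) TYPE c.                                 " fixed-length record, may end in blanks

    OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
*   LENGTH forces all 120 characters out, so the trailing blanks survive
    TRANSFER ls_rec TO lv_file LENGTH 120.
    CLOSE DATASET lv_file.

    Alternatively, writing the file in binary mode keeps the full field length, but then you have to append the end-of-line characters yourself.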

  • Handling flat file conversion with spaces in between at sender side

    Hi,
       I am facing some problem in configuring the sender JMS adapter file content conversion. Please find the structure of my file below
    010AG  07/17/2007 000130800 TOZ07/17/200710:48:46
    010AU  07/17/2007 006682800 TOZ07/17/200710:48:46
    010 - Record key
    AG - Metal code
    07/17/2007 - Price date
    000130800 - Price value
    TOZ - Unit of measure
    07/17/200710:48:46 - Ship date (timestamp)
    There are two spaces between the 1st and 2nd fields, one space between the 2nd and 3rd, and one space between the 3rd and 4th fields.
    I declared my source data structure like below:
    <source_MT>
      <PriceData>
        <Metal code>
        <price Date>
        <pricevalue>
        <Unitofmessure>
        <shipDate>
      </PriceData>
    </source_MT>
    I am using this PDF to configure my sender communication channel: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/50061bd9-e56e-2910-3495-c5faa652b7
    But I got stuck declaring these two fields, as I need to deal with the spaces:
    xml.NameA.fieldNames
    xml.NameA.fieldFixedLengths
    It would be great if somebody could tell me how I can declare the content conversion rules for this file.
    Thanks
    sudheer

    Hi,
    Please check some links on FCC.
    Introduction to simple(File-XI-File)scenario and complete walk through for starters(Part1)
    Introduction to simple (File-XI-File)scenario and complete walk through for starters(Part2)
    File Receiver with Content Conversion
    Content Conversion (Pattern/Random content in input file)
    NAB the TAB (File Adapter)
    How to send a flat file with various field lengths and variable substructures to XI 3.0
    File Content Conversion for Unequal Number of Columns
    Content Conversion ( The Key Field Problem )
    http://help.sap.com/saphelp_nw04/helpdata/en/d2/bab440c97f3716e10000000a155106/content.htm
    Regards,
    Phani
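
    For the fixed-width layout in the original post, the separating blanks can usually be absorbed into the fixed length of the field before them, so no extra "filler" field is needed. A possible declaration of the two parameters in question (a sketch only; the recordset name PriceData and the space-free field names are assumptions, and the lengths are counted from the sample line as 3, 2+2 blanks, 10+1, 9+1, 3, 18):

    xml.PriceData.fieldNames        = RecordKey,MetalCode,PriceDate,PriceValue,UnitOfMeasure,ShipDate
    xml.PriceData.fieldFixedLengths = 3,4,11,10,3,18

    If the file contains more than one record type, the leading 010 would then typically be used as the key field value for this structure.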

  • B1i Truncating space characters in elements from RegEx flat file SLD

    Task:
    I'm importing from a fixed-width file that's specifically 94 characters wide consisting of multiple columns.  My only concern at this point is the number of lines and their width.  I have to pad the block out to a specific number of lines, and all existing lines in my source file already fill all 94 characters.  Multiple headers and footers exist in the same flat file, so defining the format explicitly based on column width in RegEx will add significant time to the project (though it's not necessarily impossible).
    This file ends lines with a single LF character (no CR), which apparently is too much for TXT and CSV formats to handle (the entire file comes in as one line).  So I turned to Regex, where I can split the document on arbitrary characters like LFs.
    I've recorded a message and the lines look something like this:
    "ABB CCCCCCCCC DDDDDDDDDEEEEEEFFFFGHHHIIJKKKKKK KKKK K KKKKK    KKKKKK KKKK K KKKKK    LLLLLLLL"
    and column L is occasionally blank (just spaces).  This is one of the row formats present in the file.  Most of the columns use spaces as fill characters and are left justified.
    Tag definition looks like this:
    <tagDefinition xmlns="urn:com.sap.b1i.bizprocessor:pltdefinition" regex="\n" schemaName="" tagName="line" matchSplit="S" stackSize="10000" DOTALL="true" MULTILINE="false">
    </tagDefinition>
    Problems:
    With a recorded test message, the lines build properly:
    <Payload Role="S" intype="rgx_ruledoc">
    <bfa:io xmlns:bfa="urn:com.sap.b1i.bizprocessor:bizatoms" pltype="rgx" schemaName="">
      <line xmlns="">ABB CCCCCCCCC DDDDDDDDDEEEEEEFFFFGHHHIIJKKKKKK KKKK K KKKKK    KKKKKK KKKK K KKKKK            </line>
    </bfa:io>
    </Payload>
    But by the time it reaches my first XSL Transformation atom, I receive the data in this form:
    <Payload Role="S" intype="rgx_ruledoc">
    <bfa:io xmlns:bfa="urn:com.sap.b1i.bizprocessor:bizatoms" pltype="rgx" schemaName="">
      <line xmlns="">ABB CCCCCCCCC DDDDDDDDDEEEEEEFFFFGHHHIIJKKKKKK KKKK K KKKKK KKKKKK KKKK K KKKKK </line>
    </bfa:io>
    </Payload>
    In other words, I'm missing 3 spaces from within the middle of K and 11 from the end of the example line.  Multiple lines are affected by this.  I've built the Regex import portion properly, judging by the test message.  But the data I build is not the data I'm given.  And the truncation seems to be happening somewhere outside my control.
    Questions:
    1. Is there a way to specify that all spaces are non-breaking in the input/output format descriptors?
    2. Is there a way I can specify that B1i doesn't modify the data before handing it to me?
    3. Is there an easier/more correct way to do this?
    Also possibly 4. Can you specify line endings in TXT/CSV imports?

    As far as I've found, there's no way to specify that B1i doesn't normalize spaces for you in an input Msg.  I ended up splitting the incoming message line to its individual columns using the Regex format descriptor.  That way, I was able to track the integrity of each component of the message.
    However, when writing out to the same file format using the TXT file type, B1i is inserting commas between the columns, despite:
    1. Being TXT format, which shouldn't use delimiters.
    2. Having the delimiter field empty.  It correctly ignores this field, even if it does insert a delimiter.
    3. Specifying no delimiter explicitly using <FileOut type="file_full">/<Control><deli>.
    It seems odd that the included means for writing raw text would mangle the output message so.
    Has anyone successfully written data to a TXT file with no delimiter?

  • Export Report to flat file with spaces

    I have a report that is several columns wide. It queries data from our SQL-based accounting package to create a flat file with spaces (the layout must be accurate). When we go to export the file, we don't get all of the columns. Any ideas?

    Hi,
    Are you exporting to CSV or Excel File?
    Export to Excel.
    Bashir Awan

  • How to avoid space while reading from flat file

    Dear all
    I am using forms 10g.
    I am reading data from a flat file using Text_io.
    In my exception part I have written this:
    EXCEPTION
      WHEN no_data_found THEN
          CLIENT_TEXT_IO.Fclose(in_file);
    But if the last line of the flat file contains some other characters, then it reads those as well.
    How can I avoid it?
    Note: "other characters" means characters you cannot normally see; when you press the SHOW ALL CHARACTERS button in Notepad++ it will show them ...

    My file is named just ABC; there is no extension like ABC.txt.
    And if the last line of the file contains a blank space, it reaches the exception part, but if a TAB is there then it does not.

  • Fields in Flat File are separated by special characters

    Hello everyone,
    If the fields in a flat file are separated by special characters, or if the flat file is given as an Excel sheet, how do I handle these cases when uploading data into the R/3 system via BDC?
    If you have a sample program, please send it to me and explain; otherwise, please give me hints...

    Hi,
    You can use the function module ALSM_EXCEL_TO_INTERNAL_TABLE. Check this sample code:
    PARAMETERS:
      p_infl LIKE rlgrap-filename.

    DATA:
      BEGIN OF t_data1 OCCURS 0,
        resource(25) TYPE c,
        date(10)     TYPE c,
        duration     TYPE p DECIMALS 2,
        activity(25) TYPE c,
        b_nbill(1)   TYPE c,
      END OF t_data1,
      t_data TYPE alsmex_tabline OCCURS 0 WITH HEADER LINE,
      BEGIN OF t_final OCCURS 0,
        resource(25) TYPE c,
        date(10)     TYPE c,
        duration(15) TYPE c,
        activity(25) TYPE c,
        b_nbill(10)  TYPE c,
      END OF t_final.

    DATA: header TYPE xstring.

    * Work variables declaration
    CONSTANTS:
      w_y TYPE c VALUE 'Y',
      w_n TYPE c VALUE 'N'.

    * Work area
    DATA:
      wa_data LIKE t_final.

    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_infl.
      PERFORM get_filename CHANGING p_infl.

    START-OF-SELECTION.
      PERFORM upload_data_fromexcel.
      PERFORM process_data.

    *&---------------------------------------------------------------------*
    *&      Form  upload_data_fromexcel
    *&---------------------------------------------------------------------*
    FORM upload_data_fromexcel.
    * Read the Excel data from the presentation server into T_DATA
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
        EXPORTING
          filename                = p_infl
          i_begin_col             = 1
          i_begin_row             = 2
          i_end_col               = 8
          i_end_row               = 1000
        TABLES
          intern                  = t_data
        EXCEPTIONS
          inconsistent_parameters = 1
          upload_ole              = 2
          OTHERS                  = 3.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDFORM.                    " upload_data_fromexcel

    *&---------------------------------------------------------------------*
    *&      Form  process_data
    *&---------------------------------------------------------------------*
    FORM process_data.
    * Header line for the output table
      t_final-resource = 'Resource'.
      t_final-date     = 'Date'.
      t_final-duration = 'Duration'.
      t_final-activity = 'Activity'.
      t_final-b_nbill  = 'Billable'.
      APPEND t_final.

    * Collect the cell-by-cell result of the function module into rows
      SORT t_data BY row col.
      LOOP AT t_data.
        CASE t_data-col.
          WHEN 3.
            t_data1-resource = t_data-value.
          WHEN 4.
            t_data1-date = t_data-value.
          WHEN 5.
            t_data1-duration = t_data-value.
          WHEN 6.
            t_data1-activity = t_data-value.
          WHEN 7.
            t_data1-b_nbill = t_data-value.
        ENDCASE.
        AT END OF row.
          COLLECT t_data1.
        ENDAT.
      ENDLOOP.

      LOOP AT t_data1.
        t_final-resource = t_data1-resource.
        t_final-date     = t_data1-date.
        t_final-duration = t_data1-duration.
        t_final-activity = t_data1-activity.
        t_final-b_nbill  = t_data1-b_nbill.
        APPEND t_final.
      ENDLOOP.
    ENDFORM.                    " process_data

    *&---------------------------------------------------------------------*
    *&      Form  get_filename
    *&---------------------------------------------------------------------*
    FORM get_filename CHANGING p_filename.
      CALL FUNCTION 'WS_FILENAME_GET'
        EXPORTING
          def_filename     = space
          def_path         = p_filename
          mask             = ',. ,..'
          mode             = 'O'              " O = Open, S = Save
          title            = box_title        " title text assumed to be defined elsewhere
        IMPORTING
          filename         = p_filename
        EXCEPTIONS
          inv_winsys       = 1
          no_batch         = 2
          selection_cancel = 3
          selection_error  = 4
          OTHERS           = 5.
      CASE sy-subrc.
        WHEN 1.
          MESSAGE i999 WITH
            'File selector not available on this windows system'(046).
        WHEN 2.
          MESSAGE e999 WITH
            'Frontend function cannot be executed in background'(047).
        WHEN 3.
          MESSAGE i999 WITH 'Selection was cancelled'(048).
        WHEN 4.
          MESSAGE e999 WITH 'Communication error'(049).
        WHEN 5.
          MESSAGE e999 WITH 'Other error'(050).
      ENDCASE.
    ENDFORM.                    " get_filename
    Kindly reward points if you found the reply helpful.
    Cheers,
    CHAITANYA.
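
    For the other half of the question, a plain flat file whose fields are separated by a special character, a minimal sketch is to upload the file as raw lines and SPLIT each line at the delimiter (the record structure, the delimiter ';' and the file path are assumptions):

    TYPES: BEGIN OF ty_rec,
             field1(25) TYPE c,
             field2(10) TYPE c,
             field3(25) TYPE c,
           END OF ty_rec.

    DATA: lt_lines TYPE TABLE OF string,
          lv_line  TYPE string,
          ls_rec   TYPE ty_rec,
          lt_recs  TYPE TABLE OF ty_rec,
          lv_file  TYPE string VALUE 'C:\data\input.txt'.   " hypothetical path

    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename = lv_file
      TABLES
        data_tab = lt_lines
      EXCEPTIONS
        OTHERS   = 1.

    LOOP AT lt_lines INTO lv_line.
*     Split each record at the special character used as the delimiter
      SPLIT lv_line AT ';' INTO ls_rec-field1 ls_rec-field2 ls_rec-field3.
      APPEND ls_rec TO lt_recs.
    ENDLOOP.
*   lt_recs can now feed the usual BDC / CALL TRANSACTION logic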

  • Size of Infocube Export to a flat file (.CSV)

    First off, I'm new to BW, so I apologize if I use the wrong terminology. I was wondering if there is any way to estimate the size of a flat file InfoCube export. For example, if I look up all the associated DB tables for a cube, let's call it TESTCUBE, would the total size of those tables be about the size of the TESTCUBE export to a flat file? This assumes that the cube is not compressed in any way. So if the total DB table size is 100 MB, is the flat file about 100 MB? If not, is there a way to estimate this?
    Thanks!

    Can't say for sure how all the DBs that BW runs on work, but with Oracle, space is consumed in blocks, not by rows/records.
    Let's say you have a table where the average row width (record length) is 100 bytes. In a tablespace that has 8K blocks, in theory, you could fit 81 rows in a block.
    However, there is usually a certain amount of free space left in a block when rows are added, to accommodate subsequent inserts/updates. The BW tables I have looked at default this to 10%, so that means you have over 800 bytes of the block that might not get filled. There are other things that can result in space being consumed, such as deletions that leave dead space, updates that cause a row to migrate to another block, etc. Also, space is allocated in chunks as a table grows, and you may add one row to a table that causes a 10 MB chunk of storage to be allocated but is holding only 1 row.
    So, as you can begin to see, a flat file export should require less space than the source table (unless you are using database table compression, which could completely invalidate what I have said), but by how much depends on all the things I mentioned, and it could change from day to day.
    As a rough rule of thumb, you could assume that a flat file export will require the same space as the tables, and you would be safe (except if you are using DB compression).

  • Export data to a flat file

    Hi all,
    I need to export a table's data to a flat file.
    The problem is that the data is huge: about 200 million rows occupying around 60 GB of space.
    If I use SQL*Loader in Toad, it takes a huge amount of time to export.
    After a few months, I need to import the same data back into the table.
    So please help me find the most efficient and least time-consuming method to do this.
    I am very new to this field.
    Can someone help me with this?
    My Oracle database version is 10.2.
    Thanks in advance

    OK so first of all I would ask the following questions:
    1. Why must you export the data and then re-import it a few months later?
    2. Have you read through the documentation for SQLLDR thoroughly? I know it is like stereo instructions, but it has valuable information that will help you.
    3. Does the table the data is being re-imported into have anything attached to it, e.g. triggers, indexes, or anything that the DB must do on each record? If so, re-read the sqlldr documentation, as you can turn all of that off during the import and re-index, etc. at your leisure.
    I ask these questions because:
    1. I would find a way for whatever happens to this data to be accomplished while it was in the DB.
    2. Pumping data over the wire is going to be slow when you are talking about that kind of volume.
    3. If you insist that the data must be dumped, massaged, and re-imported, do it on the DB server. Disk I/O is an order of magnitude faster than over-the-wire transfer.

  • How can I load a flat file into a ZTABLE dynamically

    I need to create a program which can load a Z table from a flat file (delimited and fixed-length options required). We have many Z tables where this will be required, so I was hoping to do it dynamically somehow; otherwise I will have to create one ABAP program for every Z table we have to load.
    My Inputs should be
    PARAMETERS:  p_ztable TYPE ddobjname,         "Z Table Name
                 p_infile(132) LOWER CASE,        "File Name
                 p_delim(1).                      "Delimiter
      I know that I can read the file by using gui_upload
      CALL FUNCTION 'GUI_UPLOAD'
        EXPORTING
          filename            = c_infile
          has_field_separator = p_delim
        TABLES
          data_tab            = indata
        EXCEPTIONS
          file_open_error     = 1
          file_read_error     = 2
          OTHERS              = 9.
    I know that I can split the contents of this file (if a delimiter is used). However, I will not know until runtime the actual field names of the table, i.e. which fields to split each line into. In the example below I have the actual table (t_rec) and each of the fields (pernr, lgart, etc.) hard-coded, but each table I need to load will be different; it will have a different number of fields as well.
    FORM read_data_pc.
      LOOP AT indata.
        PERFORM splitdata USING indata.
        APPEND t_rec.
        CLEAR t_rec.
      ENDLOOP.
    ENDFORM.
    FORM splitdata USING p_infile.
       SPLIT p_infile AT p_delim INTO
            t_rec-pernr                 "Employee #
            t_rec-lgart                 "Wage Type
            t_rec-begda                 "Effective date
            t_rec-endda.                 "End date
      ENDFORM.                       
    Once I split the data into the fields, I can just loop over them and insert the records.
    Does anyone have any ideas? Specific code examples would be great if you do. Thanks!!

    Hi Janice,
    Try this sample code, which uploads data from a flat file into an internal table:
    REPORT z_test.

    TABLES: mara.

    FIELD-SYMBOLS: <fs>.

    DATA: fldname(50) TYPE c.
    DATA: col TYPE i.
    DATA: cmp LIKE TABLE OF rstrucinfo WITH HEADER LINE.
    DATA: progname LIKE sy-repid,
          dynnum   LIKE sy-dynnr.
    DATA: itab TYPE TABLE OF alsmex_tabline WITH HEADER LINE.
    DATA: BEGIN OF zupload1_t OCCURS 0,
            matnr LIKE mara-matnr,
            ersda LIKE mara-ersda,
            ernam LIKE mara-ernam,
            laeda LIKE mara-laeda,
          END OF zupload1_t.
    *DATA: zupload1_t LIKE mara OCCURS 0 WITH HEADER LINE.
    DATA: wa_data LIKE TABLE OF zupload1_t WITH HEADER LINE.

    * Selection screen
    SELECTION-SCREEN: BEGIN OF BLOCK blk WITH FRAME TITLE text-001.
    SELECTION-SCREEN: SKIP 1.
    PARAMETERS: p_file LIKE rlgrap-filename.
    SELECTION-SCREEN: SKIP 1.
    SELECTION-SCREEN: END OF BLOCK blk.

    * F4 value help for the file
    AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
      CALL FUNCTION 'KD_GET_FILENAME_ON_F4'
        EXPORTING
          program_name  = syst-repid
          dynpro_number = syst-dynnr
          field_name    = ' '
          static        = 'X'
          mask          = ' '
        CHANGING
          file_name     = p_file
        EXCEPTIONS
          mask_too_long = 1
          OTHERS        = 2.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.

    START-OF-SELECTION.
    * Read the Excel file cell by cell into ITAB
      CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
        EXPORTING
          filename                = p_file
          i_begin_col             = 1
          i_begin_row             = 1
          i_end_col               = 5
          i_end_row               = 12507
        TABLES
          intern                  = itab
        EXCEPTIONS
          inconsistent_parameters = 1
          upload_ole              = 2
          OTHERS                  = 3.
      IF sy-subrc <> 0.
        MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
                WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.

    * Get the component list of the target work area (must name a data object in this program)
      CALL FUNCTION 'GET_COMPONENT_LIST'
        EXPORTING
          program    = sy-repid
          fieldname  = 'ZUPLOAD1_T'
        TABLES
          components = cmp.

      LOOP AT itab.
        AT NEW row.
          IF sy-tabix = 1.
            APPEND zupload1_t.
          ENDIF.
        ENDAT.
        col = itab-col.
        READ TABLE cmp INDEX col.
    *   Build the target field name dynamically and assign the cell value
        CONCATENATE 'ZUPLOAD1_T-' cmp-compname INTO fldname.
        ASSIGN (fldname) TO <fs>.
        <fs> = itab-value.
        APPEND zupload1_t.
      ENDLOOP.

      DELETE zupload1_t WHERE matnr EQ space.

      LOOP AT zupload1_t INTO wa_data.
        INSERT mara FROM wa_data.
        WRITE: / zupload1_t-matnr, 20 zupload1_t-ersda, 45 zupload1_t-ernam, 55 zupload1_t-laeda.
    *   Here I am just checking; I need to update a Z table
      ENDLOOP.

      INSERT zmara FROM TABLE itab ACCEPTING DUPLICATE KEYS.
    I have tried it for MARA. Please let me know whether it was helpful.
    Regards,
    Kannan
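
    Coming back to the original requirement (one generic program for any Z table and a delimited file), a minimal sketch using a dynamically created work area and ASSIGN COMPONENT might look like this; it assumes the file columns appear in the same order as the table fields, uses the selection-screen inputs p_ztable, p_infile and p_delim from the original post, and leaves out error handling:

    DATA: lt_lines TYPE TABLE OF string,
          lv_line  TYPE string,
          lt_cells TYPE TABLE OF string,
          lv_cell  TYPE string,
          lv_file  TYPE string,
          lv_idx   TYPE i,
          lr_wa    TYPE REF TO data.

    FIELD-SYMBOLS: <ls_wa>   TYPE any,
                   <lv_comp> TYPE any.

*   Create a work area typed like the Z table that is named at runtime
    CREATE DATA lr_wa TYPE (p_ztable).
    ASSIGN lr_wa->* TO <ls_wa>.

*   Upload the file as raw lines
    lv_file = p_infile.
    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename = lv_file
      TABLES
        data_tab = lt_lines
      EXCEPTIONS
        OTHERS   = 1.

    LOOP AT lt_lines INTO lv_line.
      CLEAR <ls_wa>.
*     Split the line at the delimiter into individual values
      SPLIT lv_line AT p_delim INTO TABLE lt_cells.
      lv_idx = 0.
      LOOP AT lt_cells INTO lv_cell.
        lv_idx = lv_idx + 1.
*       Move each value into the component at the same position
        ASSIGN COMPONENT lv_idx OF STRUCTURE <ls_wa> TO <lv_comp>.
        IF sy-subrc = 0.
          <lv_comp> = lv_cell.
        ENDIF.
      ENDLOOP.
*     Dynamic insert into the Z table named on the selection screen
      INSERT (p_ztable) FROM <ls_wa>.
    ENDLOOP.

    For the fixed-length option you would read the field lengths from the table's runtime type information (for example with cl_abap_structdescr) and move substrings by offset instead of splitting.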

  • How to remove the date extensions from a filename in SSIS Flat File Connection Manager dynamically at run time

    Hello,
    I have to load data from a CSV file into a SQL database. The file is placed into a directory by another program; the file name stays the same, but it gets a different extension based on the time of day it is placed in the directory. Since I know the base file name ahead of time, I want to strip off the date/time extension so that I can load the file using the Flat File Connection Manager. I am trying to use a variable and the expression editor so that I can specify the file name dynamically, but I don't know how to write it correctly. I have a variable 'FileLocation' that holds the folder where the file will be placed. The file is, for example, MyFileName201410231230 (MyFileName is always the same, but the date/time part will be different).
    Thanks,
    jkrish

    I don't want to use a ForEach Loop because the files are placed by an FTP process 3 times a day at specific times, for example at 10 AM, 12 PM and 3 PM. These times are pretty much fixed. The file name is the same but the extension carries that date/time stamp. I have to load each file only once for a particular reason, meaning I don't want to load the ones I have already loaded. I am planning on setting up the SSIS process to load at 10:05, 12:05 and 3:05 daily. The files will be piling up in the folder; as they come, I load them. At some point I can remove the old ones so that they won't take up space on the server. In fact, I don't have to keep the old ones at all since they are saved in a different folder anyway. I can ask the FTP process to remove the previous one when the new one arrives. So, at any point in time, there will be one file, but that file will have a different extension every time.
    I am thinking of removing the extension before I load every time. If the file name is 'MyFileNamexxxxxxx.csv', then I want to change it to 'MyFileName.csv' and load it.
    Thanks,
    jkrish
    You WILL need to use it eventually because you need to iterate over each file.
    Renaming is unnecessary as one way or another you will need to put a processed file away.
    And having the file with the original extension intact will also help you troubleshoot.
    Arthur
    MyBlog
    Twitter

  • Flat File Loading

    Hi Experts,
    I am loading data from a flat file to a DSO. I got the following error while activating the data in the DSO:
    "No SID found for value '      0500136A' of characteristic 0MAT_SALES"
    I have checked the material sales master data and the value 0500136A is there. If you look at the error above, it shows some spaces in front of 0500136A, but I do not see those spaces in the master data. I edited the file again, loaded it to the DSO and activated it, and got the same error. Is this a problem related to the Excel file formatting? Please throw some light on the issue.
    Thanks,
    Suryam.

    Hi All,
    Thank you.
    Khaja,
    The data is fine up to the PSA and I am able to load it into the DSO, but I get the error while activating the DSO.
    Ashok,
    I loaded the master data before loading the transactional data. Maybe it is a problem with the file format; how can I check these format issues?
    Brian,
    We are using a transformation in BI 7.0. I tried putting a conversion at the DataSource level, i.e. ALPHA, for the field, but I am still getting the same error.
    Please advise me on the above issue.
    Thanks,
    Suryam.
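
    Leading blanks or a missing ALPHA conversion are a common cause of this SID error. A possible fix in a field routine of the transformation (a sketch only; the data element /BI0/OIMAT_SALES, the source field name and the BI 7 field-routine variables SOURCE_FIELDS/RESULT are assumptions here) is to strip the blanks and apply the conversion exit before the value reaches the DSO:

    DATA: lv_mat TYPE /bi0/oimat_sales.          " assumed data element of 0MAT_SALES

    lv_mat = SOURCE_FIELDS-mat_sales.            " hypothetical source field name
*   Remove the blanks that came in from the flat file
    CONDENSE lv_mat NO-GAPS.
*   Apply the ALPHA conversion exit so the stored format matches the master data
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
      EXPORTING
        input  = lv_mat
      IMPORTING
        output = lv_mat.
    RESULT = lv_mat.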

  • Error when exporting to flat file in ODI 11g

    This works ok in ODI 10g. I'm using IKM SQL to File Append on Windows Server 2008 R2
    Getting the following error when exporting to a flat file in ODI 11g: ODI-40406: Bytes are too big for array
    I've seen a couple of threads like this on the forum, but I've never seen an answer to the problem.

    Problem is with the difference in behaviour for the IKM SQL to File Append KM between 10g and 11g.
    Our 10g target file datastore had a mixture of fixed string and numeric columns. Mapping from the source to target was simple one to one column mapping. It generated the desired fixed format flat file; numerics were right adjusted with embedded decimal point and leading spaces. Each numeric column in the generated flat file occupied the exact space allocated to it in the layout. This gave the desired results, even though documentation for the 10g IKM states that all columns in the target must be string.
    When we converted to 11g and tried to run this interface, it generated an error on the "numeric" columns because it was wrapping them in quotation marks. The result column was being treated as string, and it was larger than the defined target once it acquired those quotation marks.
    In order to get 11g to work, I had to change all the numeric columns in the target datastore to fixedstring 30. I then had to change the mapping for these numeric columns to convert them to right-adjusted character strings, i.e. RIGHT(SPACE(30) + RTRIM(MyNumericColumn), 30).
    Now it works.

  • Getting error while loading Flat File

    Hello All,
    I am getting an error while loading a flat file. The flat file is a CSV file.
    Value ',,,,,,,,' of characteristic 0DATE is not a number with 000008 spaces
    Data separator  |
    Escape sign    ;
    The file has 23,708 entries; it loads successfully up to 23,665 entries.
    Also, when I checked in the PSA, the records beyond 23,667 have the calendar day as ',,,,,,' whereas the rest of the entries have a date.
    And when I checked the flat file, the total number of rows there is 23,667, so I wonder why RSMO shows 23,708.
    Could you please let me know how to correct this?
    Regards,
    path

    Hi,
    For the date column you should maintain the YYYYMMDD format, e.g. 20090601. Put the cursor on the date column, right-click, choose Format > Custom and set it to 00000000, then save the file as .CSV. First type the values in the column, apply this format, save the file, and load it without opening it again. Once you open it you lose the 00000000 format and you need to apply the same format again.
    Settings in Infopackage:
    Data Format  = CSV
    Data Separator = ,
    Escape Sign = ;
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/01ed2fe3811a77e10000000a422035/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a6567e07211d2acb80000e829fbfe/content.htm
    Thanks
    Reddy
