Flat file again

Let me try this again.
Say I have a table called events. In that table there is a field with 120 types of event codes. What I would like to do is count the codes that I request in my script and have the result saved in a text file, with the title of each code at the top and the count of that code underneath. Hope this makes sense:
drugs  robberies  accidents
10     120        234
This way I can upload it into my Word chart without typing each line.
Thanks,
Lorenzo

Here is a simple script I wrote to demonstrate how to solve the problem using string concatenation. In the example I substituted job from the emp table for your event code.
You could use the routine in conjunction with the UTL_FILE package to put your output in a flat file.
WARNING: UTL_FILE has a line length limit (1,024 bytes by default; it can be raised via the max_linesize parameter of UTL_FILE.FOPEN, up to 32,767), so you may have to split your line at some point.
Do a search on the forums for examples of how to use UTL_FILE.
SQL> set serveroutput on
declare
  voutput1 varchar2(1000);
  voutput2 varchar2(1000);
  cursor c1 is
    select job, count(*) as qty
      from emp
     group by job;
begin
  for i in c1 loop
    voutput1 := voutput1 || ' ' || i.job;
    voutput2 := voutput2 || ' ' || to_char(i.qty);
  end loop;
  dbms_output.put_line(voutput1);
  dbms_output.put_line(voutput2);
end;
/
ANALYST CLERK MANAGER PRESIDENT SALESMAN
2 4 3 1 4
PL/SQL procedure successfully completed.
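
To push this output into a flat file instead of onto the screen, the same loop can write through UTL_FILE. Below is a minimal sketch; the DATA_DIR directory object and the counts.txt file name are illustrative and assume the directory has been created and granted to your user (on very old releases you would instead point FOPEN at a path listed in the UTL_FILE_DIR init parameter).

declare
  vfile    utl_file.file_type;
  voutput1 varchar2(32767);
  voutput2 varchar2(32767);
  cursor c1 is
    select job, count(*) as qty
      from emp
     -- where job in ('CLERK', 'MANAGER')  -- optional: count only the codes you request
     group by job;
begin
  -- open for writing; max_linesize raised from the 1K default to 32767
  vfile := utl_file.fopen('DATA_DIR', 'counts.txt', 'w', 32767);
  for i in c1 loop
    voutput1 := voutput1 || ' ' || i.job;            -- header line: code titles
    voutput2 := voutput2 || ' ' || to_char(i.qty);   -- data line: counts
  end loop;
  utl_file.put_line(vfile, voutput1);
  utl_file.put_line(vfile, voutput2);
  utl_file.fclose(vfile);
end;
/

Run it from SQL*Plus the same way as above; the file lands in the directory the DATA_DIR object points to.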

Similar Messages

  • J2SE XML to Flat File Content Conversion

    Hi
    I've currently got a scenario which sends a flat file to the Integration Server; it gets mapped and sent to a receiver adapter on a deployed J2SE engine.
    I'm now trying to convert the XI-XML structured file back to a flat file on the J2SE side (the same flat file format it originally had).
    My original flat file looks like this -
    477
    477=AA1
    My xml file looks like this -
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:ResultMessage xmlns:ns0="urn:xxxx-com:a_test_j2se_filetofile">
    <Item>
              <field1>477</field1>
    </Item>
    <Item>
              <field1>477</field1>
              <field2>AA1</field2>
    </Item>
    </ns0:ResultMessage>
    I am using these content conversion parameters:
    xml.addHeaderLine=0
    xml.fieldSeparator==
    xml.endSeparator='nl'
    I get this error on the integration engine (sxmb_moni):
    Error while sending by HTTP (error code: 500, error text: Internal Server Error:java.lang.NullPointerException) (See attachment HTMLError for details)
    and the J2SE adapter log says this:
    17:16:32 (4120): Message "13b9d644-54c9-4ffb-0c40-db4c14458d77" of type "application/xml", kind "B" received
    17:16:32 (4124): Parsing XML message
    17:16:32 (4131): ERROR: Message processing failed with "java.lang.NullPointerException"
    What am I missing?

    So, now... I did a test configuration in XI and sent your test payload... it worked.
    The J2SE adapter configuration:
    File adapter java class
    classname=com.sap.aii.messaging.adapter.ModuleXMB2File
    version=30
    mode=XMB2FILEWITHCONVERSION
    #Address for XMB endpoint -
    XI.httpPort=8111
    XI.httpService=/file/receiver
    #File Adapter specific parameters -
    file.createDir=1
    file.targetDir=c:/transfer/inbound
    file.targetFilename=xmboutput.txt
    #file.writeMode=append
    #file.writeMode=overwrite
    file.writeMode=addCounter
    file.counterMode=immediately
    #file.counterMode=afterFirst
    file.counterSeparator=_
    file.counterFormat=00000
    file.counterStep=1
    #File Content Conversion specific parameters -
    xml.addHeaderLine=0
    xml.fieldSeparator==
    xml.endSeparator='nl'
    And here the configuration of the receiver communication channel in the integration directory:
    Adapter Type: XI
    Receiver
    Transport-Protocol: HTTP 1.0
    Message-Protocol: XI 3.0
    Adapter-Engine: Integration Server
    Addressing-Type: URL Address
    Target Host: <yourJ2SEip>
    Service Number: 8111
    Path Prefix:http://<yourJ2SEip>:8111/file/receiver
    Authentication Data
    Logon data for non-SAP systems
    User
    Password
    That's it... I sent your payload and got the desired result:
    477
    477=AA1
    Regards,
    Heinrich

  • A question on ETL on flat file folder

    Hi all, I am a student currently exploring Oracle technologies.
    I used to use Microsoft SQL Server Integration/Analysis/Reporting Services and want to achieve something similar using Oracle.
    Having done some simple OWB tutorials for a day, I briefly understand the basics.
    Q:
    However, all the tutorials covered flat files only as a single individual file. If I want to perform extraction on ALL the flat files in a particular folder, is this possible with OWB?
    Thanks a lot!

    Yes, it's available now; I'm trying to learn it. Thanks, guys.
    Another question that I was pondering:
    I was always wondering what happens to the data in the table if I run the ETL process flow twice. What actually happens to the previously existing data?
    Extra:
    What I am planning to do: I have a folder that is populated with new flat files every day. I only want to extract this data once and have it in the Cube/Fact table for analysis.
    The question is, do I have to move the extracted flat files to another directory after the process flow, so that next time the process flow won't extract these flat files again?

  • Flat File Splitting into two Messages in Orchestration

    In BizTalk, an orchestration receives the following formatted file. The file should go AS-IS to one subscriber; for the other subscriber, only the rows where Mode = "N" should go out as a second file.
    Please let me know how to implement this scenario.
    INPUTFILE
    InvoiceNo  InvoiceDate  Mode  Amount
    1000       20042015     Y     1244
    1001       20042015     N     2345
    1003       20042015     N     1244
    1004       20042015     Y     2345
    1005       20042015     Y     1244
    10046      20042015     N     2345
    From the above file, two output files are expected.
    First file output (all rows):
    1000       20042015     Y     1244
    1001       20042015     N     2345
    1003       20042015     N     1244
    1004       20042015     Y     2345
    1005       20042015     Y     1244
    10046      20042015     N     2345
    Second file output (Mode = N rows only):
    1001       20042015     N     2345
    1003       20042015     N     1244
    10046      20042015     N     2345

    Hi BizQ.
    If you are receiving the flat file directly in the orchestration, it will be very difficult to apply rules based on its content. You will probably need to pass the entire message content to a C# helper that reads it and returns the rows to be sent to each subscriber.
    A better option could be creating a custom pipeline component that executes in the receive port. This component would process the entire document and create a new document containing only the rows with the Mode value set to N; as a result it would return the two documents required by the subscribers.
    Another option could be disassembling the request document into a hierarchical XML structure before it is processed inside the orchestration. The orchestration would then generate the different documents to be sent to each subscriber based on the Mode property of each row. Finally, use an assembler pipeline to build the flat files again before sending them to the target subscribers.
    Regards.

  • Export data to a flat file

    Hi all,
    I need to export a table's data to a flat file.
    But the problem is that the data is huge: about 200 million rows, occupying around 60GB of space.
    If I use SQL*Loader in Toad, it takes a huge amount of time to export.
    After a few months, I need to import the same data back into the table.
    So please help me: which is the most efficient and least time-consuming method to do this?
    I am very new to this field.
    Can someone help me with this?
    My Oracle database version is 10.2.
    Thanks in advance

    OK so first of all I would ask the following questions:
    1. Why must you export the data and then re-import it a few months later?
    2. Have you read through the documentation for SQL*Loader thoroughly? I know it is like stereo instructions, but it has valuable information that will help you.
    3. Does the table the data is being re-imported into have anything attached to it, e.g. triggers, indexes, or anything the DB must do on each record? If so, re-read the SQL*Loader documentation: you can turn all of that off during the import and re-index, etc. at your leisure.
    I ask these questions because:
    1. I would find a way for whatever happens to this data to be accomplished while it is in the DB.
    2. Pumping data over the wire is going to be slow when you are talking about that kind of volume.
    3. If you insist that the data must be dumped, massaged, and re-imported, do it on the DB server. Disk I/O is an order of magnitude faster than over-the-wire transfer.
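    If the data truly must leave the database as a text file, do the dump on the DB server itself. For a straight export and re-import on 10.2, Data Pump (expdp/impdp) is usually the better tool; when a readable flat file is genuinely required, a bulk-fetch loop writing through UTL_FILE keeps memory bounded. A rough sketch, with the table name, column names, and DATA_DIR directory object purely illustrative:
    declare
      vfile utl_file.file_type;
      cursor c1 is
        select col1, col2 from big_table;  -- hypothetical 200M-row table
      type t_rows is table of c1%rowtype;
      lrows t_rows;
    begin
      vfile := utl_file.fopen('DATA_DIR', 'big_table.dat', 'w', 32767);
      open c1;
      loop
        fetch c1 bulk collect into lrows limit 10000;  -- chunked fetch bounds PGA use
        for i in 1 .. lrows.count loop
          utl_file.put_line(vfile, lrows(i).col1 || ',' || lrows(i).col2);
        end loop;
        exit when c1%notfound;
      end loop;
      close c1;
      utl_file.fclose(vfile);
    end;
    /
    For the re-import, an external table or a SQL*Loader direct path load (with indexes and triggers disabled, as noted above) is the usual route.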

  • Export InfoCube Contents by Request to a flat file

    Hi SDN,
    Is it possible, other than going into the manage contents of an InfoCube,
    to export to a CSV file by request ID?
    I had a look at the function module
    RSDRI_INFOPROV_READ
    and it nearly does what I require, although I wasn't able to get it to work.
    Can anyone first provide me with the parameters to input to get it to write to a file, say C:\temp.csv?
    We are not able to use the Open Hub service because of licensing issues.
    Finally, if there is no easy solution, I will just go to the manage contents and dump out the 3-4 million records identified by a request ID number.
    Thank you.
    Simon

    Hi,
    Below is an example populating an internal table; writing the same to a flat file shouldn't be an issue (OPEN DATASET...).
    DATA:
      BEGIN OF ls_sls_infoprov_read,
        0PLANT      LIKE /BIC/VZMC0332-0PLANT,
        0RT_POSNO   LIKE /BIC/VZMC0332-0RT_POSNO,
        0RT_RECNUMB LIKE /BIC/VZMC0332-0RT_RECNUMB,
        0PSTNG_DATE LIKE /BIC/VZMC0332-0PSTNG_DATE,
        0TIME       LIKE /BIC/VZMC0332-0TIME,
        ZAIRLINE    LIKE /BIC/VZMC0332-ZAIRLINE,
        ZFLIGHTN    LIKE /BIC/VZMC0332-ZFLIGHTN,
        ZPAXDEST    LIKE /BIC/VZMC0332-ZPAXDEST,
        ZPAXDESTF   LIKE /BIC/VZMC0332-ZPAXDESTF,
        ZPAXCLASS   LIKE /BIC/VZMC0332-ZPAXCLASS,
        0CUSTOMER   LIKE /BIC/VZMC0332-0CUSTOMER,
        0MATERIAL   LIKE /BIC/VZMC0332-0MATERIAL,
        0RT_SALRESA LIKE /BIC/VZMC0332-0RT_SALRESA,
        0RT_POSSAL  LIKE /BIC/VZMC0332-0RT_POSSAL,
        0RT_POSSALT LIKE /BIC/VZMC0332-0RT_POSSALT,
        ZREASVAL    LIKE /BIC/VZMC0332-ZREASVAL,
        0RT_PRICRED LIKE /BIC/VZMC0332-0RT_PRICRED,
        0RT_PRICDIF LIKE /BIC/VZMC0332-0RT_PRICDIF,
      END OF ls_sls_infoprov_read.
    DATA: ls_sls_item LIKE ls_sls_infoprov_read.
    TYPES: ly_sls_infoprov_read LIKE ls_sls_infoprov_read,
           ly_sls_item          LIKE ls_sls_item.
    DATA:
        lt_sls_infoprov_read
        TYPE STANDARD TABLE OF ly_sls_infoprov_read
        WITH DEFAULT KEY INITIAL SIZE 10,
        gt_sls_infoprov_read LIKE lt_sls_infoprov_read.
    DATA:
         ls_sls_sfc  TYPE RSDRI_S_SFC,
         lt_sls_sfc  TYPE RSDRI_TH_SFC,
         ls_sls_sfk  TYPE RSDRI_S_SFK,
         lt_sls_sfk  TYPE  RSDRI_TH_SFK,
         ls_sls_range TYPE RSDRI_S_RANGE,
         lt_sls_range TYPE RSDRI_T_RANGE,
         lv_sls_end_of_data TYPE  RS_BOOL,
         lv_sls_first_call TYPE rs_bool,
        lv_sls_icube  TYPE RSINFOCUBE VALUE 'ZMC033'.
    *filling internal tables containing the output characteristics / KeyFigs
    * Shop ID
      ls_sls_sfc-chanm = '0PLANT'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 1.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    ** Till ID
      ls_sls_sfc-chanm = '0RT_POSNO'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 2.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    ** Receipt Number
      ls_sls_sfc-chanm = '0RT_RECNUMB'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 4.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    ** Posting Date
      ls_sls_sfc-chanm = '0PSTNG_DATE'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 3.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * Time
      ls_sls_sfc-chanm = '0TIME'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * Airline
      ls_sls_sfc-chanm = 'ZAIRLINE'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * Flight Number
      ls_sls_sfc-chanm = 'ZFLIGHTN'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * Destination
      ls_sls_sfc-chanm = 'ZPAXDEST'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * Final Destination
      ls_sls_sfc-chanm = 'ZPAXDESTF'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * Passenger Class
      ls_sls_sfc-chanm = 'ZPAXCLASS'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    * EU, non EU code
      ls_sls_sfc-chanm = '0CUSTOMER'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    ** Item Characteristics
    * Article
      ls_sls_sfc-chanm = '0MATERIAL'.
      ls_sls_sfc-chaalias = ls_sls_sfc-chanm.
      ls_sls_sfc-orderby  = 0.
      INSERT ls_sls_sfc INTO TABLE lt_sls_sfc.
    ** Key Figures
    * Sales Quantity in SuoM
      ls_sls_sfk-kyfnm    = '0RT_SALRESA'.
      ls_sls_sfk-kyfalias = ls_sls_sfk-kyfnm.
      ls_sls_sfk-aggr     = 'SUM'.
      INSERT ls_sls_sfk INTO TABLE lt_sls_sfk.
    * Gross Sales Value
      ls_sls_sfk-kyfnm    = '0RT_POSSAL'.
      ls_sls_sfk-kyfalias = ls_sls_sfk-kyfnm.
      ls_sls_sfk-aggr     = 'SUM'.
      INSERT ls_sls_sfk INTO TABLE lt_sls_sfk.
    * Tax Value
      ls_sls_sfk-kyfnm    = '0RT_POSSALT'.
      ls_sls_sfk-kyfalias = ls_sls_sfk-kyfnm.
      ls_sls_sfk-aggr     = 'SUM'.
      INSERT ls_sls_sfk INTO TABLE lt_sls_sfk.
    * Discounts
      ls_sls_sfk-kyfnm    = 'ZREASVAL'.
      ls_sls_sfk-kyfalias = ls_sls_sfk-kyfnm.
      ls_sls_sfk-aggr     = 'SUM'.
      INSERT ls_sls_sfk INTO TABLE lt_sls_sfk.
    * Reductions
      ls_sls_sfk-kyfnm    = '0RT_PRICRED'.
      ls_sls_sfk-kyfalias = ls_sls_sfk-kyfnm.
      ls_sls_sfk-aggr     = 'SUM'.
      INSERT ls_sls_sfk INTO TABLE lt_sls_sfk.
    * Differences
      ls_sls_sfk-kyfnm    = '0RT_PRICDIF'.
      ls_sls_sfk-kyfalias = ls_sls_sfk-kyfnm.
      ls_sls_sfk-aggr     = 'SUM'.
      INSERT ls_sls_sfk INTO TABLE lt_sls_sfk.
    ** filters
    FORM infoprov_sls_selections.
      REFRESH lt_sls_range.
      CLEAR ls_sls_range.
      ls_sls_range-chanm  = '0COMP_CODE'.
      ls_sls_range-sign   = 'I'.
      ls_sls_range-compop = 'EQ'.
      ls_sls_range-low    = 'NL02'.
      APPEND ls_sls_range TO lt_sls_range.
      CLEAR ls_sls_range.
      ls_sls_range-chanm    = '0PLANT'.
      ls_sls_range-sign     = 'I'.
      ls_sls_range-compop   = 'EQ'.
      ls_sls_range-low      = 'NLAA'.
      APPEND ls_sls_range TO lt_sls_range.
      ls_sls_range-low      = 'NLAB'.
      APPEND ls_sls_range TO lt_sls_range.
      ls_sls_range-low      = 'NLAC'.
      APPEND ls_sls_range TO lt_sls_range.
      ls_sls_range-low      = 'NLAM'.
      APPEND ls_sls_range TO lt_sls_range.
    ** Test Only
      CLEAR ls_sls_range.
      ls_sls_range-chanm    = '0CALDAY'.
      ls_sls_range-sign     = 'I'.
      ls_sls_range-compop   = 'BT'.
      ls_sls_range-low      =  s_datefr.
      ls_sls_range-high     =  s_dateto.
      APPEND ls_sls_range TO lt_sls_range.
    *  CLEAR ls_sls_range.
    *  ls_sls_range-chanm    = '0CALDAY'.
    *  ls_sls_range-sign     = 'I'.
    *  ls_sls_range-compop   = 'LE'.
    *  ls_sls_range-low      =  s_dateto.
    *  APPEND ls_sls_range TO lt_sls_range.
    * infoprov_sls_read
    DATA: lv_sls_records TYPE I, lv_sls_records_char(9) TYPE C.
      lv_sls_end_of_data = ' '.
      lv_sls_first_call  = 'X'.
    * read data in packages directly from the cube
      WHILE lv_sls_end_of_data = ' '.
            CALL FUNCTION 'RSDRI_INFOPROV_READ'
              EXPORTING  i_infoprov             = lv_sls_icube
                         i_th_sfc               = lt_sls_sfc
                         i_th_sfk               = lt_sls_sfk
                         i_t_range              = lt_sls_range
                         i_reference_date       = sy-datum
                         i_save_in_table        = ' '
                         i_save_in_file         = ' '
                         I_USE_DB_AGGREGATION   = 'X'
                         i_packagesize          = 100000
                         i_authority_check      = 'R'
              IMPORTING  e_t_data               = lt_sls_infoprov_read
                         e_end_of_data          = lv_sls_end_of_data
              CHANGING   c_first_call           = lv_sls_first_call
              EXCEPTIONS illegal_input          = 1
                         illegal_input_sfc      = 2
                         illegal_input_sfk      = 3
                         illegal_input_range    = 4
                         illegal_input_tablesel = 5
                         no_authorization       = 6
                         ncum_not_supported     = 7
                         illegal_download       = 8
                         illegal_tablename      = 9
                         OTHERS                 = 11.
            IF sy-subrc <> 0.
              BREAK-POINT.   "#EC NOBREAK
              EXIT.
            ENDIF.
            APPEND LINES OF lt_sls_infoprov_read TO gt_sls_infoprov_read.
      ENDWHILE.
    Other options for "adjusting" a cube:
    - convert it to transactional and use BPS functionality (ABAP report SAP_CONVERT_TO_TRANSACTIONAL)
    - loopback scenario: generate an export DataSource on your cube with update rules back to the cube itself; you extract the data to be corrected to the PSA, edit the PSA (manually or with mass updates in ABAP code), and then post it again into your cube.
    Hope this helps...
    Olivier.

  • Looking for a program to delete CVCs from a flat file, InfoCube, or Z table

    I am looking for a program that will delete CVCs from a flat file, InfoCube, or Z table. Based on the research I have done, I imagine the core of such a program would be the function module /sapapo/ts_plob_delete.
    If anyone has such a program and would be willing to share it that would be great.
    Shane

    Hi
    Yes, you can use this program, but I think /sapapo/ts_plob_delete will only delete the data from the Master Planning Object Structure (MPOS). You first have to deactivate the planning area, then use this program, which will delete everything, and then activate the planning area again.
    If you want to delete the data from flat files or an InfoCube, you need to delete the data directly from the InfoCubes and add this job to the process chain so that it deletes everything.
    I hope this information helps you.
    Thanks
    Amol

  • Object reference not set to an instance of an object error when generating a schema using flat file schema wizard.

    I have a CSV file that I need to generate a schema for. I am trying to generate the schema using the Flat File Schema Wizard, but I keep getting an "Object reference not set to an instance of an object." error when I click the Next button after specifying the properties of the child elements in the wizard. At the end a schema file is generated, but it contains an empty root record with no child elements.
    I thought maybe this was because I didn't have my project checked out from the Visual SourceSafe db first, but I tried again with the project checked out and got the same error.
    I also tried creating a brand new project and generating a schema for it, but got the same error.
    I am not sure what is causing the NullReferenceException to be thrown, and there is nothing in the Windows event log that would tell me more about the problem.
    I am using Visual Studio 2008 for my BizTalk development.
    I would appreciate it if someone has any insights on this issue.

    Hi,
    To test your environment, create a new BizTalk project outside of source control.
    Create a simple csv file on the file system.
    Name,City,State
    Bob,New York,NY
    Use the Flat File Schema Wizard to create the flat file schema from your simple CSV instance.
    Validate the schema.
    Test the schema using your CSV instance.
    This will help you determine if everything is OK with your environment.
    Thanks,
    William

  • Flat File Loading

    Hi Experts,
    I am loading data from a flat file to a DSO. I got the following error while activating data in the DSO:
    "No SID found for value '      0500136A' of characteristic 0MAT_SALES"
    I have checked the material sales master data, and the value 0500136A is there. If you look at the above error, it shows some spaces in front of 0500136A; I do not see those spaces in the master data. I edited it again, loaded it to the DSO, and activated, and got the same error. Is it a problem related to Excel file formatting? Please throw some light on the issue.
    Thanks,
    Suryam.

    Hi All,
    Thank you.
    Khaja,
    The data up to the PSA is fine and I am able to load the data into the DSO, but I get the error while activating the DSO.
    Ashok,
    I loaded master data before loading transactional data. Maybe it is a problem with the file format. How can I check these format issues?
    Brian,
    We are using a transformation in BI 7.0. I tried putting a conversion at the DataSource level, i.e. ALPHA for the field, but I am still getting the same error.
    Pls advice me on the above issue.
    Thanks,
    Suryam.

  • Getting error while loading Flat File

    Hello All,
    I am getting an error while loading a flat file. The flat file is a CSV file.
    Value ',,,,,,,,' of characteristic 0DATE is not a number with 000008 spaces
    Data separator:  |
    Escape sign:     ;
    The file has 23,708 entries; it loads successfully up to entry 23,665.
    When checked in the PSA, records beyond 23,667 have ',,,,,,' as the calendar day, whereas the rest of the entries have a date.
    Also, when I checked the flat file, the total number of rows is 23,667, so I wonder why RSMO shows 23,708.
    Could you please let me know how to correct this?
    regards
    path

    Hi,
    For the date column you should maintain YYYYMMDD format, e.g. 20090601. Put the cursor on the date column, right-click, choose Format > Custom, and set it to 00000000, then save the file as .CSV. First type the values in the column, apply this format, and save the file; then load it without opening it again. Once you open it, you lose the 00000000 format and need to apply the same format again.
    Settings in Infopackage:
    Data Format  = CSV
    Data Separator = ,
    Escape Sign = ;
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/01ed2fe3811a77e10000000a422035/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/80/1a6567e07211d2acb80000e829fbfe/content.htm
    Thanks
    Reddy

  • Error while building BizTalk flat file schema

    Hi Everyone,
    I am using a flat file schema that I had imported.
    But when I built the schemas solution I am getting this error:
    Only one SchemaInfo, GroupInfo, RecordInfo or FieldInfo is allowed in the appInfo block

    Hi Rajiv,
    Your error is most likely related to your own flat file, where there is some restriction over grouping the records.
    I would suggest going through the Flat File Schema Wizard again to see whether there is an inconsistency introduced while generating the schema.
    I would also suggest you look over the MSDN articles below:
    Walkthrough: Creating a Flat File Schema From a Document Instance
    Creating Schemas Using BizTalk Flat File Schema Wizard
    BizTalk Server: Transform text files (Flat Files) into XML – by @Sandro
    Thanks
    Abhishek

  • Picking a IDOC Flat File stored in SAP R/3 Application Server by SAP PI

    Hi,
    Can SAP PI pick up an IDoc flat file stored in an SAP R/3 application server directory and send it back as an inbound IDoc?
    Scenario:
    We have data in an Excel sheet, which will be used to fill an IDoc; the IDoc will just be saved in the SAP R/3 application server directory, but it cannot be triggered due to its peculiar behavior. Afterwards, SAP PI should poll the SAP R/3 system, pick up that IDoc flat file from the R/3 application server, and send it back to SAP R/3 as an inbound IDoc.
    For reference: the IDoc does not have an outbound process code, thus it cannot be triggered and sent to SAP PI. It is always used as an inbound IDoc in the SAP R/3 system.
    Regards,
    Saurabh

    SAP PI should poll the SAP R/3 and pick up that IDoc flat file from the R/3 application server
    If SAP PI = 7.11 --> /people/william.li/blog/2009/04/01/how-to-use-user-module-for-conversion-of-idoc-messages-between-flat-and-xml-formats
    send it back to the SAP R/3 as an Inbound IDOC
    Why send information into R/3 again when it already has it? Can't some internal code in R/3 read the info from Excel and then update the IDoc directly?

  • Golden Gate for flat file

    hi,
    I have tried GoldenGate for Oracle and non-Oracle databases. Now I am trying the flat file adapter.
    What I have done so far:
    1. I have downloaded the Oracle "GoldenGate Application Adapters 11.1.1.0.0 for JMS and Flat File Media Pack"
    2. Kept it on the same machine where the database and GG manager process exist. Port for the GG manager process is 7809; for the flat file adapter, 7816
    3. Following the GG flat file administrator's guide, page 9 --> configuration
    4. Extract process on the GG manager:
    edit params FFE711
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    start extract FFE711
    view report ffe711
    Oracle GoldenGate Capture for Oracle
    Version 11.1.1.1 OGGCORE_11.1.1_PLATFORMS_110421.2040
    Windows (optimized), Oracle 11g on Apr 22 2011 03:28:23
    Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:24:19
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 4628
    Description:
    ** Running with the following parameters **
    extract ffe711
    userid ggs@bidb, password ********
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.77G
    CACHESIZEMAX (strict force to disk): 1.57G
    Database Version:
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for 32-bit Windows: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    Database Language and Character Set:
    NLS_LANG environment variable specified has invalid format, default value will be used.
    NLS_LANG environment variable not set, using default value AMERICAN_AMERICA.US7ASCII.
    NLS_LANGUAGE = "AMERICAN"
    NLS_TERRITORY = "AMERICA"
    NLS_CHARACTERSET = "AL32UTF8"
    Warning: your NLS_LANG setting does not match database server language setting.
    Please refer to user manual for more information.
    2011-11-07 18:24:25 INFO OGG-01226 Socket buffer size set to 27985 (flush size 27985).
    2011-11-07 18:24:25 INFO OGG-01052 No recovery is required for target file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, at RBA 0 (file not opened).
    2011-11-07 18:24:25 INFO OGG-01478 Output file E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote is using format RELEASE 10.4/11.1.
    ** Run Time Messages **
    5. On the flat file GGSCI prompt:
    edit params FFR711
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    add extract ffr711, exttrailsource ./dirdat/pp
    start extract ffr711
    view report ffr711
    Oracle GoldenGate Capture
    Version 11.1.1.0.0 Build 078
    Windows (optimized), Generic on Jul 28 2010 19:05:07
    Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.
    Starting at 2011-11-07 18:21:31
    Operating System Version:
    Microsoft Windows XP Professional, on x86
    Version 5.1 (Build 2600: Service Pack 2)
    Process id: 5008
    Description:
    ** Running with the following parameters **
    extract ffr711
    CUSEREXIT E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params "E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\sample-dirprm\ffwriter.properties"
    E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\ggs_Windows_x86_Generic_32bit_v11_1_1_0_0_078\extract.exe running with user exit library E:\GoldenGate11gMediaPack\GGFlatFile\V22262-01\flatfilewriter.dll, compatibility level (2) is current.
    SOURCEDEFS E:\GoldenGate11gMediaPack\V26071-01\dirdef\vikstkFF.def
    table ggs.vikstk;
    CACHEMGR virtual memory values (may have been adjusted)
    CACHEBUFFERSIZE: 64K
    CACHESIZE: 1G
    CACHEBUFFERSIZE (soft max): 4M
    CACHEPAGEOUTSIZE (normal): 4M
    PROCESS VM AVAIL FROM OS (min): 1.87G
    CACHESIZEMAX (strict force to disk): 1.64G
    Started Oracle GoldenGate for Flat File
    Version 11.1.1.0.0
    ** Run Time Messages **
    The problem I am facing:
    I am not sure where to find the generated flat file;
    even the reports are showing there is no data at the manager process.
    I was expecting a replicat instead of an extract for the flat file (FFR711.prm).
    This is how far I have gotten; please give me some pointers.
    Thanks,
    Vikas

    Ok, I haven't run your example, but here are some suggestions.
    Vikas Panwar wrote:
    extract ffe711
    userid ggs@bidb, password ggs12345
    discardfile E:\GoldenGate11gMediaPack\V26071-01\dirrpt\EXTFF.dsc, purge
    rmthost 10.180.182.77, mgrport 7816
    rmtfile E:\GoldenGate11gMediaPack\V26071-01\dirdat\ffremote, purge, megabytes 5
    ggsci> add extract FFE711, EXTTRAILSOURCE ./dirdat/oo
    ggsci> add rmttrail ./dirdat/pp, extract FFE711, megabytes 20
    ggsci> start extract  FFE711
    You of course need data captured from somewhere to test with. You could capture changes directly from a database and write those to a trail, and use that as a source for the flat-file writer; or, if you have existing trail data, you can just use that (I often test with old trails, with known data).
    In your example, you are using a data pump that is doing nothing more than pumping trails to a remote host. That's fine, if that's what you want to do. (It's actually quite common in real implementations.) But if you want to actually capture changes from the database, then change "add extract ... extTrailSource" to be "add extract ... tranlog". I'll assume you want to use the simple data pump to send trail data to the remote host. And I will assume that some other database capture process is creating the trail dirdat/oo
    Also... with your pump "FFE711", you can create either a local or remote trail, that's fine. But don't use a rmtfile (or extfile). You should create a trail, either a "rmttrail" or "exttrail". The flat-file adapter will read that (binary) trail, and generate text files. Trails automatically roll over; the "extfile/rmtfile" do not (but they do have the same internal GG binary log format). (You can use 'maxfiles' to force them to roll over, but that's beside the point.)
    Also:
    - don't forget your "table" statements... or else no data will be processed!! You can wildcard tables, but not schemata.
    - there is no reason that anything would be discarded in a pump.
    - although a matter of choice, I don't see why people use absolute paths for reports and discard files. Full paths to data and def files make sense if they are on the SAN/NAS, but then I'd use symlinks from dirdat to the storage directory (on Unix/Linux).
    - both Windows and Unix can use forward "/" slashes. Makes examples platform-independent (another reason for relative paths).
    - your trails really should be much larger than 5MB for better performance (e.g., 100MB).
    - you probably should use a source-defs file, instead of a dblogin for metadata. Trail data is by its very nature historical, and using "userid...password" in the prm file inherently gets metadata from "right now". The file-writer doesn't handle DDL changes automatically.
    So you should have something more like:
    extract ffe711
    sourcedefs dirdef/vikstkFF.def
    rmthost 10.180.182.77, mgrport 7816
    rmttrail dirdat/ff, purge, megabytes 100
    table myschema.*;
    table myschema2.*;
    table ggs.*;
    For the file-writer pump:
    5. On the flat file GGSCI prompt:
    extract ffr711
    CUSEREXIT flatfilewriter.dll CUSEREXIT passthru includeupdatebefores, params dirprm\ffwriter.properties
    SOURCEDEFS dirdef/vikstkFF.def
    table myschema.*;
    table ggs.*;
    ggsci> add extract ffr711, exttrailsource ./dirdat/pp
    ggsci> start extract ffr711
    Again, use relative paths when possible (the flatfilewriter.dll is expected to be found in the GG install directory). Put the ffwriter.properties file into dirprm, just as a best practice. This file, ffwriter.properties, is where you define your output directory and output files. Again, make sure you have a "table" statement in there for each schema in your trails.
    Problem I am facing_
    I am not sure where to find the generated flat file,
    even the reports are showing there is no data at manager process
    I am expecting replicat instead of extract at Flatfile FFR711.prm
    I have done this much what to do give me some pointers.....
    The generated files are defined in the ffwriter.properties file. Search for the "rootdir" property, e.g.,
    goldengate.flatfilewriter.writers=csvwriter
    csvwriter.files.formatstring=output_%d_%010n
    csvwriter.files.data.rootdir=dirout
    csvwriter.files.control.ext=_data.control
    csvwriter.files.control.rootdir=dirout
    ...
    The main problem you have is: (1) use rmttrail, not rmtfile, and (2) don't forget the "table" statement, even in a pump.
    Also, the flat-file adapter runs in just an "extract" data pump; no "replicat" is ever used. Replicats are inherently tied to a target database, and the file-writer doesn't have any database functionality.
    Hope it helps,
    -m

  • Selecting fields for flat file input

    I've built a scenario that uses the File adapter to load a flat file into SAP and fully understand the concepts. My problem is that I have a fixed-length file I need to load which has 50+ fields per row. I only need 10 fields per row, and those fields are spread throughout the row. Using fieldFixedLengths I specify the length of each field, and it appears the fields must be consecutive. How can I specify only the fields that I need?
    For example, my input file may be as follows:
    AAAAAAABBBBBBBCCCCCCCDDDDDDEEEEEEE
    I only need AAAAAAA, CCCCCCC, and EEEEEEE. Is there a way to do this?

    Hi Robert,
    I don't know why you would want to upload all the junk data rather than importing only the relevant fields. If it were me, I would do this outside the XI box. If the file is in some fixed format, or let's say in CSV format, just open the file in Excel, remove the rows you don't want, and save it again in the same file format; then import and map inside XI. It will be simple.
    Also, if you don't want to do this, then tell us what logic you want to use to identify which rows you don't want; I assume you are either unsure, or you already know which rows/fields you want.
    Regards
    Aashish Sinha
    PS : reward points if helpful
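    For reference, a common workaround inside XI itself is to declare every column in the sender channel's file content conversion and give throwaway names to the ones you don't need, then simply ignore those fields in the mapping. A sketch for the sample row above (the recordset name Row and the field names are illustrative):
    Row.fieldFixedLengths = 7,7,7,6,7
    Row.fieldNames = fieldA,dummy1,fieldC,dummy2,fieldE
    Only fieldA, fieldC, and fieldE would then be mapped to the target structure; dummy1 and dummy2 are parsed but never used.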

  • Unable to load the flat file from client workstation

    Hi,
    I am trying to load a flat file (.CSV file) from my desktop (client workstation) and getting the following error:
    An upload from the client workstation in the background is not possible
    Message no. RSM860
    Diagnosis
    You cannot load data from the client workstation in the background.
    Procedure
    Transfer your data to the application server and load it from there.
    I received an .XLS file, converted it to a .CSV file, saved it on my desktop, and am trying to load it.
    Please help me with how to go about this.
    Thanks in advance.
    Christy.

    Hi All,
    Again, I have tried to load the flat file from the client workstation with direct loading, and I got the following errors:
    Errors : 1
    Record                                                  990: Contents '50,000' from Field /BIC/ZPLQTY_B Not Convertible in Type QUAN -> Long Tex
    I received many errors like this.
    ERROR : 2
    Error in an arithmetic operation in record 259     
    Please help me with how to load the flat file successfully.
    If I have to save the flat file on the application server, how do I do that? Please provide step-by-step instructions.
    Thanks
    Christy
