Currency and exchange rates data distribution to flat file

Hi All,
I am sending all of our BI data to flat files, but my problem is how to send the currency exchange rate data (table TCURR) to a flat file as well. Is this possible using an open hub destination, or is there another method available? Please suggest.
Regards
vasu reddy

Hi,
If I understand correctly, you want to write your data to a flat file on a daily basis.
You can use the APD (Analysis Process Designer) to do this in both BW 3.5 and BI 7.0.
Creating a data flow in the APD is quite simple, much like a process chain.
Select "Table" as the data source and "Flat file" as the data target (the file can be on the application server or on your desktop).
You will find the APD option on the RSA1 screen.
After creating the APD, you can schedule it to run daily.
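If you need more control than the APD offers, a small ABAP program can also extract TCURR to a file on the application server. Below is a minimal sketch, not a finished solution: the file path, delimiter, and field selection are assumptions you would adapt.

REPORT ztcurr_to_file.

" Extract currency exchange rates from TCURR and write them as a
" semicolon-separated file on the application server.
DATA: lt_tcurr TYPE STANDARD TABLE OF tcurr,
      ls_tcurr TYPE tcurr,
      lv_rate  TYPE c LENGTH 15,
      lv_line  TYPE string.

CONSTANTS c_file TYPE string VALUE '/usr/sap/trans/tcurr_rates.csv'.  " example path

SELECT * FROM tcurr INTO TABLE lt_tcurr.

OPEN DATASET c_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
  WRITE: / 'Could not open file', c_file.
  EXIT.
ENDIF.

LOOP AT lt_tcurr INTO ls_tcurr.
  WRITE ls_tcurr-ukurs TO lv_rate.       " exchange rate (packed) as text
  CONDENSE lv_rate NO-GAPS.
  " KURST = rate type, FCURR/TCURR = from/to currency, GDATU = date (inverted)
  CONCATENATE ls_tcurr-kurst ls_tcurr-fcurr ls_tcurr-tcurr
              ls_tcurr-gdatu lv_rate INTO lv_line SEPARATED BY ';'.
  TRANSFER lv_line TO c_file.
ENDLOOP.

CLOSE DATASET c_file.

Such a program can then be scheduled as a daily background job or in a process chain, just like the APD.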
Thanks and regards
Arun

Similar Messages

  • Which InfoCube contains information on dates, currencies, and exchange rates

    Hi Experts,
    Could you please tell me which InfoCube contains information on dates, currencies, and exchange rates?
    Any useful answer will be rewarded with suitable points.
    Thanks!
    Rohan

    Hi Rohan,
    This information is stored in table TCURR, which can be referenced when creating cubes or ODS objects. If the requirement is currency translation, I would suggest transaction RRC1.
    There is also a program that can be executed to test currency translation (transaction SE38, program RCURTEST).
    Hope that helps.
    Regards
    Mr Kapadia
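    For reference, exchange rates stored in TCURR are typically applied in ABAP through the standard function module CONVERT_TO_LOCAL_CURRENCY. A minimal sketch (the amount, currencies, and rate type 'M' are example values, not a fixed recipe):

    DATA: lv_foreign TYPE p DECIMALS 2 VALUE '100.00',
          lv_local   TYPE p DECIMALS 2.

    " Convert 100 USD to EUR at the average rate ('M') valid today;
    " the rate is read from TCURR internally.
    CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
      EXPORTING
        date             = sy-datum
        foreign_amount   = lv_foreign
        foreign_currency = 'USD'
        local_currency   = 'EUR'
        type_of_rate     = 'M'
      IMPORTING
        local_amount     = lv_local
      EXCEPTIONS
        no_rate_found    = 1
        OTHERS           = 2.
    IF sy-subrc <> 0.
      WRITE: / 'No exchange rate found in TCURR'.
    ELSE.
      WRITE: / '100.00 USD =', lv_local, 'EUR'.
    ENDIF.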

  • InfoCube containing dates, currencies, and exchange rates

    Hi Experts,
    Could you please tell me which InfoCube contains information on dates, currencies, and exchange rates?
    Any useful answer will be rewarded with suitable points.
    Thanks!
    Rohan

    Hi Rohan,
    Currencies and exchange rates are maintained in table TCURR; the exchange rate types are held in table TCURV.
    Go to SE11 to view these tables.
    Some other useful Tables are:
    RSDCUBEIOBJ   = InfoObjects per InfoCube (where-used list)
    RSDODSOATRNAV = InfoObjects in ODS (navigation attributes)
    RSDODSOIOBJ   = InfoObjects in ODS
    Assign points if it is helpful.

  • Infoobjects for currencies and exchange rates related data

    Hi Experts,
    Please tell me the standard InfoObjects for currency- and exchange-rate-related data that have been made available for third-party tools.
    Any useful answer will be rewarded with suitable points.
    Thanks!

    Rohan
    Use 0CURRENCY for currencies and
    0UNIT for units of measure.
    Arun

  • What is the best way to load and convert data from a flat file?

    Hi,
    I want to load data from a flat file, convert dates, numbers and some fields with custom logic (e.g. 0,1 into N,Y) to the correct format.
    The rows where all to_number, to_date and custom conversions succeed should go into table STG_OK. If some conversion fails (due to an illegal format in the flat file), those rows (where the conversion raises some exception) should go into table STG_ERR.
    What is the best and easiest way to achieve this?
    Thanks,
    Carsten.

    Hi,
    thanks for your answers so far!
    I gave them a thought and came up with two different alternatives:
    Alternative 1
    I load the data from the flat file into a staging table using sqlldr. I convert the data to the target format using sqlldr expressions.
    The columns of the staging table have the target format (date, number).
    The rows that cannot be loaded go into a bad file. I manually load the data from the bad file (without any conversion) into the error table.
    Alternative 2
    The columns of the staging table are all of type varchar2 regardless of the target format.
    I define data rules for all columns that require a later conversion.
    I load the data from the flat file into the staging table using external table or sqlldr without any data conversion.
    The rows that cannot be loaded go automatically into the error table.
    When I read the data from the staging table, I can safely convert it since it is already checked by the rules.
    What I dislike in alternative 1 is that I manually have to create a second file and a second mapping (ok, I can automate this using OMB*Plus).
    Further, I would prefer using expressions in the mapping for converting the data.
    What I dislike in alternative 2 is that I have to create a data rule and a conversion expression and then keep the data rule and the conversion expression in sync (in case of changes of the file format).
    I also would prefer to have the data in the staging table in the target format. Well, I might load it into a second staging table with columns having the target format. But that's another mapping and a lot of i/o.
    As far as I know I need the data quality option for using data rules, is that true?
    Is there another alternative without any of these drawbacks?
    Otherwise I think I will go for alternative 1.
    Thanks,
    Carsten.

  • Currencies and Exchange Rate Requirement Matrix

    Hello All,
    Could anybody give me some input on my query? I have a TD (technical design) for creating a BEx report. The TD contains the requirement below; please have a look:
    Currencies and Exchange Rate Requirement Matrix
    Value     Currency Type: Document | Local | Group
    Actual                            |       |   X
    Plan                              |       |
    I mean that Actual has its value in group currency. I have no idea how to set this in the report or in Query Designer. How do I set the value in Query Designer?
    Please throw some light on my questions. Your views are appreciated.
    Thanks
    surendra

    Hello SRM Experts,
    May we know of any answers or pointers to our issue/question?
    Thanks for your help.
    Regards,
    Sasikala

  • Error while uploading data from a flat file

    Hi All,
    I am trying to load data from a flat file to an ODS. The flat file contains a field, say FLAT01, of numerical type (type DEC, length 18, decimals 2). I have created an InfoObject, say ZIO10, of type CURR with currency field 0CURRENCY. In the transfer rules, I have assigned a constant to 0CURRENCY, as the flat file doesn't have any currency field.
    The problem is that the flat file has no value in the FLAT01 field for some records, in fact for most of them. When I try to load it into BW from the application server, it throws the error "Contents from field ZIO10 cannot be converted in type CURR ->longtext".
    I have debugged the transfer rules and found that the empty value is represented as '#', and when the system tries to copy it into the CURR-type field, it throws the error.
    Can someone please let me know how to solve this? Why does it take '#' for an empty value? Can't we change that?
    Any help would be highly appreciated.
    Best Regards,
    James.

    Hi SS,
    I am sorry that I was not clear in my explanation. I have created ZIO10 as a key figure of type AMOUNT with currency 0CURRENCY.
    There is no currency coming in from the flat file so I used a constant in transfer rules to fill 0CURRENCY.
    The field which is coming as '#' is FLAT01 which is assigned to ZIO10.
    Best Regards,
    James.
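    Hi James,
    One way to handle this is a transfer routine on ZIO10 that maps empty or '#' values to zero before the conversion to the amount field. A minimal sketch in BW 3.x transfer-routine style; the field names come from your post, the rest is an assumption:

    " Treat empty or '#' source values as zero so the conversion
    " to the CURR/AMOUNT key figure ZIO10 does not fail.
    DATA: lv_raw TYPE c LENGTH 18.

    lv_raw = tran_structure-flat01.
    CONDENSE lv_raw NO-GAPS.

    IF lv_raw IS INITIAL OR lv_raw = '#'.
      result = 0.
    ELSE.
      result = lv_raw.
    ENDIF.
    returncode = 0.    " 0 = record accepted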

  • BAPI to upload data from a flat file to VA01

    Hi guys,
    I have a requirement wherein I need to upload data from a flat file into VA01. Please tell me how I should go about this.
    Thanks and regards,
    Frank.

    Hi,
    I posted this code previously:
    *----------------------------------------------------------------------*
    *   INCLUDE YCL_CREATE_SALES_DOCU                                      *
    *----------------------------------------------------------------------*
    *&      Form  salesdocu
    *       This subroutine is used to create a sales order
    *      -->P_HEADER           Document header data
    *      -->P_HEADERX          Checkboxes for header data
    *      -->P_ITEM             Item data
    *      -->P_ITEMX            Item data checkboxes
    *      -->P_LT_SCHEDULES_IN  Schedule line data
    *      -->P_LT_SCHEDULES_INX Checkboxes for schedule line data
    *      -->P_PARTNER          Document partner
    *      <--P_W_VBELN          Sales document number
    DATA:
      lfs_return like line of t_return.
    FORM create_sales_document changing P_HEADER  like fs_header
                                       P_HEADERX like fs_headerx
                                       Pt_ITEM   like t_item[]
                                       Pt_ITEMX  like t_itemx[]
                                       P_LT_SCHEDULES_IN  like t_schedules_in[]
                                       P_LT_SCHEDULES_INX like t_schedules_inx[]
                                       Pt_PARTNER  like t_partner[]
                                       P_w_vbeln  like w_vbeln.
    * This PERFORM fills the data required for sales order creation
      perform sales_fill_data changing p_header
                                       p_headerx
                                       pt_item
                                       pt_itemx
                                       p_lt_schedules_in
                                       p_lt_schedules_inx
                                       pt_partner.
    * Function module to create the sales and distribution document
      perform sales_order_creation using p_header
                                         p_headerx
                                         pt_item
                                         pt_itemx
                                         p_lt_schedules_in
                                         p_lt_schedules_inx
                                         pt_partner.
      perform return_check using p_w_vbeln .
    ENDFORM.                                 " salesdocu
    *&      Form  commit_work
    *       Executes an external (BAPI) commit
    FORM commit_work .
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
        EXPORTING
          wait = c_x.
    ENDFORM.                                 " commit_work
    * Include ycl_sales_order_header: fills header data and item data
    INCLUDE ycl_sales_order_header.
    *&      Form  return_check
    *       Validates the sales order creation
    FORM return_check using pr_vbeln type vbeln.
    if pr_vbeln is initial.
        LOOP AT t_return into lfs_return .
          WRITE / lfs_return-message.
          clear lfs_return.
        ENDLOOP.                             " Loop at return
      else.
        perform commit_work.                 " External Commit
        Refresh t_return.
        fs_disp-text = text-003.
        fs_disp-number = pr_vbeln.
        append fs_disp to it_disp.
        if p_del eq c_x or p_torder eq c_x or
           p_pgi eq c_x or p_bill eq c_x.
          perform delivery_creation.         " Delivery order creation
        endif.                               " if p_del eq 'X' ...
      endif.                                 " if pr_vbeln is initial
    ENDFORM.                                 " Return_check
    *&      Form  sales_order_creation
    *       text
    *      -->P_P_HEADER            text
    *      -->P_P_HEADERX           text
    *      -->P_PT_ITEM             text
    *      -->P_PT_ITEMX            text
    *      -->P_P_LT_SCHEDULES_IN   text
    *      -->P_P_LT_SCHEDULES_INX  text
    *      -->P_PT_PARTNER          text
    FORM sales_order_creation  USING    P_P_HEADER like fs_header
                                        P_P_HEADERX like fs_headerx
                                        P_PT_ITEM like t_item[]
                                        P_PT_ITEMX like t_itemx[]
                                        P_P_LT_SCHEDULES_IN like t_schedules_in[]
                                        P_P_LT_SCHEDULES_INX like t_schedules_inx[]
                                        P_PT_PARTNER like t_partner[].
        CALL FUNCTION 'BAPI_SALESDOCU_CREATEFROMDATA1'
        EXPORTING
          sales_header_in     = p_p_header
          sales_header_inx    = p_p_headerx
        IMPORTING
          salesdocument_ex    = w_vbeln
        TABLES
          return              = t_return
          sales_items_in      = p_pt_item
          sales_items_inx     = p_pt_itemx
          sales_schedules_in  = p_p_lt_schedules_in
          sales_schedules_inx = p_p_lt_schedules_inx
          sales_partners      = p_pt_partner.
    ENDFORM.                    " sales_order_creation
    Please reward points if this was useful.
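    For the flat-file part itself, which the code above does not show, here is a minimal sketch using the standard GUI_UPLOAD function module; the file path and the line structure are assumptions you would adapt to your file layout:

    " Read a tab-delimited file from the presentation server into an
    " internal table before mapping it to the BAPI structures.
    TYPES: BEGIN OF ty_line,
             matnr TYPE c LENGTH 18,   " material
             menge TYPE c LENGTH 15,   " order quantity
           END OF ty_line.

    DATA: lt_lines TYPE STANDARD TABLE OF ty_line.

    CALL FUNCTION 'GUI_UPLOAD'
      EXPORTING
        filename            = 'C:\orders\va01_input.txt'   " example path
        filetype            = 'ASC'
        has_field_separator = 'X'
      TABLES
        data_tab            = lt_lines
      EXCEPTIONS
        file_open_error     = 1
        OTHERS              = 2.
    IF sy-subrc <> 0.
      MESSAGE 'Could not read the input file' TYPE 'E'.
    ENDIF.
    " Map lt_lines into fs_header, t_item, t_partner, etc.
    " and then call the forms above.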

  • Data loading from flat file to cube using bw3.5

    Hi Experts,
    Kindly give me the detailed steps, with screenshots, for loading data from a flat file to a cube using BW 3.5. Please.

    Hi,
    Procedure
    You are in the Data Warehousing Workbench in the DataSource tree.
           1.      Select the application components in which you want to create the DataSource and choose Create DataSource.
           2.      On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy.
    The DataSource maintenance screen appears.
           3.      Go to the General tab page.
                                a.      Enter descriptions for the DataSource (short, medium, long).
                                b.      As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
                                c.      Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed it is not generated in a typed structure but is generated with character-like fields of type CHAR only.
    Use this option if conversion during loading causes problems, for example, because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type.
    In this case, after you have activated the DataSource you can load data into the PSA and correct it there.
           4.      Go to the Extraction tab page.
                                a.      Define the delta process for the DataSource.
                                b.      Specify whether you want the DataSource to support direct access to data.
                                c.      Real-time data acquisition is not supported for data transfer from files.
                                d.      Select the adapter for the data transfer. You can load text files or binary files from your local work station or from the application server.
    Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values. In BI, you have to specify this separator character and an escape character which specifies this character as a component of the value if required. After specifying these characters, you have to use them in the file. ASCII files contain data in a specified length. The defined field length in the file must be the same as the assigned field in BI.
    Binary files contain data in the form of Bytes. A file of this type can contain any type of Byte value, including Bytes that cannot be displayed or read as text. In this case, the field values in the file have to be the same as the internal format of the assigned field in BI.
    Choose Properties if you want to display the general adapter properties.
                                e.      Select the path to the file that you want to load or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv.
    You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
                                  f.      Depending on the adapter and the file to be loaded, make further settings.
    ■       For binary files:
    Specify the character record settings for the data that you want to transfer.
    ■       Text-type files:
    Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred.
    Specify the character record settings for the data that you want to transfer.
    For ASCII files:
    If you are loading data from an ASCII file, the data is requested with a fixed data record length.
    For CSV files:
    If you are loading data from an Excel CSV file, specify the data separator and the escape character.
    Specify the separator that your file uses to divide the fields in the Data Separator field.
    If the data separator character is a part of the value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
    You chose the ; character as the data separator. However, your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BI. The complete value that you want to transfer has to be enclosed by the escape characters.
    If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45 and 12"45" is transferred as 12"45".
    In a text editor (for example, Notepad) check the data separator and the escape character currently being used in the file. These depend on the country version of the file you used.
    Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend that you use a different character as the escape character.
    If you select the Hex indicator, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two character entry for a data separator or an escape sign is always interpreted as a hexadecimal entry.
                                g.      Make the settings for the number format (thousand separator and character used to represent a decimal point), as required.
                                h.      Make the settings for currency conversion, as required.
                                  i.      Make any further settings that are dependent on your selection, as required.
           5.      Go to the Proposal tab page.
    This tab page is only relevant for CSV files. For files in different formats, define the field list on the Fields tab page.
    Here you create a proposal for the field list of the DataSource based on the sample data from your CSV file.
                                a.      Specify the number of data records that you want to load and choose Upload Sample Data.
    The data is displayed in the upper area of the tab page in the format of your file.
    The system displays the proposal for the field list in the lower area of the tab page.
                                b.      In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. All fields are selected by default.
           6.      Go to the Fields tab page.
    Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here.
                                a.      To define a field, choose Insert Row and specify a field name.
                                b.      Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BI.
                                c.      Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BI. This allows you to transfer the technical properties of the InfoObjects into the DataSource field.
    Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
                                d.      Change the data type of the field if required.
                                e.      Specify the key fields of the DataSource.
    These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
                                  f.      Specify whether lowercase is supported.
                                g.      Specify whether the source provides the data in the internal or external format.
                                h.      If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
                                  i.      If required, specify a conversion routine that converts data from an external format into an internal format.
                                  j.      Select the fields that you want to be able to set selection criteria for when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
                                k.      Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
                                  l.      Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
           7.      Check, save and activate the DataSource.
           8.      Go to the Preview tab page.
    If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview.
    This function allows you to check whether the data formats and data are correct.
    For More Info:  http://help.sap.com/saphelp_nw70/helpdata/EN/43/01ed2fe3811a77e10000000a422035/content.htm

  • Refreshing the data in a flat file

    Hi
    I am working on data integration.
    I need to fetch data from Oracle data base and then write it to a flat file.
    It is working fine now, but on the next fetch I don't want the old data to remain in the file. The data should be refreshed so that only the newly fetched data is present.
    After the data is written to the flat file can i rename the file?
    I need the format of the file to be 'File_yyyyMMDDHHmmss.txt'.
    My final question is how should I FTP this to the target?
    Please help me on this as soon as possible since this is needed in an urgent part of the delivery.

    All you ask is achievable:
    1) The IKM SQL to File has a TRUNCATE option which, if set to YES, will start from a clean file.
    2) You could rename the file after writing it, but why not just write it with that name? If you set the resource name to be a variable, (e.g. #MyProj.MyFilename), and be sure to set the variable in the package before executing your interface, you should be able to get the file you want. Otherwise, you can use the OdiFileMove tool in your package to rename the file.
    To set the value of the variable you can use a query on the database (if you are using Oracle, something like SELECT 'File_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS') FROM DUAL).
    3) ODI has ftp classes built in- you can find the doc under doc\webhelp\en\ref_jython\jyt_ex_ftp.htm
    Hope this helps

  • Error while uploading data from a flat file to the hierarchy

    Hi guys,
    after I upload data from a flat file to the hierarchy, I get the error message "Please select a valid InfoObject". I am loading data via the PSA and have activated all external characteristics, but I still get the problem. Some help on this please.
    regards
    Sri

    There is no relation between the InfoObject name in the flat file and the InfoObject name on the BW side.
    Please check the objects in BW, their lengths and types, and check whether your flat file has the same types.
    Then check the sequence of the objects in the transfer rules and activate them.
    There you go.

  • Delete data from a flat file using PL/SQL -- Please help.. urgent

    Hi All,
    We are writing data to a flat file using the Text_IO.Put_Line command. We want to delete a record from that file after the data has been written to it.
    Please let us know if this is possible.
    Thanks in advance.
    Vaishali

    There's nothing in UTL_FILE to do this, so your options are either to write a Java stored procedure to do it or to use whatever mechanism the host operating system supports for editing files.
    Alternatively, you could write a PL/SQL procedure to read each line in from the file and then write them out to a second file, discarding the line you don't want along the way.

  • Sqlplus – spool data to a flat file

    Hi,
    Does any Oracle expert here know why the sqlplus command cannot spool all the data into a flat file in one go?
    I have tried the commands below. It seems that every time I get a different file size :(
    a) sqlplus -s $dbUser/$dbPass@$dbName <<EOF|gzip -c > ${TEMP_FILE_PATH}/${extract_file_prefix}.dat.Z
    b) sqlplus -s $dbUser/$dbPass@$dbName <<EOF>> spool.log
    set feedback off
    set trims on
    set trim on
    set feedback off
    set linesize 4000
    set pagesize 0
    whenever sqlerror exit 173;
    spool ${extract_file_prefix}.dat

    For me, this is working. What exactly are you getting and what exactly are you expecting?
    (t352104@svlipari[GEN]:/lem) $ cat test.ksh
    #!/bin/ksh
    TEMP_FILE_PATH=`pwd`
    extract_file_prefix=emp
    dbUser=t352104
    dbPass=t352104
    dbName=gen_dev
    dataFile=${TEMP_FILE_PATH}/${extract_file_prefix}.dat
    sqlplus -s $dbUser/$dbPass@$dbName <<EOF > $dataFile
    set trims on
    set trim on
    set tab off
    set linesize 7000
    SET HEAD off AUTOTRACE OFF FEEDBACK off VERIFY off ECHO off SERVEROUTPUT off term off;
    whenever sqlerror exit 173;
    SELECT *
    FROM emp ;
    exit
    EOF
    (t352104@svlipari[GEN]:/lem) $ ./test.ksh
    (t352104@svlipari[GEN]:/lem) $ echo $?
    0
    (t352104@svlipari[GEN]:/lem) $ cat emp.dat
          7369 SMITH      CLERK           7902 17-DEC-80        800                    20
          7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
          7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
          7566 JONES      MANAGER         7839 02-APR-81       2975                    20
          7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
          7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
          7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
          7788 SCOTT      ANALYST         7566 09-DEC-82       3000                    20
          7839 KING       PRESIDENT            17-NOV-81       5000                    10
          7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
          7876 ADAMS      CLERK           7788 12-JAN-83       1100                    20
          7900 JAMES      CLERK           7698 03-DEC-81        950                    30
          7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
          7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
    (t352104@svlipari[GEN]:/lem) $

  • Export HRMS data to a flat file

    Hi All!
    Are there any ways of exporting employee-related data to a flat file without using a client app (PeopleCode or Integration Broker), that is, simply generating a CSV feed from the UI?

    You can schedule a query and specify the output format as text. Note that when you select View Log/Trace in Process Monitor, you will see a file with a .csv extension. However, it will open by default in Excel, and even if you select Save instead of Open it will try to change the extension to .xls. You will have to change it back to .csv.

  • Export data to a flat file

    Hi all,
    I need to export a table's data to a flat file.
    The problem is that the data is huge: about 200 million rows occupying around 60 GB of space.
    If I use SQL*Loader in Toad, it takes a huge amount of time to export.
    After a few months, I need to import the same data back into the table.
    So please help me: which is the most efficient and least time-consuming method to do this?
    I am very new to this field.
    Can someone help me with this?
    My Oracle database version is 10.2.
    Thanks in advance

    OK so first of all I would ask the following questions:
    1. Why must you export the data and then re-import it a few months later?
    2. Have you read through the documentation for SQLLDR thoroughly? I know it is like stereo instructions, but it has valuable information that will help you.
    3. Does the table the data is being re-imported into have anything attached to it, e.g. triggers, indexes, or anything that the DB must do on each record? If so, then re-read the sqlldr documentation, as you can turn all of that off during the import and rebuild indexes, etc. at your leisure.
    I ask these questions because:
    1. I would find a way for whatever happens to this data to be accomplished while it was in the DB.
    2. Pumping data over the wire is going to be slow when you are talking about that kind of volume.
    3. If you insist that the data must be dumped, massaged, and re-imported, do it on the DB server. Disk I/O is an order of magnitude faster than over-the-wire transfer.
