Data extraction / Conversion / Mapping / Migration

Hi All,
   Can anyone please explain the meaning of the terms below and how data is moved from a non-SAP system to an SAP system?
"Data extraction / Conversion / Mapping / Migration"
Kindly give me detailed step-by-step instructions on how to accomplish this. If there is any documentation, please forward it as well.
I appreciate the help in advance.
Raj

Hi Raj,
We can use LSMW, BDC, or eCATT to upload the data from the legacy system to the SAP system.
Data Extraction - Data is extracted from the legacy system into a flat file, for example an Excel (XLS) or CSV file.
Conversion - The extracted values are converted into the formats SAP expects (date formats, field lengths, and so on).
Mapping - Each extracted legacy field is mapped to the corresponding SAP field for the upload.
Migration - The mapped data is loaded from the legacy file into SAP using a tool such as LSMW, BDC, or eCATT.
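To make the four steps concrete, here is a minimal Java sketch for a single record. The legacy record, the separator and the column order are made up for illustration only; in a real project LSMW, BDC or eCATT perform the mapping and the actual load into SAP.
import java.util.LinkedHashMap;
import java.util.Map;

public class LegacyToSapSketch {
    public static void main(String[] args) {
        // 1. Extraction: one line taken from the legacy flat file (e.g. an .xls saved as CSV)
        String legacyRecord = "ACME Ltd;31.12.1999;1000";
        String[] cols = legacyRecord.split(";");

        // 2. Conversion: reformat values the way SAP expects them (date as YYYYMMDD)
        String[] dmy = cols[1].split("\\.");
        String sapDate = dmy[2] + dmy[1] + dmy[0];

        // 3. Mapping: legacy columns -> SAP fields
        Map<String, String> sapRecord = new LinkedHashMap<String, String>();
        sapRecord.put("NAME1", cols[0]);   // vendor/customer name
        sapRecord.put("BUDAT", sapDate);   // posting date
        sapRecord.put("BUKRS", cols[2]);   // company code

        // 4. Migration: a file of such converted records is what LSMW/BDC/eCATT would load into SAP
        System.out.println(sapRecord);
    }
}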
Good Luck
Om

Similar Messages

  • Conversion mapping is losing time zone data during daylight saving time

    We have a problem with conversion of Calendars to timestamp with timezone for the last hour of Daylight Saving Time (e.g. 01:00 EDT - 01:59 EDT) where it is being interpreted as Standard Time which is in reality 60 minutes later.
    We've written a JUnit test case that runs directly against TopLink to avoid any issues with WAS and its connection pooling.
    The Calendar theDateTime comes from an object called TimeEntry which is mapped to a TIMESTAMP WITH TIMEZONE field using conversion mapping with Data Type TIMESTAMPTZ (oracle.sql) and Attribute Type Calendar (java.util).
    We are using:
    Oracle TopLink - 10g Release 3 (10.1.3.0.0) (Build Patch for Bugs 5145690 and 5156075)
    Oracle9i Enterprise Edition Release 9.2.0.7.0 - 64bit Production
    Oracle JDBC driver Version: 10.2.0.1.0
    platform=>Oracle9Platform
    Execute this Java:
    SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy HH:mm z");
    TimeZone tzEasternRegion = TimeZone.getTimeZone("US/Eastern");
    Calendar theDateTime = Calendar.getInstance(tzEasternRegion);
    theDateTime.setTime(format.parse("10/29/2006 01:00 EDT"));
    Persist to the database and execute this SQL:
    SELECT the_date_time, EXTRACT(TIMEZONE_REGION FROM the_date_time), EXTRACT(TIMEZONE_ABBR FROM the_date_time), EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    FROM time_table WHERE time_table_id=1
    This provides the following results:
    THE_DATE_TIME                            TIMEZONE_REGION   TIMEZONE_ABBR   TIMEZONE_HOUR
    29-OCT-06 01.00.00.000000 AM US/EASTERN  US/Eastern        EST             -5
    The wrong time zone is in the database. It should be EDT -4. Let's test the SQL that should be generated by TopLink. It should look like the following.
    Execute this SQL:
    UPDATE time_table SET the_date_time = TO_TIMESTAMP_TZ('10/29/2006 01:00 US/Eastern','mm/dd/yyyy HH24:MI TZR') WHERE (time_table_id=1)
    SELECT the_date_time, EXTRACT(TIMEZONE_REGION FROM the_date_time), EXTRACT(TIMEZONE_ABBR FROM the_date_time), EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    FROM time_table WHERE time_table_id=1
    This provides the same results:
    THE_DATE_TIME                            TIMEZONE_REGION   TIMEZONE_ABBR   TIMEZONE_HOUR
    29-OCT-06 01.00.00.000000 AM US/EASTERN  US/Eastern        EST             -5
    Now, execute this SQL:
    UPDATE time_table SET the_date_time = TO_TIMESTAMP_TZ('10/29/2006 01:00 EDT US/Eastern','mm/dd/yyyy HH24:MI TZD TZR') WHERE (time_table_id=1)
    SELECT the_date_time, EXTRACT(TIMEZONE_REGION FROM the_date_time), EXTRACT(TIMEZONE_ABBR FROM the_date_time), EXTRACT(TIMEZONE_HOUR FROM the_date_time)
    FROM time_table WHERE time_table_id=1
    This provides better results:
    THE_DATE_TIME                            TIMEZONE_REGION   TIMEZONE_ABBR   TIMEZONE_HOUR
    29-OCT-06 01.00.00.000000 AM US/EASTERN  US/Eastern        EDT             -4
    The correct time zone is now in the database. Let's test reading this with the following Java:
    System.out.println("cal= " + theDateTime);
    System.out.println("date= " + theDateTime.getTime());
    System.out.println("millis= " + theDateTime.getTimeInMillis());
    System.out.println("zone= " + theDateTime.getTimeZone());
    This provides the following results:
    cal= java.util.GregorianCalendar[...]
    date= Sun Oct 29 01:00:00 EST 2006
    millis= 1162101600000
    zone= sun.util.calendar.ZoneInfo[id="US/Eastern",...]
    The TimeZone object is correct since we are using the US/Eastern regional time zone, but the millis are wrong which makes the time EST instead of EDT. The millis should be 1162098000000.
    The conversion from java.util.Calendar to TIMESTAMPTZ loses the actual offset when setting to a regional time zone. It can maintain this info by specifying it explicitly.
    The conversion from TIMESTAMPTZ to java.util.Calendar also loses the actual offset, even if the correct offset is in the database.
    Has anyone else encountered this conversion problem? It appears to be a conversion problem in both directions. I know that the Calendar is lenient by default and will assume Standard Time if time is entered during the repeated 1 o'clock hour at the end of Daylight Saving Time, but the Calendars we are using are explicit in their time, so this would be classified as data corruption.
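    For what it is worth, the expected instant can be verified standalone, without TopLink. The sketch below just re-runs the parse from above and shows one possible (lossy) workaround using a fixed-offset zone instead of the region; whether dropping the region id is acceptable is an assumption about the application.
    import java.text.SimpleDateFormat;
    import java.util.Calendar;
    import java.util.TimeZone;

    public class DstCheck {
        public static void main(String[] args) throws Exception {
            SimpleDateFormat format = new SimpleDateFormat("MM/dd/yyyy HH:mm z");
            Calendar theDateTime = Calendar.getInstance(TimeZone.getTimeZone("US/Eastern"));
            theDateTime.setTime(format.parse("10/29/2006 01:00 EDT"));

            // "EDT" pins the offset to -04:00, so the parsed instant itself is unambiguous:
            System.out.println(theDateTime.getTimeInMillis()); // 1162098000000, i.e. 05:00 UTC

            // Possible workaround while the region-based round trip is broken:
            // carry the value in a fixed-offset Calendar so there is nothing to re-interpret.
            Calendar fixed = Calendar.getInstance(TimeZone.getTimeZone("GMT-04:00"));
            fixed.setTimeInMillis(theDateTime.getTimeInMillis());
            SimpleDateFormat out = new SimpleDateFormat("MM/dd/yyyy HH:mm Z");
            out.setTimeZone(fixed.getTimeZone());
            System.out.println(out.format(fixed.getTime())); // 10/29/2006 01:00 -0400
        }
    }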
    Nik

    Opened an SR. Looks like there is a problem with conversion either in TopLink or in JDBC.

  • How to validate a date in message mapping

    Hi experts,
    How do I validate a date in message mapping? For example, if a date comes in as 2008/02/31, the file should not get processed. How can I achieve this in message mapping? Please help.
    Thanks & Regards,
    Reyaz Hussain

    Hi,
    There are a few simple ways to do the date validation, as follows:
    1. If you would like to handle it in XI only, then in message mapping you can verify the date with the help of a deliberately generated exception.
    For example, in mapping there is a date conversion function (something like DateTransformation) that converts the incoming date format to the required format. Give it the date format expected from the sender file.
    If the format does not match, it will raise an exception.
    You can handle this exception with [Alert notification|http://help.sap.com/saphelp_nw04/helpdata/en/2c/abb2e7ff6311d194c000a0c93033f7/frameset.htm] and even notify the sender system about it. A small Java sketch of this kind of check follows after the list below.
    2. The other solution is easier for synchronous communication with SAP: if you are passing the file data to SAP, you can use the function modules below to verify the date format in the receiver RFC/BAPI or in the inbound IDoc program. If sy-subrc is not 0, do not process further.
    CONVERT_DATE_FORMAT
    ISU_DATE_FORMAT_CHECK
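    As a rough sketch of the check behind option 1 (plain Java here; in XI this logic would sit in a message-mapping user-defined function, and the expected source format yyyy/MM/dd is an assumption):
    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    public class DateCheck {
        // Returns true only for real calendar dates, so 2008/02/31 is rejected.
        public static boolean isValidDate(String value) {
            SimpleDateFormat fmt = new SimpleDateFormat("yyyy/MM/dd");
            fmt.setLenient(false); // otherwise 2008/02/31 silently rolls over to 2008/03/02
            try {
                fmt.parse(value);
                return true;
            } catch (ParseException e) {
                return false; // in the real mapping: raise the exception / trigger the alert here
            }
        }

        public static void main(String[] args) {
            System.out.println(isValidDate("2008/02/31")); // false -> do not process the file
            System.out.println(isValidDate("2008/02/29")); // true  (leap year)
        }
    }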
    Thanks
    Swarup

  • Data Extraction template

    Hi,
    In my current project there is a requirement for data migration from a legacy system. Can anyone please help me by providing the data extraction template for vendor and customer open line items, G/L balances and the bank directory?
       Your help in this regard is highly appreciated
    Thanks
    Rajesh. R

    Hi Rajesh,
    When you extract the data from the legacy system, the points you should keep in mind are:
    1. How the organisational structure in the legacy system is mapped to the SAP system, because the data upload into SAP should also happen in a way that supports the reports expected from SAP.
    There is no standard layout used for extraction; you need to make sure you extract all the information from the legacy system that needs to be uploaded into SAP.
    As an example, fields you may include in the extraction layout (see also the sketch below) are:
    Vendor account, Document date, Posting date, Document type, Company code, Amount in document currency, Amount in local currency, etc.
    2. Please keep in mind that some G/L accounts are managed in SAP on an open-item basis. For those, your extraction should ideally also happen at the level of individual transactions.
    3. Certain G/L accounts, like bank G/L accounts, are maintained in foreign currency. In such cases you need to extract the balance in the foreign currency as well.
    My suggestion is to think the process through first and then go ahead with the extraction.
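    Purely as an illustration of the layout mentioned above, one record of a vendor open-item extract could carry these fields (the field selection and the names are assumptions; the real layout is project-specific):
    import java.math.BigDecimal;

    // One line of a hypothetical vendor open-item extraction file.
    public class VendorOpenItemExtract {
        String vendorAccount;            // LIFNR
        String documentDate;             // BLDAT, YYYYMMDD
        String postingDate;              // BUDAT, YYYYMMDD
        String documentType;             // BLART
        String companyCode;              // BUKRS
        String documentCurrency;         // WAERS
        BigDecimal amountDocCurrency;    // WRBTR, amount in document currency
        BigDecimal amountLocalCurrency;  // DMBTR, amount in local currency
    }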
    Hope this helps
    Regards
    Paul

  • Invalid data type conversions

    What are the invalid data type conversions in ABAP?
    Moderator message: please search for available information/documentation.
    Edited by: Thomas Zloch on Mar 10, 2012 6:01 PM

    No Amanda,
    The values that I see in the message monitor are also the ones that come to the XSLT programs as input.
    I investigated our problem a little further myself:
    1) XI always converts messages into XML format - this everybody knows.
    2) In XI documentation it is declared that XI uses ISO 8601 for DATE type formatting. That's why you see dates in format YYYY-MM-DD in XML data in message monitoring.
    3) The appearance of those decimal values is not that obvious to me. If XI uses an ABAP transformation for the message content, it then applies the rules for converting ABAP data types into XML. For this there is an SAP document, 'ABAP - XML mapping', from TechEd 2004, which seems to describe how ABAP data types are handled by the XML transformation.
    Anyway, we currently need to adjust decimal values in the XSLT mapping programs inside XI (a small sketch follows the two rules below):
    a) add a leading 0 integer if the source value < 1
    RFC returns 0.123 -> XI converts to XML '.123' -> XSLT mapper should return '0.123' -> SOAP response returns '0.123'
    b) add the decimal point and trailing zeros
    RFC returns 0.000 -> XI converts to XML '0' -> XSLT mapper should return '0.000' -> SOAP response returns '0.000'
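    Illustrative only: the real fix sits in the XSLT mapping programs, but the normalisation rule itself is easy to show in Java:
    import java.math.BigDecimal;
    import java.text.DecimalFormat;
    import java.text.DecimalFormatSymbols;
    import java.util.Locale;

    public class DecimalNormalize {
        public static void main(String[] args) {
            // Always three decimals, with a leading zero and a dot as separator.
            DecimalFormat threePlaces = new DecimalFormat("0.000", new DecimalFormatSymbols(Locale.US));
            System.out.println(threePlaces.format(new BigDecimal(".123"))); // "0.123"
            System.out.println(threePlaces.format(new BigDecimal("0")));    // "0.000"
        }
    }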
    With dates, the problem is really about documenting the mapping rules when writing interface specifications. If you wrote an EAI-solution-independent conversion rule (RFC/date -> SOAP/char) for a date field, it could read: YYYYMMDD -> dd.mm.yyyy. This works semantically between these systems but would not work for the XI developer, because he gets YYYY-MM-DD from the RFC.
    Additionally, the decimal type conversion requirements in the XSLT mapping programs probably only apply to XI and are therefore not reusable in other EAI environments.
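    The date rule from the specification example can be sketched the same way (again illustrative only; the sample value is made up):
    import java.text.SimpleDateFormat;

    public class DateReformat {
        public static void main(String[] args) throws Exception {
            // What the XI developer actually receives for the RFC date field:
            String fromXi = "2006-10-29"; // ISO 8601, not YYYYMMDD
            java.util.Date d = new SimpleDateFormat("yyyy-MM-dd").parse(fromXi);
            System.out.println(new SimpleDateFormat("dd.MM.yyyy").format(d)); // 29.10.2006
        }
    }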
    I would be glad if somebody would still have further comments for this data type conversion issue.
    br: Kimmo

  • Terms of Payment - T052 data extraction

    Hello Gurus,
    Can you help me with something? I am looking for a standard way to extract the Terms of Payment table (T052). Is there a standard DataSource, or can I only get this data from the R/3 side into BW using a generic extractor?
    thx
    laszlo

    Hi,
    I don't think that there is any standard datasource which provides the information on Terms of Payment.
    We do have the DataSource 0CUST_COMPC_ATTR, which gives the mapping between a customer and the terms of payment, but I don't think this serves your purpose.
    You would have to create a generic DataSource which fetches this detail.
    Edited by: Rahul K Rai on Aug 25, 2010 3:37 PM

  • SPM data extraction question: invoice data

    The documentation on data extraction (Master Data for Spend Performance Management) specifies that invoice transactions are extracted from the table BSEG (General Ledger). On the project I am currently working on, the SAP ERP team is quite worried about running queries on BSEG as it is massive.
    However, the extract files are called BSIK and BSAK, which seems to suggest that the invoices are in reality extracted from those accounts payable tables.
    Can someone clarify the tables really used, and if it's the BSIK/BSAK tables what fields are mapped?

    Hi Jan,
    A few additional mitigation thoughts which may help along the way, as the same concerns came up during our project.
    1) Sandbox stress testing
    If available, take advantage of an ECC sandbox environment for extractor prototyping and performance impact analysis. BSEG can be huge (it contains all financial movements), so e.g. the BI folks typically do not fancy a re-init load for the reasons outlined above. Ask Basis to copy a full year of all relevant transactional data (normally FI & PO data) onto the sandbox and then run the SPM extractors for a full-year extraction to get an idea of the system impact of the extraction.
    Even though system sizing and parameters may differ compared to your production box, you should still get a reasonable idea of and direction on the system impact.
    2) As a second step you may then consider breaking the data extraction (Init/Full mode for your project) down into 12 monthly extracts for the full year (this gives you 12 files from which you initialise your SPM system), with significantly less system impact and more control (e.g. it can be scheduled overnight).
    3) Business scenario
    You may consider using the vendor-related movements in BSAK/BSIK instead of the massive BSEG cluster table as the starting tables for the extraction process (and fetch/look up BSEG details only on a need basis); the index advantages were outlined above already.
    With this approach we managed to extract invoice data with reasonable source-system impact.
    Rgrds,
    Markus

  • Data Extraction  or IDOC to flat file

    hi,
    I have a project to create a flat file from SAP for an external legacy system.
    Which approach should I take: simple data extraction, or IDoc to flat file?
    There are 3 requirements:
    1. the first time, extract all data.
    2. on subsequent runs, extract only changed and new records.
    3. if a record is deleted in the SAP table, mark it as deleted in the flat file.
    What approach should I take if I use data extraction?
    Thanks.

    Reading your question, my first thought would be to look at where the data is going.
    What are the data requirements of the legacy system? IDocs can speed up the development related to pushing the data out of SAP. Using ALE and change pointers you can automatically pass out the delta with a limited amount of development.
    However, the receiving system then needs to parse the IDoc data. Depending on the IDoc you are working with, this can be a challenge, especially if the legacy developer doesn't understand IDocs.
    Sometimes it's easier to collect and write the data from SAP using "simple data extraction". The data is then more readily organised into a format the receiving system is expecting.
    You can also pass the IDoc to a middleware mapping application, if one is available, and do the SAP-to-legacy mapping there.
    Cheers

  • Data extraction from Siebel to BW - use BC or not

    Hello
    I am doing data extraction from Siebel to BW. I can map just 15% of the Siebel fields to SAP Business Content DataSources. Would you still recommend using Business Content, or should I forget it and create a whole bunch of new InfoObjects and custom ODS objects/cubes?

    Hi,
    If the data is coming from outside SAP, then select the InfoObjects carefully, because SAP-provided InfoObjects are more meaningful and integrate better with other InfoObjects (compounding, navigational attributes and so on). If you create any custom objects, you have to take care of all of them in the overall design and architecture.
    I prefer to choose existing Business Content InfoObjects first and create new objects only if none are available. But this also requires a lot of functional knowledge.

  • Data Extraction from AR Aging Tables to Access

    Hi
    I used to work on developing reports, but I am new to extracting data from the AR aging tables into Access, from where the data is uploaded to SAP. After the mapping, the data is loaded into SAP. Can anybody help me resolve this issue? I really appreciate your help.

  • Data Extraction in Open Hub Destination using Process Chain

    Hi
    I want to extract data into an Open Hub Destination (database table) from a DataStore object (ODS) through a process chain.
    When I tried to create the process chain, I found only one relevant option under the process types - Data Export into External System. However, it asks for an InfoSpoke instead of an Open Hub Destination.

    Michael is correct and below is the rationale...
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/58e1cdbed430d9e10000000a11466f/content.htm
    Integration
    You can use the data transfer process to update data to the open hub destination. The data is transformed in this process. Not all rule types are available in the transformation for an open hub destination: formulas, reading master data, time conversion, currency translation and unit conversion are not available.
    Hope it Helps
    Chetan
    @CP..

  • Where are the Data Extract Views

    Hi,
    can anyone tell me where I would find the Oracle data extract views? I created several data extract views and I need to query the data in those views.
    I thought they would be in the Study Account but could not find the views there.
    Kindest regards
    Dennis

    Sorry, I just found out why I did not see my views: there was an error in the mapping table, which resulted in an error when creating the view, and therefore it did not show up.
    All my fault ...

  • MDMGX - After Data Extraction

    Hi,
    I am new to the MDM generic extraction concept and understand the process up to the generation of the XSD and XML files. Below are the doubts in my mind; please help me clear them up.
    1. What is the purpose of the XSD file generation? Is it used as the source template for the MDM Import Manager?
    2. What is the use of the Timeout option in 'Define Repositories and FTP Server Details'?
    3. How do I import the multiple generated XML files into the MDM server via Import Manager? For example:
    1. Repository Fields :
    Product ID -- UNIQUE Key FIELD
    Product Desc
    Country
    Country Description
    ISO Code
    Field1
    Field2
    2. Extracted XML files from MDMGX will be
    File1 --> Product ID & Desc
    File2 ---> Country & Country Desc
    Please take the above example, or any other valid example, and explain.
    Thanks in advance.

    Hi Rakesh,
    SAP has delivered standard extraction for reference data and for master data.
    The transaction MDMGX is for reference data and is used to load the sub-tables of MDM.
    The MDM business content contains the standard ports and maps which are required for reference data.
    The thread below explains the procedure for configuring MDMGX:
    Extract Data using MDMGX
    There is a sequence to extracting the data. You can load only the relevant sub-tables used in your repository, using the selection criteria.
    No XI/PI is required, as you can configure FTP, or you can download the files to your desktop and load them manually.
    Master Data Extraction
    The T-Code MDM_CLNT_EXTR is used for the Master Data Extraction.
    A distribution model is required for this, and configuration in PI is also required.
    Follow the below link for more details
    MDM_CLNT_EXTR
    Regards,
    Antony

  • BODS 3.1: SAP R/3 data extraction - what is the difference between the 2 dataflows?

    Hi.
    Can anyone advise what the difference is between the following two dataflows for extracting data from SAP R/3?
    1) DF1 >> SAP R/3 dataflow (R/3 table - query transformation - .dat file) >> query transformation >> target
    This ABAP dataflow generates an ABAP program and a .dat file.
    We can also upload this program and run jobs with the 'execute preloaded' option on the datastore.
    This works fine.
    2) We can also pull the SAP R/3 table directly:
    DF2 >> SAP R/3 table (this has a red arrow, like an OHD) >> query transformation >> target
    This also works fine, and we are able to see the data directly in Oracle.
    This can also be scheduled as a job.
    But I am unable to understand the purpose of the different types of data extraction flows:
    When should which type of flow be used for data extraction?
    What are the advantages/disadvantages of the two dataflows?
    What we do not understand is this: if we can pull data from the R/3 table directly through a query transformation into the target table, why use the approach of creating an R/3 dataflow, doing a query transformation again, and then populating the target database?
    There might be practical reasons for using these two different types of flows for the data extraction, which I would like to understand. Can anyone advise, please?
    Many thanks
    indu
    Edited by: Indumathy Narayanan on Aug 22, 2011 3:25 PM

    Hi Jeff.
    Greetings. And many thanks for your response.
    Generally we pull the entire SAP R/3 table through a query transformation into Oracle.
    For this we use the R/3 dataflow and the ABAP program, which we upload to the R/3 system
    so as to be able to use the 'execute preloaded' option and run the jobs.
    Since we do not have any control over our R/3 servers, nor anyone for ABAP programming,
    we do not do anything at the SAP R/3 level.
    I was doing trial-and-error testing on our workflows for our new requirement:
    WF 1, which has some 15 R/3 tables.
    For each table we have created a separate dataflow.
    In some of those dataflows, where the SAP tables had a lot of rows, I decided to pull them directly,
    bypassing the ABAP flow.
    The entire workflow and data extraction still runs fine.
    In fact I tried creating a new sample dataflow and tested it
    using direct download and also 'execute preloaded'.
    I did not see any major difference in the time taken for data extraction,
    because in any case we pull the entire table, then choose whatever we want to bring into Oracle through a view for our BO reporting, or aggregate it and then bring the data in as a table for universe consumption.
    Actually, I was looking at other options to avoid this ABAP generation and the R/3 dataflow, because we are having problems in our dev and QA environments (delimiter errors), whereas in production it works fine. The production environment is an old setup of BODS 3.1; QA and dev are relatively new BODS environments, and those are the ones having this delimiter error.
    I did not understand how to resolve it as per this post: https://cw.sdn.sap.com/cw/ideas/2596
    While trying to resolve this problem, I ended up with the option of pulling the R/3 table directly, without using the ABAP dataflow, just by trial and error with each and every drag-and-drop option, because we urgently had to do a POC and deliver the data for the entire E-Recruiting module of SAP.
    I don't know whether I can use this direct pulling of data for the new job which I have created,
    which has 2 workflows with 15 dataflows in each workflow,
    and push this job into production.
    I also don't know whether I can bypass this ABAP flow and pull R/3 data directly in all dataflows in the future, for any of our SAP R/3 data extraction requirements. The technical difference between the two flows is not clear to us, and being new to the whole ETL area, I just wanted to know the pros and cons of this particular kind of data extraction.
    As advised, I shall check the schedules for a week, and then we shall probably move it into production.
    Thanks again.
    Kind Regards
    Indu
    Edited by: Indumathy Narayanan on Aug 22, 2011 7:02 PM

  • Open data extraction orders -  Applying Support Packs

    Dear All,
    I have done the IDES 4.6C SR2 installation.
    While applying the support packages, I get a message in the CHECK_REQUIREMENTS phase saying:
    Open data extraction orders
    There are still open data extraction orders in the system
    Process these before the start of the object import, because changes to ABAP Dictionary structures could lead to data extraction orders no longer being readable after the import, causing their processing to terminate.
    For more details about this problem, see Note 328181.
    Go to the Customizing cockpit for data extraction and start the processing of all open extraction orders.
    I have checked the Note.
    But this is something I am facing for the first time.
    Any suggestions?
    Rgds,
    NK

    The exact message is :
    Phase CHECK_REQUIREMENTS: Explanation of the Errors
    Open Data Extraction Requests
    The system has found a number of open data extraction requests. These should be processed before starting the object import process, as changes to DDIC structures could prevent data extraction requests from being read after the import, thus causing them to terminate. You can find more information about this problem in SAP Note 328181.
    Call the Customizing Cockpit data extraction transaction and process all open extraction requests.
