Data Extraction Approach

Hi,
I have two BW systems, BW01 and BW02, and a source system SS01.
BW01 is connected to source system SS01 and has been in operation for some years.
Now I need the same SS01 data (Logistics, Financials, etc.) to be modelled in BW02, which has its own list of source systems.
Two approaches can be used:
1. Connecting BW01 to BW02
2. Connecting BW02 to source system SS01
What are the advantages and disadvantages of each approach?
PS: I know it would be better to connect to the BW system rather than the source system, but I need all the pros and cons of both approaches.
Thanks and Regards,
MuraliManohar

Hi,
BW2 connected to source system SS01:
You would have to set up all the DataSources in the source system.
The entire data flow has to be maintained in BW2, from the acquisition layer through to the reporting layer.
In addition to setting up these DataSources, the master data also has to be extracted to BW2 from the source system.
When debugging, you can directly compare the values with those in R/3.
BW2 connected to BW1:
Readily available, processed data.
However, recent data in BW2 would be available only after the data has been loaded into BW1.
Also, in case of any issues, you would now have to debug through three layers: BW2, the in-between BW1, and then the source system. So analysis of an issue becomes more time-consuming.
Also, any issues with data in BW1 would be replicated in BW2, which means any discrepancies in BW1 (delta misses, doubled data) would be reflected in BW2, and hence resolution of the issue would depend on data correction in BW1.
All said and done, the solution to choose should be decided by the scope of the requirement. If the requirement consists of getting all the data from the Logistics Cockpit, then connecting to the source system is recommended. However, if you need to manipulate and work with smaller sets of data, then the data can be fetched from BW1.
The higher the volume of data, the more complex the situation becomes.
Best Regards,
Rahul

Similar Messages

  • Reconciliation and data verification approaches for data in SAP R/3 and BW

    Hi,
    Can anybody suggest the different reconciliation and data verification approaches available to ensure that BW and R/3 source data are in sync?
    Thanks in advance.
    Regards,
    Nisha

    Hi,
    What you can do is go to R/3 transaction RSA3 and run the extractor; it gives you the number of records extracted. Then go to the BW Monitor to check the number of records in the PSA.
    If the two are the same, there you go.
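    As a quick illustration of that count check (the numbers below are made up; the counts come from RSA3 and the BW load monitor, not from any API), sketched in Python:

    ```python
    # Hypothetical record counts gathered manually from RSA3 (source side)
    # and from the PSA via the BW Monitor (target side).
    rsa3_counts = {"0FI_GL_4": 120345, "2LIS_02_ITM": 98770}
    psa_counts = {"0FI_GL_4": 120345, "2LIS_02_ITM": 98765}

    for ds, extracted in rsa3_counts.items():
        loaded = psa_counts.get(ds, 0)
        status = "OK" if extracted == loaded else f"MISMATCH ({extracted - loaded:+d})"
        print(f"{ds:<12} R/3={extracted:>8} PSA={loaded:>8} {status}")
    ```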
    There is a How-To document on the SAP Service Marketplace, "Reconcile Data Between SAP Source Systems and SAP BW", which is very helpful.
    There is another How-To document, "Validate InfoCube Data by Comparing it with PSA Data", which is also good.
    Hope it helps.
    Hari Immadi
    http://immadi.com
    SEM BW Analyst

  • Data migration approach for Scheduling Agreements

    Gurus,
    Can anyone provide guidance on the data migration approach for scheduling agreements? How can we migrate the open delivery schedules and the respective cumulative quantities? The document type being used is "LZ".
    Can correction deliveries (document type "LFKO") be used to update the initial cumulative quantities?
    Any help in this regard is highly appreciated.
    Regards,
    Gajendra

    Hi Zenith
    You might find useful information here: IS-U data export (extraction) for EMIGALL.
    I've done numerous IS-U migrations in the last 10+ years, but never one from IS-U to IS-U.
    One main question to start with: are the two systems on the same release level?
    If they are not, and it would be too much effort to get them onto the same release level, I would think creating your own extract programs and using the Migration Workbench (EMIGALL) is the way to go.
    If they are on the same level, there might be other possible ways to do it, depending on the differences between the two systems (mainly around customising). Assuming there are significant differences (otherwise why would you bother migrating?), there is a high probability that using EMIGALL is the way to go as well.
    Yep
    Jürgen

  • Data Extraction or IDoc to flat file

    Hi,
    I have a project to create a flat file from SAP for an external legacy system.
    What approach should I take: simple data extraction, or IDoc to flat file?
    There are three requirements:
    1. On the first run, extract all data.
    2. On subsequent runs, extract only changed and new records.
    3. If a record is deleted in the SAP table, mark it as deleted in the flat file.
    What approach should I take if I use data extraction?
    Thanks.

    Reading your question, my first thought would be to look at where the data is going.
    What are the data requirements of the legacy system? IDocs can speed up the development related to pushing the data out of SAP. Using ALE and change pointers, you can automatically pass out the delta with a limited amount of development.
    However, the receiving system then needs to parse the IDoc data. Depending on the IDoc you are working with, this can be a challenge, especially if the legacy developer doesn't know IDocs.
    Sometimes it's easier to collect and write the data from SAP using "simple data extraction". The data is more readily organized into a format the receiving system is expecting.
    You can also pass the IDoc to a middleware mapping application, if one is available, and do the SAP-to-legacy mapping there.
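    If you go the "simple data extraction" route, the delta and deletion handling (requirements 1-3 above) can be done with a snapshot compare on the extract side. A minimal sketch in Python, with the state file, record layout and delimiter purely hypothetical:

    ```python
    import csv
    import json
    import os

    SNAPSHOT = "last_run.json"  # hypothetical state file from the previous run

    def extract(current_rows, outfile):
        """current_rows: {key(str): record_dict} pulled from the SAP table.
        The first run writes everything; later runs write only new ("N"),
        changed ("C") and deleted ("D") records."""
        previous = {}
        if os.path.exists(SNAPSHOT):
            with open(SNAPSHOT) as f:
                previous = json.load(f)
        with open(outfile, "w", newline="") as f:
            writer = csv.writer(f, delimiter="|")
            for key, rec in current_rows.items():
                if key not in previous:
                    writer.writerow([key, *rec.values(), "N"])   # new record
                elif rec != previous[key]:
                    writer.writerow([key, *rec.values(), "C"])   # changed record
            for key in previous.keys() - current_rows.keys():
                writer.writerow([key, "D"])                      # deleted record
        with open(SNAPSHOT, "w") as f:
            json.dump(current_rows, f)
    ```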
    Cheers

  • Data extraction from an array of data

    I want to extract a subset from an array of data. I have two columns of data to work with: potential energy (expressed in terms of kT) and relative separation. The data typically results in a parabola-type shape, with the potential energy on the y-axis and the separation on the x-axis. The potential energy starts at high kT at small separation, decreases to zero as the separation increases, and then starts to increase again once it is past zero kT. The issue I have is that I need to disregard any data greater than 6 kT. Thus I need to extract from the array the data (both the potential energy data and the corresponding separation data) that is 6 kT or below. The problem I am having is working out how to approach the extraction of this data, and I would love some suggestions on how to go about it.
    I initially thought of using a MathScript node for this, but I am not sure how I would code it, if it is indeed possible. Another option I considered was to use one of the array VIs to perform the extraction, as I thought I could use them to find the elements where the kT is higher than 6 and the corresponding index elements, which I could then use to extract the relevant portions of the array. But I don't see an easy way to achieve this.
    The VI I write for this will actually be a subVI of another VI I have written (named 'non-linear fit to PE data mod.vi', which also has a couple of other subVIs). I have attached this VI and a relevant data file (as an example of the data I will be getting and needing to analyse) in a zip file. I haven't included an example of the subVI I need to write, because I am looking for pointers on how to do this first. In 'non-linear fit to PE data mod.vi' I have included a text box at the location where this subVI will go.
    Any help here would be appreciated.
    Thanks
    Attachments:
    data extraction help needed.zip ‏31 KB

    Try the attached VI. As per my understanding, you want to remove the data in the array which is greater than the specified value, am I right?
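    In case it helps to see the logic outside LabVIEW, here is a rough Python sketch of the same threshold filter (array names are hypothetical); in the VI this corresponds to comparing each element against the limit and building the two output arrays conditionally:

    ```python
    def filter_by_energy(separation, energy, threshold=6.0):
        """Keep only the (separation, energy) pairs with energy <= threshold kT."""
        kept = [(s, e) for s, e in zip(separation, energy) if e <= threshold]
        # unzip back into two parallel lists
        return [s for s, _ in kept], [e for _, e in kept]

    sep_out, en_out = filter_by_energy([0.1, 0.2, 0.3], [12.5, 5.8, 0.4])
    print(sep_out, en_out)  # -> [0.2, 0.3] [5.8, 0.4]
    ```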
    Balaji PK (CLA)
    Ever tried. Ever failed. No matter. Try again. Fail again. Fail better
    Don't forget Kudos for Good Answers, and Mark a solution if your problem is solved.
    Attachments:
    Remove Data.vi ‏8 KB

  • Data Extract Design

    Hi All,
    I have a requirement where I have to extract data from various different tables -- only particular columns.
    The requirements are the same across different databases, hence I thought to have a single generic approach and reuse the same code to perform the extract and create an ASCII file.
    Below is the typical scenario I want to achieve; I need your expert inputs to start off:
    a) Define the required columns -- this should be configurable, i.e., columns can be added or removed in future.
    b) Extract the column names from the database for those defined in step a) above.
    c) Extract the data from the relevant tables/columns for various conditions, based on steps a) and b) above.
    d) Create an ASCII file for all the data extracted.
    I'm unsure if there is anything wrong with this; please suggest the best approach.
    Regs,
    R

    user10177353 wrote:
    I'm unsure if there is anything wrong with this; please suggest the best approach.
    The first thing to bear in mind is that developing a generic, dynamic solution is considerably more complicated than writing a set of extract statements. So you need to be sure that the effort you're about to expend will save you more time than writing a script and copying/editing it for subsequent re-use.
    You'll probably need three tables:
    1. Extracts - one record per extract definition (perhaps including info such as target file name)
    2. Extract tables - the tables for each extract
    3. Extract columns - one record for each extracted column
    I'm writing this as though you'll be extracting more than one table per run.
    The writing to file is the trickiest bit. Choose a good separator. Remember that although we call them CSV files, commas actually make a remarkably poor choice of separator, as way too much data contains them. Go for something really unlikely, ideally a multi-character separator like ||¬.
    Also remember text files only take strings, so you need to convert your data to text. Use the data dictionary ALL_TAB_COLUMNS view to get the metadata for the extracted columns, and apply explicit format masks to date and numeric columns. You may want to allow date columns to have masks which include or exclude the time element.
    Consider what you want to do with complex data types (LOBs, UDTs, etc).
    Finally, you need to address the problem of the extract file's location. PL/SQL has a lot of utilities to wrangle files but they only work on the server side. So if you want to write to a local drive you'll need to use SPOOL.
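    Pulling those pieces together, here is a minimal sketch of the config-driven extract (in Python rather than PL/SQL, purely for brevity; the config table and column names are hypothetical, and conn is assumed to be a DB-API connection such as cx_Oracle):

    ```python
    import datetime

    SEP = "||¬"  # multi-character separator, as suggested above

    def to_text(value):
        """Explicit conversion so every column value becomes a string."""
        if value is None:
            return ""
        if isinstance(value, datetime.datetime):
            return value.strftime("%Y-%m-%d %H:%M:%S")  # explicit date mask
        return str(value)

    def run_extract(conn, extract_id):
        cur = conn.cursor()
        # read the column config (hypothetical EXTRACT_COLUMNS table)
        cur.execute(
            "SELECT table_name, column_name FROM extract_columns "
            "WHERE extract_id = :1 ORDER BY table_name, position",
            [extract_id],
        )
        cols_by_table = {}
        for table_name, column_name in cur.fetchall():
            cols_by_table.setdefault(table_name, []).append(column_name)
        # one output file per extracted table
        for table_name, cols in cols_by_table.items():
            cur.execute(f"SELECT {', '.join(cols)} FROM {table_name}")
            with open(f"{extract_id}_{table_name}.txt", "w") as f:
                for row in cur:
                    f.write(SEP.join(to_text(v) for v in row) + "\n")
    ```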
    One last thought: how will you import the data? It would probably be a good idea to use this mechanism to generate DDL for a matching external table.
    Cheers, APC
    Edited by: APC on May 4, 2012 1:08 PM

  • BODS 3.1: SAP R/3 data extraction - what is the difference between the 2 dataflows?

    Hi.
    Can anyone advise on the difference between the two data flows for extracting data from SAP R/3?
    1) DF1 >> SAP R/3 (R/3 table - query transformation - .dat file) >> query transformation >> target
    This ABAP data flow generates an ABAP program and a .dat file.
    We can also upload this program and run jobs with the "execute preloaded" option on the datastore.
    This works fine.
    2) We can also pull the SAP R/3 table directly:
    DF2 >> SAP R/3 table (this has a red arrow, like in OHD) >> query transformation >> target
    This also works fine, and we are able to see the data directly in Oracle.
    It can also be scheduled as a job.
    But I am unable to understand the purpose of the different types of data extraction flows.
    When should which type of flow be used for data extraction?
    What are the advantages/disadvantages of the two data flows?
    What we are not understanding is this: if we can directly pull data from the R/3 table through a query transformation into the target table, why use the flow of creating an R/3 data flow, doing a query transformation again, and then populating the target database?
    There might be some practical reasons for using these two different types of flows for data extraction, which I would like to understand. Can anyone advise, please?
    Many thanks
    indu
    Edited by: Indumathy Narayanan on Aug 22, 2011 3:25 PM

    Hi Jeff.
    Greetings. And many thanks for your response.
    Generally we pull the entire SAP R/3 table through a query transformation into Oracle.
    For this we use an R/3 data flow and the generated ABAP program, which we upload to the R/3 system so as to be able to use the "execute preloaded" option and run the jobs.
    Since we have no control over our R/3 servers, and no one for ABAP programming, we do not do anything at the SAP R/3 level.
    I was doing trial-and-error testing on our workflows for our new requirement:
    WF 1, which has some 15 R/3 tables.
    For each table we have created a separate data flow.
    Finally, in some of the data flows where the SAP tables had a lot of rows, I decided to pull the data directly, bypassing the ABAP flow.
    The entire workflow and data extraction still runs OK.
    In fact, I tried creating a new sample data flow and tested it,
    using both direct download and "execute preloaded".
    I did not see any major difference in the time taken for data extraction,
    because in any case we pull the entire table, then choose whatever we want to bring into Oracle through a view for our BO reporting (or aggregate it), and then bring the data in as a table for Universe consumption.
    Actually, I was looking at other options to avoid this ABAP generation and the R/3 data flow, because we are having problems in our dev and QA environments, which give delimiter errors, whereas production works fine. Production is an old setup of BODS 3.1; QA and dev are relatively new BODS environments, and it is these that have the delimiter error.
    I did not understand how to resolve it as per this post: https://cw.sdn.sap.com/cw/ideas/2596
    Trying to resolve this problem, I ended up with the option of pulling the R/3 table directly, without the ABAP workflow, just by trial and error with each and every drag-and-drop option, because we urgently had to do a POC and deliver the data for the entire e-recruiting module of SAP.
    I don't know whether I can use this direct pulling of data for the new job which I have created,
    which has 2 workflows with 15 data flows in each workflow,
    and push this job into production,
    and also whether I can bypass the ABAP flow and pull R/3 data directly in all the data flows in the future, for any of our SAP R/3 data extraction requirements. This technical understanding of the difference between the 2 flows is not clear to us. Being new to the whole of ETL, I just wanted to know the pros and cons of this particular kind of data extraction.
    As advised, I shall check the schedules for a week, and then we shall probably move it into production.
    Thanks again.
    Kind Regards
    Indu
    Edited by: Indumathy Narayanan on Aug 22, 2011 7:02 PM

  • Confused on PR & PO data migration approach (new to MM module)

    Hi All,
    I'm pretty confused about the PO data migration approach when it comes to POs that have GR, partial GR, or GR & IR. I'm hoping someone can enlighten me. I understand that we typically don't migrate a PO when it has GR & IR; the FI team will usually bring it over to the new system as an open vendor item in AP. How about PRs or POs which have gone through a full or partial release strategy? What is the follow-up process? I have created a criteria table below. Could someone point me in the right direction? Thanks in advance.
    PR
    Criteria                 Data migration required   Notes
    Open and Released        Y
    Open and not Released    Y
    Flag for Deletion        N
    Flag for Block           Y

    PO
    Criteria                 Data migration required   Notes
    Open and Released        Y
    Open and not Released    Y
    GR but no IR
    GR & IR                  N                         AP will bring over as open item
    Flag for Deletion        N
    Flag for Block           Y
    Partial                  Y                         For partial GR, recreate the PO only with the missing GR quantity
    Regards,
    John

    Hi John,
    The approach that I followed recently was to consider the PO as the base document and convert the other documents based on the PO's status. This means you first need to see whether the PO is to be converted or not; then you can proceed to convert the related documents like the PR, agreements, info records, source lists, etc.
    Also, the open quantity of the PO should be considered for both material and service line items.
    Once a GR/SES is created, it is updated in the PO history table EKBE with its related transaction/event type, i.e. EKBE-VGABE = 1 for GR and 9 for SES. Quantity and value also get updated in the case of materials and services. You can compare these consumed quantities with the PO quantity.
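    As a rough illustration of that open-quantity check (sketched in Python with hand-built rows; in a real conversion the EKPO/EKBE data would of course come from the system, and the debit/credit handling via EKBE-SHKZG is simplified here):

    ```python
    # Hypothetical PO item (EKPO) and PO history (EKBE) rows.
    ekpo = [{"ebeln": "4500000001", "ebelp": "00010", "menge": 100.0}]
    ekbe = [
        # vgabe "1" = goods receipt, "9" = service entry sheet
        {"ebeln": "4500000001", "ebelp": "00010", "vgabe": "1", "shkzg": "S", "menge": 60.0},
        {"ebeln": "4500000001", "ebelp": "00010", "vgabe": "1", "shkzg": "H", "menge": 10.0},
    ]

    for item in ekpo:
        consumed = sum(
            h["menge"] if h["shkzg"] == "S" else -h["menge"]  # H = reversal
            for h in ekbe
            if (h["ebeln"], h["ebelp"]) == (item["ebeln"], item["ebelp"])
            and h["vgabe"] in ("1", "9")
        )
        open_qty = item["menge"] - consumed
        print(item["ebeln"], item["ebelp"], "open quantity:", open_qty)  # -> 50.0
    ```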
    Please see below from SCN and let me know if you need more info on PR or PO conversion.
    Purchase Requisition Conversion Strategy
    Thanks,
    Akash

  • Open data extraction orders - applying Support Packs

    Dear All,
    I have done the IDES 4.6C SR2 installation.
    While updating the support packs, I get the following message in the CHECK_REQUIREMENTS phase:
    "Open data extraction orders
    There are still open data extraction orders in the system. Process these before the start of the object import, because changes to the ABAP Dictionary structures could lead to data extraction orders no longer being readable after the import, and their processing terminating.
    For more details about this problem, see Note 328181.
    Go to the Customizing Cockpit for data extraction and start the processing of all open extraction orders."
    I have checked the note,
    but this is something I am facing for the first time.
    Any suggestions?
    Rgds,
    NK

    The exact message is:
    Phase CHECK_REQUIREMENTS: Explanation of the Errors
    Open Data Extraction Requests
    The system has found a number of open data extraction requests. These should be processed before starting the object import process, as changes to DDIC structures could prevent data extraction requests from being read after the import, thus causing them to terminate. You can find more information about this problem in SAP Note 328181.
    Call the Customizing Cockpit data extraction transaction and process all open extraction requests.

  • Bulk API V2.0 Data extract support for additional objects (Campaign,Email,Form,FormData,LandingPage)?

    allison.moore
    Any plans for adding the following objects under Bulk API v2.0 for data extraction from Eloqua: Campaign, Email, Form, FormData, LandingPage? Extracting the data using the REST API for these objects makes it complicated.

    Thanks for the quick response. Extracting these objects using the REST API with depth=complete poses lots of complications from the code perspective, since these objects contain multiple nested or embedded objects. Is there any guideline on how to extract these objects using REST so that we can get all the data required for analysis/reporting?

  • Data Extraction and ODS/Cube loading: New date key field added

    Good morning.
    Your expert advice is required on the following:
    1. A data extract was done previously from a source with a full upload to the ODS and cube. An event is triggered from the source when data is available, and the process chain first clears all the data in the ODS and cube, and then reloads, activates, etc.
    2. In the ODS, the 'forecast period' field has now been moved from the data fields to the key fields, as the user would like to report per period in future. The source will in future only provide the data for a specific period, and not all the data as before.
    3. Data must be appended in future.
    4. The current InfoPackage for the ODS is a full upload.
    5. The 'old' data in the ODS and cube must not be deleted, as the source cannot provide it again. They will report on the data per forecast period key in future.
    I am not sure what to do in BW as far as the InfoPackages are concerned, loading the data and updating the cube.
    My questions are:
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    Your assistance will be highly appreciated. Thanks
    Cornelius Faurie

    Hi Cornelius,
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    -->> Try to load the data into the ODS with a full update in overwrite mode, as before (this adds new records and overwrites previous records with the latest values). Then push the delta from this ODS to the cube.
    If the existing ODS is loaded in addition mode, introduce one more ODS with the same granularity as the source, load it in overwrite mode (delta or full, as possible), and push only the delta onwards.
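    To see why overwrite mode preserves the history while still picking up changes, here is a toy sketch of the upsert semantics (pure illustration; the key fields and values are hypothetical):

    ```python
    # Existing ODS contents, keyed by the key fields (forecast period, material).
    ods = {("2024-01", "M1"): {"qty": 10}}

    # A later load delivering only one period; overwrite mode acts like an upsert.
    load = {
        ("2024-01", "M1"): {"qty": 12},  # changed record: overwritten
        ("2024-02", "M1"): {"qty": 7},   # new record: appended
    }

    ods.update(load)  # records for untouched keys are preserved
    print(ods)
    ```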
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    --> Yes, that is correct. Otherwise you would lose the historic data.
    Hope it Helps
    Srini

  • FI data extraction help

    HI All,
    I have gone through the SDN link for FI extraction and found it to be very useful.
    Still, I have some doubts.
    For line item data extraction, do we need to extract data from 0FI_GL_4, 0FI_AP_4 and 0FI_AR_4 into ODS 0FIGL_O02 (General Ledger: Line Items)? If so, do we need to maintain transformations between the ODS and all three DataSources?
    Also, please educate me on the 0FI_AP_3 and 0FI_AP_4 DataSources.

    >
    Jacob Jansen wrote:
    > Hi Raj.
    >
    > Yes, you should run GL_4 first. If not, AP_4 and AR_4 will be lagging behind. You can see in R/3 in table BWOM_TIMEST how the deltas are "behaving" with respect to the date selection.
    >
    > br
    > jacob
    Not necessarily, for systems above Plug-In 2002:
    As of Plug-In 2002.2, it is no longer necessary to have these DataSources linked. This means that you can now load 0FI_GL_4, 0FI_AR_4, 0FI_AP_4, and 0FI_TX_4 in any order. You also have the option of using DataSources 0FI_AR_4, 0FI_AP_4 and 0FI_TX_4 separately, without 0FI_GL_4. The DataSources can then be used independently of one another (see Note 551044).
    Source: http://help.sap.com/saphelp_nw04s/helpdata/EN/af/16533bbb15b762e10000000a114084/frameset.htm

  • Generic Data Extraction From SAP R/3 to BI server using Function Module

    Hi,
    I want a step-by-step procedure for generic extraction from an SAP R/3 application to a BI application using a function module.
    If anybody has any document or PPT, please reply and post it in the forum; I will give points for it.
    Thanks & Regards
    Subhasis Pan

    Please go through this link:
    [SAP BI Generic Extraction Using a Function Module|https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/business-intelligence/s-u/sap%20bi%20generic%20extraction%20using%20a%20function%20module.pdf]
    [Generic Data Extraction Using Function Module|Re: Generic Data Extraction Using Function Module]

  • Steps for data extraction from SAP R/3

    Dear all,
    I am new to SAP BW.
    I have done data extraction from Excel into the SAP BW system, along these lines:
    1. Create InfoObjects > InfoArea > catalogs (characteristic catalog and key figure catalog)
    2. Create an InfoSource
    3. Upload the data
    4. Create an InfoCube
    I need similar steps for data extraction from SAP R/3:
    1. when the data is in Z-tables (using views/InfoSets/function modules, etc.)
    2. when the data is in standard SAP tables, using Business Content.
    Thanks and Regards,
    Gaurav Sood

    Hi,
    Check these links:
    Generic Extraction
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/84bf4d68-0601-0010-13b5-b062adbb3e33
    CO-PA
    http://help.sap.com/saphelp_46c/helpdata/en/7a/4c37ef4a0111d1894c0000e829fbbd/content.htm
    CO-PC
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/fb07ab90-0201-0010-c489-d527d39cc0c6
    iNVENTORY
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
    Extractions in BI
    https://www.sdn.sap.com/irj/sdn/wiki
    LO Extraction:
    /people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
    /people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
    /people/sap.user72/blog/2005/02/14/logistic-cockpit--when-you-need-more--first-option-enhance-it
    /people/sap.user72/blog/2004/12/23/logistic-cockpit-delta-mechanism--episode-two-v3-update-when-some-problems-can-occur
    /people/sap.user72/blog/2005/04/19/logistic-cockpit-a-new-deal-overshadowed-by-the-old-fashioned-lis
    Remya

  • How to add new fields to a data extract

    The following data extract program generates an output file which is displayed using a publisher template as a check format report.
    Oracle Payments Funds Disbursement Payment Instruction Extract 1.0
    How would I add new fields to the generated file? In other words, what is the procedure for adding new fields to the data extract?
    Thanks

    Can anyone please advise how to customize the payment extraction program? We also have a similar requirement to extract extra fields to format a payment file.
