Data Flow from Source-System-Side LUWs and Extraction Structures

Hi,
Can anybody explain the data flow from the source system to the BI system? In particular, where do the extraction structure and LUWs come into the picture, and what is the core data flow through the inbound and outbound queues? A link to a relevant document would also be helpful.
Regards
Santosh

Hi, see these articles:
http://wiki.sdn.sap.com/wiki/display/profile/Surendra+Reddy
Data Flow from LBWQ/SMQ1 to RSA7 in ECC (Records Comparison).
http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/enterprise-data-warehousing/data%20flow%20from%20lbwq%20smq1%20to%20rsa7%20in%20ecc%20(Records%20Comparison).pdf
Checking the Data using Extractor Checker (RSA3) in ECC Delta Repeat Delta etc...
http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/80f4c455-1dc2-2c10-f187-d264838f21b5&overridelayout=true 
Data Flow from LBWQ/SMQ1 to RSA7 in ECC and Delta Extraction in BI
http://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/business-intelligence/d-f/data%20flow%20from%20lbwq_smq1%20to%20rsa7%20in%20ecc%20and%20delta%20extraction%20in%20bi.pdf
Thanks
Reddy

Similar Messages

  • How to migrate the data flow from a DB Connect source system from 3.5 to BI 7

    Hi,
    Can anyone tell me how to migrate the data flow from a DB Connect source system from 3.5 to BI 7?

    Hi,
    Go to the InfoProvider that your DB Connect DataSource feeds, right-click on the DataSource, and choose Migrate -> With Export. You then have to build new 7.0 transformations, DTPs, etc.
    ~AK

  • How to schedule a job for data uploading from source to BI

    Hi to all,
    How do I schedule a job for uploading data from the source system to BI?
    Why is it required, and how do we do it?
    As I am a fresher in BI, I need to know this from the ground up.
    Regards
    Pavneet Rana

    Hi,
    You can create a [process chain|http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/502b2998-1017-2d10-1c8a-a57a35d52bc8?quicklink=index&overridelayout=true] for the data loading process and schedule its start process for any time/date, etc.
    Regards.

  • How to make data flow from one application to another in BPEL

    Hi All,
    I am designing the workflow of my application in BPEL (JDeveloper) and am creating separate BPEL projects for different functions. For example, a sales manager receives an order from a sales person and either approves or rejects it; if approved, it goes to the production manager, who ships the goods. I want to keep the sales person, sales manager, and production manager in separate BPEL files and pass the output of the sales person to the sales manager, and of the sales manager to the production manager. Please help me do this.
    I was trying to create a partner link from the sales manager process to the sales person process and get the input from there. I don't know whether this is even the right approach, and if it is, I don't know how to make the data flow from one application to the other.
    Experienced people, please guide me.
    Sales Person -----> Sales Manager ----> Production Manager
    Thanks
    Yatan

    Yes, you can do this.
    If you want each integration point to be in a different process, you have to create three BPEL processes:
    1. Create an async BPEL process 'A' which is initiated when the sales person creates the order.
    2. From BPEL process 'A', call an async BPEL process 'B' which contains the approval flow. Based on the input from process 'A', the sales manager reviews the order in the workflow, approves or rejects it, and sends the result back to process 'A'.
    3. Based on the result from the workflow, invoke the sync BPEL process 'C', where you can implement the shipping logic.
    -Ramana.

  • Data Flow from CRM to BW

    Dear SAP Experts,
    Greetings for the Day!
    I am looking for some information on the data flow from CRM to the BW system. A few of my queries are below:
    Do we have any settings for this data flow in transaction SMOEAC?
    How does the setting below impact the BDoc flow to BW? Also, if we uncheck "Do Not Snd", will BDocs flow to the BW system? <PFA>
    PS: We are on CRM 7.0 with EHP2.
    Thanks!
    Regards,
    Kanika

    Hi Kanika,
    Data flow from CRM to BW happens via XIF using IDocs. You can check transaction WE21 for your BW RFC destination and the output parameters, which decide what data is sent to the corresponding destination.
    You can also check my blog:
    External Interface (XIF) Setup, though this covers the XIF setup in general and is not specific to BW.
    Hope this helps.
    Best Regards,
    Shanthala.

  • Need to check the data flow from R/3 to BW server.

    Hi BI experts,
    This query is about checking the data flow from R/3 to the BW server.
    I have a set of reports that I need to bring into BW. The requirement is to go through the list of transaction codes for these reports in R/3 and find out whether there are already any existing objects in the BW system that I can use for them.
    So, can you please help me?

    It depends on which transaction codes or reports your users run in R/3 and want reproduced in BW. BI Content delivers out-of-the-box reports; you can activate those, load the data, and use them.
    Give me the T-codes you have and I can send you the standard BI reports, or the cubes you can get them from.
    ~AK

  • Data flow task error failed validation and return validation status "VS_NEEDSNEWMETADATA"

    I have an ETL with ~800 tables that I am moving from Oracle to SQL Server (Prod Oracle -> Prod SQL).
    A new Oracle/SQL version has now come from the vendor that I need to test, so I created new DEV environments for Oracle and SQL (DEV Oracle -> DEV SQL); the update includes new columns in existing tables as well as new tables.
    So what I tried to do was take the old (PROD) ETL and change the connections to point to the DEV servers.
    When I execute the packages from my local laptop it works, but when I execute the packages from the job schedule I get the error: "Data flow task error failed validation and return validation status "VS_NEEDSNEWMETADATA"".
    I went through each table to check whether any columns are different, and I dropped and recreated some of the tables in the destination, but the error still shows. I also tried setting the package's "DelayValidation" property to True, but without success.

    I do not understand the difference between "... if I going to change the Connection Manager to new connection" and "didn't change the Connection Manager, only changed inside the Server name / user/ pass" for 800 tables.
    What I see is that the schema of some tables your packages see in Dev (laptop) is not the same once the package is deployed, hence the metadata error.
    Arthur

  • Mapping data flow from R/3 to BW

    Hello,
    I am pretty new to BW and I have been tasked with creating a detailed map of the data flow from R/3 into BW. 
    I need to record where the data originates in R/3 (field names/tables) and literally track the flow of that data, including any InfoObjects along the way, into whichever cubes it ends up sitting in.
    How do I track this flow? And how can I identify what a characteristic in BW corresponds to in R/3?
    Has anybody had to create a similar data flow map? If so, how did you approach it?
    Many Thanks,
    Matt

    Hi Matthew,
    From the R/3 side:
    BW treats all the data coming from R/3 as DataSources.
    From the DataSource, the data is loaded to the cube as follows:
    DataSource -> Transfer Rules -> PSA/InfoSource -> Communication Structure -> Cube
    (for a 3.5 system)
    In a 7.0 system, the data flow is:
    DataSource -> InfoPackage -> PSA -> Transformation/DTP -> Data Target (Cube)
    -> Go to transaction RSA5 (for Business Content DataSources) or RSA6 (for all active DataSources) in the system.
    -> There you can find all the DataSources you need (for your mapping purpose this will do).
    -> You can also check from the BI side in transaction RSA1: click the Monitor button on the left (for custom objects) or the BI Content button, choose the object from the tree, then right-click and replicate to find out whether all of them were used.
    Hope this helps!!
    *Reward points if useful*
    regards,
    Naveenan.

  • How long does the CSS block a flow from a source detected as a DoS source?

    My application generates, in addition to its normal flows, a flow that the CSS treats as a DoS attack. Both flows have the same source.
    I am afraid that the CSS may block the legitimate flow as well.
    So my question is: how long does the CSS block a flow from a source that has been detected as a DoS source?
    Krzysztof

    I am not sure of the length of time for which it blocks the flow from a source considered a source of a DoS attack, but a workaround would be to bypass the cache for that particular source, since you are already aware that it might cause a problem. You could use a bypass rule to do so. You can also use the flow timeout feature with the flow port[1|2|3|4|5|6|7|8|9|10] timeout command to configure a flow timeout value for a TCP or UDP port. I am not sure whether that feature would help in your situation; bypass seems to be the better option.

  • Data Flow from SAP Source (ECC) system to SAP BI system

    Hi All,
    I wanted to know how data flows from an SAP source system to an SAP BI system. The explanation should cover:
    1) Is the data transferred using IDocs?
    2) Which interfaces are involved while the data is being transferred?
    3) What exactly happens when you execute the PSA?
    If you have any info on this, could you please post it here?
    Regards,
    K.Krishna Chaitanya.

    Hi Krishna,
    Please go through this article:
    "http://www.trinay.com/C6747810-561C-4ED6-B85C-8F32CF901602/FinalDownload/DownloadId-C2EB7035A229BFC0BB16C09174241DC8/C6747810-561C-4ED6-B85C-8F32CF901602/SAP%20BW%20Extraction.pdf".
    Hope this answers all the mentioned questions.
    Regards,
    Sarika

  • Data Flow from TXT to a table error

    Hello,
    I am trying to load the data from a .txt file into a table in a database. Previously this worked fine in DTS, and I can still do it when I import the DTS command, but I want to convert it to a data flow because the DTS command needs to run on 32-bit and I'm using 64-bit.
    I'm getting 3 errors:
    [OLE DB Destination [322]] Error: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E21.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 10.0"  Hresult: 0x80040E21  Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
    [OLE DB Destination [322]] Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.  The "input "OLE DB Destination Input" (335)" failed because error code 0xC020907B occurred, and the error row disposition on "input "OLE DB Destination Input" (335)" specifies failure on error. An error occurred on the specified object of the specified component.  There may be error messages posted before this with more information about the failure.
    [SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED.  The ProcessInput method on component "OLE DB Destination" (322) failed with error code 0xC0209029 while processing input "OLE DB Destination Input" (335). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.  There may be error messages posted before this with more information about the failure.
    Before this, I changed the Flat File Source's input and output properties in the advanced editor to text stream [DT_TEXT] (because the table has VarChar columns) to resolve another error, which seems fixed now. The only remaining problem is that, looking at the mappings, the input is text stream [DT_TEXT] but the output is a string, and I am unable to change this in the advanced editor of the OLE DB Destination. I can change it, but it changes back on its own.
    Could I please get some help on these errors?
    Thanks

    Hi SQLNewbie101,
    According to your description, when you change the column data type in the advanced editor of the OLE DB Destination, it always changes back.
    Based on my research, that column data type is determined by the destination table; it depends on the columns in the table, so we cannot change it there.
    To fix this issue, one way, as you said, is to use a Data Conversion Transformation to convert the [DT_TEXT] data type to [DT_STR] after the Flat File Source. Another way is to change the column data type directly on the Advanced tab of the Flat File Connection Manager Editor, as shown below, and then double-click the Flat File Source to update the columns.
    If there are any other questions, please feel free to ask.
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • How to restrict the number of data records from the source system?

    Hi,
    How can I restrict the number of data records from the R/3 source system that are loaded into BI? For example, I have 1000 source data records but only wish to transfer the first 100. How can I achieve this? Is there an option in the DataSource definition or the InfoPackage definition?
    Please help,
    SD

    Hi SD,
    You can certainly restrict the number of records. The best and simplest way is to check which characteristics are available on the selection screen of the InfoPackage, then check in R/3 which characteristics, given a selection, would fetch the desired number of records. Use that as the selection in the InfoPackage.
    Regards,
    Pankaj

  • Significant slowness in data transfer from source DB to target DB

    Hi DB Wizards,
    My customer is noticing significant slowness in the data copy from the source DB to the target DB. The copy process itself uses PL/SQL code with cursors. The process copies about 7M records from the source DB to the target DB as part of a complicated data migration (this will be a one-time go-live process). I have also attached the AWR reports generated during the data migration. Are there any recommendations to help improve the performance of the data transfer process?
    Thanks in advance,
    Nitin

    Multiple COMMITs will take longer to complete the task than a single COMMIT at the end! Let's check how much longer:
    create table T1 as
    select OWNER,TABLE_NAME,COLUMN_NAME,DATA_TYPE,DATA_TYPE_MOD,DATA_TYPE_OWNER,DATA_LENGTH,DATA_PRECISION,DATA_SCALE,NULLABLE,COLUMN_ID,DEFAULT_LENGTH,NUM_DISTINCT,LOW_VALUE,HIGH_VALUE,DENSITY,NUM_NULLS,NUM_BUCKETS,LAST_ANALYZED,SAMPLE_SIZE,CHARACTER_SET_NAME,CHAR_COL_DECL_LENGTH,GLOBAL_STATS,USER_STATS,AVG_COL_LEN,CHAR_LENGTH,CHAR_USED,V80_FMT_IMAGE,DATA_UPGRADED,HISTOGRAM
    from DBA_TAB_COLUMNS;
    insert /*+ APPEND */ into T1 select * from T1;
    commit;
    -- repeat until the table has more than 7 million rows
    select count(*) from T1;
    9233824
    create table T2 as select * from T1;
    set autotrace on
    set timing on
    truncate table t2;
    declare r number:=0;
    begin
    for t in (select * from t1) loop
    insert into t2 values (t.OWNER,t.TABLE_NAME,t.COLUMN_NAME,t.DATA_TYPE,t.DATA_TYPE_MOD,t.DATA_TYPE_OWNER,t.DATA_LENGTH,t.DATA_PRECISION,t.DATA_SCALE,t.NULLABLE,t.COLUMN_ID,t.DEFAULT_LENGTH,t.NUM_DISTINCT,t.LOW_VALUE,t.HIGH_VALUE,t.DENSITY,t.NUM_NULLS,t.NUM_BUCKETS,t.LAST_ANALYZED,t.SAMPLE_SIZE,t.CHARACTER_SET_NAME,t.CHAR_COL_DECL_LENGTH,t.GLOBAL_STATS,t.USER_STATS,t.AVG_COL_LEN,t.CHAR_LENGTH,t.CHAR_USED,t.V80_FMT_IMAGE,t.DATA_UPGRADED,t.HISTOGRAM);
    r:=r+1;
    if mod(r,10000)=0 then commit; end if;
    end loop;
    commit;
    end;
    /
    -- run the block a couple of times, with and without the "if mod(r,10000)=0 then commit; end if;" line commented out
    Results:
    One commit
    anonymous block completed
    Elapsed: 00:11:07.683
    Statistics
    18474603 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1737 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    10000 rows commit
    anonymous block completed
    Elapsed: 00:10:54.789
    Statistics
    18475806 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1033 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    one commit
    anonymous block completed
    Elapsed: 00:10:39.139
    Statistics
    18474228 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1123 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    10000 rows commit
    anonymous block completed
    Elapsed: 00:11:46.259
    Statistics
    18475707 recursive calls
    0 spare statistic 4
    0 ges messages sent
    0 db block gets direct
    0 calls to kcmgrs
    0 PX remote messages recv'd
    0 buffer is pinned count
    1000 buffer is not pinned count
    2 workarea executions - optimal
    0 workarea executions - onepass
    What have we got?
    Single commit at the end: avg elapsed 10:53.4
    Commit every 10,000 rows (923 commits): avg elapsed 11:20.5
    Difference: 27.1 s, i.e. 3.98%
    Multiple commits are only about 4% slower, but they are safer with regard to undo consumption.
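    As a side note for the original 7M-row copy (this is not part of the benchmark above): if the row-by-row cursor loop can be replaced at all, a single set-based INSERT ... SELECT usually helps far more than tuning the commit frequency. A minimal sketch, assuming the source table is reachable over a database link called src_link and that the column lists match (the link and table names are hypothetical):
    -- Hypothetical set-based alternative to a row-by-row cursor loop.
    -- Assumes a database link src_link and matching column structures.
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM source_table@src_link;
    COMMIT;  -- single commit at the end; the APPEND hint requests a direct-path insert
    A direct-path insert also generates far less undo, which addresses the same concern as the commit-frequency experiment.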

  • Migrate data flow from 3.5 to 7.3?

    Dear Experts,
    After the technical team upgraded SAP BW from 3.5 to 7.3, I tested migrating a data flow. I found that if I specified a "migration project" name different from the DataStore Object name, I could not find the related objects (e.g. transformation or DTP) under that DataStore Object. The DataStore Object was also left in an inactive version, even though the migration completed without errors.
    For example
    - Original DSO name = AAA was shown as inactive
    - Migration project name = AAA_Migrated
    - After selecting all the objects, including process chains, and clicking the 'Migration/Recovery' button, the status showed no errors (Migration History displayed all green)
    - Rechecked the objects in transaction RSA1
    - DSO name = AAA was still shown as inactive
    I just wonder where all the objects under DSO name = AAA have gone.
    What happened to the migration project named AAA_Migrated?
    How should I find the migration project named AAA_Migrated?
    How do I recover all the objects under DSO name = AAA (in case the "migration project" name was misspelled)?
    If you have encountered a similar case to the one above, could you share your experience of how you handled it?
    Thank you very much.
    -WJ-

    BW 7.30: Data Flow Migration tool: Migrating 3.x flows to 7.3 flows and also the recovery to 3.X flow
    Regards,
    Sushant

  • Data Load from Source to Destination

    Hello Forum Members,
    I have two databases on different Solaris servers.
    I have to load data from a source table [Server 1] into a target current table [Server 2] (truncating the existing data and reloading from the source table), and every Wednesday from the target current table into a target history table [Server 2] (data always appended).
    Any advice on how to solve this is highly appreciated.
    Thanks and Regards,
    Suresh

    Unless there is a history table on the remote server, the history table would not appear to be an appropriate candidate for a materialized view.
    You could, in theory, create the live table as a materialized view rather than doing a TRUNCATE (or DELETE) and INSERT INTO ... SELECT in your stored procedure. But since it sounds like you need to coordinate the refresh of the materialized view with the population of the history table, it's not obvious that creating the live table as a materialized view buys you anything.
    Justin
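    For reference, a minimal sketch of the TRUNCATE-and-reload approach in a stored procedure, assuming a database link named server1_link pointing to Server 1 and hypothetical table names (source_table, target_current, target_history); the Wednesday history append can then be scheduled with DBMS_SCHEDULER:
    -- Hypothetical names throughout: server1_link, source_table, target_current, target_history.
    CREATE OR REPLACE PROCEDURE refresh_current_table (p_append_history IN BOOLEAN DEFAULT FALSE) AS
    BEGIN
      -- reload the current table from the remote source (TRUNCATE is DDL, so dynamic SQL is needed)
      EXECUTE IMMEDIATE 'TRUNCATE TABLE target_current';
      INSERT INTO target_current
      SELECT * FROM source_table@server1_link;
      -- on Wednesdays, append the freshly loaded rows to the history table
      IF p_append_history THEN
        INSERT INTO target_history
        SELECT * FROM target_current;
      END IF;
      COMMIT;
    END refresh_current_table;
    /
    -- Weekly Wednesday run that also appends to the history table
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'WEEKLY_HISTORY_LOAD',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN refresh_current_table(p_append_history => TRUE); END;',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=WED',
        enabled         => TRUE);
    END;
    /
    The regular (non-Wednesday) reload would be a second job that calls the procedure with the default FALSE.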
