Data Reconciliation between BI and R/3 systems

Hi Gurus,
I want to know how to reconcile the data between BI and R/3 systems.
Are there any easy methods to reconcile the data? I have also gone through the How-To Guide document, but that document doesn't help me out.
Regards
Sreenivas.Y

1) We can reconcile using standard R/3 tables, e.g. VBAK and VBAP for sales.
2) You can go for automatic reconciliation, where you create a reconciliation DataSource and extract it to BW using a virtual cube (the virtual cube reads real-time data from R/3).
Include the original cube and this virtual cube in a MultiProvider.
Build a report taking key figures from both InfoProviders and add a column for the difference. If the difference is 0, the BW data matches R/3.
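To illustrate the logic behind that difference column, here is a minimal Python sketch (the periods and amounts are made up; in BW the comparison happens in the BEx report itself):

# The two dicts stand in for key-figure totals per period: one read from the
# original BW cube, one read live from R/3 through the virtual cube.
bw_cube = {"2011/05": 10500.0, "2011/06": 9800.0}
r3_virtual = {"2011/05": 10500.0, "2011/06": 9650.0}

for period in sorted(set(bw_cube) | set(r3_virtual)):
    bw = bw_cube.get(period, 0.0)
    r3 = r3_virtual.get(period, 0.0)
    status = "OK" if bw == r3 else "MISMATCH"
    print(f"{period}: BW={bw:10.2f}  R/3={r3:10.2f}  diff={bw - r3:10.2f}  {status}")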

Similar Messages

  • Issue with Data flow between Unicode and Non Unicode systems

    Hello,
    I have a scenario as below:
    We have a Unicode ECC 6.0 system and a UTF-7 legacy system.
    Messages flow from the legacy system to the ECC 6.0 system, and the data is about 700 KB in size.
    Will there be any issue with this, given that one system is Unicode and the other is non-Unicode?
    Kindly let me know.
    Thanks & Regards
    Vivek

    Hi,
    To add to Mike's post...
    You indicate that your legacy system is non-Unicode and the ERP system is Unicode. You also said that the data flows only from the legacy system to the ERP system. In this case, you should have no data issues, since the Unicode system is the receiving system. There are data issues when the data flows in the other direction: from a Unicode system to a non-Unicode system. There, the non-Unicode system can only process characters that exist on its codepage, and care must be taken by sending systems to ensure that they only send characters that are on the receiving system's codepage (as Mike says above).
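    As a quick illustration of the codepage point (a Python sketch; Latin-1 stands in for the legacy codepage and the sample strings are made up):

    # A Unicode receiver accepts anything the legacy sender can produce, but a
    # non-Unicode receiver only handles characters that exist on its codepage.
    print("Müller".encode("latin-1"))       # works: every character is on Latin-1

    try:
        "München 東京".encode("latin-1")    # fails: no Latin-1 mapping for 東 or 京
    except UnicodeEncodeError as exc:
        print("lost on a Latin-1 receiver:", exc)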
    Best Regards,
    Matt

  • Data Reconciliation between PSA and DSO.

    Hi Experts,
    The records showing in the PSA number 1000, but when we ran the load through to the DSO there were only 600 records, and we don't understand why the remaining records are not being fetched.
    DTP settings:
    Extraction mode: Full
    If anyone has good documentation on how to design a DSO, please forward some good links.
    Regards.

    Hi,
    This is not an issue with the DSO.
    There might be filters at various points, due to which some of the records are getting filtered out.
    Check the following.
    Check the number of records in the PSA. If it is not equal to 1000, then records are being filtered while fetching from the source:
    1) Check the PSA for any filters.
    If the records in the PSA equal 1000, then:
    1) Check the DTP for filters as well.
    2) Check the routines in the transformation, in case any records are being filtered out there.
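    As a small illustration of how two innocuous filters can turn 1000 PSA records into 600 in the DSO (a Python sketch; the filter conditions are invented):

    # 1000 records arrive in the PSA; each filter stage silently drops some.
    psa = list(range(1000))
    after_dtp = [n for n in psa if n % 10 < 8]            # a DTP filter keeps 800
    after_routine = [n for n in after_dtp if n % 4 != 0]  # a routine keeps 600
    print(len(psa), "->", len(after_dtp), "->", len(after_routine))  # 1000 -> 800 -> 600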
    Hope this helps,
    Sri...

  • Data reconciliation between R/3 and the BW Systems

    How do we do data reconciliation between R/3 and the BW system for the following areas?
    Purchasing
    Controlling
    Project System
    COPA
    SD
    AP
    Regards,
    Tony G

    Tony
    Have you looked at these documents?
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/how%20to%20validate%20infocube%20data%20by%20comparing%20it%20with%20psa%20data
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/7a5ee147-0501-0010-0a9d-f7abcba36b14
    Re: BW v. R/3 data reconciliation
    Hope this helps
    Thanks
    Sat

  • Secure the file/data transfer between XI and any third-party system

    Hi All,,
    I would like to secure, using SSH at the OS level, the file/data transfer between XI and any third-party system, via "Run OS Command Before Processing" and "Run OS Command After Processing". Right now my XI server is installed on the iSeries OS.
    On iSeries we can't call Unix commands, so I expect we need to go for AS400 (CL) programming. If we create the AS400 program, how can I call it from XI?
    If anyone has an idea, please let me know whether this will work or not.
    Thanks in advance.
    Venkat

    Hi,
    Thanks for your reply.
    I have read some blogs like /people/krishna.moorthyp/blog/2007/07/31/sftp-vs-ftps-in-sap-pi about calling Unix shell scripts in XI.
    But as far as I know, on the iSeries OS we cannot write such shell scripts, so we need to go for an AS400 program. If we go with AS400, how do we call that program, and will it work? I am not sure, so I need some help please.
    Thanks,
    Venkat

  • Data sync between oracle and sql server

    Greetings Everyone,
    Your expert views are highly appreciable regarding the following.
    We at work are evaluating different solutions to achieve data synchronization between Oracle and SQL Server databases. The data sync I mention here is for live applications. We are running Oracle EBS 11i with custom applications and intend to implement custom software based on .NET and SQL Server. The whole point of the research is to see updates and data changes whenever they happen between these systems.
    I googled and found Oracle GoldenGate, Microsoft SSIS, WisdomForce from Informatica...
    If you can pour in more knowledge then it's great.
    Thank You.

    Most of the work involved has to be done through scripts; there is no effective GUI for implementing OGG. However, the commands are not very tough to use, and they are quite intuitive.
    These are the steps, from a high level (a rough command sketch follows below):
    1. Get the appropriate GG software for your source and target OS.
    2. Install GG on the source and target systems.
    3. Create the Manager and Extract processes on the source system.
    4. Create the Manager and Replicat processes on the target system.
    5. Start these processes.
    First try to achieve uni-directional replication; bi-directional is then easy. I have implemented bi-directional active-active replication using Oracle DBs as source and target. Refer to the Oracle installation and admin guides for more details.
    Here is a good article that may be handy in your case.
    http://www.oracle.com/technetwork/articles/datawarehouse/oracle-sqlserver-goldengate-460262.html
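    As a rough sketch of what those steps look like in GGSCI (process and trail names are placeholders, the parameter files are omitted, and capture setup for a SQL Server source has extra steps, so treat this as orientation only, not a working script):

    -- on the source system (GGSCI), after EDIT PARAMS for each process:
    START MANAGER
    ADD EXTRACT ext1, TRANLOG, BEGIN NOW
    ADD RMTTRAIL ./dirdat/rt, EXTRACT ext1
    START EXTRACT ext1

    -- on the target system (GGSCI); NODBCHECKPOINT avoids needing a
    -- checkpoint table for this sketch (create one for production use):
    START MANAGER
    ADD REPLICAT rep1, EXTTRAIL ./dirdat/rt, NODBCHECKPOINT
    START REPLICAT rep1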

  • Role mapping between Portal and Back end systems

    I am new to SAP EP.
    I just want to know how the mapping between the portal and back-end systems happens.
    Scenario: there is a role in the ECC system, say FI India. Now the FI team has requested access to this role from the portal. In this case, please tell me how the security team will do it, because I guess it has to be done by the security team.

    Hi,
    Usually the role from the backend is uploaded to the portal, where it appears as a group, and we then assign our portal roles to this group. Please refer to [this|http://help.sap.com/saphelp_nw73/helpdata/en/d6/7859ec80df46738e23ccb4f4c8c502/content.htm].
    Regards,
    Samir

  • Communication between SAP and 3rd Party Systems using IDOC HTTP XML Interface

    Hi
    I am trying to set up communication between SAP and 3rd-party systems using the IDoc HTTP XML interface,
    with the help of this SDN contribution (have a look at it):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4943f2b7-0a01-0010-37af-faff35b2f08c
    I get an error when I maintain the partner system as HTTPLOG and choose "Execute" to check the results.
    The error is: Port could not be created. RFC destination HTTPLOG not specified for system HTTPLOG.
    Does anyone have an idea? Please help.
    Thank you
    Ram

    Hello,
    We are also in the process of implementing the same. Could you share your knowledge, please?
    1) Is it a separate add-on with ALE to SAP HR, or does it use ECC?
    2) Can you share the configuration part?
    We are trying it on Web AS with add-on 3.0.

  • Data Replication Between Sqlserver and Oracle11g using materialized view.

    I have SQL Server 2005 as my source and Oracle 11g as my target. I need to populate the target daily with change data from the source.
    For that we have created a database link between SQL Server and Oracle and replicated the table as a materialized view in Oracle.
    The problem we are having is that the fast refresh option is not available; each day the view picks up the full data from the source.
    Is there any way to use fast refresh in this scenario?
    Thanks in advance.
    Regards,
    Balaram.

    Please do not post duplicates - Data Replication Between Sqlserver and Oracle11g using materialized view.

  • Data streaming between server and client does not complete

    Using an ad-hoc app, data streaming between the server
    and client does not complete as it is supposed to.
    The process runs well on Solaris 5.8; however, under 5.9
    we have found the character stream buffer length limit
    is around 900 to 950 characters (by default we use 3072
    characters).
    Example:
    - We are transferring an HTML file to be displayed
    in the app client. With buffer=3072, the HTML is only displayed/transferred
    as xxxxxxxx characters, but with buffer=900 the HTML is displayed properly;
    in that case, the only problem we have is that the file transfer
    eventually takes longer than usual.
    - There is another case where we have to transfer information (data) as a stream
    to the client. A long data stream will not appear at all in the client.
    Any ideas why the change between 5.8 and 5.9 would cause problems?
    The current app driver we are using was compiled under Solaris 5.6;
    if possible we would like to use a later version compiled under Solaris 5.9. Do you think this will solve our problem?
    Thanks
    Paul

    Does this have anything to do with Java RMI? or with Java come to think of it?

  • Data mismatch between 10g and 11g.

    Hi
    We recently upgraded OBIEE from 10.1.3.4.0 to 11.1.1.6.0. While testing, we found a data mismatch between 10g and 11g in the case of a few reports that include a front-end calculated column with division in it, for example ("- Paycheck"."Earnings" / COUNT(DISTINCT "- Pay"."Check Date")) / 25.
    The data matches in the scenarios below.
    1) When the column is removed from both 10g and 11g.
    2) When the aggregation rule is set to either "Sum" or "Count" in both 10g and 11g.
    It would be very helpful and greatly appreciated if any workaround or pointers for solving this issue could be provided.
    Thanks

    jfedynic wrote:
    The 10g and 11.1.0.7 Databases are currently set to AL32UTF8.
    In each database there is a VARCHAR2 field used to store data - not specifically AL32UTF8 data, but encrypted data.
    Using the 10g client to connect to either the 10g or the 11g database works fine.
    Using the 11.1.0.7 client against either the 10g or the 11g database produces the error: ORA-29275: partial multibyte character.
    What has changed?
    Was it considered a Bug in 10g because it allowed this behavior and now 11g is operating correctly?
    29275, 00000, "partial multibyte character"
    // *Cause:  The requested read operation could not complete because a partial
    //          multibyte character was found at the end of the input.
    // *Action: Ensure that the complete multibyte character is sent from the
    //          remote server and retry the operation. Or read the partial
    //          multibyte character as RAW.
    It appears to me a bug got fixed.
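    As a small illustration of what "partial multibyte character" means (a Python sketch; the byte string is simply a UTF-8 character cut in half):

    # "東" is three bytes in UTF-8; truncating it mid-character leaves the kind of
    # partial multibyte character that ORA-29275 complains about.
    full = "東".encode("utf-8")     # b'\xe6\x9d\xb1'
    print(full.decode("utf-8"))     # fine: the complete character round-trips

    partial = full[:2]              # the transfer got cut off mid-character
    try:
        partial.decode("utf-8")     # fails, analogous to the 11g client's error
    except UnicodeDecodeError as exc:
        print("partial multibyte character:", exc)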

  • Standard XML schema for Vendor data exchange between SAP and other system

    Is there a standard SAP XML schema for data exchange between SAP and other systems? Please let me know.
    Thanks.

    See the SAP Interface Repository (http://ifr.sap.com).
    My proposal is to leave the old SAP connector stuff and use SAP Exchange Infrastructure. There is support for industry XML standards, such as xCBL, in XI 3.0.

  • SNP planned order availability date difference between APO and ECC

    Hi,
    I have observed that the SNP planned order availability date does not match between APO and ECC. Details are as follows.
    I ran the SNP Optimizer with a bucket offset of 0.5. After publishing the optimizer-created planned orders to ECC, only the start date matches.
    Example:
    I am using a PDS as the source of supply.
    The fixed production activity in the SNP PDS is 1 day.
    GR processing time: 3 days
    After running the optimizer, a planned order is created with the dates below.
    Start date/time: 09.05.2011 00:00:00
    End date/time: 12.05.2011 23:59:59
    Availability date: 16.05.2011 00:00:00
    Because the bucket offset is defined as 0.5, the optimizer planned order availability date is the start of the next Monday.
    After publishing this planned order to ECC, the dates on the order are as follows.
    Start date: 09.05.2011
    End date: 09.05.2011
    Availability date: 12.05.2011
    I have not maintained any scheduling margin key in ECC. Also, if I don't define the GR processing time, the planned dates between APO and ECC always match. Can anyone explain the impact of GR time on the availability date?
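    To make the GR-time effect concrete, here is a small Python sketch of the date arithmetic as I understand it (simplified: it ignores the factory calendar and just adds calendar days):

    from datetime import date, timedelta

    # ECC side (simplified): availability date = order end date + GR processing time.
    ecc_end = date(2011, 5, 9)                       # ECC order end date
    gr_days = 3                                      # GR processing time
    print(ecc_end + timedelta(days=gr_days))         # 2011-05-12 = ECC availability date

    # APO side: the optimizer ends on 12.05.2011; with GR time and the 0.5 bucket
    # offset the availability date rolls to the start of the next Monday.
    apo_end = date(2011, 5, 12)                      # a Thursday
    days_to_monday = (7 - apo_end.weekday()) % 7 or 7
    print(apo_end + timedelta(days=days_to_monday))  # 2011-05-16 = APO availability date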
    Regards,
    Venkat

    Hi Venkadesh,
    What's "state stamp"? Do you mean different time zone?
    Note 645597, mentioned by Nandha, is very helpful:
    In the standard, CCR uses the due date, i.e. "the availability date of the output product".
    In Nandha's words: "In SAP APO, if the receipt date of the primary product deviates from the
    end date of the last activity of the order, the receipt date
    always identifies this as inconsistent. You cannot rectify
    inconsistencies of this type by using CCR."
    I guess that in your PDS or PPM the output product is not assigned to the end of the last activity. Did someone change it?
    Please CIF the PDS or PPM again.
    If you really want to apply a note, please use note 815509, since you're using planned orders; the system will then use the order end date in CCR instead.
    GR time is always considered. BR/Tiemin

  • Most efficient data transfer between RT and FPGA

    This post is related to THIS post about DMA overhead.
    I am currently investigating the most efficient way to transfer a set of variables to an FPGA target for our application.  We have been using DMA FIFOs for communication in both directions (to and from the FPGA), but I have recently been questioning whether this is the most efficient approach.
    Our application must communicate several parameters (around 120 different variables in total) to the FPGA.  Approximately 16 of these are critical, meaning that they must be sent every iteration of our RT control loop.  The others are also important but can be sent at a slightly slower rate without jeopardising the integrity of our system.  Until now we have sent these 16 critical parameters plus ONE non-critical parameter over a DMA to the FPGA card.  Each 32-bit value sent incorporates an ID which allows the FPGA to demultiplex it to the appropriate global variable on the FPGA, so the full parameter set is refreshed over time (we run a 20 kHz control loop on the RT system, giving a complete set of parameters at approx. 200 Hz).  The DMA transfers are currently a relatively large factor in limiting the execution speed of our RT loop: of the 50 us available per time slot at 20 kHz, approximately 12-20 us go to the DMA transfers to and from the FPGA target.  Our FPGA loop is running at 8 MHz.
    According to NI, the most efficient way to transfer data to an FPGA target is via DMA.  While this may in general be true, I have found that for SMALL amounts of data, DMA is not terribly efficient in terms of speed.  Below is a screenshot of a benchmark program I have been using to test the efficiency of different types of transfer to the FPGA.  In the test I create a 32 MB data set (except for the FXP values, which are only present for comparison and have no pertinence to this issue at the moment) which is sent to the FPGA over DMA in differing sized blocks (with the number of DMA writes times the array size being constant).  We thus move from a single really large DMA transfer to a multitude of extremely small transfers and monitor the time taken for each mode and data type.  The FPGA sends a response to the DMA transfers so that we can be sure, when reading the response DMA, that ALL of the data has actually arrived on the FPGA target and is not simply buffered by the system.
    We see that the minimum round-trip time for the DMA write and subsequent DMA read for confirmation is approximately 30 us.  When sending less than 800 bytes, this time is essentially constant per packet.  Only when we start sending more than 800 bytes at a time do we see an increase in the time taken per packet: a packet of 1 byte and a packet of 800 bytes take approximately the SAME time to transfer.  Our application sends 64 bytes of critical information to the FPGA target each time, meaning that we are clearly in the "less efficient" region of DMA transfers.
    If we compare the times taken when communicating over FP controls, we see that irrespective of how many controls we write at a time, the overall throughput is constant, at 2.7 us for 80 bytes.  For a small dedicated set of parameters, the use of front-panel controls therefore seems to be significantly faster than sending per DMA.  Only once we need to send more than about 800 bytes does the DMA start to become rapidly more efficient.
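    Plugging the measured numbers into a quick model shows why small packets favour FP controls (the 30 us and 2.7 us figures are from the benchmark above; the break-even point is simple arithmetic on top of them):

    # Cost model for small transfers, using the measured figures above.
    DMA_ROUND_TRIP_US = 30.0      # roughly constant for packets up to ~800 bytes
    FP_US_PER_80_BYTES = 2.7      # measured cost of writing 80 bytes of FP controls

    payload = 64                  # our 16 critical I32 parameters = 64 bytes
    dma_cost = DMA_ROUND_TRIP_US                     # flat, size-independent
    fp_cost = FP_US_PER_80_BYTES * payload / 80.0    # scales with payload size
    print(f"{payload} bytes: DMA {dma_cost:.1f} us, FP controls {fp_cost:.2f} us")

    # Break-even: 30 us of flat DMA overhead / (2.7 us / 80 bytes) ~= 889 bytes,
    # consistent with the ~800-byte knee seen in the benchmark.
    print(DMA_ROUND_TRIP_US / (FP_US_PER_80_BYTES / 80))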
    Say hello to my little friend.
    RFC 2323 FHE-Compliant

    So to continue:
    For small data sets the usage of FP controls may be faster than DMAs.  OK.  But we're always told that each and every FP control takes up resources, so how much more expensive is the version with FP controls than the DMA?
    According to the resource usage guide for the card I'm using (HERE), the following is true:
    DMA (1023 elements, I32, no arbitration): 604 flip-flops, 733 LUTs, 1 block RAM
    1x I32 FP control: 52 flip-flops, 32 LUTs, 0 block RAM
    So the comparison would seem to yield the following result (for 16 elements):
    DMA: 604 flip-flops, 733 LUTs, 1 block RAM
    FP: 832 flip-flops, 512 LUTs, 0 block RAM
    We require more flip-flops, fewer LUTs and no block RAM.  It's a swings-and-roundabouts scenario: depending on which resources are actually limited on the target, one version or the other may be preferred.
    However, upon thinking further I realised something else.  When we use the DMA, it is purely a communications channel.  Upon arrival, we unpack the values and store them in global variables in order to make them available within the FPGA program.  We also multiplex other values into the DMA, so we can't simply arrange the code to be fed directly from the DMA, which would negate the need for the globals entirely.  The FP controls, however, ARE already persistent data storage, and assuming we pass the values along a wire into subVIs, we don't need additional globals in this scenario.  So the burning question is "How expensive are globals?".  The PDF linked above does not explicitly mention the difference in cost between FP controls and globals, so I'll have to assume they're similar.  This of course massively changes the conclusion arrived at earlier.
    The comparison now becomes:
    DMA + globals: 1436 flip-flops, 1245 LUTs, 1 block RAM
    FP: 832 flip-flops, 512 LUTs, 0 block RAM
    This seems very surprising to me, and I'm suspicious of my own conclusion here.  Can someone with more knowledge of the resource requirement differences between globals and FP controls weigh in?  If this is really the case, we need to re-think our approach to communications between RT and FPGA, most likely employing a hybrid approach.
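    To spell out the arithmetic behind that comparison (figures as quoted from the resource-usage guide; treating a global as costing the same as an FP control is exactly the assumption I'd like confirmed):

    # Resource totals for 16 I32 values, using the figures quoted above.
    DMA = (604, 733, 1)       # flip-flops, LUTs, block RAMs for one I32 DMA FIFO
    FP_CTRL = (52, 32, 0)     # per I32 front-panel control
    GLOBAL = FP_CTRL          # assumption: a global costs about the same as an FP control

    n = 16
    fp_only = tuple(n * c for c in FP_CTRL)                          # (832, 512, 0)
    dma_plus_globals = tuple(d + n * g for d, g in zip(DMA, GLOBAL)) # (1436, 1245, 1)
    print("FP controls only:", fp_only)
    print("DMA + 16 globals:", dma_plus_globals)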
    Shane.
    Say hello to my little friend.
    RFC 2323 FHE-Compliant

  • PPDS production order dates mismatch between APO and R3

    Hi Friends,
    We are facing a problem with a production order date mismatch between the R3 and APO systems, specific to one product.
    The product is planned in PPDS and the orders transfer to R3 automatically through the online CIF.
    The PPM has two operations, 0010 and 0020, and the activity relationships are:
    P(0010) - P(0020): Start-Start relationship
    S(0010) - P(0010): End-Start relationship
    In PPDS dates are shown as :
    on operation 0010 the start/end  dates are shown as  08.15.09 to 08.22.09
    on operation 0020 the start/end  dates are shown as 08.15.09 to 08.22.09
    Overall order start date is  08.15.09
    Overall order finish date is 08.22.09
    and in R3 dates are shown as:
    on operation 0010 the start/end  date are shown as  08.15.09 to 08.22.09
    on operation 0020 the start/end dates are shown as 08.22.09 to 08.29.09
    Overall order start date is  08.15.09
    Overall order finish date is 08.29.09
    The order is off by one week (APO vs R3) in its dates.
    If we change the DS planning board settings to ignore internal relationships manually, then the dates match exactly in APO and R3.
    We want the production order dates to match without manual intervention.
    Could someone please provide some hints on what is happening here and how to correct it?
    Thanks.
    Krish

    Hi Friends,
    Thanks a lot for your valuable replies in this regard.
    Actually, this problem is in the production environment, and it took some time to test it with the master data modifications you suggested.
    As DB and Siddharth mentioned, the problem was with the routing definition: there is no parallel sequence maintained in the routing, but there is a start-start relationship maintained in the APO PPM.
    We corrected the routing definition and checked the order dates. Now the dates match in R3 and APO.
    I am awarding DB and Siddharth five points each in this regard.
    Once again thank you all for your time and valuable replies.
    -Regards
    Krish.
