IDoc Data Lost between ECC and MII Message Listener.

We recently experienced some network issues that caused IDocs to stop flowing outbound from ECC to the MII Message Listener. This happened three times during a network switch reconfiguration. On two of the three occasions the Message Listener had to be restarted on the MII server to get data flowing again. Interestingly, some of the IDocs generated during the outage were processed by the Message Listener and some were not.
We are running MII 12.0 sp11 and ECC 6.0.  The ECC server and MII server are located in different geographic locations.
When we look at the ECC system in WE05, we see only successful statuses for the IDocs, saying "Data passed to port OK".
When we check the Message Listener on the MII side, it likewise shows only successfully processed messages. There are no failed messages or messages without rules.
Where can I check further to find the IDocs that really didn't make it to the Message Listener on the MII server?
Also, is it possible that the lost IDocs are still sitting somewhere on the ECC or MII servers where we can reprocess them?
And the last question: why didn't the Message Listener handle the network issue automatically instead of needing a restart?
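One plausible explanation for a listener that hangs until restarted (a guess on my part, not a confirmed MII behavior) is a half-open TCP connection: if the peer disappears during a network reconfiguration, a server blocked on a read never learns the connection is dead unless keepalive probes are enabled. A minimal Python sketch of the keepalive idea, purely as an illustration of the mechanism and not MII's actual implementation:

```python
import socket

def make_keepalive_socket(idle=60, interval=10, probes=5):
    """Create a TCP socket that detects a dead peer via keepalive probes.

    Without keepalive, a server blocked in recv() on a connection whose
    peer vanished during a network outage can wait forever -- the same
    symptom as a listener that needs a restart to resume processing.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Fine-grained keepalive timers are Linux-specific; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return s
```

With settings like these the OS would tear the connection down a few minutes after the peer vanishes, so the blocked read fails instead of hanging indefinitely.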

Hi Robert,
Did SAP ever respond to or resolve the ticket you created for this issue? Someone told me this is a known issue, but I have yet to have it verified by SAP. They asked me to simulate the problem, but our non-prod systems sit on a VM server alongside other systems, so I can't exactly start pulling network cables out. I did drop the server from the network a few times while sending IDocs down, but that failed to hang the IDoc processing.
We're hopefully going to MII 14/15 next year, but the business is keen to resolve this issue as it has impacted production a few times. The MII sites are in some remote regions, and I don't think we can improve the WAN links much further.
@Christian - thanks for the link. We don't really have an issue with IDocs not being received; it's just that message processing hangs and we either need to restart the services or, sometimes, the whole system.

Similar Messages

  • IDocs have disappeared between ECC and PI

    Hi All,
    A couple of days ago three IDocs went missing between ECC and PI.
    That day 293 IDocs of the same message type were sent to the same partner through PI; in PI we only found 290. The three missing IDocs were sent within the same second in the early afternoon. All IDocs before and after were processed correctly.
    The IDocs are unknown in IDX5 and in the Runtime Workbench on PI.
    The IDocs were not in SM58.
    I found them in WE05 with the correct status; the status update from 01 to 30 to 03 took place within the same second.
    25 seconds later, new IDocs of the same message type were created and processed successfully by PI.
    At the time the IDocs disappeared there were no problems with either ECC or PI, as far as we know.
    The interface has been live for more than two years, and in the meantime no changes have taken place.
    I have read the links below
    Finding missing IDocs
    Interface Troubleshooting Basics
    It's a mystery to me. Where have those IDocs gone?
    Does anybody have any idea?
    Kind Regards
    Edmond Paulussen

    Hi Edmond,
    On the SAP PI system, have a look in transaction IDX5. Take the IDoc numbers that you could not find in PI and search for them in IDX5.
    If you cannot find those IDocs in IDX5, then they never reached PI.
    Regards,
    Jannus Botha

  • Data Inconsistency for 0EMPLOYEE between ECC and BI

    Hi,
    We do a full load to 0EMPLOYEE using 0EMPLOYEE_ATTR from ECC. Records were deleted for many employees (some action types) in ECC. This has caused a data inconsistency for the time-dependent 0EMPLOYEE master data between ECC and BI: in BI we have extra records for these employees, with time-dependent ranges that were deleted from ECC but still exist in BI. These employee records are already used in many InfoProviders. Is there an efficient way to fix this issue? One solution is to delete the data from all InfoProviders and then delete the 0EMPLOYEE master data, but since employee records can be deleted quite often, we don't want to take this route. I also tried to reorganize the master data attributes for 0EMPLOYEE through a process chain, but that didn't work either.
    Message was edited by:
            Ripel Desai

    Hi Ripel,
    I share your pain. This is one of the real pains of time-dependent master data in BW. I have been in your exact position, and the only way around the issue for me was to clear out all the cubes that used 0EMPLOYEE, then delete and reload the 0EMPLOYEE data.
    I know this response doesn't help you much, but at least you are not alone.
    Regards,
    Pete
    http://www.teklink.co.uk

  • Communication between ECC and SCM via XI

    I am trying to set up the scenario 'Purchase_Order_Processing' between ECC and SCM using XI. The SCM content for XI has been loaded into XI, and this scenario is included in the content.
    We are running the following:
    ECC - Version 5.0 - 640 SP level 12
    XI - Version 3.0 - 640 SP Level 14
    SCM - 4.10 - 640 SP Level 12
    The connectivity between the systems has been set up as described in the configuration guides. On sending an IDoc (basic type ORDERS05) into XI, I get the following error message: 'HTTP server code 500, reason Internal Server Error, explanation Error during parsing of SOAP header'.
    The error XML is as follows:
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    <!-- Call Adapter -->
    <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30"
               xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/"
               SOAP:mustUnderstand="">
      <SAP:Category>XIAdapter</SAP:Category>
      <SAP:Code area="PLAINHTTP_ADAPTER">ATTRIBUTE_SERVER</SAP:Code>
      <SAP:P1>500</SAP:P1>
      <SAP:P2>Internal Server Error</SAP:P2>
      <SAP:P3>Error during parsing of SOAP header</SAP:P3>
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>Http server code 500 reason Internal Server Error explanation Error during parsing of SOAP header</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
    </SAP:Error>
    Any assistance would be much appreciated.

    Sravya,
    Yes, this is an IDoc-to-HTTP scenario. There was no dump at the receiver end. I've found out what was causing the problem: I was using an adapter of type HTTP when it should have been of type 'XI'. I've changed this and it now works. Hopefully this will resolve your problem too, Vinod.
    Richard

  • Best communication between ECC and PI

    Hi All
    SAP is planning to adopt the Enterprise Service Architecture (ESA) on the journey towards Service-Oriented Architecture (SOA).
    We are working on a new implementation project. We have many asynchronous scenarios, and we would like to configure acknowledgements and handle errors effectively.
    Could you please suggest which kind of communication between ECC and PI is preferable?
    (Web services, IDoc, BAPI, or RFC)
    Thanks
    Sai

    IDoc, BAPI, and RFC can all be used for seamless communication between ECC and PI; it depends entirely on the application and on the way information is exchanged. For master and transactional data, IDocs are preferred because of the acknowledgements they support; I'm not sure how efficiently system and application acknowledgements can be handled in the BAPI/RFC case. At the same time, IDocs are a drawback for synchronous communication, where RFC/BAPI is preferred. Whether SOA or ESA, the concepts built into SAP are targeted at enterprise needs, while IDoc and BAPI/RFC serve business-unit objectives.

  • Withdrawal Qty doesn't match between ECC and APO

    Hi Experts.
    I am facing a discrepancy in withdrawal quantity between ECC and APO before delivery/PGI creation.
    In ECC transaction MD63 I am able to see the withdrawal quantity as expected, but in APO a duplicated withdrawal quantity is being created. This is not happening for all materials, only some specific ones. I checked the material master data and found no difference between a material that works and one that doesn't.
    Can somebody please explain why APO is doubling the withdrawal quantity?
    Thanks a Lot
    Daniel Campos.

    Please check whether you have DP index duplication in this case. Follow these steps:
    1. From table /SAPAPO/MATKEY, find the product ID of the product for which you suspect a problem.
    2. Find the location ID from table /SAPAPO/LOC for the location.
    3. Use the product ID and location ID found above to look up the DP index in table /SAPAPO/DP_HEAD, entering planning version '000'. If there is an error, you will find multiple DP index entries in this table for the '000' planning version. If so, use the program below to delete the DP index.
    4. Take precautions before deleting the DP index, as the consumption history is lost in the process. In the program 'Z_DEL_WITHDRAWL' there are two checkboxes, 'update' and 'list': 'list' shows the DP index and 'update' deletes it. Be extremely careful with this program, as a wrong selection will cause problems.
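The duplicate check in steps 1-3 can be sketched in plain code. This is only an illustration of the logic; the field names and row layout below are hypothetical stand-ins, not the real /SAPAPO/DP_HEAD columns:

```python
from collections import Counter

def find_duplicate_dp_indexes(dp_head_rows):
    """Flag (product, location) pairs with more than one DP index entry
    for planning version '000' -- the duplication described in steps 1-3.

    dp_head_rows: iterable of dicts standing in for /SAPAPO/DP_HEAD rows;
    the key names here are illustrative, not the real table columns.
    """
    counts = Counter(
        (row["product_id"], row["location_id"])
        for row in dp_head_rows
        if row["version"] == "000"
    )
    return [key for key, count in counts.items() if count > 1]
```

A product/location pair returned by this check corresponds to the "multiple entries of the DP index" condition that step 3 asks you to look for.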

  • Communication between ECC and CRM

    Hello,
    I am at the stage of demonstrating communication between SAP CRM 5.0 and ECC 6.0.
    I am struggling to find some simple transactions, common to both systems, with which I can show correct connectivity between ECC and CRM.
    I need to save some data from CRM and access it via a transaction within ECC, and save some data via ECC and access it via CRM. I have already established the RFC link between ECC and CRM.
    Probably communication between the SD module of ECC and CRM would work.
    Can you please help me out with this? A clue or a pointer in the right direction would be very helpful. Basically, I want to know which transactions I can use for this.
    Any help is really appreciated.
    Thanks
    -Vishal

    Hi Glenn,
    I followed all the steps in the document mentioned above. At the end, during execution of the transaction, I selected the appropriate source and destination, but when I click 'start transfer objects' it gets stuck in the waiting phase.
    Among those steps, I was not able to perform step 10, 'Creating Subscriptions for OLTP': my ERP does not display this option under the subscription wizard. Could that be the reason the transfer does not start? I also do not see the ERP logon screen when I select transfer.
    My other question is: once the transfer completes, where will I be able to see the transferred objects in the ERP database?
    Thanks
    -Vishal

  • Optimize the performance of the RFC call between ECC and CRM

    Hi,
    We are planning to extract sales orders, sales activities, and service orders to display on the PDF fact sheet of the account.
    As of now, the PDF fact sheet takes a long time to retrieve the data from ECC into CRM. Can you please suggest ways to optimize the performance of the RFC call between ECC and CRM?
    Thanks in advance,
    Vamsi.

    Hello,
    [SAP Note 636906|https://service.sap.com/sap/support/notes/636906] is quite useful here.
    Often the performance is poor due to function module CRM_CCKPT_EXPORTSUMMARY. This function module takes the customer number, the sales organization, and the fact sheet view. If in CRM customizing you use the complete view (001), then all the views in ERP, including all the info blocks, will be retrieved, which causes the performance issue.
    To solve the issue, please use a limited view to retrieve the data from ERP - especially a view that does not contain info block 013.
    Hope it helps
    Joaquin

  • I did a phone reset and lost my contacts and messages - can I get them back?

    I did a phone reset and I lost my contacts and my messages. Please, can I get them back?

    The only ways to get them back are to sync with your iCloud account, if you have one set up with your contacts and messages backed up there, or to sync with the computer that you use to back up your iPhone data and apps.
    Hope this helps.

  • Data Replication Between SQL Server and Oracle 11g using a materialized view

    I have SQL Server 2005 as my source and Oracle 11g as my target. I need to populate the target daily with change data from the source.
    For that we have created a database link between SQL Server and Oracle and replicated the table as a materialized view in Oracle.
    The problem we are running into is that the fast refresh option is not available: each day the refresh picks up the full data set from the source.
    Is there any way to use fast refresh in this scenario?
    Thanks in advance.
    Regards,
    Balaram.

    Please do not post duplicates - Data Replication Between SQL Server and Oracle 11g using a materialized view.

  • Block Dependent Demand Interfaces between ECC and SCM

    Hello All.
    Is there a way to block only the Dependent Demand interfaces between ECC and SCM?
    I need to do this because ECC overwrites my plan in SCM.
    Thanks and Best Regards

    Hello PrasunM
    For example:
    When you create an order for a finished material, SCM, using PP method '3 - Cover Ind. Requ.', creates a dependent requirement (DepReq) on the raw material.
    SCM then sends PlOrd, PrdOrd, and DepReq to ECC via CIF. I want to send only PlOrd and PrdOrd; the DepReq I only need on the SCM side.
    Best Regards.

  • Data streaming between server and client does not complete

    Using an ad-hoc app, data streaming between server and client does not complete as it is supposed to.
    The process runs well on Solaris 5.8; however, under 5.9 we have found the character stream buffer length limit to be around 900 to 950 characters (by default we use 3072 characters).
    Examples:
    - When transferring an HTML file to be displayed in the app client with buffer=3072, the HTML is only displayed/transferred as xxxxxxxx characters, but with buffer=900 the HTML is displayed properly. In that case, the only problem we have is that the file transfer takes longer than usual.
    - In another case, we have to transfer information (data) as a stream to the client. A long data stream does not appear at all on the client.
    Any ideas why the change between 5.8 and 5.9 would cause problems?
    The current app driver we are using is compiled under Solaris 5.6. If possible, we would like to use a later version compiled under Solaris 5.9 - do you think this would solve our problem?
    Thanks
    Paul

    Does this have anything to do with Java RMI? Or with Java at all, come to think of it?

  • Secure the file/data transfer between XI and any third-party system

    Hi All,,
    I would like to secure, via SSH at the OS level, the file/data transfer between XI and any third-party system, using 'Run OS Command Before Processing' and 'Run OS Command After Processing'. Right now our XI server is installed on iSeries.
    On iSeries we can't call Unix commands, so I expect we need to go with AS/400 (CL) programming. If we create the AS/400 program, how can I call it from XI?
    If anyone has an idea, please let me know whether it will work or not.
    Thanks in advance.
    Venkat

    Hi,
    Thanks for your reply.
    I have read some blogs, like /people/krishna.moorthyp/blog/2007/07/31/sftp-vs-ftps-in-sap-pi, about calling a Unix shell script from XI.
    But as far as I know, on iSeries we cannot write a shell script; we need to go with an AS/400 program. If we go with AS/400, I am not sure how we need to call that program, or whether it will work at all - I need some help there, please.
    Thanks,
    Venkat

  • Data mismatch between 10g and 11g.

    Hi
    We recently upgraded OBIEE from 10.1.3.4.0 to 11.1.1.6.0. While testing, we found a data mismatch between 10g and 11g for a few reports that include a front-end calculated column with a division in it, for example ("- Paycheck"."Earnings" / COUNT(DISTINCT "- Pay"."Check Date")) / 25.
    The data matches in the following scenarios:
    1) When the column is removed from both 10g and 11g.
    2) When the aggregation rule is set to either 'Sum' or 'Count' in both 10g and 11g.
    It would be greatly appreciated if anyone could provide a workaround or pointers to solve this issue.
    Thanks
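A plausible reason (an educated guess, not confirmed OBIEE behavior) why only division-bearing columns disagree is aggregation order: dividing per row and then aggregating gives a different number than aggregating first and then dividing, so two engine versions that pick different orders will disagree even on identical data. Setting an explicit aggregation rule forces one order, which would explain why 'Sum' or 'Count' makes the versions match. The arithmetic, with made-up illustrative numbers:

```python
earnings = [100.0, 200.0, 300.0]
checks   = [1, 2, 3]  # distinct check-date counts per row (illustrative)

# Order A: aggregate first, then divide.
ratio_of_sums = sum(earnings) / sum(checks)          # 600 / 6 = 100.0

# Order B: divide per row, then aggregate.
sum_of_ratios = sum(e / c for e, c in zip(earnings, checks))  # 100+100+100 = 300.0

# The two orders give genuinely different results on the same data.
assert ratio_of_sums != sum_of_ratios
```

Comparing the generated SQL from both versions (e.g. via the query log) would confirm or rule this out.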

    jfedynic wrote:
    The 10g and 11.1.0.7 Databases are currently set to AL32UTF8.
    In each database there is a VARCHAR2 field used to store data, but not specifically AL32UTF8 data but encrypted data.
    Using the 10g Client to connect to either the 10g database or 11g database it works fine.
    Using the 11.1.0.7 Client to go against either the 10g or 11g database and it produces the error: ORA-29275: partial multibyte character
    What has changed?
    Was it considered a Bug in 10g because it allowed this behavior and now 11g is operating correctly?
    29275, 00000, "partial multibyte character"
    // *Cause:  The requested read operation could not complete because a partial
    //          multibyte character was found at the end of the input.
    // *Action: Ensure that the complete multibyte character is sent from the
    //          remote server and retry the operation. Or read the partial
    //          multibyte character as RAW.
    It appears to me a bug got fixed.
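The condition ORA-29275 describes can be reproduced in any strict decoder: cut a multibyte character in half and decoding fails. Storing encrypted (effectively random) bytes in a VARCHAR2 under AL32UTF8 makes such byte patterns likely, which fits the "bug got fixed" reading: the older client silently tolerated what the newer one rejects. A small Python illustration of the same failure mode:

```python
text = "café"
data = text.encode("utf-8")      # b'caf\xc3\xa9' -- 'é' occupies two bytes

# Cutting the byte stream mid-character leaves a partial multibyte sequence:
truncated = data[:-1]            # b'caf\xc3'

try:
    truncated.decode("utf-8")    # strict decoding rejects the partial char,
    failed = False               # analogous to the 11g client's ORA-29275
except UnicodeDecodeError:
    failed = True

# The note's suggested action -- read the partial character as RAW --
# corresponds to not decoding at all, or decoding leniently to inspect:
lenient = truncated.decode("utf-8", errors="replace")
```

Here `failed` ends up True and `lenient` is `"caf"` plus a replacement character, which is why treating the column as RAW (never decoding) is the robust way to store non-character data.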

  • Data sync between Oracle and SQL Server

    Greetings Everyone,
    Your expert views on the following would be highly appreciated.
    At work we are evaluating different solutions to achieve data synchronization between Oracle and SQL Server databases. The sync I mention here is for live applications: we are running Oracle EBS 11i with custom applications and intend to implement custom software based on .NET and SQL Server, and the whole goal is to propagate updates and data changes between these systems whenever they happen.
    I googled and found Oracle GoldenGate, Microsoft SSIS, WisdomForce from Informatica...
    If you can pour in more knowledge, that would be great.
    Thank You.

    Most of the work involved has to be done through scripts; there is no effective GUI for implementing OGG. However, using the commands is not very tough, and they are quite intuitive.
    These are the steps, from a high level:
    1. Get the appropriate GoldenGate software for your source and target OS.
    2. Install GoldenGate on the source and target systems.
    3. Create the Manager and Extract processes on the source system.
    4. Create the Manager and Replicat processes on the target system.
    5. Start these processes.
    First try to achieve uni-directional replication; bi-directional is then easy. I have implemented bi-directional active-active replication using Oracle DBs as source and target. Refer to the Oracle installation and admin guides for more details.
    Here is a good article that may be handy in your case:
    http://www.oracle.com/technetwork/articles/datawarehouse/oracle-sqlserver-goldengate-460262.html
    Edited by: satrap on Nov 30, 2012 8:33 AM
