Performance problem in IDoc inbound processing (DELFOR01)

Hi,
I have serious problems receiving IDoc DELFOR01 (function module IDOC_INPUT_DELINS). We receive more than 3,000 IDocs of this type daily, and processing them runs all day....
One of the most common errors is FOREIGN_LOCK on table VBAK. This error occurs in FORM beleg_sperren (line 459 of IDOC_INPUT_DELINS) when the process tries to enqueue a VBELN entry of VBAK (I think due to the high number of IDocs received), and the process waits a long time trying to enqueue the table entry until the error is raised...
When I analyze IDOC_INPUT_DELINS I see a batch input for transaction VA32 (at line 528). I think calling a BAPI instead of CALL TRANSACTION would give better performance... is that right?
I would appreciate help with possible solutions to this problem. Is there a better way to process a large volume of IDocs of this type? Or is there another function module than IDOC_INPUT_DELINS with better performance?
Regards,

Not needed immediately, but as quickly as possible... In WE20, if I choose the background program, will the processing be faster* than if I choose 'Trigger immediately'?
*executed via a scheduled job running program RBDAPP01.
I am still reading the notes mentioned above...
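In the meantime, one way to soften the FOREIGN_LOCK symptom is to retry the VBAK lock with a short wait instead of failing on the first collision, for example in a wrapper or user exit around the inbound processing. A minimal sketch, assuming the standard VBAK lock object EVVBAKE and its generated enqueue function module (verify both names on your release):

* Hedged sketch: retry the sales document lock a few times before
* giving up, instead of failing on the first FOREIGN_LOCK.
* ENQUEUE_EVVBAKE is the generated FM for lock object EVVBAKE (assumed).
DATA: lv_vbeln TYPE vbak-vbeln VALUE '0030000001'.  " example document number

DO 5 TIMES.
  CALL FUNCTION 'ENQUEUE_EVVBAKE'
    EXPORTING
      vbeln          = lv_vbeln
      _wait          = 'X'         " let the enqueue server itself wait briefly
    EXCEPTIONS
      foreign_lock   = 1
      system_failure = 2
      OTHERS         = 3.
  IF sy-subrc = 0.
    EXIT.                          " lock obtained, continue processing
  ELSEIF sy-subrc = 1.
    WAIT UP TO 2 SECONDS.          " another IDoc holds the lock: back off, retry
  ELSE.
    EXIT.                          " real error: log/raise as appropriate
  ENDIF.
ENDDO.

Note that this cannot be patched into the standard function module itself; serializing the inbound IDocs (see the WE20/RBDAPP01 discussion in the similar messages below) so that two IDocs for the same document are not processed in parallel attacks the same root cause.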

Similar Messages

  • Problems in IDOC receiver communication channel

    Hi,
    I am trying to build an interface to get data from a 3rd-party legacy system using JDBC and post the data into ECC using IDocs. I have the IDoc built in ECC, and I am able to import the metadata in IDX2 after creating a port in IDX1. But after building the whole interface, I don't see the IDoc receiver communication channel which has to post the IDocs into the ECC 6.0 system.
    Please assist.
    Rgds
    Kishore

    OK in that case, we are getting the following error in the Message monitoring:
    <SAP:Category>XIServer</SAP:Category>
      <SAP:Code area="OUTBINDING">CO_TXT_OUTBINDING_ERROR</SAP:Code>
      <SAP:P1>-BS_AVN_TO_FILE_JDBCSERVER</SAP:P1>
      <SAP:P2>-BS_WOAV_IDOC_SENDER,urn:sap-com:document:sap:idoc:messages.ZAVENTITY01.ZENTITY</SAP:P2>
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText>No standard agreement found for , BS_AVN_TO_FILE_JDBCSERVER, , BS_WOAV_IDOC_SENDER, urn:sap-com:document:sap:idoc:messages, ZAVENTITY01.ZENTITY</SAP:AdditionalText>
    <SAP:Stack>Problem occurred in receiver agreement for sender -BS_AVN_TO_FILE_JDBCSERVER to receiver -BS_WOAV_IDOC_SENDER,urn:sap-com:document:sap:idoc:messages.ZAVENTITY01.ZENTITY: No standard agreement found for , BS_AVN_TO_FILE_JDBCSERVER, , BS_WOAV_IDOC_SENDER, urn:sap-com:document:sap:idoc:messages, ZAVENTITY01.ZENTITY</SAP:Stack>
    Please check and assist.
    Rgds
    Kishore

  • A problem about IDOC Receiver

    Hi, everybody.
    This time, I'm developing an IDoc receiver through the SAPIDocReceiver class. With the dotnet Connector manual from SAP, the receiver side on dotnet is easy to develop. But I can't find clear steps in the manual on how to configure SAP R/3 to send IDocs to the dotnet side.
    So I searched for material about how to configure IDocs, hoping it would be useful for my work. But most of it focuses on how to configure an IDoc transfer between SAP systems, not between SAP and non-SAP systems.
    So what I want to ask is: is there any difference between the two configurations, the SAP-to-SAP one and the SAP-to-non-SAP one? I guess there must be. Are the necessary configuration steps, like creating a logical system and allocating logical systems to clients, also necessary in the non-SAP case? I think some of the configuration actions differ between the two cases. Can anyone give me clear steps or tell me where I can get them?
    thanks to all.

    Hi,
    for testing purposes you can do the following:
    (1) via SM59 create the RFC destination for your dotnet application
    (2) via WE21 create a trfc-port pointing to the RFC destination created in (1)
    (3) via SM30 maintain view V_TBDLS to create a new logical system for your dotnet application for example DOTNET (and maybe for the SAP client you are testing with - here the name could be for example <SID>CLNT<CLIENT>)
    (4) via SCC4 assign the logical system to the SAP client (for example <SID>CLNT<CLIENT>) only if not already maintained.
    (5) In WE20 create a partner profile for the logical system created for the dotnet application in step (3)
    (6) In WE20 add outbound parameters for the message type (for example MATMAS) and IDoc type (MATMAS03) you want to send. As the partner port, use the trfc-port from step (2)
    (7) In WE19 create an IDoc via basic type (the IDoc type used in step (6), for example MATMAS03) and maintain the following fields in the control record EDIDC:
    recipient port: the port created in step (2)
    recipient partner number: the logical system name of the dotnet application created in step (3)
    recipient partner type: 'LS'
    sender port: SAP<SID>, where SID is the SAP system ID
    sender partner number: the logical system name of the SAP client maintained in step (4)
    sender partner type: 'LS'
    message type: the message type used in step (6), for example MATMAS
    In the other segments E1* you can fill in whatever you like.
    Hit the push button "Standard outbound processing".
    (8) You can check via BD87 whether the IDoc has been created successfully. If an error has occurred in the receiving application, or there is any other error, you can also see it here. If the IDoc was not passed to the receiver, you can process it from here as well.
    This is how you can create your test environment. If you want the IDoc to be created from the SAP application, it depends on the application how to do that!
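    If you prefer to script this test instead of clicking through WE19, the same control record fields can be filled programmatically and dispatched with the standard ALE function module MASTER_IDOC_DISTRIBUTE. A minimal, hedged sketch; the port and partner names are the example values from the steps above, and the segment data is a placeholder:

    * Sketch: build and dispatch a test IDoc, mirroring the WE19 steps above.
    DATA: ls_edidc TYPE edidc,
          lt_comm  TYPE STANDARD TABLE OF edidc,
          ls_edidd TYPE edidd,
          lt_data  TYPE STANDARD TABLE OF edidd.

    ls_edidc-rcvpor = 'DOTNETPORT'.  " recipient port from step (2)
    ls_edidc-rcvprn = 'DOTNET'.      " recipient partner number from step (3)
    ls_edidc-rcvprt = 'LS'.          " recipient partner type
    ls_edidc-sndpor = 'SAPXXX'.      " sender port SAP<SID>
    ls_edidc-sndprn = 'XXXCLNT100'.  " sender partner number from step (4)
    ls_edidc-sndprt = 'LS'.          " sender partner type
    ls_edidc-mestyp = 'MATMAS'.      " message type from step (6)
    ls_edidc-idoctp = 'MATMAS03'.    " basic type from step (6)

    ls_edidd-segnam = 'E1MARAM'.     " fill the E1* segments as you like
    ls_edidd-sdata  = 'TEST'.        " placeholder segment data
    APPEND ls_edidd TO lt_data.

    CALL FUNCTION 'MASTER_IDOC_DISTRIBUTE'
      EXPORTING
        master_idoc_control        = ls_edidc
      TABLES
        communication_idoc_control = lt_comm
        master_idoc_data           = lt_data.

    COMMIT WORK.  " required: the commit triggers the actual tRFC dispatch to the port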
    Best regards,
       Willy

  • Problem in IDOC Receiving on MII

    Hi
    I am using MII version 12.1.4 Build(46). We have configured the IDoc in SAP ECC, SAP NetWeaver, and MII.
    We are able to generate an IDoc from ECC, but we are not able to receive it in the MII Message Monitor.

    Manoj,
    The underlying NW CE installation must also be at SP3 (minimum).  Please check and let me know if you already have the update.  The IDoc Listener configuration changed a bit from 12.0 to 12.1 so there may be some other modifications needed.
    Regards,
    Mike

  • Performance Problem on Soap Receiver Side

    Hi,
    my scenario is: FTP to SOAP (asynchronous) on PI 7.31 SP11.
    The FTP sender is providing a few hundred files at once; unfortunately, the processing in the productive environment takes too long, like 15-60 minutes, depending on the number of files.
    What is really strange here: the same scenario works 100% properly in the test environment! So I did some tests to find out the bottleneck:
    FTP -> PI prod -> SOAP prod - slow
    FTP -> PI test   -> SOAP prod - fast
    FTP -> PI prod -> SOAP test   - fast
    FTP -> PI prod -> NFS prod    - fast
    So I found: it is only slow if I use the productive PI with the productive SOAP receiver.
    The adapter configuration is 100% the same as on test system.
    I watched the slow messages and found:
    First message:
    Normal processing of the steps until "XISOAP: XI message received for processing".
    Then it takes 4 seconds before "SOAP: Processing completed". This happens within milliseconds in the test environment.
    Later messages stay longer and longer in a queue. The processing time goes up to 36 seconds (instead of 4).
    -> Something is stopping or slowing their processing. This "stopper" gets worse as more messages are processed.
    Does anybody have an idea to explain this strange behaviour?
    Regards,
    Udo

    Hi Udo,
    The problem seems to be in the messaging system queue in the AFW in the production environment. That makes sense, because the SOAP prod and NFS prod channels are different adapters; however, AFAIK the SOAP test channel uses the same thread pool as the SOAP prod channel on the production PI.
    Have you tried increasing the number of threads? You can check the number of them at this URL (System Status): http://host:port/MessagingSystem/monitor/systemStatus.jsp
    Regards.

  • Problem in Idoc Receiver

    Hi guys.
    I have a FILE to IDOC scenario.
    The original IDoc has only 1 occurrence, but I've changed the occurrence with an external definition based on the IDoc XSD, with occurrence 0..N.
    Whenever an IDoc is generated in the mapping, I get the following error in the monitor:
    Error: MSGGUID 901EF81123CC4D131091DF1F61EB83B4: Tag found instead of tag IDOC BEGIN=
    Why am I getting this error in SXI_MONITOR if I've specified my cardinality as 0..N?
    Thanks a lot.
    Regards.

    >> Error: MSGGUID 901EF81123CC4D131091DF1F61EB83B4: Tag found instead of tag IDOC BEGIN=
    First export the IDoc locally and change the occurrence of the IDOC tag from 1 to 0..unbounded. Check that you specify the modified IDoc in the mapping. Also activate the objects and update the cache.
    Tips
    a) Use your modified IDoc on the target side of the message mapping
    b) Operation mapping: specify the original IDoc as the target message on the inbound side (you might get a warning during activation; please ignore it)
    c) In the interface determination, check that you specify the operation mapping and the inbound interface as the original IDoc...
    If you do all the above steps, you should not get this error. Hope this helps.

  • IDOC performance problem

    Hi All,
    Currently I have come across an IDoc performance problem: in WE20, the IDoc inbound process code is invoked in BDC manner.
    The details are as follows:
    (1) SeeBeyond sends the IDoc to the SAP side.
    (2) IDoc added (status 50)
    (3) IDoc ready to be passed to application (status 64)  11:06:27
    (4) IDoc passed to application (status 62)              11:21:22
    My question is why it takes so long for the IDoc to go from status 64 to status 62, and what factors affect this. Does the processing time include the BDC execution time? Thanks.

    Hi,
    I think it has taken a long time to get from status 64 to status 62.
    In the partner profile (transaction WE20) for this interface you have probably used the 'Collect IDocs' option. In that case the IDoc stays in status 64 until your background job runs.
    If you use the 'Trigger immediately' option in WE20, then as soon as the IDoc is in status 64 it will be processed, i.e., it will reach status 62 immediately.
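    If you go the 'Collect IDocs' route, the status 64 -> 62 transition only happens when RBDAPP01 runs, so the delay you see is essentially the job frequency. A minimal, hedged sketch of scheduling it from ABAP ('Z_INBOUND_64' is a hypothetical report variant selecting your message type and packet size; in practice most people simply define a periodic job in SM36):

    * Sketch: run RBDAPP01 (processes IDocs in status 64) as a background job.
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'Z_IDOC_DISPATCH',
          lv_jobcount TYPE tbtcjob-jobcount.

    " Open a background job, submit RBDAPP01 into it, and release it.
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.

    SUBMIT rbdapp01
      USING SELECTION-SET 'Z_INBOUND_64'
      VIA JOB lv_jobname NUMBER lv_jobcount
      AND RETURN.

    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = 'X'.  " start immediately; use PRDMINS etc. for a periodic job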
    I hope I have answered your question.
    Thanks,
    Mahesh.

  • IP job in BW finished, but how can I check the IDoc receive status in BW?

    Hi,experts
    After I executed an InfoPackage loading more than 200,000 records, the InfoPackage monitor shows me a yellow light: 186,020 of 200,000 records. It has been showing this for several hours now; it seems to have paused here.
    I checked the request in R/3, and the job has finished. As you know, this means R/3 has already finished pushing the IDocs.
    So what can I do now? And how can I check the IDoc receive status on the BW side?

    Hi,
    I had the same problem. I did the following to solve it; maybe this will help you:
    1) Go to transaction SM58, select the tRFC entry, and press F6.
    2) For a manual push of IDocs, go to transaction BD87, select the particular IDoc, and check its status; if it was not executed properly, do the manual push by pressing the Execute option.
    Thanks,

  • Performance problems with File Adapter and XI freeze

    Hi NetWeaver XI geeks,
    We are deploying an XI-based product and are encountering huge performance problems. Here are the scenario and the issues:
    - NetWeaver XI 2004
    - SAP 4.6c
    - Outbound Channel
    - No mapping used; only the IDoc adapter is involved in the pipeline processing
    - File Adapter
    - message file size < 2 KB
    We have narrowed the problem down to the IDoc adapter's performance.
    We are using a file channel: every 15 seconds a file in a valid IDoc format is placed in a folder; the IDoc adapter picks up the file from this folder and sends it to the SAP R/3 instance.
    For a few minutes (approx. 5 mins) it works (the CPU usage is less than 20%, even though the processing time seems huge: 5 sec/msg), but after this time the application gets blocked and the CPU is overloaded at 100% (2 disp_worker.exe processes at 50% each).
    If we inject several files into the source folder at the same time, or if we decrease the time gap between the creation of 2 IDoc files (from 15 seconds to 10 seconds), the process blocks after posting 2-3 IDocs to SAP R/3.
    Could you point us to some reasons that could provoke this behavior?
    Basically, we are looking for some help in improving the performance of the IDoc adapter.
    Thanks in advance for your help and regards,
    Adalbert

    Hi Bhavesh,
    Thanks for your suggestions. We will test...
    We wonder whether the hardware is the cause of this extremely poor performance.
    Our XI server is:
    •     Windows 2003 Server
    •     Processors: 2x3GHZ
    •     RAM: 4GB (memory is not saturated)
    The messages are well-formed IDocs = single-line INVOICEs.
    Some posts talk about 2,000 messages processed in a few seconds... whereas we get 5 sec per message.
    Thanks for your help.
    Adalbert

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take so much longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement1>
    <Subelement2>
    {$e/Subelement2/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
    </Description>
    </Subelement2>
    <Subelement3>
    {$e/Subelement3/Code}
    <Description>
    {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text() }
    </Description>
    </Subelement3>
    </Element>
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Performance problem in RFC to JDBC interface

    Hello everybody!
    I'm working with SAP PI 7.1.
    We defined some RFC - PI - JDBC (SQL Server) interfaces, but we have some performance problems.
    If we have many rows to write to the table, the interface ends in a timeout:
    Synchronous timeout exceeded.
    Returning to application. Exception: com.sap.engine.interfaces.messaging.api.exception.MessageExpiredException: Message 1d1f00b0-fecf-11de-8738-0015600446f0(OUTBOUND) expired.
    I read the PI tuning document and tried to apply the configuration with the Advanced Adapter Engine, but without result.
    Now we want to change the timeout in Visual Admin, and maybe that will solve the error, but I'm asking myself:
    Is it normal that writing 1,500 rows to a table takes more than 4 minutes?
    Is it possible to accelerate this process? After go-live we will write messages with more than 50,000 rows.
    Can somebody help me?
    PS: please, no links to the tuning guide or to notes (about increasing the timeout parameter).

    This could be because your database system (the JDBC server) is taking a long time to insert. The problem is not on the PI side but on the receiving system side. Try inserting the same number of rows on the database server itself and check the execution time. Adding indexes to your database table often solves the issue.
    Here PI is not the culprit but definitely  the receiver system.
    VJ

  • Problem sending IDOC DESADV /AFS/DELVRY03 to XI

    Hi Gurus,
    I have problems sending DESADV IDocs (basic type /AFS/DELVRY03), created by function LSEND_IDOC, to the XI system. The IDoc is correctly created by the R3 system (AFS ECC 5.0) and correctly sent to the XI port (Status: Data passed to port OK - 03), but the XI system doesn't receive the IDoc.
    If I resend the same IDoc via transaction WE19, it goes through OK!!
    In the sending R3 system there are no warnings/errors in the syslog or SM58.
    Any suggestions? Thanks in advance.

    Issue solved! It is necessary to close the master IDoc with a COMMIT WORK; without the commit, the tRFC call that actually transports the IDoc to XI is never triggered.

  • Performance problem on wait event PX Deq: Execute Reply

    Hi everybody
    I have encountered a performance problem. I ran tkprof on a select statement and saw that more than 95% of the elapsed time is due to the event PX Deq: Execute Reply.
    This query is not CPU- or paging-intensive. What is this event, and how can I reduce it? Could it be a disk problem?
    Thanks a lot, best regards
    Greg
    Here is a sample of my tkprof:
    call     count    cpu  elapsed  disk  query  current  rows
    Parse        1   0.03     0.03     0      0        0     0
    Execute      1   0.22     2.16    68    177       12     0
    Fetch        2   0.17   511.97    38     40        0     1
    total        4   0.42   514.16   106    217       12     1
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 38
    Rows Row Source Operation
    1 PX COORDINATOR (cr=202 pr=103 pw=0 time=513984636 us)
    0 PX SEND QC (RANDOM) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
    0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
    0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
    0 PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
    0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
    0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
    0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
    0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
    0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
    473 TABLE ACCESS FULL DIM_CALL_DISTANCE (cr=8 pr=7 pw=0 time=27259 us)
    0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
    0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
    0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
    0 PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
    4 TABLE ACCESS FULL DIM_AUDIT_CALL (cr=32 pr=31 pw=0 time=35037 us)
    0 PX BLOCK ITERATOR PARTITION: 1 16 (cr=0 pr=0 pw=0 time=0 us)
    0 TABLE ACCESS FULL FACT_CALL PARTITION: 1 48 (cr=0 pr=0 pw=0 time=0 us)
    Elapsed times include waiting on following events:
    Event waited on                       Times Waited  Max. Wait  Total Waited
    db file sequential read                         67       0.05          0.95
    os thread startup                                4       0.21          0.80
    PX Deq: Join ACK                                 4       0.00          0.00
    PX Deq: Parse Reply                              3       0.13          0.17
    SQL*Net message to client                        2       0.00          0.00
    PX Deq: Execute Reply                          304       1.96        511.68
    db file scattered read                           6       0.01          0.03
    PX qref latch                                   12       0.00          0.00
    SQL*Net message from client                      2      94.93         94.94
    PX Deq: Signal ACK                               6       0.10          0.11
    enq: PS - contention                             1       0.00          0.00
    ********************************************************************************

    PX Deq: Execute Reply is an idle event associated with Parallel Query. Are your tables partitioned, or do they have a degree greater than 1?
    The tables appear to be small in size. The overhead associated with parallel query generally hinders response time on queries involving small tables.

  • Performance Problem with File Adapter using FTP Connection

    Hi All,
    I have a pool of 19 interfaces that send data from R/3 using the RFC adapter, and these interfaces generate 30 TXT files on a target server. I'm using file adapters as the receiver communication channel. This is generating a serious performance problem. In the file adapter I'm using an FTP connection in 'Permanently' connection mode. Does anybody know whether the permanent connection is the cause of the performance problem?
    These interfaces will run once a day with a total of 600 messages.
    We are still using a test server with few messages.

    Hi Regis,
    We also faced the same problem. What happens is that when the FTP session is initiated by the file adapter, it is done from the XI server; hence the memory of the server is eaten up as well. Why don't you give 'per file transfer' a try?
    If the folder to which you are connecting is within your XI server's network, you can mount (or map) that drive on the XI server and use it with the NFS protocol of the file adapter, thereby increasing the performance.
    Cheers
    JK

  • IDOC Receiver - Unable to convert sender service to ALE

    Hi!!!
    I am trying to configure the following scenario: FILE - XI - IDoc to R/3.
    A business service called IDOC_Demo picks the file up from the server and, after mapping it, tries to send it to IBP, the R/3 system, using an IDoc receiver communication channel.
    IBP is configured in the SLD.
    In the XI system I have the RFC connection, and I have defined the port via transaction IDX1.
    But it does not work; I am getting the following error:
    "Unable to convert sender service IDOC_Demo to an ALE logical system"
    I have checked the adapter-specific identifiers, but I cannot see anything wrong...
    I do not know what else I can check! Could someone help me?! What can I do?
    Thanks a lot!!
    Araitz.

    Thank you very much, but I still get the error message! Of course I read your blogs, Michal, before posting the question, and they have been very useful, but still...
    In the SLD I only have the R/3 system; the name is IBP, as are the business landscape and the logical system name (I do not know if this could cause a problem…).
    The connection is IBP, and it works; in IDX1 I have configured a port named SAPIBP, using the IBP connection.
    Design… I have imported the CREMAS.CREMAS03 IDoc and created a mapping interface; I have disabled EDI_DC40 and set “begin” and “segment” to 1.
    Configuration… I have created a business service, IDOC_Demo, that has a sender communication channel of type file.
    And I have the IBP service. Under the adapter-specific identifiers I can see: logical system IBP, R/3 system IBP, client 100. If I push the “Compare with SLD” button, nothing happens… what should happen?
    IDOC_Demo receives the file and, via an outbound async interface, calls IBP, which receives the IDoc using a receiver IDoc adapter…
    Now the receiver agreement has header mapping information: sender service IDOC_Demo, receiver service IBP.
    And I do not know why, but it does not work… any ideas?
    Thank you!
    Araitz.
