TCODE: ME2N is processing very slowly

Hi, we have an issue with transaction code ME2N processing very slowly. This transaction
was fast when we went live with SAP, but recently we have noticed
slowness. When users run ME2N with purchasing group 124 to pull POs, it
takes forever. I researched and couldn't find anything specific on
this. Please advise.

Hi,
While the end user executes the transaction, put a trace on that work process (for example an SQL trace in ST05).
In that trace you can see where the time goes,
for example how much time is spent fetching the data from each table.
You will probably have to create a secondary index for the table that dominates the trace, or else the time is being spent in a loop inside a program that ME2N uses internally.
If you do these things, your problem will most likely be solved; a rough sketch of the index idea follows.
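A minimal sketch, assuming the trace showed long sequential reads on the PO header table EKKO filtered by the purchasing group (field EKGRP); the secondary index, which in practice is defined in SE11 rather than with hand-written DDL, would correspond to roughly this:

    -- hypothetical sketch only: in an SAP system this index is maintained via SE11,
    -- never with direct DDL; MANDT is the client field and EKGRP the purchasing group
    CREATE INDEX ekko_z01 ON ekko (mandt, ekgrp);

Whether such an index really helps depends on what the trace shows, so check the access path of the slow statement first.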
Regards,
Anil

Similar Messages

  • SM58 entries processed very slow

    Hi Gurus,
    currently we are facing problems with tRFC queue entries: sometimes all entries get stuck in the queue and are processed very slowly. We have scheduled a job to process the tRFCs, but the entries are still being processed very slowly.
    If anyone has come across a similar issue, please provide your inputs.
    Thanks and Regards,
    venkat.

    An SAP Notes search with the keywords SM58 and performance could give you a start.
    For example:
    Note 375566 - Large number of entries in tRFC and qRFC tables
    Note 460235 - tRFC/qRFC: Low-speed processing

  • Enqueue is processing very slow

    Hello gurus,
    Enqueue processing is very slow, and I can still see a lot of requests waiting in the enqueue. Please suggest something quickly;
    our production system is badly affected by this.
    regards
    Vamsi

    So, why open the same thread with two different users?
    Enqueue is processing very slow and system is almost hanging

  • How can I stop the program from going through all of the text previously put in? It is making the process very slow.

    How can I stop the program from going through all of the text previously put in? It is making the process very slow.

    Put your input loop inside

        while True:

    and just indent everything else under it; the loop then ends when you press ^C.
    If you wish to have a way to break out of the loop, say, when you press Enter at the VPC number without entering anything, add the following after your vpc input:

        if not vpc:
            break

    which will break out of the while loop and end your program.

  • Update process very slow in Oracle 8 when updating bulk data

    Dear all,
    I am updating data through a SQL sub-query, but I need to get two columns from the sub-query to update my source table. The problem is that the sub-query only returns a single column while updating, and I don't want to write another query for the second column because of the performance impact.
    The other issue is performance: the update is very slow. How can I make a bulk update fast?
    Please suggest,
    Thanks

    Actually I am updating a time roster table with machine data. First I read the data from a file and insert it into machine_table, and then
    I run a join query to update the roster table, like the one below.
    The roster table contains a row for every day of the month for every employee.

        update roster a
           set (a.timein, a.timeout) = (select timein, timeout
                                          from machine_table mch
                                         where a.roster_date = mch.roster_date
                                           and a.person_id = mch.person_id);

    This query updates around 7,750 rows and it takes too much time.
    Please help, urgent, thanks.
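    A sketch of one common way to speed this up, assuming machine_table holds at most one row per person and day (the index name machine_tbl_idx is made up for the example): index the join columns and restrict the update to roster rows that actually have a match, so rows without machine data are neither revisited row by row nor overwritten with NULL:

        -- index the columns used by the correlated sub-query
        create index machine_tbl_idx on machine_table (person_id, roster_date);

        update roster a
           set (a.timein, a.timeout) = (select mch.timein, mch.timeout
                                          from machine_table mch
                                         where mch.roster_date = a.roster_date
                                           and mch.person_id = a.person_id)
         where exists (select 1
                         from machine_table mch
                        where mch.roster_date = a.roster_date
                          and mch.person_id = a.person_id);

    The EXISTS variant also works on Oracle 8, where MERGE is not yet available.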

  • Idoc trfc processing very slow

    Hello,
    I am troubleshooting slow processing of IDocs in an XI test system.
    IDocs are being processed at a rate of between 40 and 60 per hour, so at the current rate it will still take another 3 days (approx.) until the queue is cleared.
    In SMQS on the XI (source) system the specific IDoc destination is set up as follows:
    Clt - 10
    Destination - R2RREG100
    Type -  R
    W/o tRFC -
    Max.Conn. - 10
    Max. Runtime - 60
    Status - WAITING
    Act.Conn - 5
    Host ID - ax2t003a_X2T_43
    The RFC connection between the two systems has been tested successfully.
    Selecting the tRFC monitor, there are still over 3,000 rows similar to the following (it was over 7,000 when first checked):
    Caller - PIAFUSER
    Function Module - IDOC_INBOUND_ASYNCHRONOUS
    Target System - R2RREG100
    Date - 15.12.2011
    Time - 12:38:22
    Status Text - Transaction recorded
    Transaction ID - 8218046C00014EC9DE911C9B
    Host - ax2t003a
    Tctn
    Program - SAPMSSY1
    Cln - 10
    Rpts - 30
    I am told by the application team that processing 7,000+ IDocs is not a large amount for this system.
    The XI (source) system and the R3 destination system have both been restarted to clear memory etc., but this has not improved the speed of processing.
    I presume the problem was caused by trying to process too many IDocs at once through the XI system, as mentioned in SAP Note 1548446.
    I have read SAP Note 1483757 and checked the ST03N last-minute load for the RFC task type in the R3 destination system:
    Task Type Name - RFC
    #Sequential Read - 4,782,264
    T Seq Read - 1,157
    Avg Seq Read - 0.2
    #Accesses -      13,515,579
    T Time - 204
    Avg T - 0.0
    #Changes - 54,206
    T Changes - 63
    Avg Changes - 1.2
    Calls - 18,352,049
    Avg Time - 0.1
    #Reads - 1,828,651
    #DB Recs - 109,376
    #Records - 7,687,467
    #Calls - 0
    T Time - 0.0
    Avg Time - 0.0
    The last-minute load table statistics for the ARFCRSTATE table in the destination system show the following:
    Table Name - ARFCRSTATE
    Records -      1,822
    Modifiable - 226
    records - 0
    Sequential Reads - 1,596
    T Access - 1
    In accordance with the note I tried setting the following on the destination system, but I did not see any improvement in processing speed, so I have now removed the parameter and the scheduling of the report.
        1. Set the profile parameter abap/arfcrstate_col_delete to 'X' on the default profile of the destination system.
        2. Periodically schedule the RSTRFCEU batch job in intervals of 5 minutes, in the destination system.
    Q1 - After setting 'abap/arfcrstate_col_delete' I did not restart the R3 system. Is a restart required for this to take effect?
    Q2 - What else should I check to identify why IDocs are being processed so slowly?
    Q3 - Is there anything else I could change to speed up processing?
    Many thanks,

    Many thanks jbagga for your reply.
    My issue was resolved by SAP, who suggested changing the processing mode for the message type in WE20 on the receiver system from "Trigger Immediately" to "Trigger by background program".
    With the processing mode set to "Trigger Immediately", the sender system calls the receiver, the IDoc is stored on the database and the application processing is carried out in the same LUW. As a result, the sender must wait until the application finishes processing the messages before it can do anything else.
    Once the processing mode was changed to "Trigger by background program", the IDoc transfer was decoupled from the IDoc application processing; performance immediately improved in the XI (sender) system and the queue was cleared.
    Regards,

  • SM58 Transaction Recorded Very Slow

    Dear Experts,
    I have many tRFCs in the PI system in SM58 with the status "transaction recorded". This seems to be because there was an unusual amount of data (file to IDoc) from one interface that suddenly sent more data than usual. This happened at midnight, and since then we have had a queue in SM58 with many recorded transactions (until tonight).
    The strange thing is that when I try to execute the LUW (F6), the transaction is processed successfully, even though sometimes after pressing F6 the processing can take a while (marked by the loading cursor). When the processing takes long, I just stop the transaction and re-open SM58; the transaction that was executed before is then in the "executing" status, and when I execute the LUW for that transaction again it never takes long.
    Trying to execute LUWs with the "recorded" status ticked ends up with no transactions executed.
    Checking SMQS for the destination, the actual connections rarely reach the maximum number of connections. Looking at the qRFC resources in SMQS, the resource status shows OK.
    Going to SM51, then Server Name > Information > Queue Information, there are no waiting requests.
    Actually the transactions do get processed; it is just that they are processed very slowly, and this impacts the business.
    What can I do when this happens? How do I really re-process those recorded transactions?
    Could this be because of the receiver's resources? How can I check that?

    Dear Experts,
    According to this link,
    http://wiki.scn.sap.com/wiki/pages/viewpage.action?original_fqdn=wiki.sdn.sap.com&pageId=145719978
    "Transaction recorded" usually happens when A. processing idocs or B. BW loads.
    A. If it occurs when processing IDocs you will see the function module "IDOC_INBOUND_ASYNCH" mentioned in SM58.
    Check also that the IDocs are being processed in the background and not in the foreground.
    Ensure that background processing is used for ALE communications.
    Report RSEOUT00 (outbound) can be configured to run very specifically for the high-volume message types on the system. Schedule regular runs of report RSEOUT00; it can be run per
    IDoc type and/or partner etc.
    To set background processing for outbound IDocs, do the following:
    -> go to transaction WE20 -> select the partner, select the outbound message type and change the processing method from
    "Transfer IDoc Immedi." to "Collect IDocs".
    Reading those explanations, should the setting of IDoc processing to background (Collect IDocs) be done in PI or in the receiver?
    If the IDocs are processed collectively, will that make sending IDocs from PI to the receiver system faster? What is the explanation for collective processing making the IDoc sending faster?
    Why should we use report RSEOUT00 when we already have SM58, where we can execute the LUWs?
    Thank you,
    Suwandi C.

  • Multiple crashes fixed by turning off hardware and Flash acceleration; very slow now. Takes five seconds to exit and still listed as a running process

    A workaround was suggested by a member of the community to turn off both hardware and Flash acceleration. It worked fine (no crashes since), but runs very slowly. In particular, takes five seconds to exit and is often still listed as a running process. Very slow in connecting to Web pages, and very slow loading them because of the graphics. Very slow in loading video. I expected slower responses, but this is REALLY slow. I'm running 64-bit Windows 7 and an Nvidia GE Force 7800 graphics card with all the drivers updated and the plugins for Firefox mostly set on "ask to activate". Should I expect this much reduction in performance when the workaround I mentioned was put into place? If so, it's half a loaf at best. The only thing questionable is that I have two Youtube downloaders that I am trying, but I made the assumption that these were only applied when you downloaded something from Youtube.

    In case you are using "Clear history when Firefox closes": do not clear the cookies.
    If you clear cookies then Firefox will also try to remove cookies created by plugins, and that requires starting plugin-container processes that can slow down closing Firefox.
    Instead, let the cookies expire when Firefox is closed, making them session cookies:
    Firefox/Tools > Options > Privacy > "Use custom settings for history" > Cookies: Keep until: "I close Firefox"

  • Ps -p running very slow (1-2 seconds) on Java process

    Hi Solaris gurus:
    I encountered an issue where running ps -p on a Java process is very slow; it took almost 2 seconds to complete.
    I issued a truss on the "ps -p" command; the following is part of the output:
    /1: 0.0001 fstat(1, 0xFFFFFFFF7FFFE1F0) = 0
    /1: 0.0000 write(1, " U I D P".., 55) = 55
    /1: 0.0002 open("/proc/19299/psinfo", O_RDONLY) = 3
    */1: 1.3170 read(3, "02\0\0\0\0\0\011\0\0 K c".., 416) = 416*
    /1: 1.2401 close(3) = 0
    /1: 0.0002 stat("/dev/tty", 0xFFFFFFFF7FFFE830) = 0
    It seems that the read() call spent most of the time.
    Can anyone help?

    Not enough memory; the amount of page-outs is too large:
    8.91 GB    Page-outs
    After removing the adware, do a safe boot, remove the Safari extensions, and reset the caches, history etc. You also need to boot into Recovery Mode and run Repair Disk from there, and boot the system in Safe Mode.
    On startup it sounds like you have a problem with the directory, which would also account for the long startups and the directory checking. Along with, or instead of, Disk Utility's Repair Disk you can use Single User Mode and fsck -fy to try to fix the directory, but in some cases that may not be enough.
    Backups from before you got this adware and these problems help; you should always be ready and able to restore a system from known good backups or a system restore image.
    4 GB of RAM may have been fine originally, but "Early 2011" is now 5 years and 4 OS versions old. You can upgrade the memory, and while you are at it consider a nice internal SSD, which will help as well. Take a look at the prices and options at http://www.macsales.com for your 2011 MacBook Pro.
    http://www.everymac.com
    Community for MBP MacBook Pro

  • XI message processing is very slow

    XI message processing is very slow. I did everything in the tuning guide, but it is still very slow.
    I executed DB13 --> Check and update optimizer statistics.
    XI message processing is faster on another XI server, although that server's hardware is weaker.
    I checked the DB2 tablespaces and extended them.
    What can I do to get higher performance?
    Thanks.

    Hi Cemil
    The following thread can throw some light on what you are looking for:
    Where could I find the information about XI performance?
    Also, check the following pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3fbea790-0201-0010-6481-8370ebc3c17d
    Regards
    krishna

  • After closing large documents (drawings) the window closes but the process still runs in the background. I open the next document, the same thing happens, and after doing this several times the RAM is full and the system becomes very slow. What can I do?

    After closing large documents (drawings) the window closes but the process still runs in the background. I open the next document, the same thing happens, and after doing this several times the RAM is full and the system becomes very slow. What can I do?

    You can always shut it down manually via the Task Manager
    (Ctrl+Shift+Esc)...

  • XI getting slow after processing a very large message

    Hi All,
    I loaded more than 50 MB of data, and after that I cannot access any function from the browser (RWB, message monitoring, ...), but the ABAP side still works fine. I guess the J2EE engine is still processing the message and that is the reason. How can I stop the processing when I load such large files? I have already tried deleting the queue, deleting the work item in SWWL, and cancelling the message, but nothing happens and XI is still very slow.
    Do I only have to start/stop the J2EE engine?
    Could you please advise?
    Thanks
    Park

    Hi Park Saeiam,
    in addition to Marcos:
    If the "browser" functions are not working it is possible that the Java part of XI has crashed and you have to restart the J2EE engine. Your messages may show the error HTTP STATUS CODE 401 (authorization error);
    this error generally occurs when the Java part of XI crashes and the server is unable to recognise your browser and your credentials.
    In that case you need to restart the J2EE engine, and you also need to manually restart all asynchronous messages that go into error because of the restart of the J2EE engine.
    I advise you not to create such huge messages.
    You may also try to send this kind of huge message as an IDoc package.
    IDocs are normally sent in packages between systems. You configure this within the IDoc configuration partner profile by specifying the processing mode "Collect IDocs and transfer". IDocs are then processed by report RSEOUT00 in the sender system. During inbound processing in the IDoc adapter of the Integration Server, this IDoc package is split up into single XI messages. Therefore, an IDoc package cannot be processed as one unit within XI. This also has an impact on performance.
    In SP11, a solution will be available for IDoc adapter outbound processing to collect IDoc-XML messages and transfer them as an IDoc package, as in the traditional IDoc world.
    The IDoc adapter is designed to be able to process IDoc-XML structures that represent IDoc packages.
    For more help you can refer to the SAP XI tuning guide.
    Thanks
    sandeep sharma
    PS: if helpful Reward points

  • Recently bought a Samsung NX30 camera which came with Lightroom 5; after several hours of googling I got it set up properly with the correct Camera Raw to be able to access my raw images. I found my old laptop was very slow when processing the hundreds of

    I recently bought a Samsung NX30 camera which came with Lightroom 5. After several hours of googling I got it set up properly with the correct Camera Raw to be able to access my raw images. I found my old laptop was very slow when processing the hundreds of images I usually take on a weekly basis. I bought a new, faster laptop, but when I looked for the CD with the software and serial I could not find it in the Christmas clutter. I downloaded a trial version of Lightroom and got it working on my new laptop. Is there a way to access the serial number from my previously installed version and insert it into the trial version to make it work for me?
    Myron

    Mea culpa.
    While I registered all my CS2 through CS6 versions and my Lightroom 2 through 4 versions, recorded the serials and saved the Adobe registration emails, this came as a freebie with the camera and I neglected that vital step. I have the installed version on my old laptop but want to install it on my new one. Thanks to Jim Hess I got the serial, so all is well in my world, and I will register it with Adobe.
    Myron

  • Very slow internet after Mavericks update. Opening websites very slow. Console info here. 9/25/14 11:12:51.416 AM com.apple.launchd[1]: (com.apple.qtkitserver[22059]) Could not terminate job: 3: No such process 9/25/14 11:12:51.416 AM com.apple.launc

    Very slow internet after Mavericks update. Opening websites very slow. Console info here. 9/25/14 11:12:51.416 AM com.apple.launchd[1]: (com.apple.qtkitserver[22059]) Could not terminate job: 3: No such process 9/25/14 11:12:51.416 AM com.apple.launc

    Please read this whole message before doing anything.
    This procedure is a diagnostic test. It’s unlikely to solve your problem. Don’t be disappointed when you find that nothing has changed after you complete it.
    The purpose of the test is to determine whether the problem is caused by third-party software that loads automatically at startup or login, by a peripheral device, by a font conflict, or by corruption of the file system or of certain system caches.
    Disconnect all wired peripherals except those needed for the test, and remove all aftermarket expansion cards, if applicable. Start up in safe mode and log in to the account with the problem. You must hold down the shift key twice: once when you turn on the computer, and again when you log in.
    Note: If FileVault is enabled, or if a firmware password is set, or if the startup volume is a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to start up and run than normal, with limited graphics performance, and some things won’t work at all, including sound output and Wi-Fi on certain models. The next normal startup may also be somewhat slow.
    The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you've forgotten the password, you will need to reset it before you begin.
    Test while in safe mode. Same problem?
    After testing, restart as usual (not in safe mode) and verify that you still have the problem. Post the results of the test.

  • Healing brush/Clone stamp tool and others very slow processing

    The Healing Brush/Clone Stamp tool and others are very slow to process. The system resource monitor shows only 700 MB of RAM (of the 5 GB allocated to Photoshop) and 30% CPU being used to perform tasks such as the healing brush. The progress bar comes up and is painfully slow to complete the task. I don't understand why Photoshop does not use all the resources available/allocated. No other programs are running at the time, and I have tried all the suggestions on this forum and all over Google. My brand new system is: Intel i7 4770K overclocked to 4.3 GHz, 8 GB 1600 RAM, Samsung PRO 120 GB SSD system drive, 100 GB scratch disk. The photos I usually work on don't exceed 20 MB in size. Any suggestion would be appreciated. Thanks

    Thanks for the answer.
    I have not mentioned the GPU because the processes I described (as far as I know) are not supposed to use GPU resources. So far I have not been able to afford the GPU I want, which is a GTX 770, so I use the Intel 4600 built-in graphics instead. To be honest I find that the 4600 graphics is quite powerful: I am able to play Mass Effect 3 on full spec at max resolution without the game slowing down, and the Windows 7 64-bit index score is 7.8 for the 4600 graphics. The overall index score is 7.8 on my PC right now.
    I'll try to get the Photoshop system info as soon as possible.
    Thanks again.
    Zee
