Very High processing time in STAD

Hi
I have a problem in my NW04 BI system.
Users are occasionally experiencing high response times while using the HTTP interface to work with BI reports.
If I display the statistical records in STAD, I can see that while the SAPMHTTP program is running, the "processing time" and therefore the "Total time in workprocs" are very high, while the other times in the record are very low. CPU time is very low. Below is the detailed analysis.
CPU time                      94 ms
RFC+CPIC time                  0 ms
Total time in workprocs  481.566 ms
  Response time          481.566 ms
Wait for work process          0 ms
Processing time          481.093 ms
Load time                      1 ms
Generating time                0 ms
Roll (in+wait) time            0 ms
Database request time        226 ms
Enqueue time                   0 ms
DB procedure call time       246 ms
Number      Roll ins            1    
            Roll outs           1    
            Enqueues            8    
Load time   Program             1  ms
            Screen              0  ms
            CUA interf.         0  ms
Roll time   Out                 0  ms
            In                  0  ms
            Wait                0  ms
Frontend    No.roundtrips       0    
            GUI time            0  ms
            Net time            0  ms                                     
No. of DB procedure calls       1    
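If you do the math with the numbers above, the processing time is almost exactly the total time in the work process minus the separately measured components, i.e. it is the unexplained remainder. Below is a minimal sketch of that arithmetic; processing time is by definition a residual, but the exact list of subtracted components is an assumption based on that standard definition:

    /* Rough sketch of how the "Processing time" above is derived as a residual. */
    #include <stdio.h>

    int main(void) {
        double total_wp_ms   = 481566.0;  /* Total time in workprocs */
        double db_request_ms =    226.0;  /* Database request time   */
        double load_ms       =      1.0;  /* Load time               */
        double db_proc_ms    =    246.0;  /* DB procedure call time  */
        double roll_ms       =      0.0;  /* Roll (in+wait) time     */
        double enqueue_ms    =      0.0;  /* Enqueue time            */
        double gen_ms        =      0.0;  /* Generating time         */

        double processing_ms = total_wp_ms
            - (db_request_ms + load_ms + db_proc_ms + roll_ms + enqueue_ms + gen_ms);

        /* Prints 481093.0 ms, matching the record above: the time is not
           explained by any measured component, so it is reported as processing time. */
        printf("derived processing time: %.1f ms\n", processing_ms);
        return 0;
    }

In other words, nearly all of the 481 seconds is time the work process spent on something that none of the individual counters account for.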
Can anyone tell me what is going on in the system, or how I can analyze this further? You might assume that this is a CPU bottleneck, but it is not: I am using 4 x 64-bit processors and 12 GB of memory, with a load of 5-10% when this problem occurs.
Best Regards
Sindri

Why should the database be a problem? It is only
> Database request time 226 ms
Go to the details (double-click) of the line with problems. In the details you should have an 'http' button in the action line. There you can check the HTTP details, which should tell you where the time was spent.
As always, repeat your measurements a few times to see whether the behaviour is reproducible.
Siegfried

Similar Messages

  • SG200-26P [FW-1.1.2.0] - Very High Response Time: 1000ms!

    Hello,
    Problem: New SG-200 26P Smart Switch with latest firmware - very high response time, 500-800 ms.
    We have an EdgeMarc 4500 router with 10 VPN tunnels to 10 branch locations. The SG-200 26P Smart Switch is connected to 7 servers (2 terminal servers, SQL, and others). All locations have 50 Mbps download and 20 Mbps upload speed from the Verizon FiOS Internet service.
    According to the SolarWinds tool, the response time of this switch is around 500 ms. At the same time, the EdgeMarc 4500 router response time is around 40 ms or less.
    We have 60 desktops remotely connected to our SQL Server database and 40 RDP users via Remote Desktop. The configuration has been the same for the past 3 years, but we changed the switch from an HP 1800-24G to Cisco due to some connection failures. For those failures we first suspected the old HP switch, but it looks like an issue with the EdgeMarc router.
    Is this response time normal? I attached two screenshots of both the Cisco switch and EdgeMarc router response times from the past 24 hours, according to the SolarWinds tool. Any further advice would be greatly appreciated. Thank you.

    Hello Srinath,
    Thank you for participating in the Small Business support community. My name is Nico Muselle from Cisco Sofia SBSC.
    The response time from the switch could be considered quite normal. The reason is that the switch gives CPU priority to its actual duties, which are of course switching, access lists, VLANs, QoS, multicast and DHCP snooping, etc. As a result, ping response times of the switch itself do not reflect in any way whether the switch is working correctly.
    I invite you to try pinging clients connected to the switch; you should notice that response times to the clients are a lot lower than the response times of the switch itself.
    Hope this answers your question!
    Best regards,
    Nico Muselle
    Sr. Network Engineer - CCNA - CCNA Security

  • Deadlocks and very high wait times

    We are seeing a very high number of deadlocks in the system. The deadlock traces all show an 'enq: TX - row lock contention' with wait times of around 2929700+ seconds, e.g.:
    last wait for 'enq: TX - row lock contention' blocking sess=0x70000006d85e1b8 seq=55793 wait_time=2929704 seconds since wait started=4
    name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
    Dumping Session Wait History
    for 'enq: TX - row lock contention' count=1 wait_time=2929704
    name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
    for 'latch: enqueue hash chains' count=1 wait_time=1649
    address=70000006dbb4a20, number=13, tries=0
    for 'enq: TX - row lock contention' count=1 wait_time=2929708
    name|mode=54580006, usn<<16 | slot=1d0010, sequence=705f
    for 'SQL*Net message from client' count=1 wait_time=101740
    driver id=54435000, #bytes=1, =0
    for 'SQL*Net message to client' count=1 wait_time=1
    driver id=54435000, #bytes=1, =0
    for 'direct path write temp' count=1 wait_time=921
    file number=fb, first dba=6521b, block cnt=2
    for 'SQL*Net more data from client' count=1 wait_time=3
    driver id=54435000, #bytes=10, =0
    for 'SQL*Net more data from client' count=1 wait_time=5
    driver id=54435000, #bytes=1e, =0
    for 'SQL*Net more data from client' count=1 wait_time=10
    driver id=54435000, #bytes=2c, =0
    for 'SQL*Net more data from client' count=1 wait_time=5
    driver id=54435000, #bytes=3a, =0
    Any ideas on how to resolve this?
    Thanks
    Surya

    Sorry for the typo; it is the ORA-00060 error we are seeing. Here is the deadlock graph:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP, Data Mining Scoring Engine
    and Real Application Testing options
    ORACLE_HOME = /orasw/product/10.2.0.4.0
    System name: AIX
    Node name: spda5001
    Release: 3
    Version: 5
    Machine: 00074D5AD400
    Instance name: IAMS01P1
    Redo thread mounted by this instance: 1
    Oracle process number: 21
    Unix process pid: 2306444, image: oracle@spda5001
    *** 2011-12-24 05:05:39.885
    *** SERVICE NAME:(IAMS01P) 2011-12-24 05:05:39.884
    *** SESSION ID:(443.2130) 2011-12-24 05:05:39.884
    DEADLOCK DETECTED ( ORA-00060 )
    [Transaction Deadlock]
    The following deadlock is not an ORACLE error. It is a
    deadlock due to user error in the design of an application
    or from issuing incorrect ad-hoc SQL. The following
    information may aid in determining the deadlock:
    Deadlock graph:
    ---------Blocker(s)-------- ---------Waiter(s)---------
    Resource Name process session holds waits process session holds waits
    TX-00080020-000c3957 21 443 X 58 391 X
    TX-001d0010-0000705f 58 391 X 21 443 X
    session 443: DID 0001-0015-0000002E session 391: DID 0001-003A-00000081
    session 391: DID 0001-003A-00000081 session 443: DID 0001-0015-0000002E
    Rows waited on:
    Session 391: obj - rowid = 0001098B - AAATtpAAGAAADROAAD
    (dictionary objn - 67979, file - 6, block - 13390, slot - 3)
    Session 443: obj - rowid = 00010B25 - AAARRwAAGAAAAdgAAN
    (dictionary objn - 68389, file - 6, block - 1888, slot - 13)
    Information on the OTHER waiting sessions:
    Session 391:
    pid=58 serial=16572 audsid=52790041 user: 93/IAMS_USR
    O/S info: user: , term: , ospid: 1234, machine: mac3023
    program:
    Current SQL Statement:
    update spt_identity set created=:1, modified=:2, owner=:3, assigned_scope=:4, assigned_scope_path=:5, extended1=:6, extended2=:7, extended3=:8, extended4=:9, extended5=:10, extended6=:11, extended7=:12, extended8=:13, extended9=:14, extended10=:15, extended11=:16, extended12=:17, extended13=:18, extended14=:19, extended15=:20, extended16=:21, extended17=:22, extended18=:23, extended19=:24, extended20=:25, name=:26, description=:27, protected=:28, iiqlock=:29, attributes=:30, manager=:31, display_name=:32, firstname=:33, lastname=:34, email=:35, manager_status=:36, inactive=:37, last_login=:38, last_refresh=:39, password=:40, password_expiration=:41, password_history=:42, bundle_summary=:43, assigned_role_summary=:44, correlated=:45, auth_question_lock_start=:46, failed_auth_question_attempts=:47, controls_assigned_scope=:48, certifications=:49, activity_config=:50, preferences=:51, history=:52, scorecard=:53, uipreferences=:54, attribute_meta_data=:55, workgroup=:56 where id=:57
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    update spt_workflow_case set created=:1, modified=:2, owner=:3, assigned_scope=:4, assigned_scope_path=:5, stack=:6, attributes=:7, launcher=:8, host=:9, launched=:10, completed=:11, progress=:12, percent_complete=:13, type=:14, messages=:15, name=:16, description=:17, complete=:18, target_class=:19, target_id=:20, target_name=:21, workflow=:22 where id=:23
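    The graph above is the classic crossed-lock pattern: each session holds an exclusive (X) row lock that the other is waiting for, so Oracle resolves it by raising ORA-00060 for one of them. Since the trace itself attributes this to application design, the usual remedy is to make every transaction that touches both tables update them in one fixed order. A rough, hypothetical sketch of that idea only - exec_update() is just a stand-in for whatever database call the application really makes:

      #include <stdio.h>

      /* Hypothetical stand-in for the application's real database call. */
      static void exec_update(const char *sql) {
          printf("executing: %s\n", sql);
      }

      /* Every code path that needs both rows updates them in the same fixed
         order (here: spt_workflow_case before spt_identity). If all transactions
         follow the same order, the crossed wait in the deadlock graph cannot form. */
      static void update_case_then_identity(void) {
          exec_update("update spt_workflow_case set ... where id = :case_id");
          exec_update("update spt_identity set ... where id = :identity_id");
          /* commit would follow here in the real application */
      }

      int main(void) {
          update_case_then_identity();
          return 0;
      }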

  • Fopen() mode leads to very different processing times...

    Hello everybody,
    I am programming some command-line routines in C. I just observed a weird behavior using the very classic fopen() function.
    It's not necessary to explain in detail what the software does. Simply put, I start from some data, perform some processing, then save the results. In a second step I restart from the former results to perform another processing step.
    So the first result file must be opened or created in read/write mode. In fact, if the file does not exist, it is created using the "w+" mode. If the file already exists, it is opened in "r+" mode.
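    In code, the open-or-create logic is roughly the following (a minimal sketch; the file name and the existence check via a first fopen() are only illustrative):

      #include <stdio.h>

      /* "r+" opens an existing file for read/write without truncating it,
         "w+" creates (or truncates) the file for read/write. */
      FILE *open_result_file(const char *path) {
          FILE *fp = fopen(path, "r+");   /* try an existing file first */
          if (fp == NULL) {
              fp = fopen(path, "w+");     /* it does not exist: create it */
          }
          return fp;
      }

      int main(void) {
          FILE *fp = open_result_file("results.dat");  /* illustrative name */
          if (fp == NULL) {
              perror("open_result_file");
              return 1;
          }
          /* ... read the former results and/or write the new ones ... */
          fclose(fp);
          return 0;
      }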
    Processing times are drastically different.
    A first run, creating the file in "w+" mode, requires 15 minutes.
    Rerunning exactly the same routine, with no change except that the file is opened in "r+" mode since it already exists, leads to a processing time 3x longer...
    But the final results are the same, as expected.
    I ran the test on both 10.7 and 10.8.
    If someone has an explanation, it is welcome.
    DD.

    You might be better off posting this in the Developer forums.

  • Very high parse times for query rewrite using cube materialized views

    We recently upgraded to version 11.2.0.2 (both AWM and Oracle database server). We are using cube materialized views with query rewrite enabled. Some observations of changes that took place when we rebuilt all the dimensions and cubes in this version:
    1. Queries against the base tables take about 35 seconds to parse, then they execute in a tenth of a second. Even simple queries that just get a sum of the amount from the fact table (which is joined to all the dimensions) take that long to parse. Once parsed, the queries fly.
    2. I noticed that the materialized views used to use grouping sets in the group by clause in version 11.2.0.1, but now they use group by rollup, rollup, rollup...
    If we disable query rewrite on the MV or for my session, parse times drop to less than a second. Ideas?

    There does appear to be a slowdown in parse times between 11.1.0.7 and 11.2. We are still investigating this, but in the meantime here is a way to force the code in 11.2 to generate a GROUPING SETS clause instead of the new ROLLUP syntax.
    The trick is to create a dummy hierarchy containing only the leaf level. This is necessary for all dimensions that currently have a single hierarchy. As a simple example I created a dimension, PROD, with three levels, A, B, and C, in a single hierarchy. I then created a one dimensional cube, PC. Here is the SELECT statement for the MV in 11.2. Note the ROLLUP clause in the GROUP BY.
    SELECT
      GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID")  SYS_GID,
      (CASE GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID")
       WHEN 3
       THEN TO_CHAR(('A_' || T3."CLASS_ID") )
       WHEN 1
       THEN TO_CHAR(('B_' || T3."FAMILY_ID") )
       ELSE TO_CHAR(('C_' || T3."ITEM_ID") )  END)     "PROD",
      T3."CLASS_ID" "D1_PROD_A_ID",
      T3."FAMILY_ID" "D1_PROD_B_ID",
      T3."ITEM_ID" "D1_PROD_C_ID",
      SUM(T2."UNIT_PRICE")     "PRICE",
      COUNT(T2."UNIT_PRICE")  "COUNT_PRICE",
      COUNT(*)  "SYS_COUNT"
    FROM
      GLOBAL."PRICE_AND_COST_FACT" T2,
      GLOBAL."PRODUCT_DIM" T3
    WHERE
      (T3."ITEM_ID" = T2."ITEM_ID")
    GROUP BY
      (T3."CLASS_ID") ,
      ROLLUP ((T3."FAMILY_ID") , (T3."ITEM_ID") )
    Next I modified the dimension to add a new hierarchy, DUMMY, containing just the leaf level, C. Once I have mapped the new level and re-enabled MVs, I get the following formulation.
    SELECT
      GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID")  SYS_GID,
      (CASE GROUPING_ID(T3."CLASS_ID", T3."FAMILY_ID", T3."ITEM_ID")
       WHEN 3
       THEN ('A_' || T3."CLASS_ID")
       WHEN 1
       THEN ('B_' || T3."FAMILY_ID")
       WHEN 0
       THEN ('C_' || T3."ITEM_ID")
       ELSE NULL END)  "PROD",
      T3."CLASS_ID" "D1_PROD_A_ID",
      T3."FAMILY_ID" "D1_PROD_B_ID",
      T3."ITEM_ID" "D1_PROD_C_ID",
      SUM(T2."UNIT_PRICE")     "PRICE",
      COUNT(T2."UNIT_PRICE")  "COUNT_PRICE",
      COUNT(*)  "SYS_COUNT"
    FROM
      GLOBAL."PRICE_AND_COST_FACT" T2,
      GLOBAL."PRODUCT_DIM" T3
    WHERE
      (T3."ITEM_ID" = T2."ITEM_ID")
    GROUP BY
      GROUPING SETS ((T3."CLASS_ID") , (T3."FAMILY_ID", T3."CLASS_ID") , (T3."ITEM_ID", T3."FAMILY_ID", T3."CLASS_ID") )
    This puts things back the way they were in 11.1.0.7 when the GROUPING SETS clause was used in all cases. Note that the two queries are logically equivalent.

  • Long processing time but why?

    Hi all .......
    in TCode STAD "SAP WorkLoad: Single Statistical Records - Details".
    Sometimes, or even often, I find that the processing time is 10 times the CPU time,
    yet there is no CPU bottleneck (the CPU is always free), and the roll wait time is also low.
    So can someone tell me where the long waiting time comes from?
    Thank you

    Hi,
    SAP note 8963 explains the definition of the processing time, and SAP note 99584 explains why a very large processing time might appear in STAD.
    If a long-running dialog step performs many DB operations, an overflow in the DB statistics may occur. The reason: a 4-byte counter for microseconds wraps around after approx. 71 minutes. The measured DB time then looks very small compared to the response time, which is why a very large processing time is shown.
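    For reference, the roughly 71 minutes follows directly from the counter width: a 32-bit microsecond counter wraps after 2^32 microseconds. A small sketch of the arithmetic:

      #include <stdio.h>
      #include <stdint.h>

      int main(void) {
          /* A 4-byte (32-bit) microsecond counter holds at most 2^32 - 1 microseconds. */
          uint64_t max_us = UINT32_MAX;        /* 4,294,967,295 us */
          double seconds  = max_us / 1e6;      /* ~4294.97 s       */
          double minutes  = seconds / 60.0;    /* ~71.6 min        */

          printf("32-bit microsecond counter wraps after ~%.1f minutes\n", minutes);

          /* DB time beyond the wraparound is lost from the statistic, so the
             dialog step's DB time looks tiny and the unexplained remainder
             is reported as processing time. */
          return 0;
      }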
    Maybe this explains your situation. It has been solved with SAP_BASIS 700 where larger counters have been implemented.
    I hope this information helps.
    Kind Regards,
    Andreas

  • High OLAP Time - Important

    During stress testing of a BEx query through the Web, we are experiencing 86% OLAP time (very high) and 10% DB time to produce 1.35 million records. The query contains 25 characteristics (3 navigational attributes) and 2 key figures; no hierarchy, no variables, no restricted/calculated key figures.
    Query properties: Read Mode: 3; Cache Mode: persistent cache on the application server with BLOB (transparent table).
    The question is: why is there such a high OLAP time, and how can it be minimized?

    Are you really talking about returning 1.35 million rows to the OLAP processor?  That's an awful lot of cells in a cube. 
    Based on your statements, your problem, as you mention, is OLAP, not DB, so disregard suggestions from others on the DB side such as rebuilding indexes and collecting statistics.
    I haven't used Business Objects against BW, but I have worked with it in the past. Do you really want to run this query wide open - can't you pass filters/variables through Business Objects to the query? Is your intent to have each user run this query, or are you trying to run the query wide open to cache all possible results? Since you specify Read Mode 3, that's my guess as to what you are trying to do. I would go back and revisit the query usage. Read Mode 3 returns all free characteristics in your result set and, with the characteristics you cite, probably has to build a very large number of cells in your cube (depending on the number of unique values for each characteristic) - an awful lot of work for the application server. I would target your query at the initial view, possibly include any other regularly / frequently drilled characteristics, and switch back to Read Mode 1.
    Other than Roberto's thoughts, the only other thing I would look at is the application server memory and CPU - monitor those while the query is running - maybe you are just out of steam on your application server.

  • High roll wait time, gui time and processing time

    Hi all,
    We are having a performance issue with the system at the moment. After running ST03N and STAD, we have determined that we are having abnormally high roll-wait, GUI and processing times.
    Here are some results from STAD:
    CPU time: 2956ms
    Processing time: 4417ms
    Roll wait time: 1319ms
    GUI Time: 2859ms
    Can someone direct me towards finding the problem?
    If you need more information please let me know. I would post a screenshot if I was able to.
    Thanks,
    HL

    Hi,
    Regarding the performance of an SAP system, it is not possible to come to a conclusion from one single data point.
    GUI time is basically the time spent in the front end, i.e. network load and similar.
    Coming to CPU time: can you check in ST06 what the CPU utilization of your system is?
    Roll wait time is the time spent in the roll buffer; could you share the OS and memory configuration with us?
    We can try to tune the roll memory (extended memory tuning) so that we can reduce the roll time.
    If possible, try to find the transactions with the top dialog response times: are they standard or Z programs?
    https://cw.sdn.sap.com/cw/docs/DOC-14387
    Go through this link; you might get a clearer idea of performance.
    Thanks & Regards,
    Balaji.S

  • Final Cut Pro X's process usage is very high

    The Final Cut Pro X process usage is very high and it uses the most memory. What can I do to decrease this, and how can I increase speed?
    When I use Final Cut Pro X, my MacBook Pro works very slowly.

    I'm taking pics now... for some reason I have version 10.1?? My memory is good, but I recall there not being a 1 there...

  • Process SERVER0 is taking high CPU time in XI

    Hi,
    We installed an XI 3.0 development server, and the process SERVER0 is taking a lot of CPU time.
    We are using the AS/400 OS and a DB2 database. Can anyone tell me the reason and solution for this?
    Thanks & Regards,
    Gopinath.

    hi,
    Actually, the user XIRWBUSER is an RFC user, but it is running in many dialog processes. I think the high CPU time is due to this user only.
    It is using 5 work processes and 3/4 of the available CPU for an extended amount of time.
    Job/Task   User    Number   Thread     Pty   CPU Util   Total Sync I/O   Total Async I/O   DB CPU Util
    WP11 D6464 485368 00000010 20 54.0 25 37 27.8
    WP05 X4242 498642 00000098 20 49.6 5 0 36.7
    WP04 X4242 498641 00000014 20 48.8 1 0 39.1
    WP02 X4242 498639 0000025A 20 47.1 2 0 37.8
    WP06 X4242 498643 000001E6 20 43.7 0 0 38.3
    WP00 X4242 498637 00000014 20 11.1 502 194 2.9
    Please, can anyone help me?
    Regards,
    Gopinath.

  • STAD : Processing time when calling c program

    Hi,
    When I look at the central instance of our system, I see that much of the response time is recorded as "processing time" - much more than 2 x CPU time. According to the SAP performance optimization guide, this implies a CPU or network bottleneck. We have no such bottleneck at this time.
    I am wondering if the time spent in a C program, when called from a function, is recorded as processing time because the system has no way of calculating its resource usage?
    Thank you
    Thierry

    Hi Thierry,
    For your CPUs: if they are 16 cores in total then yes, you appear overloaded, and semaphore waits can occur with so many things sharing the OS memory areas.
    As for monitoring standard dialog, it is in fact easier than you think. Here are 2 procedures that you should find handy.
    Creating a usable monitor in RZ20
    1. Go to RZ20 and from the menu choose Extras - Activate Maintenance Function.
    2. Click on the Create button (the blank page button). In the popup press Enter, and in the next popup type Thierry and press Enter.
    3. Under My Favourites you should now see Thierry; single-click it and click the Create button again.
    4. On the next screen single-click on the words New Monitor and click the Create button again, press Enter on the popup, and in the next popup type Standard Dialog Response Time (or similar) and press Enter.
    5. Now single-click on the line that says Standard Dialog Response Time and click the Create button again.
    6. In the popup change the radio button option to Rule Node and press Enter.
    7. On the next popup, in the Rule field, do a drop-down and double-click on CCMS_GET_MTE_BY_CLASS and press Enter.
    8. On the next popup, in the R3System field, do a drop-down and double-click on the word <CURRENT> (it is important to choose this option), and in the MTECLASS field paste the word R3DialogDefLoadTime and press Enter.
    9. Now you are back on the main screen; press the standard Save button, and in the popup type the word Performance and press Enter.
    10. Now expand your monitor set and double-click on the word Performance to get the monitor, expand Standard Dialog Response Time, and you will be able to see the current performance of all instances.
    Now that you have an RZ20 performance monitor, you can add other nodes here, such as dialog steps per minute, CPU, paging and even context switches. Have a look at the SAP CCMS Monitor Templates - Entire System - Application Server - <instance> - R3Services - Dialog for some really useful ones. To get the MTECLASS, single-click on a node and click the Properties button.
    Because you set up your collector system as <CURRENT>, you can now export your defined monitor and import it into another system within seconds! Very, very handy. It is just an XML file you can keep saved on your PC.
    Saving Standard Dialog Response Time History in RZ23N
    1. Go to transaction RZ23N and click on the button Assign Procedure
    2. In the middle panel of the top three panels click on the binoculars and find R3DialogDefLoadTime and come out of the find popup with a cancel (not a tick) so that R3DialogDefLoadTime is highlighted.
    3. In the left hand pane single click on your system (or if it is the only system and you do system copies, choose the * option)
    4. In the right hand pane single click on the option SAP_MIN_COLLECTION__NO_AGGREGATION
    5. Now you have the system, the MTECLASS and the collection method highlighted, click the button Assign Procedure
    And that's it! You will now keep 8 days' (or a bit more) worth of data, but it reorganizes itself: two jobs will now be running to collect and reorganize the data. Go to the main RZ23N screen and press Collection/reorganisation jobs to see them without the hassle of SM37.
    Click on the Maintain Schemas button to even create your own collection schemas. You may want to have one that aggregates values all the way up to yearly values. This is a lovely transaction. I recommend avoiding weekly collection until you read the help.
    Click on 'Overview of Available Data' to see the data collected, and even graph it or copy it out to Excel. You can also run the reports in place, but that is too much to mention here.
    The Rolls Royce Reporter
    BI now has standard cubes to collect CCMS data from RZ23N and graph it. We do this for all our SLA data for customers and we in fact do it ourselves without BI help directly in the Central monitoring system so we have control and can design our own graphs that are far nicer than the horrible Solution Manager ones.
    feedback? your opinion?
    regards
    G

  • Load+gen time very high

    Hi all
    I am using SAP 4.7 on Oracle 9.2.
    Server: IBM P series, 2 x 375 MHz processors, 3 GB RAM.
    My problem is that in ST03N the load + gen time is very high (up to 500).
    Is it a hardware problem or a tuning problem?
    Thanks in Advance

    SGEN will recompile the ABAP loads that you want.
    Your system looks old, so this can take anything from 1 hour to 8 hours,
    so do this when user load is minimal.
    I usually choose all options and do a complete generation. This takes a lot of space in your DB, but the system will be faster even when you use rarely used transactions.

  • Response times (PING) very high with CAP3602i access point

    I have installed a CAP3602i access point in H-REAP mode on a model 5508 controller with version 7.4.100.0, but the response times of the connected users are very high.
    The fewer users are connected to the access point, the faster internet browsing is and the lower the response times are. But if many users are connected, response times increase.
    I would be grateful if someone could comment with any experience of this problem.
    Thanks

    We are talking about 15 users per AP, and it is all just web browsing, not heavy downloads. Before, with the 1141 APs, we never had this problem; response times were normal, around 2-4 ms.
    Is the CAP3602i AP performing some extra function that causes the high response times?
    Supposedly the CAP3602i is better than the 1141, which is why we made the change, but we were surprised to find these high times.
    The SSID was H-REAP; I doubt that has anything to do with it, since the 1141 does not show the issue of high times.

  • Issue with Query OLAP time very high

    Hello Guyz
    I ran my query in RSRT and noticed that the QOLAPTIME was almost 432 seconds (the query had crossed the 65556-record limit by then). The DBTIME was 70 secs.
    1. Are the above times in seconds?
    2. What performance techniques can I implement to improve the OLAP time? I think aggregates, indexing and partitioning only improve the DB time.
    3. I already have cache active on this query.
    Any suggestions?
    Please don't post the same question across the different forums
    Edited by: Moderator on Jul 8, 2009 11:46 AM

    Hello,
    One more thing: do any of the standard techniques of indexing, partitioning, or aggregate creation help in decreasing the OLAP time?
    These techniques will help the DB time but are of no use for the OLAP time. RKFs do not cost extra OLAP time, but CKFs and cell calculations do.
    In your post you said there are more than 65535 rows. In my experience, that is the main cause of high OLAP time. Why do users want so many rows? It is almost impossible to read. You can imagine how long it would take to transfer so much data from the BW server to the user (resulting in high OLAP time).
    Please reduce the number of rows with a filter or something similar. If you can't reduce the row count, I don't think the OLAP time will be low.
    Regards,
    Frank

  • Update work process has too high response time

    Our functional team is facing an issue while saving data in the VA03 transaction. It prompts "Sales order saved successfully", and when we go to VA02 and try to open the document we get the error "SD document not in database or has been archived".
    When I checked SM66, the update work process is on hold due to an RFC response, and in ST03 it has a high response time:
    Task          steps  response time                                             wait time
    UPDATE2    2       1.130,0               335,0    65,0    724,0    0,0    0,0                0,0    0,0    70,5
    UPDATE    29        1.188.035,2       19,6    8,6    35,9    0,0         1.187.979,4    0,0    0,0    0,3
    Also, in SM13 all updates are in 'Initial' status. Nothing gets updated in the database.
    Please help, as the functional team is no longer able to work, and as the Basis person I need to resolve this.

    Hi Spr,
    Please check the following
    1) Update has status active in SM13
    2) You have enough space in the oraarch directory
    3) You have enough space in tablespaces
    4) Check for any error message on in alert_<SID>.log file
    5) Update the Oracle dictionary statistics, missing statistics, and all other statistics in the database.
    Hope this helps.
    Regards,
    Deepak Kori
