Transaction level response time goal

Hi
We have a test suite consisting of coded web tests for load and performance testing. We want to define response time goals at a transaction level and not at a web request level. Our application under test makes multiple web service calls per single user
interaction and our scripts group these web service calls in a transaction.
We have seen that it is possible to add a response time goal to an individual request, but we need to know the performance of the sum of these requests, i.e. the transaction. As a very basic example:
this.BeginTransaction("Transaction1");

WebTestRequest request1 = new WebTestRequest("Some URL1");
yield return request1;
request1 = null;

WebTestRequest request2 = new WebTestRequest("Some URL2");
yield return request2;
request2 = null;

this.EndTransaction("Transaction1");
We need to know if Transaction1 completes in 5 seconds irrespective of how the underlying requests perform. Is a Transaction-level goal possible?
Thanks,
Miren

Hi Miren,
As far as I know, a transaction doesn't have a response time goal property; by design, the response time goal is applied per request.
Reference:
https://msdn.microsoft.com/en-us/library/ms404691(v=vs.110).aspx
You could submit this feature request:
http://visualstudio.uservoice.com/forums/121579-visual-studio.
The Visual Studio product team monitors UserVoice; you can post your idea there and other users can vote for it.
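As a possible workaround (just a sketch, not a built-in goal property), you could time the transaction yourself inside the coded web test and fail the iteration when it exceeds your target, for example:

this.BeginTransaction("Transaction1");
var transactionTimer = System.Diagnostics.Stopwatch.StartNew();

WebTestRequest request1 = new WebTestRequest("Some URL1");
yield return request1;
request1 = null;

WebTestRequest request2 = new WebTestRequest("Some URL2");
yield return request2;
request2 = null;

transactionTimer.Stop();
this.EndTransaction("Transaction1");

// Assumption: WebTest.Outcome and AddCommentToResult are available in your Visual Studio version;
// adjust the 5-second goal to your own requirement.
if (transactionTimer.Elapsed.TotalSeconds > 5)
{
    this.AddCommentToResult("Transaction1 exceeded the 5 second goal: " + transactionTimer.Elapsed);
    this.Outcome = Outcome.Fail;
}

In a load test you might also be able to add a threshold rule on the transaction's average response time counter, although that flags the run as a whole rather than a single iteration.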
Best Regards,
Jack

Similar Messages

  • Average response Time

    All,
    What is the ideal average response time for a system?
    Is there any formula to calculate it?
    What does SAP recommend?

    Dear Bidwan,
      Response time for a dialog step is generally the sum of wait time + roll-in time + load/generation time + processing time + lock time + DB request time, and is measured at the application server level.
    You can get these values from the ST03(N) transaction.
    Response times will differ between systems; there is no such thing as an ideal response time. A system can be tuned to reach a given response time value depending on system resources and system load.
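    For illustration only (invented numbers, not SAP benchmarks): with wait time 50 ms, roll-in 20 ms, load/generation 30 ms, processing 400 ms, lock 10 ms and DB request time 300 ms, the dialog response time would be 50 + 20 + 30 + 400 + 10 + 300 = 810 ms.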
    Reward points if useful.
    Awaiz.

  • Urgent:How to Speed Up the Response time for PDF Report of  500 pages

    Hi all,
    I am running 9iAS on Solaris and generating reports
    which fetch around 50,000 records and display them in PDF format.
    When I run the query at the database level, the response time is very fast,
    but when I run the report in a web browser the same report takes 7-8 minutes.
    So it seems that the conversion to PDF and displaying it take most of the time.
    Does anyone have an idea of which parameters need to be changed, whether caching would help,
    or any other ways or methods that can be used to reduce the response time?
    (It is a monthly report and the users need to download all 500 pages as
    a single document.)
    Any help or suggestions, please.
    Thanks,
    Jai

    You aren't by any chance calling a function in your repeating frame that in turn goes back and queries the database, are you? If so ... don't. We regularly do 500+ page PDF-file reports, and one thing we discovered early on was that repeatedly going back to the database while generating the report output (in our case, in calculations that were being done on each line of a report) slowed the output down by an order of magnitude. Instead, we now retrieve all the data needed for each report up front (via functions or views called in the initial SQL for the report), and just use Reports to format the output. MUUUUUUUCH faster -- 200 page reports that used to take 15 minutes to complete now complete in just seconds.
    One way you can visually see if this is part of your problem is to watch the report execute in the Report Queue Manager application. If it spends all its time on the "Opening" stage then breezes through each page, this is not your problem. If instead it seems to take a long time generating each page, I'd suspect that this may be at least part of your delay.
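    To show the general shape of the pattern in any client language (a hypothetical sketch, not our actual Reports code; the table, column and connection names are invented):

    // Hypothetical sketch: one set-based query up front instead of a per-row lookup
    // while formatting. Table/column names are invented.
    using System.Data;
    using System.Data.SqlClient;   // or the Oracle ADO.NET provider in an Oracle shop

    class ReportDataFetch
    {
        static DataTable FetchAllRowsUpFront(string connectionString)
        {
            // Slow anti-pattern: for every report line, run another query
            // (e.g. SELECT discount FROM price_rules WHERE item_id = :id).
            // Faster pattern: join the lookup into the initial query so the
            // formatter never goes back to the database.
            const string sql = @"
                SELECT o.order_id, o.item_id, o.quantity, p.discount
                FROM   orders o
                JOIN   price_rules p ON p.item_id = o.item_id";

            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(sql, connection))
            {
                adapter.Fill(table);   // single round trip for the whole report
            }
            return table;
        }
    }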
    - Bill

  • How to find the Response time for a particular Transaction

    Hello Experts,
    I am implementing a BAdI to deliver a customer enhancement for the XD01 transaction. I need to show the customer the system's response time before and after the implementation:
    Response time BEFORE BAdI Implementation
    Response time AFTER BAdI Implementation
    Where can I get this?
    Please help me in this regard.
    Best Regards
    SRiNi

    Hello,
    Within STAD, enter the time range in which the user was executing the transaction, as well as the user name. The time field indicates the time when the transaction would have ended, and STAD adds some extra time onto the interval you enter. Depending on how long the transaction ran, you can set the length you want it to display. This means that if it is set to 10, STAD will display statistical records from transactions that ended within that 10-minute period.
    The selection screen also gives you a few options for display mode.
    - Show all statistic records, sorted by start time
    This shows you all of the transaction steps, but they are not grouped in any way.
    -Show all records, grouped by business transaction
    This shows the transaction steps grouped by transaction ID (shown in the record as Trans. ID). The times are not cumulative. They are the times for each individual step.
    - Show Business Transaction Totals
    This shows the transaction steps grouped by transaction ID. However, instead of just listing them, you can drill down from the top level. The top level will show you the overall response time, and as you drill down, you can get to the response times of the individual steps.
    Note that you also need to add the user into the selection criteria. Everything else you can leave alone in this case.
    Once you have the records displayed, you can double click them to get a detailed record. This will show you the following:
    - Breakdown of response time (wait for work process, processing time, load time, generating time, roll time, DB time, enqueue time). This makes STAD a great place to start for performance analysis as you will then know whether you will need to look at SQL, processing, or any other component of response time first.
    - Stats on the data selected within the execution
    - Memory utilization of the transaction
    - RFCs executed (including the calling time and remote execution time - very useful with performance analysis of interfaces)
    - Much more.
    As this chain of comments has previously indicated, you are best off using STAD if you want an accurate indication of response time. The ST12 trace times (ST12 combines the SE30 ABAP trace and the ST05 SQL trace) are less accurate than the values you get from STAD. I am not discounting the value of ST12 by any means; it is a very powerful tool to help you tune your transactions.
    I hope this information is helpful!
    Kind regards,
    Geoff Irwin
    Senior Support Consultant
    SAP Active Global Support

  • SBWP  transaction - viewing folders/sending documents Long response times

    Hi all,
    I have some complaints from a few users (about 3-4 out of ~3000 users in my ECC 6.0 system) within my company about their response times in the Business Workplace. In particular, they started complaining about the response time of calling transaction SBWP: for 1-2 of them it takes up to 4-5 minutes, while I and two other colleagues get a response of about 500 ms.
    They then also wanted to view some folders in the Workplace, and again they had response times of minutes instead of seconds.
    I checked that some of their shared folders as well as the Trash Bin had thousands of PDFs. I told them to delete these, and they deleted most of them. Still, when they want to open a folder it takes more than 2 minutes, while opening the same shared folder takes me 1-2 seconds.
    I checked in ST03N (user profiles, single records) and one of them had long database calls and request times in the Analysis of ABAP/4 Database Requests (Single Statistical Records).
    I am running out of ideas, as I cannot explain why only those 2-3 users have long response times.
    Is it related to their folders in the Workplace? Where should I focus my investigation for SBWP-like transactions? Could it be that some Oracle parameters need to be checked?
    I ran the automatic Oracle parameter check (O/S AIX 5.3, Oracle 10.2, ECC 6.0) and here are some of the recommendations:
    _fix_control (5705630): add with value "5705630:ON" (use optimal OR concatenation; note 176754)
    _fix_control (5765456): add with value "5765456:3" (no further information available)
    _optim_peek_user_binds: add with value "FALSE" (avoid bind value peeking)
    _optimizer_better_inlist_costing: add with value "OFF" (avoid preference of index supporting inlist)
    _optimizer_mjc_enabled: add with value "FALSE" (avoid cartesian merge joins in general)
    _sort_elimination_cost_ratio: add with value "10" (use non-order-by-sorted indexes (first_rows))
    event 10027: add with value "10027 trace name context forever, level 1" (avoid process state dump at deadlock)
    event 10028: add with value "10028 trace name context forever, level 1" (do not wait while writing deadlock trace)
    event 10091: add with value "10091 trace name context forever, level 1" (avoid CU enqueue during parsing)
    event 10142: add with value "10142 trace name context forever, level 1" (avoid Btree Bitmap Conversion plans)
    event 10183: add with value "10183 trace name context forever, level 1" (avoid rounding during cost calculation)
    event 10191: add with value "10191 trace name context forever, level 1" (avoid high CBO memory consumption)
    event 10411: add with value "10411 trace name context forever, level 1" (fixes int-does-not-correspond-to-number bug)
    event 10629: add with value "10629 trace name context forever, level 32" (influence rebuild online error handling)
    event 10753: add with value "10753 trace name context forever, level 2" (avoid wrong values caused by prefetch; note 1351737)
    event 10891: add with value "10891 trace name context forever, level 1" (avoid high parsing times joining many tables)
    event 14532: add with value "14532 trace name context forever, level 1" (avoid massive shared pool consumption)
    event 38068: add with value "38068 trace name context forever, level 100" (long raw statistic; implement note 948197)
    event 38085: add with value "38085 trace name context forever, level 1" (consider cost adjust for index fast full scan)
    event 38087: add with value "38087 trace name context forever, level 1" (avoid ORA-600 at star transformation)
    event 44951: add with value "44951 trace name context forever, level 1024" (avoid HW enqueues during LOB inserts)

    Hi Loukas,
    Your message is not well formatted, which makes it harder for people to read. However, your problem is that you have 3-4 users of SBWP with slow runtimes when accessing folders. Correct?
    You mentioned that there is a large number of documents in the users' folders; these types of problems are usually caused by a large number of table joins on the SAPoffice tables specific to your users.
    Firstly please refer to SAP Note 988057 Reorganization - Information.
    To help with this issue you can use report RSBCS_REORG in SE38 to remove any deleted documents from the SAPoffice folders. This should speed up the access to your users documents in folders as it removes unnecessary documents from the SAPoffice tables.
    If your users do not show a significant speed-up when accessing their SAPoffice folders, please refer to SAP Note 904711 (SAPoffice: Where are documents physically stored) and verify that the statistics and indexes mentioned in that note are up to date.
    If neither of these helps with the issue, you can trace these users in ST12, find out which table is causing the longest runtime, and see if there is a solution to either reduce this table or improve the access method on the DB level.
    Hope this helps
    Michael

  • Report to calculate avg response time for a transaction using ST03.

    Hi Abap Gurus ,
    I want to develop a report which calculates the average response time (ST03) for a transaction on an hourly basis.
    I have read many threads in which users post which tables/function modules to use to extract data such as dialog steps and total response time.
    I am sure many of you have created a report like this; I would appreciate it if you could share pseudo code for the same. Any help regarding this is highly appreciated...
    Cheers,
    Karan

    http://jakarta.apache.org/jmeter/

  • How to capture transaction response time in SQL

    I need to capture transaction response time (i.e. a ping test) to calculate the peak hours, averaged
    on a daily basis,
    and
    page refresh time, calculated no less than every 2 hours for peak hours and averaged on a daily basis.
    Please assist
    k

    My best guess as to what you are looking for is something like the following (C#):
    // Note: CommandType, ParameterDirection and SqlDbType below come from the System.Data namespace.
    private int? Ping()
    {
        System.Data.SqlClient.SqlConnection objConnection;
        System.Data.SqlClient.SqlCommand objCommand;
        System.Data.SqlClient.SqlParameter objParameter;
        System.Diagnostics.Stopwatch objStopWatch = new System.Diagnostics.Stopwatch();
        DateTime objStartTime, objEndTime, objServerTime;
        int intToServer, intFromServer;
        int? intResult = null;
        objConnection = new System.Data.SqlClient.SqlConnection("Data Source=myserver;Initial Catalog=master;Integrated Security=True;Connect Timeout=3;Network Library=dbmssocn;");
        using (objConnection)
        {
            objConnection.Open();
            using (objCommand = new System.Data.SqlClient.SqlCommand())
            {
                objCommand.Connection = objConnection;
                objCommand.CommandType = CommandType.Text;
                // Ask the server for its current clock via an output parameter.
                objCommand.CommandText = @"select @ServerTime = sysdatetime()";
                objParameter = new System.Data.SqlClient.SqlParameter("@ServerTime", SqlDbType.DateTime2, 7);
                objParameter.Direction = ParameterDirection.Output;
                objCommand.Parameters.Add(objParameter);
                // Time the round trip on the client.
                objStopWatch.Start();
                objStartTime = DateTime.Now;
                objCommand.ExecuteNonQuery();
                objEndTime = DateTime.Now;
                objStopWatch.Stop();
                objServerTime = DateTime.Parse(objCommand.Parameters["@ServerTime"].Value.ToString());
                // TotalMilliseconds is used so that intervals over one second are not truncated.
                intToServer = (int)objServerTime.Subtract(objStartTime).TotalMilliseconds;
                intFromServer = (int)objEndTime.Subtract(objServerTime).TotalMilliseconds;
                intResult = (int?)objStopWatch.ElapsedMilliseconds;
                System.Diagnostics.Debug.Print(string.Format("Milliseconds from client to server {0}, milliseconds from server back to client {1}, and milliseconds round trip {2}.", intToServer, intFromServer, intResult));
            }
        }
        return intResult;
    }
    Now, while the round trip measurement is fairly accurate, give or take 100 ms, any measurement of latency to and from SQL Server is going to be subject to the accuracy of the time synchronization of the client and server. If the server's and client's time isn't synchronized precisely, then you will get odd results in the variables intToServer and intFromServer.
    Since the round trip result of the test is measured entirely on the client, that value isn't subject to the whims of client/server time synchronization.

  • Coherence and EclipseLink - JTA Transaction Manager - slow response times

    A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
    Java 1.7
    Using Spring Framework 4.0.5
    EclipseLink 12.1.2
    TopLink grid 12.1.2
    Coherence 12.1.2
    javax.persistence 12.1.2
    The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
    When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking into DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager but we're not sure what. If we have a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
    Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?

    Hi Volker/Markus,
    Thanks a lot for the response.
    Yes Volker, you are absolutely right: the 10-12 seconds happen when we have not used the transaction for several minutes. It looks like the transactions are moved out of the SAP buffer, or something similar, within a very short time.
    And yes, the ABAP work processes are running in memory pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
    I would say the performance of the Java part is much better than that of the ABAP part.
    Should I just remove the ABAP part of SOLMAN from memory pool 2 and assign the Java/ABAP stack a separate, larger memory pool of, say, 12-13 GB?
    Is that likely to improve my performance?
    No, I have not changed RSDB_TDB in TCOLL from twice daily to once weekly on all systems on this box. It is running twice daily right now.
    Should I change it to once weekly on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
    But my CPU utilization is anyway only about 30% on average. It is i570 hardware, currently running 5 CPUs.
    So do you still think I should change this job from twice daily to once weekly on all systems on this box?
    Markus, did you open any messages with SAP on this issue?
    I remember working on change management in Solution Manager 3.2, and the response times were much better than in 4.0.
    Let me know, guys, and once again, thanks a lot for your help and valuable input.
    Abhi

  • How to Tune the Transactions/ Z - reports /Progr..of High response time

    Dear friends,
    In the ST03 workload analysis menu, some Z-reports, transactions and programs are continuously observed to be taking the maximum response time (and mostly >90% of that time is DB time).
    How can the above situation be tuned?
    Thank you.

    Siva,
    You can start with some thing like:
    ST04  -> Detail Analysis -> SQL Request (look at top disk reads and buffer get SQL statements)
    For the top SQL statements identified, you'd want to look at the explain plan to determine if the SQL statement is:
    1) inefficient
    2) are your DB stats on the tables up to date (note that up-to-date stats do not always mean they are the best)
    3) if there are better indexes available, if not would a more suitable index help?
    4) if there are many slow disk reads, is there an I/O issue?
    etc...
    While you're in ST04 make sure your buffers are sized adequately.
    Also make sure your Oracle parameters are set according to this OSS note.
    Note 830576 - Parameter recommendations for Oracle 10g

  • ELoad - request time / response time / web transaction ?

    Hi there, does anyone know whether the test results presented by eLoad should be understood as request time (time between the last byte sent and the first byte received), response time (first byte sent, first byte received) or web transaction time (the longest one: first byte sent, last byte received)? It really matters when trying to interpret the results properly. Thanks in advance for any help.

    Hi
    The results presented by eLoad are the overall request/response times together, i.e. from the moment the request is sent until the response and its contents have been fully received.
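    If it helps to see the distinction in code, here is a rough client-side sketch (not how eLoad measures internally; the URL is a placeholder):

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ResponseTimeDemo
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                var stopwatch = Stopwatch.StartNew();

                // Headers arrived: roughly "request sent -> first byte of the response received".
                using (var response = await client.GetAsync(
                    "http://example.com/", HttpCompletionOption.ResponseHeadersRead))
                {
                    long firstByteMs = stopwatch.ElapsedMilliseconds;

                    // Body fully read: "request sent -> last byte received",
                    // which is what eLoad reports according to the answer above.
                    await response.Content.ReadAsStringAsync();
                    long fullResponseMs = stopwatch.ElapsedMilliseconds;

                    Console.WriteLine("First byte after " + firstByteMs + " ms, full response after " + fullResponseMs + " ms.");
                }
            }
        }
    }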
    Does this help?
    Regards
    Alex

  • SLR - transaction response time for all appl. servers together

    Hi,
    We want to have the response time for a defined transaction in the SLR. We have activated the necessary steps for monitoring transactions and started the CPH, and we can see average response times in the SLR.
    We have several application servers, and we want the SLR to show the TOTAL response time for all application servers together.
    Is this possible? What steps are necessary to do it?
    Thanks for all hints.
    Regards,
    Roman

    Hi Andreas,
    Another way to approach this would be to use CCMS transaction monitoring by maintaining table ALTRAMONI (in the satellite system) as described here: http://help.sap.com/saphelp_nw70/helpdata/en/b3/468e3b093d4031e10000000a11402f/content.htm
    In RZ20 you'll be able to see the last 24 hours of performance data in a 1-hour granularity, as well as the last 30 minutes in a higher granularity.  Depending on your requirements you can then configure alerting so that you get an email when performance passes a certain threshold.
    Jon

  • Average response times for VA01 and VA02 transactions

    Hi,
    We have some users complaining about response times, more specifically for the VA01 and VA02 transactions. We would like to compare our average response times to those of other companies in order to get an idea of what is acceptable to most companies. If you are willing to contribute, can you please send a few hours of STAD data (only for VA01 and VA02 transactions) from an average working day? Perhaps you can download the data into a spreadsheet and mail the zipped file to my account. Many thanks.
    Best regards,
    Guido Van Leuven

    Hi,
    This is not the issue. We are investigating everything possible to improve our response times, and we believe that these are OK as well. We just want to build a case proving that our response times are not worse than those at other companies, and we can only do that by comparing those statistics.
    Best regards,
    Guido Van Leuven

  • Response time and transaction count

    Does anybody know how I can obtain the response time and transaction count in a 9i database server?
    regards.
    MDF

    Statspack would be a good starting point. You'll need to define "response time" for your application, but statspack should provide you with the info you need.
    Justin

  • BCS high response time

    Hi Frns,
    We are currently struggling with high response times in our BCS production system; they range anywhere between 3,000 and 15,000 ms. I want to ask:
    1) Is there any project running where BCS has been deployed, and what response times are they recording?
    2) What is the ideal response time for BCS?
    3) We have also observed that the UCMON00 and UCWB_INT00 transaction/report in dialog mode are responsible for such high response times; is there any alternative or solution in this area?
    Environment
    AIX :6.1
    SEM-BW/FINBASIS : 602 patch level 13.
    DB: ORACLE 11.2.0.2.0
    Please let me know your suggestions for bringing the response time down.
    Regards,
    Mridul Gupta

    Hi Mridul Gupta
    Could you please explain where you find the response time is huge:
    1. Response time on UCMON login --> depends on the cons unit hierarchy and the tasks maintained
    2. Response time on transport --> depends on the master data and hierarchy maintained in BW and the quantity of data moving from Dev to Quality and on to Prod
    3. Response time on process chains --> check which process chain variant takes more time; the information can be found under UC_STAT0.
    As mentioned above, there are many different scenarios; also check with the Basis team to compare across your entire system landscape.
    Regards
    Rajesh SVN
    Assigning points is the way of saying thank you on SCN, whether you choose a correct answer or a helpful answer.

  • Management Services Response Time; per Role or per Deployment? Broken?

    Management Services, Alert on Response Time, appears to be incorrect. 
    Can you help me understand why the Response Time charts for each Role in a deployment are identical? Is this a bug?
    My Testing
    I have 3 Roles in my Cloud Service Deployment. A Web Role, a worker Role, and an extra Worker Role for this test.
    Web Role: Light load 0 to 10 requests per hour.
    Worker Role: Generates massive net work traffic. Constantly making web service calls to 3rd party data services & running a Distributed Azure Data Cache.
    Extra Worker Role:  No load. I added it to test this problem. It does nothing other than respond to Management Services Alert pings. 
    I created 3 Alerts to Track response time. One to each of the 3 roles, testing from within the same data center; San Jose to San Jose. Then added 3 more to test from Singapore to San Jose. 
    The Result (over several Months)
    A. The San Jose charts are identical for each role.
    B. The Singapore charts are identical for each role.
    C. The set of Singapore charts are very similar to the San Jose set of charts. With slight variations likely due to internet latency & extra load at Singapore.  
    D. (most Important). The latency is directly correlated to the network load created by the Worker Role. When I turn it off or throttle it, Response Time on all boxes drops. When working at "full speed" network response time rises
    to 10 - 30 secs. (very bad) 
    Conclusion
    The Network Monitoring Response Time Alert is not working as designed. It does not show the Response Time for that Role. But seems to be displaying the Response Time for the whole deployment. 
    My Request
    Can you assist me in understanding whether this is "By Design", whether my testing is flawed, or what is going on?
    My Goal
    Alert the Azure dev team to this so that maybe they can fix it to function as I expected it to. (or for them to explain why my expectations are unreasonable.) 
    Why is this important?
    Poor network performance is a showstopper bug that is preventing us from moving into production. I may be able to solve this with multiple instances, but as I lack the granularity to track even at the Role level, I'm stuck.

    Thank you for your reply. 
    I forgot to update this with the answer. 
    It turned out that Azure Portal's Create Alert/monitoring UI has a bug. 
    The monitoring service is only capable of pinging your public endpoints, i.e. your Web Roles. Yet it lets you think you can create an Alert to monitor any of the roles within your solution. To make things worse, it shows a chart that suggests it actually is
    pinging your worker roles, when it is actually only pinging the public Web Role. 
    That is why all the Roles in your solution display identical metrics regardless of their load. Even stopping your worker roles will not affect their ping response times.  
    If your worker role fails, Azure Alerts don't tell you about it.
    NB: Using Application Insights to create alerting does not have this issue, but only because it doesn't attempt to monitor your worker roles, not because it can alert you to any issues.
