Performance test planning: timing

Hi folks,
I'm wondering if anyone has suggestions or input on the timing of test planning specific to performance testing. In particular, how far in advance of the intended test window(s) is it recommended that performance test planning begin?

Hi Jack,
My preference is to start planning for a performance test as soon as possible, even before there is an application to test.
The first stage in planning should be an analysis of how the application will be used. This information can be gathered from a business plan if the application is new, or from site usage metrics from tools like Omniture or Web Trends if the application is presently deployed.
From the business plan or metrics you can begin to work out the key transactions (most heavily used and most business critical), as well as planning how users will execute those transactions (what percentages, what think times, etc.).
Then, as the test date gets closer and the application becomes available and stable, you can begin to flesh out the details of the plan. But the overall goals, analysis and high-level planning can begin very early in the development cycle.
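As a rough illustration of that early workload arithmetic, here is a minimal sketch assuming made-up peak-hour hit counts, response times and think times; Little's Law turns the transaction arrival rate into a concurrent virtual-user estimate:
{code}
# Minimal sketch: derive a transaction mix and a virtual-user estimate from usage metrics.
# All numbers below are assumed, illustrative values only.

# Peak-hour hits per key transaction (e.g. exported from Omniture or WebTrends)
peak_hour_hits = {
    "search_catalog": 18000,
    "view_product":   12000,
    "checkout":        3000,
}

avg_response_time_s = 2.0   # assumed average response time per transaction
avg_think_time_s    = 15.0  # assumed think time between transactions

total_hits = sum(peak_hour_hits.values())
arrival_rate = total_hits / 3600.0          # transactions per second at peak

# Little's Law: concurrent users ~= arrival rate * (response time + think time)
concurrent_users = arrival_rate * (avg_response_time_s + avg_think_time_s)

print(f"Peak arrival rate: {arrival_rate:.1f} txn/s")
print(f"Estimated concurrent virtual users: {concurrent_users:.0f}")
for txn, hits in peak_hour_hits.items():
    print(f"  {txn}: {hits / total_hits:.0%} of the transaction mix")
{code}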
CMason
Senior Consultant - eLoadExpert
Empirix

Similar Messages

  • Database Performance Testing Planning Material

    Hi All,
    Could anyone please share performance testing planning material available at any site, or material anyone has used in previous projects?
    I would appreciate it if anyone could help!
    Thanks for your time!
    Regards,

    Hi Forstmann,
    Many thanks for the information. But I need performance testing plan materials from projects where it has been performed on existing systems.
    The link you pasted refers to the Performance Tuning Guide.
    Regards,

  • Log file sync top event during performance test -av 36ms

    Hi,
    During the performance test for our product before deployment into production, I see "log file sync" at the top, with an Avg wait (ms) of 36, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
    log buffer space                      4,161         558    134    3.5 Configurat
    Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" has such a slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
        Transactions:               18.4
    Some of the top background wait events:
    ^LBackground Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
    Backup: sbtinfo2                      1     0          1     500      0.0     .0
    During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
    Event (1st)                     Wait Class  Waits      Time(s)  Avg(ms)  %DB time | Event (2nd)                     Wait Class  Waits      Time(s)  Avg(ms)  %DB time
    SQL*Net more data from client   Network     2,133,486  1,270.7      0.6     61.24 | log file sync                   Commit      208,355    7,407.6     35.6     46.57
    CPU time                        N/A                      487.1      N/A     23.48 | direct path write               User I/O    646,849    3,604.7      5.6     22.66
    log file sync                   Commit      99,459       129.5      1.3      6.24 | log file parallel write         System I/O  208,564    2,528.4     12.1     15.90
    log file parallel write         System I/O  100,732      126.6      1.3      6.10 | CPU time                        N/A                    1,599.3      N/A     10.06
    SQL*Net more data to client     Network     451,810      103.1      0.2      4.97 | db file parallel write          System I/O  4,264        784.7    184.0      4.93
    -direct path write              User I/O    121,044       52.5      0.4      2.53 | -SQL*Net more data from client  Network     7,407,435    279.7      0.0      1.76
    -db file parallel write         System I/O  986            22.8     23.1      1.10 | -SQL*Net more data to client    Network     2,714,916     64.6      0.0      0.41
    {code}
    To sum it up:
    1. Why is the IO response taking such a hit during the new perf test? Please suggest.
    2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the host has only 4 CPUs.
    {code}
    select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
    latch: cache buffers lru c            3     0          0       1      0.0     .0
    2. The log file sync is a result of commits --> you are committing too often, maybe even after every individual record.
    Thanks for the explanation. Actually, my question is WHY it is so slow (avg wait of 91 ms).
    3. Your IO subsystem hosting the online redo log files can be a limiting factor.
    We don't know anything about your online redo log configuration.
    Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
    04:13:26 perf_monitor@PERFDB1> select * from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
             3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
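    One way to narrow down where that time goes is to compare the latency distribution of "log file sync" (what committing sessions see) against "log file parallel write" (what LGWR's actual writes cost); when the former is much slower than the latter, the gap usually points at LGWR CPU/scheduling pressure or commit frequency rather than the redo disks alone. A minimal sketch, using Python with the cx_Oracle driver; the connection details are placeholders, and the embedded query can just as well be pasted straight into SQL*Plus:
    {code}
    # Sketch: compare wait-time histograms of 'log file sync' vs 'log file parallel write'.
    # Connection details are placeholders; the query can also be run directly in SQL*Plus.
    import cx_Oracle

    HISTOGRAM_SQL = """
        SELECT event, wait_time_milli, wait_count
          FROM v$event_histogram
         WHERE event IN ('log file sync', 'log file parallel write')
         ORDER BY event, wait_time_milli
    """

    conn = cx_Oracle.connect("perf_monitor", "password", "dbhost/PERFDB1")  # placeholder credentials
    try:
        cur = conn.cursor()
        cur.execute(HISTOGRAM_SQL)
        for event, bucket_ms, wait_count in cur:
            print(f"{event:28s} <= {int(bucket_ms):>6} ms : {int(wait_count)} waits")
    finally:
        conn.close()
    {code}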

  • How to set up the environment for performance testing?

    Hi,
    We are planning to start performance testing for the iProcurement product; the maximum number of users we are going to use is 1000. For this simulation, what basic setup needs to be done for the Application Tier, Database Tier, etc.? Can anyone suggest the procedure for setting up the environment depending upon the load?

    User Guides for the RV120W are here:
    http://www.cisco.com/en/US/docs/routers/csbr/rv220w/quick_start/guide/rv220w_qsg.pdf
    http://www.cisco.com/en/US/docs/routers/csbr/rv120w/administration/guide/rv120w_admin.pdf
    and there's some more stuff over on my site:
    http://www.linksysinfo.org/index.php?forums/cisco-small-business-routers-and-vpn-solutions.49/

  • Regarding performance test

    Hi
    I need to know how performance testing is done for Hyperion Essbase and Planning.
    It would be a great help.
    Thanks
    lakshmi


  • How to create a test plan with specific transactions (or program)

    Hello,
    I'm a new user of Solution Manager!
    How do I create a test plan with specific transactions (or programs)?
    In my Business Blueprint (SOLAR01) I have entered the names of my specific transactions in the 'transaction' tab and linked them.
    In my test plan (STWB_2) those specific transactions don't appear for selection!
    Thanks in advance.
    Georges HUYNEN

    Hi,
    In SOLAR01 you have defined the transactions, but you also have to assign the test case to them in SOLAR02, in the Test Cases tab.
    When you do so, expand the business scenario node during test plan generation in transaction STWB_2, and the test cases will appear.
    Also visit my weblog
    /people/community.user/blog/2006/12/07/organize-and-perform-testing-using-solution-manager

  • Unable to save images in a note within a Test Plan - STWB_2

    Hi all,
    While trying to edit a note in Tx STWB_2 (Test Plans), I am not able to save an image on it. I can add and save text, but no images. Another odd thing is that even when I click on "Save Active", the message "Document saved as raw version" appears. In the case of text, the data is saved (as mentioned earlier). No matter which save mode I select, the note is always saved as a raw version.
    Is this the standard behavior, or should I perform some additional actions? Could this message be linked to the fact that I can't save images in my notes?
    Your help is highly appreciated
    Solman Version: 7.1 (Windows server 2008 R2/SQL server 2008 R2)
    Windows Microsoft Office 2007


  • Consolidated status overview for all test plans

    Hi all,
    We are performing our integration test from SolMan. I would like to generate a consolidated report which gives me the data the same way the status overview does for an individual test plan.
    This will help me track progress at the test package level, which in turn is my IT scenario.
    Any quick help will be highly appreciated.
    Regards,
    Smita

    Thanks Prabhakar,
    I am aware of this, but it gives me details for one test plan. Even when I use STWB_2, it gives me a summary of all the test plans, and for the status overview I have to go into each and every test plan.
    I am looking for some way to generate the status overview of multiple test plans together on the same screen, for reporting purposes.
    Any suggestions?
    Thanks,
    Smita

  • How to make a Performance Test

    Hi,
    I use UCM 11g. I want to run a performance test on the customer side. That is why my plan was to upload 1.5 million files to UCM before I start with the tests.
    The problem is that I don't know how to upload 1.5 million files. I wanted to use the Archiver to upload a package of 50,000 files over and over again. But I don't see a function that tells the Archiver to create a new version of the file instead of overwriting the old revision.
    Does anyone have experience with performance tests? Does anyone have an idea how I can upload that many test files?
    Any answer regarding performance tests will be highly appreciated.
    Greetings Bodhy

    The easiest way is to generate an IdcCommand script and 1..n documents that you would like to check in (depending on what you want to test). Make sure you generate this script with the right metadata (you may want to think out the dispersion of metadata so that it is production-like). For each check-in, include the parameter "doFileCopy=1" (this will allow you to check in the same file over and over again, saving you a lot of hassle).
    Depending on what you want to test, you may also want to set the parameter "dConversion=PASSTHRU" to skip the Conversion Server (IBR) step.
    The generated idcCommand file should look something like this:
    @Properties LocalData
    IdcService=CHECKIN_NEW
    primaryFile=/path/to/file.ext
    doFileCopy=1
    dConversion=PASSTHRU
    dDocType=random_doctype
    dDocTitle=random_title
    dSecurityGroup=random_security_group
    xStorageRule=default
    @end
    <<EOD>>
    @Properties LocalData
    IdcService=CHECKIN_NEW
    primaryFile=/path/to/file.ext
    doFileCopy=1
    dConversion=PASSTHRU
    dDocType=random_doctype
    dDocTitle=random_title
    dSecurityGroup=random_security_group
    xStorageRule=default
    @end
    <<EOD>>
    etc. Check out the Services Reference Guide to see what options are possible.
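    If you need to scale this up towards something like 1.5 million check-ins, a small generator script along these lines can produce the batch file; the entry count, source file path and metadata values below are placeholders to adapt:
    {code}
    # Sketch: generate an IdcCommand batch file with many CHECKIN_NEW entries.
    # NUM_DOCS, the source file path and the metadata values are placeholders.
    import random

    NUM_DOCS = 50000                          # entries per batch file
    SOURCE_FILE = "/path/to/file.ext"         # one physical file reused; doFileCopy=1 copies it each time
    DOC_TYPES = ["Document", "Report"]        # assumed metadata dispersion
    SECURITY_GROUPS = ["Public", "Secure"]

    with open("bulk_checkin_batch.txt", "w") as out:
        for i in range(NUM_DOCS):
            out.write("@Properties LocalData\n")
            out.write("IdcService=CHECKIN_NEW\n")
            out.write(f"primaryFile={SOURCE_FILE}\n")
            out.write("doFileCopy=1\n")
            out.write("dConversion=PASSTHRU\n")
            out.write(f"dDocType={random.choice(DOC_TYPES)}\n")
            out.write(f"dDocTitle=perf_test_doc_{i:07d}\n")
            out.write(f"dSecurityGroup={random.choice(SECURITY_GROUPS)}\n")
            out.write("xStorageRule=default\n")
            out.write("@end\n")
            out.write("<<EOD>>\n")
    {code}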

  • [Help] How you guys do the performance test for Hyperion?

    Dear All,
    Currently, we are building the performance test scripts using QALoad. We have identified the following areas for the test.
    * Planning Data Form
    * Financial Report
    * SmartView
    However, we have hit a number of questions.
    For Financial Report, when the recorded scripts were played back, we could not see any PDF files generated under the folder $BIPLUS_HOME/temp/.
    For SmartView, the recorded scripts just don't work after the Hyperion services are restarted.
    Does anyone have experience doing performance tests for Hyperion, especially using QALoad? Is there any technical reference out there we can read?
    Regards,
    Martin

    When we playback the QALoad script for Financial Report, we got the following error response!
    As you know, when opening a Financial Report, it automatically logs in to the data source; there is no need to provide the username/password. We are wondering whether this is related to our problem or not. Please kindly help. We have spent days/weeks recording the testing scripts.
    Does anyone have any idea?
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <BpmResponse type="error" action="">
    <code>5517</code>
    <desc>Logon Failed, User Name must not be empty</desc>
    <trace>com.hyperion.reporting.util.HyperionReportException: Logon Failed, User Name must not be empty
         at com.hyperion.reporting.api.HRReports.Authorize(Unknown Source)
         at com.hyperion.reporting.api.HRReports.Authenticate(Unknown Source)
         at modules.com._hyperion._reporting._web._reportviewer._HRRunDlg._jspService(_HRRunDlg.java:1075)
         at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
         at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:453)
         at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:591)
         at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:515)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
         at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
         at com.hyperion.reporting.webviewer.HRLocaleFilter.doFilter(Unknown Source)
         at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:621)
         at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:368)
         at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:866)
         at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:448)
         at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:302)
         at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:190)
         at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
         at java.lang.Thread.run(Thread.java:797)
    </trace>
    </BpmResponse>
    Regards,
    Martin

  • RMS performance testing using HP Loadrunner

    Hi,
    We are currently planning how to do our performance testing of Oracle Retail. We plan to use HP Loadrunner with different virtual users for Java, GUI, web services and database requests. Has anyone here done performance testing of RMS using HP Loadrunner, and what kind of setup did you use?
    Any tips would be greatly appreciated.
    Best regards,
    Gustav

    Hi Gustav
    How is your performance testing of Oracle Retail going? Did you get good results?
    I need to start an RMS/RPM performance testing project and I would like to know how to set up an appropriate structure. Any information about servers, protocols and tools used to simulate a real production environment would be very much appreciated.
    Thanks & Regards,
    Roberto

  • Test scenario and test plan

    Hi  All,
    Can somebody explain what a test scenario and a test plan are, with an example (e.g. FB60), and what essential things one needs to incorporate in a test scenario and a test plan?
    Appreciate your help.
    Thanks
    SM

    Hi,
    I am giving you the information on test plan and test scenario below. Please note that this is with reference to Solution Manager.
    You create a test plan for a project in the SAP Solution Manager if you want to reuse the project structure created in the Business Blueprint for a process-oriented test. You can select all the test cases and transactions which you have assigned to the project structure, for the test plan. These test cases are performed by testers; they can call assigned transactions and run test cases in them.
    Test Scenarios are nothing but the end to end business processes which are to be tested.
    Hope this helps.
    Rgds
    Manish

  • How to write a UTP (unit test plan)

    How do I write a UTP (unit test plan)?

    Hi,
      Steps to be followed for a UTP:
    UTP: Unit Test Plan. Testing of a program by the developer who developed it is termed a Unit Test Plan.
    Two aspects are to be considered in UTP.
    1. Black Box Testing
    2. White Box Testing.
    1. Black Box Testing : The program is executed to view the output.
    2. White Box Testing : The code is checked for performance tuning and syntax errors.
    Follow below mentioned steps.
    <b>Black Box Testing</b>
    1. Cover all the test scenarios in the test plan. The test plan is usually prepared by the testing team at the time of Technical Spec preparation. Make sure that all the scenarios mentioned in the test plan are covered in the UTP.
    2. Execute your code for positive and negative tests. Positive tests - execute the code and check that the program works as expected. Negative tests - execute the code to check whether it works in scenarios in which it is not supposed to work. The code should work only in the specified scenarios and not in all cases.
    <b>White Box Testing.</b>
    1. Check the Select statements in your code. Check if any redundant fields are being fetched by the select statements.
    2. Check If there is any redundant code in the program.
    3. Check whether the code adheres to the Coding standards of your client or your company.
    4. Check if all the variables are cleared appropriately.
    5. Optimize the code by following the performance tuning procedures.
    <b>Using tools provided by SAP</b>
    1. Check your program using <b>EXTENDED PROGRAM CHECK</b>.
    2. Use SQL Trace to estimate the performance and response time of each statement in the code. If changes are required, mention them in the UTP.
    3. Use Runtime Analyser and Code Inspector to test your code.
    4. Paste the screen shots of all the tests in the UTP document. This gives a clear picture of the tests conducted on the program.
    All the above steps are to be mentioned in UTP.
    Regards,
    Vara

  • Payroll - performance testing

    We are gearing up for performance testing for active pay and I have a few questions in this regard. What are the most expensive, load-bearing programs and transactions that need to be thoroughly performance tested? What is the best way to test performance for an active pay implementation (should tools be used, or should users be used to generate load)?
    Does anyone have a project plan for active pay performance testing that they are willing to share?

    Hello Viviana,
    So far in my implementation we have done payroll and time runs for the whole group.
    Based on the business:
    1. You can prioritize the most common or recurring scenarios. Basically create a test script for every scenario based on the configuration performed.
    2. You can also talk to business SME's to find out several scenarios based on their past experience regarding complex payroll calculation, time data processing or any retro scenarios.
    3. Once you get the detailed outline, set up employees for those tests. You can run payroll for the whole group, but on the parallel validation side you will still use the same PERNR group you have set up and do the comparison.
    4. Later on, you can finish postings, TPR and all other subsequent steps for the whole group.
    I am insisting on the whole group just to be on the safer side.
    Perform the same study for off-cycle payroll and retroactive accounting scenarios.
    Hope this helps.
    Arti

  • Test Plan for Video Conferencing

    We are planning to set up an MPLS network exclusively for video conferencing. I need to prepare a test plan for the tests to be conducted to check conferencing performance and related parameters after the implementation.
    I am pretty new to video conferencing. Could someone help me prepare a test plan for this?

    Do you have some more details about the equipment and setup you want to test?
