JMS Grid Load & Performance Testing

Does anyone know if this has been done, or can be done? We are implementing a JMS Grid with JCAPS. None of the proprietary testing tools I've seen supports point-to-point (PTP) testing of this type of messaging, as it doesn't have a JNDI binding.
I need to find some way to inject messages directly into the system so I can assess its performance characteristics and tune it.
Any suggestions?
Thanks
Alsebub
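
One option, since the provider has no JNDI binding, is to instantiate its ConnectionFactory class directly and drive the queue with the plain JMS point-to-point API. Below is a minimal, hedged sketch of such a load driver; com.example.GridConnectionFactory, the host/port, and the queue name PERF.TEST.QUEUE are placeholders that would need to be replaced with the JMS Grid vendor's actual factory class and a real destination.
{code}
// Minimal PTP load-driver sketch using the plain JMS 1.1 API, bypassing JNDI.
// NOTE: com.example.GridConnectionFactory, "grid-host"/7676 and "PERF.TEST.QUEUE"
// are placeholders for the vendor-specific factory class and a real destination.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class PtpLoadDriver {
    public static void main(String[] args) throws Exception {
        // Instantiate the provider's factory directly instead of looking it up in JNDI.
        ConnectionFactory factory = new com.example.GridConnectionFactory("grid-host", 7676);
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("PERF.TEST.QUEUE");
        MessageProducer producer = session.createProducer(queue);

        int messageCount = 10_000;
        long start = System.nanoTime();
        for (int i = 0; i < messageCount; i++) {
            TextMessage msg = session.createTextMessage("payload-" + i);
            producer.send(msg);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("Sent %d messages in %d ms (%.1f msg/s)%n",
                messageCount, elapsedMs, messageCount * 1000.0 / elapsedMs);

        producer.close();
        session.close();
        connection.close();
    }
}
{code}
Running several instances (or threads) of a driver like this against the queue, while consumers drain it, gives a rough throughput baseline to tune against.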

Similar Messages

  • FORMS CRASHES (FRM-92101) ON AS 10.1.2.0.2 DURING LOAD PERFORMANCE TESTING

    Hiya
    We have been doing load/performance testing using the testing tool QALoad on our Forms 10g application. After about 56 virtual users (sessions) have logged in to our application, if a new user tries to log in, Forms crashes. As soon as we encounter the FRM-92101 error, no more new Forms sessions are able to start.
    The load testing software starts up each process very quickly, about every 10 seconds.
    The very first form that appears is the login form of our application, so we get the FRM-92101 error message before the login screen appears.
    However, users who have already logged in to our application are able to carry on with their tasks.
    We are using Application Server 10g 10.1.2.0.2. I have checked the status of the Application Server through the Oracle Enterprise Manager Console; the OC4J instance is up and running. Also, the server's configuration is pretty good: it is running on 2 CPUs (AMD Opteron 3 GHz) and has 32 GB of memory. The memory used by those 56 sessions is less than 3 GB.
    The Application Server is running on Microsoft Windows Server 2003 64-bit Enterprise Edition.
    Any help will be much appreciated.
    Cheers
    Mayur

    Hi Shekhawat
    In Windows Registry go to
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems
    In the right-hand panel you will find a string value named Windows. Double-click it; in the pop-up window you will see a string similar to the following:
    %SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
    If you read the above string carefully, you will find this parameter:
    SharedSection=1024,20480,768
    Here SharedSection specifies the system and desktop heaps using the following format:
    SharedSection=xxxx,yyyy,zzzz
    The default values are 1024,3072,512
    All the values are in Kilobytes (KB)
    xxxx = system-wide heap size. There is no need to modify this value.
    yyyy = interactive desktop heap size. This is the heap for memory objects on the interactive desktop.
    zzzz = non-interactive desktop heap size. This is the heap for memory objects on non-interactive desktops.
    On our server the values were as follows :
    1024,20480,768
    We changed the non-interactive desktop heap size from 768 to 5112. With 5112 KB we managed to test our application with up to 495 virtual users.
    Cheers
    Mayur

  • Load & Performance testing of VB applications

    How can we perform load and performance testing of VB6 applications? Are there any tools available for this?

    You could look at something like this:
    http://www.aholme.co.uk/Profiler/Install.htm
    However, VB6 as a development environment is no longer supported by Microsoft, and this forum is for VB.NET questions, not VB6 questions.
    Matt Kleinwaks - MSMVP MSDN Forums Moderator - www.zerosandtheone.com

  • Load/Performance Testing using ECATT

    Please provide the process for performing load/performance testing using eCATT as soon as possible.
    Which T-codes are required for load/performance testing using eCATT?
    Thanks in ADVANCE.

    Hello Colleague,
    Here are the steps you need to follow for performance testing using ST30.
    Use transaction ST30 to invoke Global Performance Analysis (widely used for performance tests of specific transactions).
    On the eCATT test tab, key in the following data:
    Log ID (needs to be created only for the first run);
    Performance test (entries for the Performance test field are logically of the format
    Logid_name/PERF_transaction_name/systemname/client/date);
    Name of the test configuration (you need to create a test configuration for the eCATT to be used in ST30; use the same name for the test
    configuration as for the test script);
    Number of times the test configuration needs to run as the preprocessor to create the required backend data, and number of times it needs to
    run as the processor (these fields are filled with 5 and 5, or 5 and 10, respectively for performance measurements, but in your case you can enter 1 and 1,
    or 0 and 1, for your requirements).
    Leave all the check boxes under Programming guidelines and Distributed Statistics Data unchecked (unless required). In the data comparison, use the No option
    for With values.
    Click the "eCATT test only" button to start the performance run using ST30.
    The procedure above executes the eCATT test configuration as many times as the sum of the pre and pro counts given by the user, at one stretch only. If there is a requirement to have the eCATT execute at an interval, we follow a different approach.
    We have a VB script that creates an ECA session, calls se37, selects the required test package, executes all the required test cases (eCATTs) in the
    test package, and kills the ECA session at the end of the execution.
    We then create a batch file to execute the VB script and call the batch file for our executions.
    In your case, please schedule the execution of the batch file every 30 minutes (or any such time duration) using the scheduler functionality provided by
    Windows.
    The only problem with this is that whenever there are system messages, software updates, or any new screens, the scheduled run is bound to fail because the called VB script does not handle the new situation. Please also ensure that the user whose password has been given to the scheduler is the same user logged into the system during the execution period.
    So, to summarize: ST30 will only allow you to run the eCATT as many times as required, but only at one stretch; you need the second mechanism to make the eCATT run effectively at a predetermined interval without any user interaction.
    FYI: a new feature to handle the scheduling of executions is being developed; I will post the details and usage steps when it is available. We also have a new PERF ... ENDPERF command in eCATT (a new development); kindly go through the documentation on the new eCATT developments for the same.
    Thanks and best regards,
    Sachin

  • What are the better load/performance testing tools available for Flex Application with BlazeDS RO?

    My application is designed with Flex 3, ActionScript 3, and BlazeDS remote objects.
    I tried OpenSTA, but I can't do dynamic parameterization in its generated scripts because the responses of the calls are binary values, and we also can't read the response using the SCL language.
    While testing OpenSTA with HTTPService, I can do dynamic parameterization and get the response.
    Can anyone give me information about the questions below?
    Can we do dynamic parameterization with OpenSTA for Flex remote objects?
    And what are the better load/performance tools available for Flex remote objects?

    Your approach is fine, depending on how many and what type of CFCs you are talking about. If they are "singletons" - that is, only one instance of each CFC is needed to be in memory and can be reused/shared from multiple parts of your application - caching them in the application scope is common.  Just make sure they are thread safe ("var" or local.* all your method variables).
    You might consider taking advantage of a dependency injection framework, such as DI/1 (part of the FW/1 MVC framework), ColdSpring, or WireBox (a module of the ColdBox platform that can be used independently).  They have mechanisms for handling and caching singletons.  Then you wouldn't have to go to the application scope to get your CFC instances.
    -Carl V.

  • Load & Performance Testing In The Cloud - Silverlight Support

    I currently have Visual Studio Premium with MSDN. I know in order to use Load Testing in the cloud I will have to upgrade to Ultimate.
    One of our web apps that I need to load and performance test uses Silverlight.
    Here is my question: I want to verify that the virtual machines provisioned in the cloud when I run load testing will support Silverlight. I know it sounds like a silly question, but before I go to management to approve this upgrade I need to verify that it will work. No need to waste money.
    Thanks

    Hi rgelston iso.com,
    Web tests are used to test the functionality of web applications and to test web applications under load; they are used in both performance tests and stress tests. They work at the protocol layer by issuing HTTP requests.
    Reference:
    https://msdn.microsoft.com/en-us/library/ff520100%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
    So the real issue is whether your apps meet the above requirements.
    For example, general Silverlight apps have some limitations if you want to create web performance tests for them:
    http://blogs.msdn.com/b/anutthara/archive/2010/03/21/testing-support-for-silverlight-apps-in-visual-studio-2010.aspx
    Best Regards,
    Jack

  • Load/Performance Testing?

    Hi,
    Has anyone used any load testing tools? I'd like to test the performance of our website and am looking to see if anyone has used any tools. Free ideally, but if not, maybe something affordable.
    Any thoughts appreciated
    -Westside

    Hello Westside,
    You can probably do something like that with JMeter:
    http://jakarta.apache.org/jmeter/
    HTH, Carl.
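    The idea behind JMeter (and similar tools) is simply to fire concurrent HTTP requests at the site and record response times and errors. As a rough, hedged illustration of that idea only (not a replacement for JMeter), here is a small Java 11+ sketch; the target URL, thread count, and request count are arbitrary placeholders.
    {code}
    // Rough sketch of concurrent HTTP load generation -- the kind of work JMeter automates.
    // The target URL, thread count and request count are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.LongAdder;

    public class SimpleHttpLoad {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://www.example.com/")).GET().build();

            int threads = 20;
            int requestsPerThread = 50;
            LongAdder totalMillis = new LongAdder();
            LongAdder errors = new LongAdder();
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerThread; i++) {
                        long start = System.nanoTime();
                        try {
                            HttpResponse<String> resp = client.send(request, HttpResponse.BodyHandlers.ofString());
                            if (resp.statusCode() >= 400) errors.increment();
                        } catch (Exception e) {
                            errors.increment();
                        }
                        totalMillis.add((System.nanoTime() - start) / 1_000_000);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            long total = (long) threads * requestsPerThread;
            System.out.printf("average latency %.1f ms, errors %d/%d%n",
                    totalMillis.doubleValue() / total, errors.sum(), total);
        }
    }
    {code}
    JMeter adds ramp-up control, assertions, and reporting on top of this, which is why it is usually the better choice for anything beyond a quick smoke test.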

  • Load/performance test on oracle

    hi guys,
    We are using Oracle 10g. Can you please tell me, in detail, the steps involved in doing a performance/load test on the database?
    regards,
    123kid

    The Oracle ebook
    Oracle® Database Performance Tuning Guide
    10g Release 2 (10.2)
    Part Number B14211-01
    and particularly section
    2.6 Workload Testing, Modeling, and Implementation
    give a detailed description.
    Greetings...
    Sim
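    Section 2.6 of that guide is about capturing a representative workload and replaying it against the database before go-live. As a very rough, hedged illustration of the replay idea only (the guide describes the full methodology), here is a JDBC sketch that runs a representative statement from several concurrent sessions and reports elapsed time; the connection URL, credentials, and SQL are placeholders.
    {code}
    // Rough illustration of workload replay: execute a representative statement from
    // several concurrent sessions and record timings. URL, credentials and SQL are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class WorkloadReplay {
        private static final String URL = "jdbc:oracle:thin:@dbhost:1521:ORCL";                    // placeholder
        private static final String SQL = "SELECT COUNT(*) FROM some_app_table WHERE status = ?"; // placeholder

        public static void main(String[] args) throws Exception {
            int sessions = 10;
            int executionsPerSession = 100;
            ExecutorService pool = Executors.newFixedThreadPool(sessions);
            for (int s = 0; s < sessions; s++) {
                pool.submit(() -> {
                    try (Connection conn = DriverManager.getConnection(URL, "perf_user", "perf_pwd");
                         PreparedStatement ps = conn.prepareStatement(SQL)) {
                        long start = System.nanoTime();
                        for (int i = 0; i < executionsPerSession; i++) {
                            ps.setString(1, "OPEN");
                            try (ResultSet rs = ps.executeQuery()) {
                                while (rs.next()) { /* consume the result set */ }
                            }
                        }
                        long ms = (System.nanoTime() - start) / 1_000_000;
                        System.out.printf("session finished: %d executions in %d ms%n", executionsPerSession, ms);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(30, TimeUnit.MINUTES);
        }
    }
    {code}
    While such a replay runs, AWR or Statspack snapshots on the database side show how the instance behaves under the simulated load.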

  • Load and performance testing

    How do we do load and performance testing after upgrading the database to 11gR2, during dev and acceptance testing? How do we compare the performance before the upgrade (10gR2 with 11.5.10.2) and after the upgrade (11gR2 with 11.5.10.2)?
    Regards
    appsdba

    Please see old threads for similar discussion -- http://forums.oracle.com/forums/search.jspa?threadID=&q=Performance+AND+Testing&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • UI performance testing of pivot table

    Hi,
    I was wondering if anyone could direct me to a tool that I can use to do performance testing on a pivot table. I am populating a pivot table (declaratively) with a data source of over 100,000 cells, and I need to record the browser rendering time of the pivot table using 50 or so parallel threads (requests). I tried running performance tests using JMeter, but that didn't help.
    This is what I tried so far with JMeter:
    I deployed the application in the integrated WebLogic server and specified the URL to hit in JMeter ( http://127.0.0.1:7101/PivotTableSample-ViewController-context-root/faces/Sample), and added a response assertion for the response code 200. Although I am able to hit the URL successfully, the response I get is a JavaScript with a message that says "This is the loopback script to process the url before the real page loads. It introduces a separate round trip". When I checked in Firebug, it looks like a request redirect of some sort happens from this JavaScript to another URL (with some randomly generated parameters), which then returns the HTML response of the pivot table. I am unable to hit that URL directly, as I get a message saying "session expired". It looks like a redirect happens from the first request, a session is created for that request, and then a redirect occurs.
    I am able to check the browser rendering time of the pivot table in firebug (.net tab), but that is only for a single request. I'd appreciate it if anyone could guide me on this.
    Thanks
    Naveen

    I found the link below, which explains the configuration of JMeter for performance testing of ADF applications (although I couldn't find a solution for measuring the browser rendering time for parallel threads).
    http://one-size-doesnt-fit-all.blogspot.com/2010/04/configuring-apache-jmeter-specifically.html
    Edited by: Naveen Ramanathan on Oct 3, 2010 10:24 AM

  • Can Web Performance Test work on AJAX or Javascript Project which will show only one URL for all the pages?

    Hi there,
    I'm working on testing an AJAX and JavaScript project which has several pages, but all under the same URL. I need to test some attributes on the page, or parameters passed by AJAX or JavaScript. Can a Web Performance Test get me what I want?
    Thanks,
    

    Hello,
    Thank you for your post.
    A web performance test is used to test whether a server responds correctly and whether the response is consistent with what we expected, and to test response speed, stability, and scalability.
    The Web Performance Test Recorder records both AJAX requests and requests that were submitted from JavaScript, but a web test does not execute JavaScript. I am afraid that you can't use a web test to test parameters passed by AJAX or JavaScript.
    Please see:
    Web Performance Test Engine Overview
    About JavaScript and ActiveX Controls in Web Performance Tests
    From the first link, “Client-side scripting that sets parameter values or results in additional HTTP requests, such as AJAX, does affect the load on the server and might require you to manually modify the Web Performance Test to simulate the scripting.”
    If you want to execute functionality typically performed by script in a web test, you need to accomplish it in a coded web performance test or a web performance test plug-in. Please see:
     How to: Create a Coded Web Performance Test
    How to: Create a Web Performance Test Plug-In
    I am not sure what "some attribute on the page" means. If you mean that you want to test controls on the page, you can do a coded UI test, which can verify that the user interface for an application functions correctly. The coded UI test performs actions on the user interface controls for an application and verifies that the correct controls are displayed with the correct values. You can refer to this article for detailed information about coded UI tests:
    Verifying Code by Using Coded User Interface Tests
    Best regards,
    Amanda Zhu [MSFT]
    MSDN Community Support | Feedback to us
    Develop and promote your apps in Windows Store
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

  • Log file sync top event during performance test -av 36ms

    Hi,
    During the performance test of our product before deployment into production, I see "log file sync" at the top, with an average wait of 36 ms, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
    log buffer space                      4,161         558    134    3.5 Configurat
    Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" is having such slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
        Transactions:               18.4
    Some of the top background wait events:
    ^LBackground Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
    Backup: sbtinfo2                      1     0          1     500      0.0     .0
    During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
    First Second Diff
    1st 2nd
    Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
    (ms) %DB time
    SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
    35.6 46.57
    CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
    5.6 22.66
    log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
    12.1 15.90
    log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
    N/A 10.06
    SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
    84.0 4.93
    -direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
    0.0 1.76
    -db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
    0.0 0.41
    {code}
    To sum it up:
    1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
    2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer, as the host has only 4 CPUs.
    {code}
    select *from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.
    Edited by: Kunwar on May 18, 2013 2:20 PM

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
    latch: cache buffers lru c            3     0          0       1      0.0     .0
    2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
    Thanks for the explanation. Actually, my question is WHY it is so slow (avg wait of 91 ms). (A JDBC sketch illustrating the effect of commit frequency follows at the end of this reply.)
    3. Your IO subsystem hosting the online redo log files can be a limiting factor.
    We don't know anything about your online redo log configuration.
    Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
    04:13:26 perf_monitor@PERFDB1> select *from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
             3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
    Edited by: Kunwar on May 18, 2013 2:46 PM
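    To illustrate the "committing too often" point quoted above: a log file sync wait is posted on every commit, because the session must wait for LGWR to flush that commit's redo to disk. A loader that commits every row therefore pays one redo flush per row, while committing per batch amortizes the flush cost. The JDBC sketch below is only an illustration of those two patterns; the connection details, table, and row counts are placeholders.
    {code}
    // Illustration of commit frequency vs. "log file sync": committing every row forces
    // a redo flush (and a log file sync wait) per row; committing per batch amortizes it.
    // Connection URL, credentials and table name are placeholders.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CommitFrequencyDemo {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:PERFDB1", "perf_user", "perf_pwd")) {
                conn.setAutoCommit(false);
                insertRows(conn, 10_000, 1);      // commit every row   -> one redo flush per row
                insertRows(conn, 10_000, 1_000);  // commit every 1000  -> far fewer flushes
            }
        }

        static void insertRows(Connection conn, int rows, int commitEvery) throws Exception {
            long start = System.nanoTime();
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO perf_demo (id, payload) VALUES (?, ?)")) {
                for (int i = 1; i <= rows; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.executeUpdate();
                    if (i % commitEvery == 0) {
                        conn.commit();            // each commit waits on the redo write (log file sync)
                    }
                }
                conn.commit();                    // flush any remainder
            }
            System.out.printf("commit every %d row(s): %d ms%n",
                    commitEvery, (System.nanoTime() - start) / 1_000_000);
        }
    }
    {code}
    If the application cannot reduce its commit rate, the average wait per sync points back at the redo I/O path, which is where the redo log configuration question above comes in.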

  • How to setup the environment for doing the Performance Testing?

    Hi,
    We are planning to start performance testing for the iProcurement product; the maximum number of users we are going to use is 1000. For this simulation, what basic setup needs to be done for the application tier, database tier, etc.? Can anyone suggest the procedure for setting up the environment depending upon the load?

    User guides for the RV120W are here:
    http://www.cisco.com/en/US/docs/routers/csbr/rv220w/quick_start/guide/rv220w_qsg.pdf
    http://www.cisco.com/en/US/docs/routers/csbr/rv120w/administration/guide/rv120w_admin.pdf
    and there's some more stuff over on my site:
    http://www.linksysinfo.org/index.php?forums/cisco-small-business-routers-and-vpn-solutions.49/

  • Initial Load Performance Decrease

    Hi colleagues,
    We noticed a huge decrease in initial load performance after installing an
    application on the PDA.
    In our first test we downloaded one data object of nearly 6.6 MB, which
    corresponds to 30,000 records with eight fields each. The initial load
    to the PDA took only 2 minutes.
    We performed a second test with the same PDA after a reinstallation and a
    new device ID. The difference here is that we installed an MI
    application related to the same data object. The same amount of data was sent
    to the PDA. It took 3 hours to download it.
    In a third test we changed the application so that it did not have the
    related data object assigned to it. In this case, the download took 2
    minutes again.
    In other words, if we have an application with the data object
    assigned, it results in a huge decrease in initial load performance.
    In both cases we used a direct connection to our LAN.
    Here are our PDA specs:
    - Windows Mobile 6 Classic
    - Processor: Marvell PXA310 at 624 MHz
    - 64 MB RAM, 256 MB flash ROM (190 MB available to the user)
    Any similar experiences?
    Thanks.
    Edited by: Renato Petrulis on Jun 1, 2010 4:15 PM

    I am confused about downloading a data object with no application.
    I thought you could only download data if it is associated with a Mobile Component; I guess you just assign the DMSCV manually?
    In any case, I have only experienced scenario two when we were downloading an application with a mobile component and no packaging of messages. We had maybe a few thousand records to download and process, and it would take an hour or more.
    When we enabled packaging, it would take 15-30 minutes.
    Then I went to Create Setup Package, because it was just simpler to install the application and data together, with no corruption or failure from the DMSCV not going operational and not sending data, etc. Plus it was a faster download, using either FTP or ActiveSync to transfer the install files.

  • Using Test Setting file to run web performance tests in different environments

    Hello,
    I have a set of web performance tests that I want to be able to run in different environments.
    Currently I have a CSV file containing the URL of the load balancer of the particular environment in which I want to run the load test (containing the web performance tests), and to run it in a different environment I just edit this CSV file.
    Is it possible to use the test settings file to point the web performance tests at a particular environment?
    I am using VSTS 2012 Ultimate.
    Thanks

    Instead of using the test settings, I suggest using the "Parameterize web servers" command (found via the context menu on the web test, or via one of the icons). The left-hand column then suggests context parameter names for the parameterized web server URLs. It should be possible to use data source entries instead. You may need to wrap the data source accesses in doubled curly braces if editing via the "Parameterize web servers" window.
    Regards
    Adrian
