JDK Performance tests...interesting results...

In an effort to try and eke out as much performance from a Java application as possible, I decided to conduct a little experiment on various JDKs on Sun Solaris 8. What I found was very interesting, and I thought I would share this with the group.
I tested 2 main areas, Class Generation and Number Crunching. I wrote a little application that does a series of tests a multitude of times, timing each one and the overall run, and reporting the times. I tested the 1.2.2, 1.3.1, and 1.4.0 (both 32-bit and 64-bit). Here is what I found...
Class Generation
There has been this argument at the water cooler for some time that cloning an object is faster than creating a new one. I created two tests...one that creates 1000 objects 10000 times using the constructor, the other creates a single "default" object, then clones 1000 objects 10000 times using the Object.clone() method. The two methods are identical, except the cloning requires a try/catch block around the clone() call and it creates the default object in its constructor (both use a home-grown class called DummyClass, which implements java.lang.Cloneable). All classes were compiled with the JDK 1.3.1.
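A minimal sketch of what the two tests might look like is below (hypothetical code, not the original; DummyClass here is a stand-in for the home-grown class mentioned above, and the loop counts mirror the 1000 x 10000 figures):

// Hypothetical sketch of the instantiation vs. clone tests (not the original code).
public class CloneVsNewTest {

    static class DummyClass implements Cloneable {
        private int value = 42;
        private String name = "dummy";

        public Object clone() throws CloneNotSupportedException {
            return super.clone();
        }
    }

    public static void main(String[] args) {
        final int OBJECTS = 1000;
        final int ITERATIONS = 10000;

        // Test 1: build every object with the constructor.
        long start = System.currentTimeMillis();
        for (int i = 0; i < ITERATIONS; i++) {
            for (int j = 0; j < OBJECTS; j++) {
                DummyClass d = new DummyClass();
            }
        }
        System.out.println("new:   " + (System.currentTimeMillis() - start) + " ms");

        // Test 2: build one "default" object, then clone it repeatedly.
        DummyClass prototype = new DummyClass();
        start = System.currentTimeMillis();
        for (int i = 0; i < ITERATIONS; i++) {
            for (int j = 0; j < OBJECTS; j++) {
                try {
                    DummyClass d = (DummyClass) prototype.clone();
                } catch (CloneNotSupportedException e) {
                    e.printStackTrace();
                }
            }
        }
        System.out.println("clone: " + (System.currentTimeMillis() - start) + " ms");
    }
}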
What I found is that creating the objects with the constructor was about 26.16% faster than cloning them, finishing on average in 218438.8 ms vs. 295822.133 ms. The slowest performer overall was the JDK 1.4.0, 64-bit, and the fastest was the 1.3.1 with the -server flag. Here is the chart:
JDK                             Inst.                Clone
1.3.1                           232874.667           280938.333
1.3.1, -server                  190872.000           252238.000
1.4.0                           206234.333           340177.667
1.4.0 64-bit                    231025.333           302989.333
1.4.0 64-bit, -server           231188.667           302767.333
JDK Performance
In this test I pitted the 1.2.2, 1.3.1, and 1.4.0 against one another. This uses the same Class Generation test as above, plus a Fibonacci number test to provide a calculation-intensive workload. I also tested the code compiled in different ways. Here is the chart (it is a link, because the chart is pretty large):
http://www.phuongphoto.com/jdk_tests/
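For context, the calculation-intensive half of such a suite is commonly a naive recursive Fibonacci routine; a minimal sketch follows (hypothetical code, not the original test, and the depth of 35 is an arbitrary choice):

// Hypothetical sketch of a calculation-intensive Fibonacci test.
public class FibTest {

    // Deliberately naive recursion so the work grows quickly with n.
    static long fib(int n) {
        if (n < 2) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long result = fib(35);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("fib(35) = " + result + " in " + elapsed + " ms");
    }
}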
Ironically, the 1.2.2 outperformed the other JDKs hands down. The only explanation I have is that the 1.2.2 is running with native threads, and I cannot figure out how to turn that off in the 1.2.2, nor turn it on in the other JDK versions.
Another interesting note is that the 64-bit 1.4.0 was outperformed in class creation, but did pretty well in raw calculations, almost matching the 1.2.2, even with its native threads. It also seemed to perform ever-so-slightly better without the -server switch, but all in all it didn't make much of a difference. The other JDKs all performed much better with the -server flag on.
I am interested to hear what everyone else thinks about this. In particular, can anyone instruct me on how to turn on native threads in the 1.3.1 and 1.4.0 so we can level the playing field? Also, I'd be interested to see some other numbers, if anyone else has any.
Mike Bauer

The reason the -server option didn't make a difference when you used the 64-bit mode is that -server is implicit when you use 64-bit.
In other words, if you are using Java 1.4, the options "-d64 -server" and plain "-d64" are the same thing.
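One way to sanity-check which VM you actually got is to print the VM identification from inside the process (a sketch, not from the original post; sun.arch.data.model is a Sun-specific property, so treat it as an assumption about Sun JVMs):

// Hypothetical sketch: print which VM and data model the process is running on.
public class ShowVm {
    public static void main(String[] args) {
        System.out.println("java.version:        " + System.getProperty("java.version"));
        System.out.println("java.vm.name:        " + System.getProperty("java.vm.name"));
        System.out.println("sun.arch.data.model: " + System.getProperty("sun.arch.data.model"));
    }
}

Run with "java -d64 ShowVm"; on a Sun 1.4 JVM the java.vm.name line should already report the 64-Bit Server VM even without an explicit -server.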

Similar Messages

  • LabVIEW Embedded - Performance Testing - Different Platforms

    Hi all,
    I've done some performance testing of LabVIEW on various microcontroller development boards (LabVIEW Embedded for ARM) as well as on a cRIO 9122 Real-time Controller (LabVIEW Real-time) and a Dell Optiplex 790 (LabVIEW desktop). You may find the results interesting. The full report is attached and the final page of the report is reproduced below.
    Test Summary
                       µC     Single Loop      Single Loop   Dual Loop        Dual Loop
                       MIPS   Effective MIPS   Efficiency    Effective MIPS   Efficiency
    MCB2300              65             31.8          49%               4.1           6%
    LM3S8962             60             50.0          83%               9.5          16%
    LPC1788             120             80.9          56%              12.0           8%
    cRIO 9122           760            152.4          20%             223.0          29%
    Optiplex 790       6114           5533.7          91%            5655.0          92%
    Analysis
    For microcontrollers, single loop programming can retain almost 100% of the processing power. Such programming would require that all I/O is non-blocking as well as use of interrupts. Multiple loop programming is not recommended, except for simple applications running at loop rates less than 200 Hz, since the vast majority of the processing power is taken by LabVIEW/OS overhead.
    For cRIO, there is much more processing power available; however, approximately 70 to 80% of it is lost to LabVIEW/OS overhead. The end result is that what can be achieved is limited.
    For the Desktop, we get the best of both worlds; extraordinary processing power and high efficiency.
    Speculation on why LabVIEW Embedded for ARM and LabVIEW Real-time performance is so poor puts the blame on excessive context switching. Each context switch typically takes 150 to 200 machine cycles, and these appear to be inserted for each loop iteration. This means that tight loops (fast, with not much computation) consume enormous amounts of processing power. If this is the case, an option to force a context switch every Nth loop iteration would be useful.
    Conclusion
                                    LabVIEW Embedded        LabVIEW Real-time      LabVIEW Desktop
                                    for ARM                 for cRIO/sbRIO         for Windows
    Development Environment Cost    High                    Reasonable             Reasonable
    Execution Platform Cost         Very low                Very High / High       Low
    Processing Power                Low (current Tier 1)    Medium                 Enormous
    LabVIEW/OS efficiency           Low                     Low                    High
    OEM friendly                    Yes+                    No                     Yes
    LabVIEW Desktop has many attractive features. This explains why LabVIEW Desktop is so successful and accounts for the vast majority of National Instruments’ software sales (and consequently drives the vast majority of hardware sales). It is National Instruments’ flagship product and is the precursor to the other LabVIEW offerings. The execution platform is powerful, available in various form factors from various sources and is competitively priced.
    LabVIEW Real-time on a cRIO/sb-RIO is a lot less attractive. To make this platform attractive the execution platform cost needs to be vastly decreased while increasing the raw processing power. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. A single plug-in board no larger than 75 x 50 mm (3” x 2”) with a single unit price under $180 would certainly make the sb-RIO a viable execution platform. The peripheral connectors would not be part of the board and would be accessible via a connector. A developer mother board could house the various connectors, but these are not needed when incorporated into the final product. The recently released Xilinx Zynq would be a great chip to use ($15 in volume, 2 x ARM Cortex A9 at 800 MHz (4,000 MIPS), FPGA fabric and lots more).
    LabVIEW Embedded for ARM is very OEM friendly, with development boards that are open source and have circuit diagrams available. To make this platform attractive, new, more capable Tier 1 boards will need to be introduced, mainly to counter the large LabVIEW/OS overhead. As before, these target boards would come from microcontroller manufacturers, thereby making them inexpensive and open source. It would also be beneficial to examine why the LabVIEW/OS overhead is so high. What is required now is another Tier 1 board (e.g. the DK-LM3S9D96 (ARM Cortex M3 80 MHz/96 MIPS)). Further Tier 1 boards should be targeted every two years (e.g. the BeagleBoard-xM (ARM Cortex A8 1000 MHz/2000 MIPS board)) to keep LabVIEW Embedded for ARM relevant.
    Attachments:
    LabVIEW Embedded - Performance Testing - Different Platforms.pdf 307 KB

    I've got to say though, it would really be good if NI could further develop the ARM embedded toolkit.
    In the industry I'm in, and probably many others, control algorithm development and testing occurs in LabVIEW. If you have a good LV developer or team, you'll end up with fairly solid, stable and tested code. But what happens now, once the concept is validated, is that all this is thrown away and the C programmers create the embedded code that will go into the real product.
    The development cycle starts from scratch. 
    It would be amazing if you could strip down that code and deploy it onto ARM and expect it not to be too inefficient. Development costs and time to market go way down. BUT, especially in the industry I presently work in, the final product's COST is extremely important. (These being consumer products: cheaper micro, cheaper product.)
    These concerns weigh HEAVILY. I didn't get a warm fuzzy about the ARM toolkit for my application. I'm sure it's got its niches, but just imagine what could happen if some more work went into it to make it truly appealing to a wider market...

  • Can Web Performance Test work on AJAX or Javascript Project which will show only one URL for all the pages?

    Hi there,
    I'm working on testing an AJAX and JavaScript project which has several pages, all at the same URL. I need to check some attributes on the page or parameters passed by AJAX or JavaScript. Can Web Performance Test do what I want?
    Thanks,
    

    Hello,
    Thank you for your post.
    A web performance test is used to test whether a server responds correctly and whether the response is consistent with what we expect, and we test the response speed, stability and scalability.
    The Web Performance Test Recorder records both AJAX requests and requests that were submitted from JavaScript, but a web test does not execute JavaScript. I am afraid that you can’t use a web test to check parameters passed by AJAX or JavaScript.
    Please see:
    Web Performance Test Engine Overview
    About JavaScript and ActiveX Controls in Web Performance Tests
    From the first link, “Client-side scripting that sets parameter values or results in additional HTTP requests, such as AJAX, does affect the load on the server and might require you to manually modify the Web Performance Test to simulate the scripting.”
    If you want to execute the function typically performed by script in a web test, you need to accomplish it in a coded web performance test or a web performance test plug-in. Please see:
     How to: Create a Coded Web Performance Test
    How to: Create a Web Performance Test Plug-In
    I am not sure what the ‘some attribute on the page’ means. If you mean that you want to test the controls on the page, you can do a coded UI test, which can test that the user interface for an application functions correctly. The coded UI test performs actions on the user interface controls for an application and verifies that the correct controls are displayed with the correct values. You can refer to this article for detailed information about coded UI tests:
    Verifying Code by Using Coded User Interface Tests
    Best regards,
    Amanda Zhu [MSFT]
    MSDN Community Support | Feedback to us

  • Log file sync top event during performance test -av 36ms

    Hi,
    During the performance test for our product before deployment into production, I see "log file sync" on top with Avg wait (ms) being 36, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
    log buffer space                      4,161         558    134    3.5 Configurat
    Although testers are not complaining about the performance of the application, we, DBAs, are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" is having such a slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
        Transactions:               18.4
    Some of the top background wait events:
    ^LBackground Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
    Backup: sbtinfo2                      1     0          1     500      0.0     .0
    During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
    First Second Diff
    1st 2nd
    Event Wait Class Waits Time(s) Avg Time(ms) %DB time Event Wait Class Waits Time(s) Avg Time
    (ms) %DB time
    SQL*Net more data from client Network 2,133,486 1,270.7 0.6 61.24 log file sync Commit 208,355 7,407.6
    35.6 46.57
    CPU time N/A 487.1 N/A 23.48 direct path write User I/O 646,849 3,604.7
    5.6 22.66
    log file sync Commit 99,459 129.5 1.3 6.24 log file parallel write System I/O 208,564 2,528.4
    12.1 15.90
    log file parallel write System I/O 100,732 126.6 1.3 6.10 CPU time N/A 1,599.3
    N/A 10.06
    SQL*Net more data to client Network 451,810 103.1 0.2 4.97 db file parallel write System I/O 4,264 784.7 1
    84.0 4.93
    -direct path write User I/O 121,044 52.5 0.4 2.53 -SQL*Net more data from client Network 7,407,435 279.7
    0.0 1.76
    -db file parallel write System I/O 986 22.8 23.1 1.10 -SQL*Net more data to client Network 2,714,916 64.6
    0.0 0.41
    {code}
    To sum it up:
    1. Why is the IO response taking such a hit during the new perf test? Please suggest.
    2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer as the number of CPUs on the host is only 4.
    {code}
    select *from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.
    Edited by: Kunwar on May 18, 2013 2:20 PM

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
    latch: cache buffers lru c            3     0          0       1      0.0     .0
    2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
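    As an aside on the commit-frequency point, here is a hypothetical JDBC sketch (placeholder connect string and table, not the original application code) contrasting a per-row commit with committing once per batch; each commit is one "log file sync" wait:
    {code}
    // Hypothetical sketch only: commit once per batch instead of once per row.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class CommitFrequency {
        public static void main(String[] args) throws Exception {
            Class.forName("oracle.jdbc.driver.OracleDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbhost:1521:PERFDB1", "user", "password"); // placeholders
            con.setAutoCommit(false);
            PreparedStatement ps = con.prepareStatement(
                    "insert into t (id, payload) values (?, ?)");                // placeholder table

            int rows = 100000;
            int batchSize = 500;   // commit every 500 rows rather than every row
            for (int i = 0; i < rows; i++) {
                ps.setInt(1, i);
                ps.setString(2, "data-" + i);
                ps.executeUpdate();
                if ((i + 1) % batchSize == 0) {
                    con.commit();  // one "log file sync" per batch instead of per row
                }
            }
            con.commit();          // final commit for any remaining rows
            ps.close();
            con.close();
        }
    }
    {code}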
    Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
    3. Your IO subsystem hosting the online redo log files can be a limiting factor.
    We don't know anything about your online redo log configuration
    Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
    04:13:26 perf_monitor@PERFDB1> select *from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
             3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
    Edited by: Kunwar on May 18, 2013 2:46 PM

  • ActiveX Control recording but not playing back in a VS 2012 Web Performance Test

    I am testing an application that loads an Active X control for entering some login information. While recording, this control works fine and I am able to enter information and it is recorded. However on playback in the playback window it has the error "An
    add-on for this website failed to run. Check the security settings in Internet Options for potential conflicts."
    Windows 7 OS, 64-bit
    IE 8, recorded on the 32-bit version
    I see no obvious security conflicts. This runs fine when navigating through manually and recording. It is only during playback where this error occurs.

    Hi IndyJason,
    Thank you for posting in MSDN forum.
    As you said, you could not play back the ActiveX control successfully in the web performance test. I know that the ActiveX controls in your Web application will fall into three categories, depending on how they work at the HTTP level.
    Reference:
    https://msdn.microsoft.com/en-us/library/ms404678%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
    I found that this confusion may come from the browser preview in the Web test result viewer. The Web Performance Test Results Viewer does not allow script or ActiveX controls to run, because the Web performance test engine does not run them, and for security reasons.
    For more information, please refer to the following blog (Web Tests Can Succeed Even Though It Appears They Failed):
    http://blogs.msdn.com/edglas/archive/2010/03/24/web-test-authoring-and-debugging-techniques-for-visual-studio-2010.aspx
    Best Regards,

  • [Ann] FirstACT 2.2 released for SOAP performance testing

    Empirix Releases FirstACT 2.2 for Performance Testing of SOAP-based Web Services
    FirstACT 2.2 is available for free evaluation immediately at http://www.empirix.com/TryFirstACT
    Waltham, MA -- June 5, 2002 -- Empirix Inc., the leading provider of test and monitoring
    solutions for Web, voice and network applications, today announced FirstACT™ 2.2,
    the fifth release of the industry's first and most comprehensive automated performance
    testing tool for Web Services.
    As enterprise organizations are beginning to adopt Web Services, the types of Web
    Services being developed and their testing needs are in a state of change. Major
    software testing solution vendor, Empirix is committed to ensuring that organizations
    developing enterprise software using Web Services can continue to verify the performance
    of their enterprise as quickly and cost effectively as possible regardless of the
    architecture they are built upon.
    Working with organizations developing Web Services, we have observed several emerging
    trends. First, organizations are tending to develop Web Services that transfer a
    sizable amount of data within each transaction by passing in user-defined XML data
    types as part of the SOAP request. As a result, they require a solution that automatically
    generates SOAP requests using XML data types and allows them to be quickly customized.
    Second, organizations require highly scalable test solutions. Many organizations
    are using Web Services to exchange information between business partners and have
    Service Level Agreements (SLAs) in place specifying guaranteed performance metrics.
    Organizations need to performance test to these SLAs to avoid financial and business
    penalties. Finally, many organizations just beginning to use automated testing tools
    for Web Services have already made significant investments in making SOAP scripts
    by hand. They would like to import SOAP requests into an automated testing tool
    for regression testing.
    Empirix FirstACT 2.2 meets or exceeds the testing needs of these emerging trends
    in Web Services testing by offering the following new functionality:
    1. Automatic and customizable test script generation for XML data types – FirstACT
    2.2 will generate complete test scripts and allow the user to graphically customize
    test data without requiring programming. FirstACT now includes a simple-to-use XML
    editor for data entry or more advanced SOAP request customization.
    2. Scalability Guarantee – FirstACT 2.2 has been designed to be highly scalable to
    performance test Web Services. Customers using FirstACT today regularly simulate
    between several hundred to several thousand users. Empirix will guarantee to
    performance test the numbers of users an organization needs to test to meet its business
    needs.
    3. Importing Existing Test Scripts – FirstACT 2.2 can now import existing SOAP requests
    directly into the tool on a user-by-user basis. As a result, some users simulated
    can import SOAP requests; others can be automatically generated by FirstACT.
    Web Services facilitates the easy exchange of business-critical data and information
    across heterogeneous network systems. Gartner estimates that 75% of all businesses
    with more than $100 million in sales will have begun to develop Web Services applications
    or will have deployed a production system using Web Services technology by the end
    of 2002. As part of this move to Web Services, "vendors are moving forward with
    the technology and architecture elements underlying a Web Services application model,"
    Gartner reports. While this model holds exciting potential, the added protocol layers
    necessary to implement it can have a serious impact on application performance, causing
    delays in development and in the retrieval of information for end users.
    "Today Web Services play an increasingly prominent but changing role in the success
    of enterprise software projects, but they can only deliver on their promise if they
    perform reliably," said Steven Kolak, FirstACT product manager at Empirix. "With
    its graphical user interface and extensive test-case generation capability, FirstACT
    is the first Web Services testing tool that can be used by software developers or
    QA test engineers. FirstACT tests the performance and functionality of Web Services
    whether they are built upon J2EE, .NET, or other technologies. FirstACT 2.2 provides
    the most comprehensive Web Services testing solution that meets or exceeds the changing
    demands of organizations testing Web Services for performance, functionality, and
    functionality under load.”
    Learn more?
    Read about Empirix FirstACT at http://www.empirix.com/FirstACT. FirstACT 2.2 is
    available for free evaluation immediately at http://www.empirix.com/TryFirstACT.
    Pricing starts at $4,995. For additional information, call (781) 993-8500.

    Simon,
    I will admit, I almost never use SQL Developer. I have been a long time Toad user, but for this tool, I fumbled around a bit and got everything up and running quickly.
    That said, I tried the new GeoRaptor tool using this tutorial (which I think is close enough to get the gist). http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=A_Gentle_Introduction:_Create_Table,_Metadata_Registration,_Indexing_and_Mapping
    As I stumble around it, I'll try and leave some feedback, and probably ask some rather stupid questions.
    Thanks for the effort,
    Bryan

  • RMS performance testing using HP Loadrunner

    Hi,
    We are currently planning on how to do our performance testing of Oracle Retail. We are planning to use HP Loadrunner and use different virtual users for Java, GUI, webservices and database requests. Have anyone here done performance testing in RMS using HP Loadrunner and what kind of setup did you use?
    Any tips would be greatly appreciated.
    Best regards,
    Gustav

    Hi Gustav
    How is your performance testing of Oracle Retail ? Did you get good results ?
    I need to start a RMS/RPM performance testing project and I would like to know how to implement an appropriated structure . Any informations about servers , protocols , tools used to simulate a real production environment would be very appreciated.
    Thanks & Regards,
    Roberto

  • Performance Testing OID

    Hi, I have a customer who wants to performance test OID.
    Their actual installed data will be 600,000 users; however, they want to query using only a sample of 10-20 different usernames. My question is: will caching within the database and/or LDAP make the results erroneous?
    Regards
    Kevin

    Kevin,
    what do you mean by '.. make the results erroneous'? If you're talking about a performance test you want to achieve the best possible result, right? So why don't you want to use either the DB cache or the OID server cache to achieve maximum performance?
    What is the use case scenario that you only want to have a very small subset of entries to be used?
    Please take a look at Tuning Considerations for the Directory in http://download-west.oracle.com/docs/cd/B14099_14/idmanage.1012/b14082/tuning.htm#i1004959 for some details.
    You might want to take a look at the http://www.mindcraft.com benchmark to get some other infos.
    regards,
    --Olaf

  • Performance testing programs

    I was wondering what program is the best for getting precise info on the performance of my system, and also for getting true reads of, say, how fast my RAM is really working or how much cache my HDD really has.
    I'm starting to wonder if this kind of general posting is allowed in this forum; I post here because it's the place for the mobo I have.
    emilio

    hi
    you got SiSoft Sandra but she lies a lot.
    then you got Performance Test which does a test on the entire computer:
    gfx, harddrive, cd, memory and so on, and the results can be saved to a file.
    if you install that game i can do that too and save the result in a file so you have something to compare with.
    i believe this is a question that should be in aus.
    bye

  • How to have continouse performance testing during development phase?

    I understand that for corporate projects there are always requirements like roughly how long a certain process can take.
    Is there any rough guideline as to how much time a certain process will take?
    And is there any way I can have something like JMeter that will do constant monitoring of the performance as I start development?
    It can go down to method level, but should also be able to show total time taken for a certain module or action etc. (a rough sketch of this idea follows below).
    I think it is something like continuous integration, like CruiseControl, but more for continuous performance evaluation.
    Any advice anyone
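    For the method-level piece, a very crude timing harness is sometimes enough to get started (a hypothetical sketch, not a substitute for JMeter or a profiler; the names and budget here are illustrative only):

    // Hypothetical sketch: a crude method-level timing guard.
    public class TimingGuard {

        interface Task {
            void run() throws Exception;
        }

        // Runs the task, reports elapsed time, and flags it if it exceeds the budget.
        static long time(String name, long budgetMillis, Task task) throws Exception {
            long start = System.currentTimeMillis();
            task.run();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(name + ": " + elapsed + " ms (budget " + budgetMillis + " ms)"
                    + (elapsed > budgetMillis ? "  <-- OVER BUDGET" : ""));
            return elapsed;
        }

        public static void main(String[] args) throws Exception {
            // Wrap the module or action you care about.
            time("sample module", 500, new Task() {
                public void run() {
                    for (int i = 0; i < 1000000; i++) {
                        Math.sqrt(i);   // placeholder work
                    }
                }
            });
        }
    }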

    Just a thought: how useful would continuous performance testing be? First off, I wouldn't have the main build include performance tests. What if the build fails on performance? It isn't necessarily something you'll fix quickly, so you could be stuck with a broken build for quite some time, which means either your devs won't be committing code, or they'll be committing code on a broken build, which kind-of negates the point of CI. So you'd have a nightly build for performance, or something. Then what? Someone comes in in the morning and sees the performance build failed, and fixes it? Hmmm, maybe your corporate culture is different, but we've got a nightly metrics build that sits broken for weeks on end before someone looks at it. As long as the master builds are OK, nobody cares. Given that performance problems might well take several weeks of dedicated time to fix, I reckon they're far more likely to be fixed as a result of failing acceptance tests, rather than the CI environment reporting them.
    Just my opinions, of course

  • Object performance,need detail of Oracle's internal performance testing

    hi Geoff,
    from your previous answer:
    From Oracle's comprehensive internal performance testing,
    the object implementation of an application consistently matches
    the relational implementation.
    For some operations, using objects performed better. Can you tell us more about the results of the testing?
    we need to get the comparison between the two implementation,
    in order to know in which case should use object implementation
    and in which case should not.
    thanks
    Ray

    Ray,
    Before I name some cases where objects perform better, I still
    want to emphasize that your application object model comes
    first. Here are the cases:
    1. If you have nested objects (e.g., customer with address
    attribute), querying the nested objects would work faster than
    relational access.
    2. If you have containment objects (e.g., customer with VARRAY
    of phone numbers), querying these contained objects would also
    work faster than relational.
    What is your application? What does your object model look like?
    The more specific your question is, the better I can answer it.
    Regards,
    Geoff

  • BTW Performance Test

    Hi
    When I do the performance test / further diagnostics at http://speedtest.btwholesale.com/, I get the following:
     I'm using a HH5 (which seems to drop connection at times) and Infinity option 1.
    Any ideas?
    Thanks

    I think the extra step of looking up the profile is a lookup to some management database.  It does seem to go wrong for people sometimes, usually just for a few hours or so, but sometimes for extended periods.
    As far as I know (but ???) it doesn't have anything to do with the equipment in use; and there is not much you can do about it.  At least with an HH5 you can look at the sync speed in your stats; the profile should be just a fraction below the sync speed.  (96.79%?)

  • SAP Performance Testing - Manual or Automated?

    Our organization is attempting to develop a regular performance testing effort.  Everything we have read points to using a tool, such as LoadRunner, to do performance testing.  However, we're just starting and simply want to baseline several transactions, jobs, programs, etc. (fewer than 30 items).  We have tools to monitor the backend results and grab metrics, but no tools to automate the testing itself.  Does anyone do their performance testing manually?  What are some advantages to doing this?

    Hi Yogi,
    I think HP LoadRunner is one of the best tools for SAP performance testing. I did it for many years. It is now included with Solution Manager. Here is the link for HP Mercury regarding performance testing.
    https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-126_4000_100__
    Please check this site as well, it has lot of valuable information.
    http://www.wilsonmar.com/1loadrun.htm
    Regards, Nabi.

  • Oracle Lite Performance Testing

    Hi All,
    Is there a way to sync more than one client from the same PC/laptop to simulate a concurrent sync for performance testing purposes? Greg Rekuonas did mention some script available from Oracle Support for this. Oracle (on raising an SR) unfortunately says that they cannot make it available to customers.
    I have found a way out, but it involves changing one of Olite's DLLs, and I was wondering if anyone else knew a better solution (other than finding 50-100 different laptops/handhelds and making them all synchronize one by one).
    Cheers,
    Vikrant

    Hi Greg,
    Thanks for the reply. I will try and re-raise it with them, though they have closed my SR with a "Not Entitled" status.
    In case anyone is interested, I finally got the trick done by using the following:
    1) Open ocapi.dll in your $MOBILE_HOME/bin folder. (Right where the msync_java.dll is there)
    2) Open the above file in a text editor like EditPlus (note this is a binary file).
    3) Search for the string OracleLiteSync.
    4) Change the string to something else (do not add any characters, just change any one letter).
    5) Save this file as ocapi.dll in a different directory.
    I wrote a standard Java program to programmatically trigger a synchronization. To handle concurrent synchronizations, I create multiple copies of ocapi.dll with different strings for OracleLiteSync.
    Each of my simulation "clients" (basically Java threads) was invoked with a different path, each pointing to a different ocapi.dll. This seemed to work, and the "Mutex" error went away.
    The explanation is that basically OracleLite uses the Kernel32 API CreateMutexA to create a mutex to ensure that two synchronizations cannot happen on the same machine. The mutex name is OracleLiteSync and the code is in ocapi.dll.
    By changing the mutex name and using a different version of ocapi.dll (Oracle Lite uses JNI which reads DLL from the PATH environment variable) you can fool Oracle Lite into doing different synchronizations.
    This did give me some errors when it tried to apply the downloaded changes simultaneously in different ODBs, but it allowed me to load the mobile server concurrently with synchronization sessions.
    Cheers,
    Vikrant

  • First QAZone Webinar of 2008:  Performance Testing Secrets Unveiled

    Greetings,
    In case you haven't noticed, our first QAZone Webinar of 2008 (Secrets of Performance Testing Unveiled) is available... Some very interesting topics were raised in this session, so I definitely recommend checking out this Webinar if you are interested in learning more about performance testing in general and/or Empirix's e-Load product...
    http://qazone.empirix.com/thread.jspa?threadID=573&tstart=0
    Kind regards,
    ------QAZoneModerator

    Thank you for attending our first QAZone Webinar last week. Also, many thanks again to Jim Bernesser, Min-Gu Lee and Colin Mason for participating in our eLoad Advanced User panel. The feedback that we have received for this event was great, and you all did a great job presenting and sharing your experience and knowledge!
    If you missed this session, or would like to view the recorded event again, you can do so by following the link below. In addition, as we didn't have the time to answer all the great questions that you had, we have compiled a list with your questions and their corresponding answers, and posted them there as well.
    (Please, note that you will need to use your QAZone account to access this area).
    http://qazone.empirix.com/entry.jspa?categoryID=3&externalID=346
    Also, feel free to reply to this thread if you have any additional questions that you would like to ask, or any other feedback that you would like to share with our QAZone community.
    I am looking forward to our next QAZone event and seeing you there!
