Monitoring WebLogic During Performance Test

I am running performance tests on our WebLogic Server and would like to be able
to pull the Thread Queue Length, Thread Queue Throughput and Java Heap % statistics
that are displayed in the WebLogic Console and put them in a text file every so
often during the test. This way I can use those statistics with others I collect.
I assume that if the console is able to display this information, it must
be available programmatically. Does anyone know how I could do this?
Thanks In Advance

"Dimitri" == Dimitri Rakitine <[email protected]> writes:
Dimitri> I don't know of a recommended way in 5.1. You may find this (crude)
Dimitri> example helpful in illustrating how to obtain runtime info from 5.1:
Dimitri> http://dima.dhs.org/misc/WLStats.jsp
Wow, thanks! I didn't realize how easy it was to get to some of this
information. This is a real help.
Dimitri> Also, there is a severinfo utility on http://developer.bea.com which
Dimitri> can be used to obtain server runtime information.
I couldn't find this utility. Any tips on how to find it on this site?
Thanks, this stuff is a real help.
-Ben
Dimitri> It is much easier in 6.x with its JMX architecture.
Dimitri> Benjamin Simon <[email protected]> wrote:
>>>>>>> "Dimitri" == Dimitri Rakitine <[email protected]> writes:
Dimitri> It is certainly possible, but ways of doing this are
Dimitri> different for 5.1 and 6.x - which version do you use?
>> Ooh, this is something I want to be able to do. What's the recommended
>> way to do this for 5.1?
>> Thanks,
>> Ben
Dimitri> Robin Conklin <[email protected]> wrote:
>> >> I am running performance tests on our WebLogic Server and would like
>> >> to be able to pull the Thread Queue Length, Thread Queue Throughput
>> >> and Java Heap % statistics that are displayed in the WebLogic Console
>> >> and put them in a text file every so often during the test. This way
>> >> I can use those statistics with others I collect. I assume that if
>> >> the console is able to display this information that it must be
>> >> available. Does anyone know how I could do this?
>> >> Thanks In Advance
Dimitri> --
Dimitri> Dimitri
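For anyone landing on this thread with a newer release: below is a minimal sketch of pulling queue length, throughput and heap figures over JMX and appending them to a text file every few seconds during a test. The connector URL, credentials, ObjectName patterns and attribute names are assumptions for illustration only; verify them against the runtime MBean reference for your WebLogic release.

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.Date;
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class WlsStatsToFile {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port and credentials -- adjust for your domain.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try (PrintWriter out = new PrintWriter(new FileWriter("wls-stats.txt", true))) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Assumed ObjectName patterns; check your release before relying on them.
            ObjectName pool = new ObjectName("com.bea:Type=ThreadPoolRuntime,*");
            ObjectName jvm = new ObjectName("com.bea:Type=JVMRuntime,*");
            for (int i = 0; i < 60; i++) {                     // one sample every 10 seconds
                for (ObjectName name : conn.queryNames(pool, null)) {
                    out.printf("%tT %s queueLength=%s throughput=%s%n", new Date(), name,
                            conn.getAttribute(name, "QueueLength"),
                            conn.getAttribute(name, "Throughput"));
                }
                for (ObjectName name : conn.queryNames(jvm, null)) {
                    out.printf("%tT %s heapFree=%s heapSize=%s%n", new Date(), name,
                            conn.getAttribute(name, "HeapFreeCurrent"),
                            conn.getAttribute(name, "HeapSizeCurrent"));
                }
                out.flush();
                Thread.sleep(10_000);
            }
        } finally {
            connector.close();
        }
    }
}
```

The output is plain text with one line per MBean per sample, so it can be merged with whatever other statistics you collect during the test run.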

Similar Messages

  • Log file sync top event during performance test - avg 36ms

    Hi,
During the performance test of our product before deployment into production, I see "log file sync" at the top, with an average wait of 36 ms, which I feel is too high.
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                       208,327       7,406     36   46.6 Commit
    direct path write                   646,833       3,604      6   22.7 User I/O
    DB CPU                                            1,599          10.1
    direct path read temp             1,321,596         619      0    3.9 User I/O
log buffer space                      4,161         558    134    3.5 Configurat
Although testers are not complaining about the performance of the application, we DBAs are expected to be proactive about any bad signals from the DB.
    I am not able to figure out why "log file sync" is having such slow response.
    Below is the snapshot from the load profile.
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108127 16-May-13 20:15:22       105       6.5
      End Snap:    108140 16-May-13 23:30:29       156       8.9
       Elapsed:              195.11 (mins)
       DB Time:              265.09 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,136M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,168M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                1.4                0.1       0.02       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          607,512.1           33,092.1
       Logical reads:            3,900.4              212.5
       Block changes:            1,381.4               75.3
      Physical reads:              134.5                7.3
    Physical writes:              134.0                7.3
          User calls:              145.5                7.9
              Parses:               24.6                1.3
         Hard parses:                7.9                0.4
    W/A MB processed:          915,418.7           49,864.2
              Logons:                0.1                0.0
            Executes:               85.2                4.6
           Rollbacks:                0.0                0.0
    Transactions:               18.4
Some of the top background wait events:
Background Wait Events       DB/Inst: Snaps: 108127-108140
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write         208,563     0      2,528      12      1.0   66.4
    db file parallel write            4,264     0        785     184      0.0   20.6
    Backup: sbtbackup                     1     0        516  516177      0.0   13.6
    control file parallel writ        4,436     0         97      22      0.0    2.6
    log file sequential read          6,922     0         95      14      0.0    2.5
    Log archive I/O                   6,820     0         48       7      0.0    1.3
    os thread startup                   432     0         26      60      0.0     .7
    Backup: sbtclose2                     1     0         10   10094      0.0     .3
    db file sequential read           2,585     0          8       3      0.0     .2
    db file single write                560     0          3       6      0.0     .1
    log file sync                        28     0          1      53      0.0     .0
    control file sequential re       36,326     0          1       0      0.2     .0
    log file switch completion            4     0          1     207      0.0     .0
    buffer busy waits                     5     0          1     116      0.0     .0
    LGWR wait for redo copy             924     0          1       1      0.0     .0
    log file single write                56     0          1       9      0.0     .0
Backup: sbtinfo2                      1     0          1     500      0.0     .0
During a previous perf test, things didn't look this bad for "log file sync". A few sections from the comparison report (awrddprt.sql):
    {code}
    Workload Comparison
    ~~~~~~~~~~~~~~~~~~~ 1st Per Sec 2nd Per Sec %Diff 1st Per Txn 2nd Per Txn %Diff
    DB time: 0.78 1.36 74.36 0.02 0.07 250.00
    CPU time: 0.18 0.14 -22.22 0.00 0.01 100.00
    Redo size: 573,678.11 607,512.05 5.90 15,101.84 33,092.08 119.13
    Logical reads: 4,374.04 3,900.38 -10.83 115.14 212.46 84.52
    Block changes: 1,593.38 1,381.41 -13.30 41.95 75.25 79.38
    Physical reads: 76.44 134.54 76.01 2.01 7.33 264.68
    Physical writes: 110.43 134.00 21.34 2.91 7.30 150.86
    User calls: 197.62 145.46 -26.39 5.20 7.92 52.31
    Parses: 7.28 24.55 237.23 0.19 1.34 605.26
    Hard parses: 0.00 7.88 100.00 0.00 0.43 100.00
    Sorts: 3.88 4.90 26.29 0.10 0.27 170.00
    Logons: 0.09 0.08 -11.11 0.00 0.00 0.00
    Executes: 126.69 85.19 -32.76 3.34 4.64 38.92
    Transactions: 37.99 18.36 -51.67
First (1st run) vs Second (2nd run)
Event  Wait Class  Waits  Time(s)  Avg Time(ms)  %DB time   |   Event  Wait Class  Waits  Time(s)  Avg Time(ms)  %DB time
SQL*Net more data from client  Network  2,133,486  1,270.7  0.6  61.24   |   log file sync  Commit  208,355  7,407.6  35.6  46.57
CPU time  N/A  487.1  N/A  23.48   |   direct path write  User I/O  646,849  3,604.7  5.6  22.66
log file sync  Commit  99,459  129.5  1.3  6.24   |   log file parallel write  System I/O  208,564  2,528.4  12.1  15.90
log file parallel write  System I/O  100,732  126.6  1.3  6.10   |   CPU time  N/A  1,599.3  N/A  10.06
SQL*Net more data to client  Network  451,810  103.1  0.2  4.97   |   db file parallel write  System I/O  4,264  784.7  184.0  4.93
-direct path write  User I/O  121,044  52.5  0.4  2.53   |   -SQL*Net more data from client  Network  7,407,435  279.7  0.0  1.76
-db file parallel write  System I/O  986  22.8  23.1  1.10   |   -SQL*Net more data to client  Network  2,714,916  64.6  0.0  0.41
    {code}
To sum it up:
1. Why is the I/O response taking such a hit during the new perf test? Please suggest.
2. Does the number of DB writers impact the "log file sync" wait event? We have only one DB writer as the host has only 4 CPUs.
    {code}
    select *from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE 11.1.0.7.0 Production
    TNS for HPUX: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    {code}
    Please let me know if you would like to see any other stats.
    Edited by: Kunwar on May 18, 2013 2:20 PM

    1. A snapshot interval of 3 hours always generates meaningless results
    Below are some details from the 1 hour interval AWR report.
    Platform                         CPUs Cores Sockets Memory(GB)
    HP-UX IA (64-bit)                   4     4       3      31.95
                  Snap Id      Snap Time      Sessions Curs/Sess
    Begin Snap:    108129 16-May-13 20:45:32       140       8.0
      End Snap:    108133 16-May-13 21:45:53       150       8.8
       Elapsed:               60.35 (mins)
       DB Time:              140.49 (mins)
    Cache Sizes                       Begin        End
    ~~~~~~~~~~~                  ---------- ----------
                   Buffer Cache:     1,168M     1,168M  Std Block Size:         8K
               Shared Pool Size:     1,120M     1,120M      Log Buffer:    16,640K
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                2.3                0.1       0.03       0.01
           DB CPU(s):                0.1                0.0       0.00       0.00
           Redo size:          719,553.5           34,374.6
       Logical reads:            4,017.4              191.9
       Block changes:            1,521.1               72.7
      Physical reads:              136.9                6.5
    Physical writes:              158.3                7.6
          User calls:              167.0                8.0
              Parses:               25.8                1.2
         Hard parses:                8.9                0.4
    W/A MB processed:          406,220.0           19,406.0
              Logons:                0.1                0.0
            Executes:               88.4                4.2
           Rollbacks:                0.0                0.0
        Transactions:               20.9
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    log file sync                        73,761       6,740     91   80.0 Commit
    log buffer space                      3,581         541    151    6.4 Configurat
    DB CPU                                              348           4.1
    direct path write                   238,962         241      1    2.9 User I/O
    direct path read temp               487,874         174      0    2.1 User I/O
    Background Wait Events       DB/Inst: Snaps: 108129-108133
    -> ordered by wait time desc, waits desc (idle events last)
    -> Only events with Total Wait Time (s) >= .001 are shown
    -> %Timeouts: value of 0 indicates value was < .5%.  Value of null is truly 0
                                                                 Avg
                                            %Time Total Wait    wait    Waits   % bg
    Event                             Waits -outs   Time (s)    (ms)     /txn   time
    log file parallel write          61,049     0      1,891      31      0.8   87.8
    db file parallel write            1,590     0        251     158      0.0   11.6
    control file parallel writ        1,372     0         56      41      0.0    2.6
    log file sequential read          2,473     0         50      20      0.0    2.3
    Log archive I/O                   2,436     0         20       8      0.0     .9
    os thread startup                   135     0          8      60      0.0     .4
    db file sequential read             668     0          4       6      0.0     .2
    db file single write                200     0          2       9      0.0     .1
    log file sync                         8     0          1     152      0.0     .1
    log file single write                20     0          0      21      0.0     .0
    control file sequential re       11,218     0          0       0      0.1     .0
    buffer busy waits                     2     0          0     161      0.0     .0
    direct path write                     6     0          0      37      0.0     .0
    LGWR wait for redo copy             380     0          0       0      0.0     .0
    log buffer space                      1     0          0      89      0.0     .0
latch: cache buffers lru c            3     0          0       1      0.0     .0
2. The log file sync is a result of commit --> you are committing too often, maybe even every individual record.
Thanks for the explanation. Actually my question is WHY it is so slow (avg wait of 91 ms).
3. Your I/O subsystem hosting the online redo log files can be a limiting factor.
    We don't know anything about your online redo log configuration
    Below is my redo log configuration.
        GROUP# STATUS  TYPE    MEMBER                                                       IS_
             1         ONLINE  /oradata/fs01/PERFDB1/redo_1a.log                           NO
             1         ONLINE  /oradata/fs02/PERFDB1/redo_1b.log                           NO
             2         ONLINE  /oradata/fs01/PERFDB1/redo_2a.log                           NO
             2         ONLINE  /oradata/fs02/PERFDB1/redo_2b.log                           NO
             3         ONLINE  /oradata/fs01/PERFDB1/redo_3a.log                           NO
             3         ONLINE  /oradata/fs02/PERFDB1/redo_3b.log                           NO
    6 rows selected.
    04:13:14 perf_monitor@PERFDB1> col FIRST_CHANGE# for 999999999999999999
    04:13:26 perf_monitor@PERFDB1> select *from v$log;
        GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS                 FIRST_CHANGE# FIRST_TIME
             1          1      40689  524288000          2 YES INACTIVE              13026185905545 18-MAY-13 01:00
             2          1      40690  524288000          2 YES INACTIVE              13026185931010 18-MAY-13 03:32
         3          1      40691  524288000          2 NO  CURRENT               13026185933550 18-MAY-13 04:00
Edited by: Kunwar on May 18, 2013 2:46 PM
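For what it's worth, one quick way to see how much of the load is commit-driven is to sample the system counters over a fixed window and compute the averages yourself. A minimal JDBC sketch along those lines, assuming a monitoring account with SELECT on the V$ views (the connect string, credentials and 60-second window are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LogFileSyncSampler {
    // Placeholder connect string and credentials; requires the Oracle JDBC driver.
    private static final String URL = "jdbc:oracle:thin:@dbhost:1521/PERFDB1";

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(URL, "perf_monitor", "secret")) {
            long[] first = snapshot(con);
            Thread.sleep(60_000);                       // 60-second sample window
            long[] second = snapshot(con);

            long waits = second[0] - first[0];
            long microsWaited = second[1] - first[1];
            long commits = second[2] - first[2];

            System.out.printf("commits/s: %.1f, log file sync waits/s: %.1f, avg wait: %.1f ms%n",
                    commits / 60.0, waits / 60.0,
                    waits == 0 ? 0.0 : microsWaited / 1000.0 / waits);
        }
    }

    // Returns {total waits, time waited in microseconds, user commits} at one point in time.
    private static long[] snapshot(Connection con) throws Exception {
        long[] v = new long[3];
        try (PreparedStatement ps = con.prepareStatement(
                "select total_waits, time_waited_micro from v$system_event where event = 'log file sync'");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) { v[0] = rs.getLong(1); v[1] = rs.getLong(2); }
        }
        try (PreparedStatement ps = con.prepareStatement(
                "select value from v$sysstat where name = 'user commits'");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) { v[2] = rs.getLong(1); }
        }
        return v;
    }
}
```

If the commit rate per second tracks the insert/update rate almost one for one, the application is committing per row and batching commits is the first thing to look at, before blaming the redo log I/O path.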

  • How/What can we monitor in DB02 for performance testing?

We are conducting a performance/load test before go-live of HCM in SAP.
I am looking at DB02 but have no idea what to monitor for; there is so much.
Any suggestions would certainly be appreciated ASAP.
Thank you so much!
    maria

    Well, rather than trying to explain in the forum, I suggest you download the document "The SAP DBA Cockpit for Microsoft SQL Server" from:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/1062428c-f1df-2910-b08f-c322feddcd10?quicklink=index&overridelayout=true
This will explain everything you need to know about the DB02 transaction for MS SQL Server. You can read through it to find out what monitoring can be done.
    You can check the following Areas:
    Performance (ST04)
    Space (DB02)
    Configuration = Some ST04 detail screens plus new ones
    Diagnostics = Some ST04 and DB02 detail screens plus new ones
    Frankly, Space and performance are the main areas.
    Refer to the guide and this should solve all your doubts.
    Regards,
    Shitij

  • Monitor weblogic cluster server performance

I am using the "System Monitor" tool from http://dev2dev.bea.com/utilitiestools/monitoring.html to monitor my weblogic server performance. For a single server, it is working well.
However I can't use it to monitor the cluster server performance. For example:
Servers 1 and 2 host a clustered WebLogic server, APPS, listening on port 8888. On server 1, I also have the admin server, ADMIN_SVR1, listening on port 232.
    I can dump the admin server Mbean information as:
    $java com.iternum.jmx.monitor.SystemMonitor -url t3://localhost:232 -user admin -password xxxxx -mBeanType ExecuteQueueRuntime -of performance.txt
However I can't dump the cluster server performance data as follows:
    $java com.iternum.jmx.monitor.SystemMonitor -url t3://localhost:8888 -user admin -password xxxxxx -mBeanType ExecuteQueueRuntime -of performance.txt
Did I make any mistake? Do you have a better tool to monitor WebLogic performance?
    Thanks in advance,
    carl

    how to attach GC file?

  • How to have continouse performance testing during development phase?

I understand that for corporate projects there are always requirements for roughly how long a certain process can take.
Is there any rough guideline for how much time a certain process should take?
And is there any way I can have something like JMeter that constantly monitors performance as I develop?
It could go down to method level, but should also be able to show the total time taken for a certain module or action, etc.
I think it is something like continuous integration, like CruiseControl, but for continuous performance evaluation.
Any advice, anyone?

Just a thought: how useful would continuous performance testing be? First off, I wouldn't have the main build include performance tests. What if the build fails on performance? It isn't necessarily something you'll fix quickly, so you could be stuck with a broken build for quite some time, which means either your devs won't be committing code, or they'll be committing code on a broken build, which rather negates the point of CI. So you'd have a nightly build for performance, or something. Then what? Someone comes in in the morning, sees the performance build failed, and fixes it? Hmmm, maybe your corporate culture is different, but we've got a nightly metrics build that sits broken for weeks on end before someone looks at it. As long as the master builds are OK, nobody cares. Given that performance problems might well take several weeks of dedicated time to fix, I reckon they're far more likely to be fixed as a result of failing acceptance tests, rather than the CI environment reporting them.
    Just my opinions, of course
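That said, if you do want a lightweight, method-level check wired into a build, a plain unit test with a time budget is often enough to catch gross regressions. A minimal sketch, assuming JUnit 4; the module call, warm-up count, iteration count and 200 ms budget are all made-up placeholders to be tuned to your own requirement:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class OrderModulePerformanceTest {

    // Hypothetical operation under test; call into the real module here.
    private void processOrder() {
        // ...
    }

    @Test
    public void processOrderStaysWithinBudget() {
        // Warm up so JIT compilation does not distort the measurement.
        for (int i = 0; i < 100; i++) {
            processOrder();
        }
        long start = System.nanoTime();
        int iterations = 1_000;
        for (int i = 0; i < iterations; i++) {
            processOrder();
        }
        long avgMillis = (System.nanoTime() - start) / iterations / 1_000_000;
        // Arbitrary 200 ms budget per call; adjust to your requirement.
        assertTrue("processOrder averaged " + avgMillis + " ms", avgMillis < 200);
    }
}
```

Whether such a test lives in the main build or a separate nightly job is exactly the trade-off discussed above.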

  • LGWR strange behaviour during the performance test!

    Hi,
We are doing a performance test on our new schema, and during the test we see that the LGWR database process is always the top resource consumer.
I see that Commit activity appears continuously in Grid Control together with System I/O.
I checked the redo logs and we have them in 5 groups of 200MB each.
Any idea what causes this? I guess the application is doing one commit after each statement?
    Any comment about how can we improve this?
    I wanted to attach the screenshot from the Grid Controller but it seems we can't do that on this Forum!
    Thanks in advance for the help!
    / Hes  

    http://docs.oracle.com/cd/E16655_01/server.121/e17633/process.htm#CNCPT89084
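If the application really is committing after every statement, batching the commits is usually the first thing to try, since LGWR is then posted once per batch instead of once per row. A minimal JDBC sketch of the idea; the table name, column and batch size are illustrative placeholders:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class BatchedInsert {
    // Commit once per batch instead of once per row, so LGWR is posted
    // far less often and 'log file sync' waits drop accordingly.
    public static void load(Connection con, List<String> rows) throws Exception {
        con.setAutoCommit(false);
        int batchSize = 500;                       // illustrative batch size
        try (PreparedStatement ps = con.prepareStatement(
                "insert into demo_table (payload) values (?)")) {
            int n = 0;
            for (String row : rows) {
                ps.setString(1, row);
                ps.addBatch();
                if (++n % batchSize == 0) {
                    ps.executeBatch();
                    con.commit();                  // one commit per 500 rows
                }
            }
            ps.executeBatch();
            con.commit();                          // flush the remainder
        }
    }
}
```

How far you can batch depends on the application's consistency requirements; if each row must be durable on its own, the commit rate cannot be reduced and the redo log I/O path has to absorb it.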

  • FORMS CRASHES (FRM-92101) ON AS 10.1.2.0.2 DURING LOAD PERFORMANCE TESTING

    Hiya
We have been doing load performance testing using the testing tool QALoad on our Forms 10g application. After about 56 virtual users (sessions) have logged in to our application, if a new user tries to log in, Forms crashes. As soon as we encounter the FRM-92101 error, no more new Forms sessions are able to start.
The load testing software starts up each process very quickly, about every 10 seconds.
The very first form that appears is the login form of our application, so we get the FRM-92101 error message before the login screen even appears.
However, those users who have already logged in to our application are able to carry on with their tasks.
We are using Application Server 10g 10.1.2.0.2. I have checked the status of the Application Server through the Oracle Enterprise Manager Console. The OC4J instance is up and running. Also, the server's configuration is pretty good: it is running on 2 CPUs (AMD Opteron 3GHz) and has 32GB of memory. The memory used by those 56 sessions is less than 3GB.
The Application Server is running on Microsoft Windows Server 2003 64-bit Enterprise Edition.
    Any help will be much appreciated.
    Cheers
    Mayur

    Hi Shekhawat
    In Windows Registry go to
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems
In the right-hand panel you will find a string value named Windows. Double-click on it. In the pop-up window you will see a string similar to the following:
    %SystemRoot%\system32\csrss.exe ObjectDirectory=\Windows SharedSection=1024,20480,768 Windows=On SubSystemType=Windows ServerDll=basesrv,1 ServerDll=winsrv:UserServerDllInitialization,3 ServerDll=winsrv:ConServerDllInitialization,2 ProfileControl=Off MaxRequestThreads=16
    Now if you read it carefully in the above string, you will find this parameter
    SharedSection=1024,20480,768
    Here SharedSection specifies the system and desktop heaps using the following format:
    SharedSection=xxxx,yyyy,zzzz
    The default values are 1024,3072,512
    All the values are in Kilobytes (KB)
    xxxx = System-wide Heapsize. There is no need to modify this value.
    yyyy = IO Desktop Heapsize. This is the heap for memory objects in the IO Desktop.
    zzzz = Non-IO Desktop Heapsize. This is the heap for memory objects in the Non-IO Desktop.
    On our server the values were as follows :
    1024,20480,768
We changed the size of the Non-IO desktop heapsize from 768 to 5112. With 5112 KB we managed to test our application for up to 495 virtual users.
    Cheers
    Mayur

  • WLS dies during stress testing

    We're using JMeter to send continuous requests to the server. It creates
    1500 threads and each requests Hello.jsp (see below) 500 times.
    The server runs fine for about 6.5 minutes (serviced ~120000 requests)
    before locking up. We let it sit for a while and about 4 minutes later, it
    starts spewing out the following (bottom part of weblogic.log)
    Fri May 19 10:48:37 GMT-04:00 2000:<I> <WebLogicServer> WebLogic Server
    started
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: init
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    verbose initialized to: true
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    packagePrefix initialized to: jsp_servlet
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    compileCommand initialized to: /opt/Solaris_JDK_1.2.2_05a/bin/javac
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    srcCompiler initialized to weblogic.jspc
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    superclass initialized to null
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    workingDir initialized to: /opt/ejbserver/weblogic/myserver/classfiles
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp: param
    pageCheckSeconds initialized to: 1
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> *.jsp:
    initialization complete
    Fri May 19 10:48:47 GMT-04:00 2000:<I> <ServletContext-General> Generated
    java file:
    /opt/ejbserver/weblogic/myserver/classfiles/jsp_servlet/testpages/hello.java
    Fri May 19 10:51:10 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=15123,localport=7111]''
    Fri May 19 10:51:10 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=15172,localport=7111]''
    Fri May 19 10:51:10 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=11312,localport=7111]''
    Fri May 19 10:51:20 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=14368,localport=7111]''
    Fri May 19 10:51:20 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=14385,localport=7111]''
    Fri May 19 10:51:24 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=15294,localport=7111]''
    Fri May 19 10:51:27 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=15291,localport=7111]''
    Fri May 19 10:51:55 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=28068,localport=7111]''
    Fri May 19 10:51:56 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=28385,localport=7111]''
    Fri May 19 10:52:00 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=28421,localport=7111]''
    Fri May 19 10:53:45 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=63711,localport=7111]''
    Fri May 19 10:53:45 GMT-04:00 2000:<W> <ListenThread> Connection rejected:
    'Login timed out after: '5000' ms on socket:
    'Socket[addr=47.187.230.161/47.187.230.161,port=63709,localport=7111]''
    Fri May 19 10:58:26 GMT-04:00 2000:<E> <WebLogicServer> Exception invoking
    weblogic.time.server.ScheduledTrigger@d2573f
    java.lang.RuntimeException: <UTIL> UnsyncCircularQueue was full!
    at java.lang.Throwable.fillInStackTrace(Native Method)
    at java.lang.Throwable.fillInStackTrace(Compiled Code)
    at java.lang.Throwable.<init>(Compiled Code)
    at java.lang.Exception.<init>(Compiled Code)
    at java.lang.RuntimeException.<init>(Compiled Code)
    at weblogic.utils.UnsyncCircularQueue.expandQueue(Compiled Code)
    at weblogic.utils.UnsyncCircularQueue.put(Compiled Code)
    at weblogic.kernel.ExecuteThreadManager.execute(Compiled Code)
    at weblogic.kernel.Kernel.execute(Compiled Code)
    at weblogic.time.common.internal.ScheduledTrigger.private_execute(Compiled
    Code)
    at weblogic.time.server.ScheduledTrigger.private_execute(Compiled Code)
    at weblogic.time.common.internal.TimeTable.execute(Compiled Code)
    at weblogic.time.common.internal.TimeEventGenerator.run(Compiled Code)
    at java.lang.Thread.run(Thread.java:479)
    We get a steady stream of this exact same RuntimeException for about 20
    minutes before the whole thing dies.
    Any ideas would be great!
    Jason
    The setup is the following:
    Sun E250 with 1 CPU at 400MHz, 512 MB
    Solaris 2.6
    WLS 5.1 with SP3
    Solaris Performance Pack is used
    Sun JDK 1.2.2
    min heap size is 128MB and max as 128MB with options
    "-native -verbosegc"
other changes to the OS:
    rlim_fd_max=4096
    rlim_fd_cur=4096
    tcp_close_wait_interval=10000
    tcp_flush_fin_wait_2=10000
    No paging/swapping takes place during the test.
    Hello.jsp is as follows:
    <html>
    <head>
    <title>A simple JSP file</title>
    </head>
    <body>
    <%
    out.print("<p><b>Hello World!</b>");
    %>
    </body>
    </html>

    Comments inline.
    Cheers - Wei
    Jason <[email protected]> wrote in message
    news:[email protected]...
> Thanks, that definitely sheds some light on the problem.
> We monitored the server and came up with the following. The queue is indeed
> growing and probably accounts for the eventual failure (does anyone know
> what the limit is?). The throughput is constant but just can't keep up with
> the requests. Another thing is that the heap slowly increases along with
> the queue and it took a while for it to hit 100% and have GC occur.
I am not sure about the limit. Either 32K or 64K, I guess. If you monitor
your queue length frequently, you might have a chance to spot that limit.
Every request needs memory to hold it, so the heap slowly increases along
with the queue.
> Now, we duplicated the test but accessed a Servlet directly instead of a
> JSP. This servlet simply did some out.print(...) statements to display the
> same page as Hello.jsp. This ran without a hitch. We had about 5 times
> more throughput and the queue wasn't growing out of control. Also, the heap
> would reach 100% much quicker and GC occurred more frequently. I can't see
> how handling a JSP request is that much more resource intensive than a
> Servlet request...
> We then tried the original test again (accessing JSP) but this time removed
> Service Pack 3. We were running straight WLS 5.1 (no service packs).
> Everything behaved nicely (just like the Servlet test did). Could it then
> be that SP3 introduced a bug relating to how JSP requests are handled?
Well, if this was the case, you might need to address this with support.
> Finally, all the tests ran without changing the default number of execute
> threads (15). I don't think increasing the number will help seeing how
> short-lived these requests are. I think more execute threads would just
> increase the context-switching.
I agree. I've seen 15 to 60 threads in production. Might be worth a try to
use a larger number of threads in your test to see if it alleviates the
problem.
> Thanks again,
My pleasure.
> Jason
    Wei Guan <[email protected]> wrote in message
    news:[email protected]...
WebLogic has its internal request queue to enqueue all requests. The request
refers to a unit of work for WebLogic to process, such as a triggered time
service, an HTTP request, etc. The finite number of worker threads defined in
your weblogic.properties file will dequeue the requests from the request
queue and process the work.
There is an upper limit on the size of this queue. The message you got means
that your test hit that limit. In practice, if there are lots of requests
inside that queue, every request needs to wait a long time before being
processed (the queue is FIFO, I believe). Open your WebLogic Console and
monitor the size of the queue, in your test or in your production; if you
see the size of the queue increase constantly, there is something wrong.
You might need to increase the number of threads to handle the load. If
tuning up the number of threads doesn't help, you might need to use weblogic
clustering (or add more servers to your cluster) to share the load.
WebLogic engineers might be able to lift the upper limit on the size of
the queue. However, a larger upper limit on the queue might not help you
in practice. Tuning your configuration and making sure the number of
requests inside the queue does not increase constantly might be a good
practice to follow.
    My 2 cents.
    Cheers - Wei
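The behaviour Wei describes (a fixed pool of worker threads fed by a finite request queue that overflows once requests arrive faster than they are serviced) can be reproduced with a few lines of standard Java. This is only an analogy for the WebLogic execute queue, not its actual implementation; the thread count, queue capacity and service time below are made up:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueOverflowDemo {
    public static void main(String[] args) throws Exception {
        // 15 workers and a queue capacity of 1000, loosely mirroring the
        // default execute thread count and a finite request queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                15, 15, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1000));

        int rejected = 0;
        for (int i = 0; i < 100_000; i++) {
            try {
                pool.execute(() -> {
                    try {
                        Thread.sleep(50);          // each "request" takes 50 ms to service
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                // Queue full: the analogue of "UnsyncCircularQueue was full!"
                rejected++;
            }
            if (i % 10_000 == 0) {
                System.out.println("submitted=" + i + " queued=" + pool.getQueue().size()
                        + " rejected=" + rejected);
            }
        }
        pool.shutdown();
    }
}
```

With 15 workers at 50 ms per task the pool services roughly 300 requests per second, so submitting them as fast as possible fills the queue and triggers rejections. Whether the real fix is more threads, a faster page, or more servers depends on where the service time actually goes, as discussed above.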

  • How to measure memory consumption during unit tests?

    Hello,
    I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
    I would like to apply this when running a batch of some test suite targetting tests that exercise memory-sensitive operations.
    The intent is, to verify that a modification of code in this area does not introduce regression (raise) of memory consumption.
    I would include it in the nightly build, and monitor evolution of summary figure (a-ah, the "userAccount" test suite consumed 615Mb last night, compared to 500Mb the night before... What did we check-in yesterday?)
    Running on Win32, the system-level info of memory consumed is known not to be accurate.
    Using perfmon is more accurate but it seems an overkill - plus it's difficult to automate, you have to attach it to an existing process...
I've looked at the hprof agent included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular there isn't a "summary line" of the total memory consumed...
    What tools do you use/suggest?

> However this requires manual code in my unit test
> classes themselves, e.g. in my setUp/tearDown
> methods.
> I was expecting something more orthogonal to the
> tests, that I could activate or not depending on the
> purpose of the test.
Some IDEs display memory usage and execution time for each test/group of tests.
> If I don't have another option, OK I'll wire my own
> pre/post memory counting, maybe using AOP, and will
> activate memory measurement only when needed.
If you need to check the memory used, I would do this.
You can do the same thing with AOP. Unless you are already using an AOP library, I doubt it is worth the additional effort.
> Have you actually used your suggestion to automate
> memory consumption measurement as part of daily builds?
Yes, but I have fewer than a dozen tests which fail if the memory consumption is significantly different.
I have more tests which fail if the execution time is significantly different.
Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
> Plus, I did not understand your suggestion, can you elaborate?
> - I first assumed you meant freeMemory(), which, as
> you suggest, is not accurate, since it returns "an
> approximation of [available memory]"
freeMemory gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
> - I re-read it and now assume you do mean
> totalMemory(), which unfortunately will grow only
> when more memory than the initial heap setting is
> needed.
More memory is needed when more memory is used. Unless your test uses a significant amount of memory there is no way to measure it reliably, i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
> - Eventually, I may need to include calls to
> System.gc() but I seem to remember it is best-effort
> only (endless discussion) and may not help accuracy.
If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
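A minimal sketch of the total-minus-free measurement wrapped around a test method, following the suggestion above. It assumes JUnit 4; the operation under test and the 10 MB budget are made-up placeholders, and the figures stay approximate because a GC can still run at any point:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class MemoryUsageTest {

    // Hypothetical memory-sensitive operation under test.
    private Object buildUserAccountCache() {
        return new byte[1024 * 1024];              // stand-in for real work
    }

    private long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        // Best-effort: encourage a collection so the reading is less noisy.
        System.gc();
        Thread.yield();
        return rt.totalMemory() - rt.freeMemory();
    }

    @Test
    public void cacheStaysWithinMemoryBudget() {
        long before = usedMemory();
        Object cache = buildUserAccountCache();    // keep a reference so it is not collected
        long after = usedMemory();
        long usedMb = (after - before) / (1024 * 1024);
        // Arbitrary 10 MB budget; tune to the expected footprint.
        assertTrue("used " + usedMb + " MB (cache=" + (cache != null) + ")", usedMb < 10);
    }
}
```

Run in the nightly build, the failing test names then point directly at the suites whose footprint grew, which answers the "what did we check in yesterday?" question above.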

  • MDM performance test

    Hi all,
Has anyone done a performance test on MDM? Any test script would be really helpful!
    Thanks in advance.

    Hi,
MDM performance needs to be monitored in basically two main MDM activities when it comes to working with MDM in real time:
    1) MDM Importing
    2) MDM Syndicatiion
    MDM Importing:
- Whenever you are using MDM in an IT landscape there will be many source systems from which data is fed into MDM.
- This data may be imported either manually or, as in most cases, automatically using the MDM Import Server.
- In either case you will have to deal with thousands of records.
- Importing such a large set of records affects performance at every stage of the import (at the record matching steps, etc., up to the final import).
- The performance impact slows the import process right from opening a saved map, or even field and value mapping.
- And above all, if an exception causes a single record in the import chunk to fail, then the entire set fails.
    - So care must be taken to improve the performance of importing records.
    - Which is done taking into consideration the following:
    Areas to focus on improving performance during Importing:
    1) The chunk size which defines the number of records to be imported at a time
2) The number of records processed in parallel (MDIS settings)
    3) The number of Fields Mapped
    4) Number of Matching field used
5) The number of Validations and Assignments running on the records, etc.
    MDM Syndication:
- Along similar lines, MDM performance should also be monitored when harmonising data to the target systems.
- Selecting the records to be syndicated using search criteria, suppressing unchanged records, key mapping, etc., all affect MDM performance adversely.
You can in general create test scripts that monitor MDM system performance in these two prime areas as well as in general MDM activities, using different OSes and different sizing, as MDM performance will differ according to the configuration used in each case, which includes the disk space, cache and RAM as well.
You can create test scripts for the following activities using different configurations, and thus compare and test the performance of the MDM system under different conditions, with the expected output checked against the actual output:
- Testing mounting/unmounting a repository
    - Loading/Unloading Rep
    - Export/Import Schema
- Archiving/Unarchiving rep
    - Creating Master/Slave rep
- Importing different sets of records using the manual and automatic methods
- Syndicating different sets of records using the manual and automatic methods
    Hope it helped
    Thanks & Regards
    Simona Pinto

  • Best practice to monitor 10gR3 OSB performance using JMX API?

    Hi guys,
    I need some advice on the best practice to monitor 10gR3 OSB performance using JMX API.
    Jus to show I have done my home work, I managed to get the JMX sample code from
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/example.html#wp1109828
    working.
    The following is the list of options I am think about:
* Setup: I have a cluster of 1 admin server with 2 managed servers; each managed server runs an instance of OSB
    * What I try to achieve:
    - use JMX API to collect OSB stats data periodically as in sample code above then save data as a record to a
         database table
    Options/ideas:
    1. Simplest approach: Run the modified version of JMX sample on the Admin Server to save stats data to database
    regularly. I can't see problems with this one ...
    2. Use WLI to schedule the Task of collecting stats data regularly. May be overkill if option 1 above is good for production
    3. Deploy a simple web app on Admin Server, say a simple servlet that displays a simple page to start/stop and configure
    data collection interval for the timer
    What approach would you experts recommend?
BTW, the caveats of using JMX in http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/jmx_monitoring/concepts.html#wp1095673
    says
         Oracle strongly discourages using this API in a concurrent manner with more than one thread or process. This is because a reset performed in
         one thread or process is not visible to another threads or processes. This caveat also applies to resets performed from the Monitoring Dashboard of
         the Oracle Service Bus Console, as such resets are not visible to this API.
    Under what scenario would I be breaking this rule? I am a little worried about its statement
         discourages using this API in a concurrent manner with more than one thread or process
    Thanks in advance,
    Sam

    Hi Manoj,
Thanks for getting back. I am afraid configuring the aggregation interval from the Dashboard doesn't solve the problem, as I need to collect stats data per endpoint URI on an hourly or daily basis, then output to CSV files so line graphs can be drawn for chosen applications.
    Just for those who may be interested. It's not possible to use SQL to query database tables to extract OSB stats for a specified time period, say 9am - 5pm. I raised a support case already and the response I got back is 'No'.
    That means using JMX API will be the way to go :)
    Has anyone actually done this kind of OSB stats report and care to give some pointers?
I am thinking of using 7 or 1 days as the aggregation interval set in the Dashboard of the OSB admin console, then collecting stats data hourly using JMX (as described in the previous link) with the WebLogic Server JMX Timer Service as described in
    http://download.oracle.com/docs/cd/E12840_01/wls/docs103/jmxinst/timer.html instead of Java's Timer class.
    Not sure if this is the best practice.
    Thanks,
    Regards,
    Sam
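Whichever scheduler you settle on, the collection loop itself stays small. Below is a rough sketch of option 1 using a plain ScheduledExecutorService instead of the JMX timer; the connector URL, credentials, ObjectName and attribute are placeholders only, since the real OSB statistics come back through the ServiceDomainMBean API shown in the sample you linked rather than a simple getAttribute call:

```java
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.Date;
import java.util.Hashtable;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class OsbStatsCollector {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for the admin server.
        JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                "/jndi/weblogic.management.mbeanservers.domainruntime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Connect, read, append, disconnect: a single collector thread, so the
            // "no concurrent use" caveat from the OSB docs is respected.
            try (PrintWriter csv = new PrintWriter(new FileWriter("osb-stats.csv", true))) {
                JMXConnector connector = JMXConnectorFactory.connect(url, env);
                try {
                    MBeanServerConnection conn = connector.getMBeanServerConnection();
                    // Placeholder ObjectName and attribute; replace with the
                    // ServiceDomainMBean calls from the sample code.
                    ObjectName name = new ObjectName("com.bea:Name=SomeRuntime,Type=SomeRuntime");
                    Object value = conn.getAttribute(name, "SomeCounter");
                    csv.printf("%tF %<tT,%s%n", new Date(), value);
                } finally {
                    connector.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}
```

Because everything funnels through one scheduled thread, this also sidesteps the multi-thread caveat quoted above; the same loop body could equally be driven by the WebLogic JMX Timer Service if you prefer to keep it inside the server.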

  • Errors encountered during performance testing

    While doing some stress testing on our WLS server we kept seeing the
    following errors:
    1) NullPointerException on NTSocketMuxer, error log follows:
    Tue Feb 22 05:08:28 EST 2000:<E> <NTSockMux> failure in processSockets()
    loop: GetData: fd=16764 numBytes=23
    Tue Feb 22 05:08:28 EST 2000:<E> <NTSockMux>
    java.lang.NullPointerException: null native pointer - socket was closed
    at weblogic.socket.NTSocketMuxer.initiateIO(Native Method)
    at weblogic.socket.NTSocketMuxer.processSockets(NTSocketMuxer.java,
    Compiled Code)
    at
    weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:19)
    at weblogic.t3.srvr.ExecuteThread.run(ExecuteThread.java, Compiled
    Code)
    This error doesn't seem to have any immediate impact, although it can't
    be a good thing and it seems to happen even at times when we are not
    doing stress testing. Also, it is very frequent that we see the above
    messages. We are running WLS 4.51 w/ Service Pack 5. I am wondering if
    this is a problem w/ Service Pack 5 since I don't see this on another
    machine running WLS 4.51 with Service Pack 4. Both are using JDK 1.2.2.
    2) Connection Rejected, error log follows:
    Tue Feb 22 07:08:16 EST 2000:<W> <ListenThread> Connection rejected:
    Login timed out after 15000 msec. The socket came from
    [host=207.17.47.141,port=11929,localport=443] See property
    weblogic.login.readTimeoutMillis.
    I know how to adjust this for this one. My questions on this are:
    If I set my readTimeoutMillisSSL (SSL in this case) from 15000 to 30000
    what does this exactly mean. Does this mean that instead of allowing a
    max 15 seconds for a connection to be established, now I am allowing 30
    seconds? Also, is this only for the initial connection establishment (ie
user login), or does this parameter affect other aspects of the
    connection later on? What negative side effects would I encounter if I
    set this to 60000 (1 minute)?
    Finally, what can I do so that a connection does not take over 15
    seconds to establish? Note this is not the norm, just happens more often
    during stress testing.
    3) Creating & Closing connection & DGCserver, log follows:
    Tue Feb 22 07:08:44 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:45 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:45 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Creating connection to
    144.14.157.204/144.14.157.204 2864292845294268830
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:51 EST 2000:<I> <DGCserver> tried to renew lease for
    lost ref: 902
    Tue Feb 22 07:08:52 EST 2000:<I> <RJVM> Heartbeat/PublicKey resend
    detected
    Tue Feb 22 07:08:54 EST 2000:<I> <DGCserver> tried to renew lease for
    lost ref: 904
    Tue Feb 22 07:08:58 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Why the constant trying to create connection, close connection?
    What is DGCserver?
    4) Ignoring message from a previous JVM, log follows:
    Tue Feb 22 08:29:50 EST 2000:<E> <RJVM> Ignoring message from a previous
    generation: JVMMessage from 8729925219143181234C138.8.222.21 to
    -3465797227003769874C192.168.100.61 with CMD_ONE_WAY, prtNum=6, ack=103,
    seqNum=1384
    Tue Feb 22 08:30:15 EST 2000:<I> <HTTPTunneling> Sending DEAD response
    What does this mean?
    5) PeerGoneExceptions
    What causes these?
    Our environment is set up as follows:
    WLS Server
    WLS 4.51 w/ Service Pack 5
    NativeIO = true
    ExecuteThreadCount = 40
    readTimeoutMillis=5000
    readTimeoutMillisSSL=10000
    Dell Pentium III 600 w/ 512 MB memory
    NT 4.0
    JavaSoft 1.2.2
    -ms128 -mx350
    WLS Client
    Java Application
    t3s and https (using WLS RMI)
    JavaSoft 1.1.7b
    typically Pentium 200 MHz or better w/ 64MB or more
    Basically our clients connect to our WLS server using RMI. Each client
    also has a callback object where the server sends event notification
    back to the clients. Most of the communication is back through these
    client callback objects. Its similar to a stock trading application in
    that 1 client incoming requests will generate 200 outgoing events (if
    for example there are 200 users on the system). The above observations
    where made while 25 very active users where on the system.
    Thanks very much for any and all help,
    Edwin Marcial
    Continental Power Exchange

    Hi Kim,
Thanks for the response, but which problem in particular did you solve? I've
listed a couple here.
    Edwin
    kim hyun chan wrote:
hi,
I met a problem like yours before,
so I reduced executeThreadCount from 50 to 20, and that solved my problem.
    "Edwin Marcial" <[email protected]> wrote in message
    news:[email protected]...
    While doing some stress testing on our WLS server we kept seeing the
    following errors:
    1) NullPointerException on NTSocketMuxer, error log follows:
    Tue Feb 22 05:08:28 EST 2000:<E> <NTSockMux> failure in processSockets()
    loop: GetData: fd=16764 numBytes=23
    Tue Feb 22 05:08:28 EST 2000:<E> <NTSockMux>
    java.lang.NullPointerException: null native pointer - socket was closed
    at weblogic.socket.NTSocketMuxer.initiateIO(Native Method)
    at weblogic.socket.NTSocketMuxer.processSockets(NTSocketMuxer.java,
    Compiled Code)
    at
    weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:19)
    at weblogic.t3.srvr.ExecuteThread.run(ExecuteThread.java, Compiled
    Code)
    This error doesn't seem to have any immediate impact, although it can't
    be a good thing and it seems to happen even at times when we are not
    doing stress testing. Also, it is very frequent that we see the above
    messages. We are running WLS 4.51 w/ Service Pack 5. I am wondering if
    this is a problem w/ Service Pack 5 since I don't see this on another
    machine running WLS 4.51 with Service Pack 4. Both are using JDK 1.2.2.
    2) Connection Rejected, error log follows:
    Tue Feb 22 07:08:16 EST 2000:<W> <ListenThread> Connection rejected:
    Login timed out after 15000 msec. The socket came from
    [host=207.17.47.141,port=11929,localport=443] See property
    weblogic.login.readTimeoutMillis.
    I know how to adjust this for this one. My questions on this are:
    If I set my readTimeoutMillisSSL (SSL in this case) from 15000 to 30000
    what does this exactly mean. Does this mean that instead of allowing a
    max 15 seconds for a connection to be established, now I am allowing 30
    seconds? Also, is this only for the initial connection establishment (ie
    user login), or does this parameter effect other aspects of the
    connection later on? What negative side effects would I encounter if I
    set this to 60000 (1 minute)?
    Finally, what can I do so that a connection does not take over 15
    seconds to establish? Note this is not the norm, just happens more often
    during stress testing.
    3) Creating & Closing connection & DGCserver, log follows:
    Tue Feb 22 07:08:44 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:45 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:45 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Creating connection to
    144.14.157.204/144.14.157.204 2864292845294268830
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:50 EST 2000:<I> <RJVM> Closing connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Tue Feb 22 07:08:51 EST 2000:<I> <DGCserver> tried to renew lease for
    lost ref: 902
    Tue Feb 22 07:08:52 EST 2000:<I> <RJVM> Heartbeat/PublicKey resend
    detected
    Tue Feb 22 07:08:54 EST 2000:<I> <DGCserver> tried to renew lease for
    lost ref: 904
    Tue Feb 22 07:08:58 EST 2000:<I> <RJVM> Creating connection to
    138.8.81.19/138.8.81.19 5634583086356155101
    Why are connections constantly being created and closed?
    What is DGCserver?
    4) Ignoring message from a previous JVM, log follows:
    Tue Feb 22 08:29:50 EST 2000:<E> <RJVM> Ignoring message from a previous
    generation: JVMMessage from 8729925219143181234C138.8.222.21 to
    -3465797227003769874C192.168.100.61 with CMD_ONE_WAY, prtNum=6, ack=103,
    seqNum=1384
    Tue Feb 22 08:30:15 EST 2000:<I> <HTTPTunneling> Sending DEAD response
    What does this mean?
    5) PeerGoneExceptions
    What causes these?
    Our environment is set up as follows:
    WLS Server
    WLS 4.51 w/ Service Pack 5
    NativeIO = true
    ExecuteThreadCount = 40
    readTimeoutMillis=5000
    readTimeoutMillisSSL=10000
    Dell Pentium III 600 w/ 512 MB memory
    NT 4.0
    JavaSoft 1.2.2
    -ms128 -mx350
    WLS Client
    Java Application
    t3s and https (using WLS RMI)
    JavaSoft 1.1.7b
    typically Pentium 200 MHz or better w/ 64MB or more
    Basically our clients connect to our WLS server using RMI. Each client
    also has a callback object through which the server sends event
    notifications back to the clients. Most of the communication goes back
    through these client callback objects. It's similar to a stock trading
    application in that one incoming client request will generate 200 outgoing
    events (if, for example, there are 200 users on the system). The above
    observations were made while 25 very active users were on the system.
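    To make the pattern concrete, here is a minimal sketch of the callback
    arrangement described above, written against plain java.rmi purely for
    illustration; the real system uses WebLogic RMI over t3s/https, and every
    name below is made up:
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Implemented by each client; the server holds a reference and pushes
    // event notifications to it.
    interface TradeEventListener extends Remote {
        void onEvent(String event) throws RemoteException;
    }

    // Implemented by the server; a client registers its callback object once
    // it has connected, then submits requests that fan out as events to all
    // registered listeners (roughly 200 onEvent() calls per request here).
    interface TradeService extends Remote {
        void register(TradeEventListener listener) throws RemoteException;
        void submitRequest(String request) throws RemoteException;
    }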
    Thanks very much for any and all help,
    Edwin Marcial
    Continental Power Exchange

  • ActiveX Control recording but not playing back in a VS 2012 Web Performance Test

    I am testing an application that loads an ActiveX control for entering
    some login information. While recording, this control works fine; I am
    able to enter information and it is recorded. However, on playback, the
    playback window shows the error "An add-on for this website failed to run.
    Check the security settings in Internet Options for potential conflicts."
    Windows 7 OS, 64-bit
    IE 8, recorded on the 32-bit version
    I see no obvious security conflicts. The application runs fine when
    navigating through it manually and recording. It is only during playback
    that this error occurs.

    Hi IndyJason,
    Thank you for posting in MSDN forum.
    As you said, you could not play back the ActiveX control successfully in the web performance test. The ActiveX controls in your Web application will fall into one of three categories, depending on how they work at the HTTP level.
    Reference:
    https://msdn.microsoft.com/en-us/library/ms404678%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
    This confusion may come from the browser preview in the Web test results viewer. The Web Performance Test Results Viewer does not allow script or ActiveX controls to run, because the Web performance test engine does not run them and for security reasons.
    For more information, please refer to the following blog (see the "Web Tests Can Succeed Even Though It Appears They Failed" part):
    http://blogs.msdn.com/edglas/archive/2010/03/24/web-test-authoring-and-debugging-techniques-for-visual-studio-2010.aspx
    Best Regards,

  • Monitoring WebLogic Portal Components

    Hi All,
    Consider me a novice to Portal land; hence pardon me for any irrelevant or obvious queries.
    My requirement is to monitor the health/performance of WebLogic Portal Server and its components such as Desktops, Entitlements, Content Management, Portlets, Java Page Flows. This has to be accomplished using Java.
    The metrics that I want to capture for each of these components are Response Time and # of times invoked.
    I would appreciate inputs on how I can write an application that retrieves all this information, as well as insights into any other metrics or components I should consider in order to monitor a Portal Server holistically.
    Please provide your inputs.
    Thanks!
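    One way to start on the "using Java" part is JMX. Below is a minimal sketch
    of a standalone client that connects to a WebLogic runtime MBean server and
    lists runtime MBeans whose names mention portlets. It assumes WebLogic 9.x
    or later with the WebLogic JMX client jars on the classpath; the host, port,
    and credentials are placeholders, and the Portal-specific runtime MBeans and
    their attributes (response time, invocation counts) vary by release, so
    treat the name filter as illustrative only:
    import java.util.Hashtable;
    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    public class PortletMetricsProbe {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details -- point these at your server.
            JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                    "/jndi/weblogic.management.mbeanservers.runtime");
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(Context.SECURITY_CREDENTIALS, "password");
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                    "weblogic.management.remote");

            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                // List every runtime MBean in the com.bea domain and print the
                // ones whose names mention "Portlet"; inspect these to find the
                // attributes carrying response times and invocation counts on
                // your release.
                Set<ObjectName> names = conn.queryNames(new ObjectName("com.bea:*"), null);
                for (ObjectName name : names) {
                    if (name.getCanonicalName().contains("Portlet")) {
                        System.out.println(name);
                    }
                }
            } finally {
                connector.close();
            }
        }
    }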

    I am also a newbie to portal performance. I would like to extend this a bit
    further:
    1- does it make sense to monitor portlet performance by user?
    2- is there any way to correlate portlet performance with a particular
    transaction? Does this even make sense, or would there just be too many
    transactions?
    3- what are the important portal server metrics to watch (i.e. cache metrics)?
    Thanks - Raj
    <Arvind N> wrote in message news:[email protected]..
    Hi All,
    Consider me a novice to Portal land; hence pardon me for any irrelevant or
    obvious queries.
    My requirement is to monitor the health/performance of WebLogic Portal
    Server and its components such as Desktops, Entitlements, Content
    Management, Portlets, Java Page Flows. This has to be accomplished using
    Java.
    The metrics that I want to capture for each of these components are
    Response Time and # of times invoked.
    Inputs on how I can write an application that can retrieve all this
    information and also insights to any other metrics, components I should be
    considering in order to be able to monitor a Portal Server holistically.
    Please provide your inputs.
    Thanks!

  • [Ann] FirstACT 2.2 released for SOAP performance testing

    Empirix Releases FirstACT 2.2 for Performance Testing of SOAP-based Web Services
    FirstACT 2.2 is available for free evaluation immediately at http://www.empirix.com/TryFirstACT
    Waltham, MA -- June 5, 2002 -- Empirix Inc., the leading provider of test and monitoring
    solutions for Web, voice and network applications, today announced FirstACT™ 2.2,
    the fifth release of the industry's first and most comprehensive automated performance
    testing tool for Web Services.
    As enterprise organizations begin to adopt Web Services, the types of Web
    Services being developed and their testing needs are changing. Empirix, a major
    software testing solution vendor, is committed to ensuring that organizations
    developing enterprise software using Web Services can continue to verify the performance
    of their enterprise as quickly and cost-effectively as possible, regardless of the
    architecture it is built upon.
    Working with organizations developing Web Services, we have observed several emerging
    trends. First, organizations are tending to develop Web Services that transfer a
    sizable amount of data within each transaction by passing in user-defined XML data
    types as part of the SOAP request. As a result, they require a solution that automatically
    generates SOAP requests using XML data types and allows them to be quickly customized.
    Second, organizations require highly scalable test solutions. Many organizations
    are using Web Services to exchange information between business partners and have
    Service Level Agreements (SLAs) in place specifying guaranteed performance metrics.
    Organizations need to performance test to these SLAs to avoid financial and business
    penalties. Finally, many organizations just beginning to use automated testing tools
    for Web Services have already made significant investments in making SOAP scripts
    by hand. They would like to import SOAP requests into an automated testing tool
    for regression testing.
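    As a concrete illustration of the first trend, here is a small, entirely
    made-up SOAP 1.1 request whose body carries a user-defined XML type (an
    order document); the service name and schema are hypothetical:
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <submitOrder xmlns="http://example.com/orders">
          <order>
            <customerId>C-1042</customerId>
            <lines>
              <line sku="ABC-1" qty="3"/>
              <line sku="XYZ-9" qty="1"/>
            </lines>
          </order>
        </submitOrder>
      </soap:Body>
    </soap:Envelope>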
    Empirix FirstACT 2.2 meets or exceeds the testing needs of these emerging trends
    in Web Services testing by offering the following new functionality:
    1. Automatic and customizable test script generation for XML data types – FirstACT
    2.2 will generate complete test scripts and allow the user to graphically customize
    test data without requiring programming. FirstACT now includes a simple-to-use XML
    editor for data entry or more advanced SOAP request customization.
    2. Scalability Guarantee – FirstACT 2.2 has been designed to be highly scalable to
    performance test Web Services. Customers using FirstACT today regularly simulate
    between several hundred and several thousand users. Empirix will guarantee to
    performance test the number of users an organization needs to test to meet its business
    needs.
    3. Importing Existing Test Scripts – FirstACT 2.2 can now import existing SOAP requests
    directly into the tool on a user-by-user basis. As a result, some simulated users
    can use imported SOAP requests, while others can be generated automatically by FirstACT.
    Web Services facilitates the easy exchange of business-critical data and information
    across heterogeneous network systems. Gartner estimates that 75% of all businesses
    with more than $100 million in sales will have begun to develop Web Services applications
    or will have deployed a production system using Web Services technology by the end
    of 2002. As part of this move to Web Services, "vendors are moving forward with
    the technology and architecture elements underlying a Web Services application model,"
    Gartner reports. While this model holds exciting potential, the added protocol layers
    necessary to implement it can have a serious impact on application performance, causing
    delays in development and in the retrieval of information for end users.
    "Today Web Services play an increasingly prominent but changing role in the success
    of enterprise software projects, but they can only deliver on their promise if they
    perform reliably," said Steven Kolak, FirstACT product manager at Empirix. "With
    its graphical user interface and extensive test-case generation capability, FirstACT
    is the first Web Services testing tool that can be used by software developers or
    QA test engineers. FirstACT tests the performance and functionality of Web Services
    whether they are built upon J2EE, .NET, or other technologies. FirstACT 2.2 provides
    the most comprehensive Web Services testing solution that meets or exceeds the changing
    demands of organizations testing Web Services for performance, functionality, and
    functionality under load.”
    Learn more:
    Read about Empirix FirstACT at http://www.empirix.com/FirstACT. FirstACT 2.2 is
    available for free evaluation immediately at http://www.empirix.com/TryFirstACT.
    Pricing starts at $4,995. For additional information, call (781) 993-8500.

    Simon,
    I will admit, I almost never use SQL Developer. I have been a long time Toad user, but for this tool, I fumbled around a bit and got everything up and running quickly.
    That said, I tried the new GeoRaptor tool using this tutorial (which is I think is close enough to get the jist). http://sourceforge.net/apps/mediawiki/georaptor/index.php?title=A_Gentle_Introduction:_Create_Table,_Metadata_Registration,_Indexing_and_Mapping
    As I stumble around it, I'll try and leave some feedback, and probably ask some rather stupid questions.
    Thanks for the effort,
    Bryan
